Last modified: September 24 2024.
Contact: hpchelp [at] iitd.ac.in

PADUM: Hybrid High Performance Computing Facility at IITD

How to use the cluster and set up the environment.

You will need an SSH client to connect to the cluster using your IITD Kerberos credentials. CPU login is available via ssh to hpc.iitd.ac.in (use your IITD credentials); to copy data, use scp to the same host. GPU and MIC (Xeon Phi) login nodes can be accessed directly at gpu.hpc.iitd.ac.in and mic.hpc.iitd.ac.in respectively. Please avoid using the gpu and mic login nodes for large data transfers.
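The steps above look like the following from a local terminal; "jdoe" and the file names are placeholders, and the commands require valid IITD credentials to succeed:

```shell
# Log in to a CPU login node with your IITD Kerberos credentials
# ("jdoe" is a hypothetical username):
ssh jdoe@hpc.iitd.ac.in

# Copy data via the CPU login nodes, not the GPU/MIC ones
# (local file name and remote path are illustrative):
scp input.tar.gz jdoe@hpc.iitd.ac.in:~/

# Direct login to the GPU and MIC login nodes:
ssh jdoe@gpu.hpc.iitd.ac.in
ssh jdoe@mic.hpc.iitd.ac.in
```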

Once logged in to the system, you have access to your home (backed up) and scratch (not backed up) directories. Please generate an SSH key pair in your .ssh directory to start using PBS. Please report issues to hpchelp[@]iitd.ac.in.
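Key generation can be done in a few commands once logged in; this is a minimal sketch (the RSA key type and empty passphrase are common choices for batch systems, not a stated site requirement):

```shell
# Ensure the .ssh directory exists with safe permissions:
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Generate a passphrase-less RSA key pair in ~/.ssh
# (an empty passphrase lets PBS reach nodes without prompting):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# Authorize the new public key for logins within the cluster:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```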


Hardware Specifications

    Total number of Nodes: 634
    CPU only nodes: 417
    GPU accelerated nodes: 217

  • Icelake Nodes (December 2022 onward):
    Total number of nodes: 32
    CPU nodes: 16 (256 GB RAM)
    GPU nodes: 16 (512 GB RAM)
    (with two NVIDIA A100 cards on each GPU node)

  • Basic Configuration:
    GPU: NVIDIA A100 (40 GB)
    CPU: 2x Intel Xeon Platinum 8358 (32 cores, 2.6 GHz) "Ice Lake"
  • All connected over 200G HDR InfiniBand.

  • Skylake Nodes (July 2019 onward):
    Total number of nodes: 184
    CPU nodes: 144
    GPU nodes: 40
    (17 GPU nodes with one NVIDIA V100 card each)
    (23 GPU nodes with two NVIDIA V100 cards each)

  • Basic Configuration:
    GPU: NVIDIA V100 (32 GB, 5120 CUDA cores)
    CPU: 2x Intel Xeon Gold 6148 (20 cores, 2.4 GHz) "Skylake"
    RAM: 96 GB
  • 8 CPU and 40 GPU nodes have 192 GB RAM each



  • Haswell Nodes (November 2015 onward):
    Total number of compute nodes: 420
    CPU nodes: 259
    GPU accelerated nodes: 161
  • Basic Configuration:
    GPU: 2x NVIDIA K40 (12 GB, 2880 CUDA cores)
    [Please contact hpchelp[@]iitd.ac.in]
    CPU: 2x Intel Xeon E5-2680 v3 (12 cores, 2.5 GHz) "Haswell"
    RAM: 62 GB

  • 12 CPU and 8 GPU nodes have 500 GB RAM each


  • The cluster can be accessed through 4 general login nodes and 2 GPU login nodes.


  • Storage:
    HOME space: 1.2 PB
    SCRATCH space: 4.5 PB


  • Primary network: fully non-blocking FDR 56 Gbps InfiniBand; latency ~700 ns.