Last modified: August 26, 2019.
Contact: hpchelp [at]

PADUM: Hybrid High Performance Computing Facility at IITD

How to use the facility and set up the environment

You will need an ssh client to connect to the cluster with your IITD Kerberos credentials. CPU login is available through ssh (use your IITD credentials), and data can be copied with scp. The GPU and MIC (Xeon Phi) nodes can be accessed directly through their respective login addresses. Please avoid using the GPU and MIC login nodes for large data transfers.
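
A minimal sketch of a typical session is shown below; <kerberos-id> and <login-node> are placeholders for your IITD user name and the actual PADUM login address, and myfile.tar.gz is just an example file:

    # Log in to a CPU login node with your IITD Kerberos credentials
    ssh <kerberos-id>@<login-node>

    # Copy a file from your local machine to your home directory on the cluster
    scp myfile.tar.gz <kerberos-id>@<login-node>:~/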

Once logged in to the system, you have access to your home (backed up) and scratch (not backed up) directories. Please generate an ssh key pair in your .ssh directory to start using PBS. Please report issues to hpchelp[@]
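
A common way to generate the key pair PBS expects for passwordless ssh between nodes is sketched below; this is a generic recipe, not PADUM-specific configuration:

    # Generate an RSA key pair in ~/.ssh (empty passphrase)
    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

    # Authorize the new public key and restrict its permissions
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys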

The HPC facility will be under planned maintenance from 25 September to 10 October 2019.


Hardware Specifications

    PHASE 1:
  • Total number of compute nodes: 422
    CPU nodes: 238 + 23
    GPU accelerated nodes: 161
    Xeon Phi co-processor nodes: 23
  • Basic configuration (a sample PBS request for such a node is sketched after this section):
    GPU: 2x NVIDIA K40 (12 GB, 2880 CUDA cores)
    Xeon Phi: 2x Intel Xeon Phi 7120P (16 GB, 1.238 GHz, 61 cores) [Please contact hpchelp[@]]
    CPU: 2x Intel Xeon E5-2680 v3 (2.5 GHz, 12 cores, "Haswell")
    RAM: 62 GB

  • 8 CPU, 8 GPU and 4 Xeon Phi nodes have 505 GB RAM each

  • The cluster can be accessed through 4 general login nodes, 2 GPU login nodes and 2 Xeon Phi login nodes.

  • Storage:
    Home space: 678 TB
    Scratch space: 3430 TB
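
For reference, a minimal PBS job script matching the basic node configuration above is sketched below. The resource string and walltime follow generic PBS Pro conventions and are assumptions, not PADUM's published queue settings; please confirm the actual values with hpchelp:

    #!/bin/bash
    #PBS -N sample_job
    #PBS -l select=1:ncpus=24      # one node: 2x 12-core Haswell (assumed resource names)
    #PBS -l walltime=01:00:00      # assumed walltime; site limits may differ
    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
    ./my_program                   # hypothetical executable

Submit the script with "qsub job.sh" and check its status with "qstat".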