Last modified: October 31, 2016.

hpchelp@iitd.ac.in

PADUM: Hybrid High Performance Computing Facility at IITD

How to use and set up the environment

You will need an SSH client to connect to the cluster. CPU login is available through ssh hpc.iitd.ac.in (use your IITD credentials). To copy data, use scp to hpc.iitd.ac.in. GPU and MIC (Xeon Phi) login nodes can be reached directly through gpu.hpc.iitd.ac.in and mic.hpc.iitd.ac.in, respectively. Please avoid using the gpu and mic login nodes for large data transfers.
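
For example (here <username> and the file name are placeholders):

    # Log in to a CPU login node with your IITD credentials
    ssh <username>@hpc.iitd.ac.in

    # Copy data to the cluster, always via hpc.iitd.ac.in
    scp data.tar.gz <username>@hpc.iitd.ac.in:~/

    # Log in directly to the GPU or Xeon Phi (MIC) login nodes
    ssh <username>@gpu.hpc.iitd.ac.in
    ssh <username>@mic.hpc.iitd.ac.in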

Once logged in, you have access to your home directory (backed up) and scratch directory (not backed up). Please generate an SSH key pair in your .ssh directory before you start using PBS. Please report issues to hpchelp@iitd.ac.in.
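
A minimal sketch of the key setup; PBS installations typically rely on passwordless ssh between nodes, and the authorized_keys step below is an assumption about this site rather than a documented requirement:

    # Generate a key pair, accepting the default path ~/.ssh/id_rsa
    ssh-keygen -t rsa

    # Assumption: authorize your own key for intra-cluster logins
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys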

Hardware Specifications

  • Total number of compute nodes: 422
    CPU nodes: 238
    GPU-accelerated nodes: 161
    Xeon Phi co-processor nodes: 23

  • Basic configuration (per node):
    GPU: 2x NVIDIA K40 (12 GB, 2880 CUDA cores)
    Xeon Phi: 2x Intel Xeon Phi 7120P (16 GB, 1.238 GHz, 61 cores)
    CPU: 2x Intel Xeon E5-2680 v3 (2.5 GHz, 12 cores)
    RAM: 62 GB

  • 8 CPU, 8 GPU, and 4 Xeon Phi nodes have 505 GB RAM each

  • In addition, the 16 nodes of the old HPCA machine have been incorporated into the cluster and can be accessed using the appropriate PBS resource directive (see the job-script sketch after this list).

  • The cluster can be accessed through 4 general login nodes, 2 GPU login nodes, and 2 Xeon Phi login nodes.
  • Storage:
    Home space: 500 TB
    Scratch space: 1000 TB
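
A hedged sketch of a PBS job script for the old HPCA nodes mentioned above; the node property hpca, the job name, and the resource counts are hypothetical placeholders, not confirmed site settings:

    #!/bin/bash
    #PBS -N myjob
    #PBS -l walltime=01:00:00
    # Hypothetical: a boolean node property (here 'hpca') standing in
    # for the actual resource directive that selects the HPCA nodes
    #PBS -l select=1:ncpus=24:hpca=true

    cd $PBS_O_WORKDIR
    ./my_program

Submit with qsub myjob.sh; the real directive for the HPCA nodes should be confirmed with hpchelp@iitd.ac.in.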