Last modified: September 30, 2016.

hpchelp@iitd.ac.in

PADUM: Hybrid High Performance Computing Facility at IITD

How to use the cluster and set up the environment

You will need an SSH client to connect to the cluster. CPU login is available via ssh to hpc.iitd.ac.in (use your IITD credentials). To copy data, use scp to hpc.iitd.ac.in. GPU and MIC (Xeon Phi) nodes can be accessed directly through gpu.hpc.iitd.ac.in and mic.hpc.iitd.ac.in respectively. Please avoid using the gpu and mic login nodes for large data transfers.
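For example, from a Linux or macOS terminal (a minimal sketch; the username and file name are placeholders):

    # Log in to the CPU login nodes with your IITD credentials
    ssh <iitd-username>@hpc.iitd.ac.in

    # Log in directly to the GPU or Xeon Phi login nodes
    ssh <iitd-username>@gpu.hpc.iitd.ac.in
    ssh <iitd-username>@mic.hpc.iitd.ac.in

    # Copy data through hpc.iitd.ac.in, not through gpu/mic
    scp mydata.tar.gz <iitd-username>@hpc.iitd.ac.in:~/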

Once logged in to the system, you have access to your home directory (backed up) and your scratch directory (not backed up). Please generate an SSH key pair in your .ssh directory to start using PBS. Please report issues to hpchelp@iitd.ac.in.
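The key pair can be generated as follows (a minimal sketch; accepting the defaults places the keys in ~/.ssh, and appending the public key to authorized_keys is an assumption based on common PBS cluster setups, so confirm with hpchelp@iitd.ac.in if password-less access between nodes still fails):

    # Generate an RSA key pair in ~/.ssh (press Enter to accept the defaults)
    ssh-keygen -t rsa

    # Common PBS-cluster step (assumed, not documented here):
    # authorize the new key for password-less access between nodes
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys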

Maintenance for HPC: 27th Oct 2016 to 30th Oct 2016

The HPC facility will be undergoing planned maintenance from 27th Oct 2016, 23:00, until 30th Oct 2016, 23:00, while we upgrade some essential software. During this period:

  • Login to the cluster will not be available.
  • Jobs and the job queue will be stopped. (Queued jobs do not need to be re-submitted.)
  • Data will be inaccessible. (Reminder: take a backup of your /scratch data.)
  • Account creation and extension requests will not be processed during the maintenance period.

Hardware Specifications

  • Total number of compute nodes: 422
    CPU nodes: 238
    GPU-accelerated nodes: 161
    Xeon Phi co-processor nodes: 23

  • Basic configuration:
    GPU: 2x NVIDIA K40 (12 GB, 2880 CUDA cores)
    Xeon Phi: 2x Intel Xeon Phi 7120P (16 GB, 1.238 GHz, 61 cores)
    CPU: 2x Intel Xeon E5-2680 v3 (2.5 GHz, 12 cores)
    RAM: 64 GB

  • 8 CPU, 8 GPU and 4 Xeon Phi nodes have 512 GB RAM each

  • In addition, the 16 nodes of the old HPCA machine have been incorporated into the cluster and can be accessed using the appropriate PBS resource directive (see the job-script sketch after this list).

  • The cluster can be accessed through 4 general login nodes, 2 GPU login nodes, and 2 Xeon Phi login nodes.
  • Storage:
    Home space: 500 TB
    Scratch space: 1000 TB
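A PBS job requests a particular node type through its resource directives. The sketch below is illustrative only: the select/ncpus syntax follows common PBS conventions, and the commented-out property for the old HPCA nodes is hypothetical, not the facility's documented directive; check the PADUM user guide or write to hpchelp@iitd.ac.in for the exact values.

    #!/bin/bash
    # Illustrative PBS job script; the directive values are assumptions,
    # not PADUM's documented settings.
    #PBS -N demo_job
    #PBS -l select=1:ncpus=24      # one full 2x12-core CPU node (assumed syntax)
    #PBS -l walltime=01:00:00

    # Hypothetical node property for the old HPCA nodes; the real name may differ:
    # #PBS -l select=1:ncpus=24:hpca=true

    cd $PBS_O_WORKDIR
    ./my_program                   # placeholder executable

Submit the script with qsub job.sh and check its status with qstat.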