Last modified: December 15, 2016.

hpchelp@iitd.ac.in

Software

Applications

  • apps/abaqus
  • apps/abinit/7.10.5/intel
  • apps/anaconda/4.1.1/gnu
  • apps/autoconf/2.69/gnu
  • apps/automake/1.15/gnu
  • apps/binutils/2.25/gnu
  • apps/bison/3.0/gnu
  • apps/bzip2/1.0.6/gnu
  • apps/caffe
  • apps/Caffe/master/27.01.2016/gnu
  • apps/cdo
  • apps/cesm
  • apps/cesm1_1_2
  • apps/cesm1_2_2
  • apps/cesm1_2_2_CAMChem
  • apps/cmake/2.8.12/gnu
  • apps/cmake/3.4.1/gnu
  • apps/codesaturne/4.0.6/intel
  • apps/cpmd/appvars
  • apps/cppunit/1.12.1/intel
  • apps/curl/7.46.0/gnu
  • apps/dssp/2.0.1/gnu
  • apps/dssp/2.0.4/bin
  • apps/dssp/2.2.1/gnu
  • apps/ffmpeg/3.1.5/gnu
  • apps/ffmpeg/3.2.1/gnu
  • apps/flex/2.6.0/gnu
  • apps/fluent
  • apps/Fluent/17.2/precompiled
  • apps/freesteam/2.1/gnu
  • apps/gawk/4.1.4/gnu
  • apps/git/2.9.0/gnu
  • apps/gphoto/2/2.5.6/gnu
  • apps/graphicsmagick/1.3.25/gnu
  • apps/graphviz/2.38.0/intel
  • apps/gromacs/4.6.2
  • apps/gromacs/4.6.2_noncuda
  • apps/gromacs/4.6.2_plumed
  • apps/gromacs/4.6.5
  • apps/gromacs/4.6.5_plumed
  • apps/gromacs/4.6.7/intel
  • apps/gromacs/4.6.7/intel1
  • apps/gromacs/4.6.7/intel2
  • apps/gromacs/5.1.1
  • apps/gromacs/5.1.2/intel
  • apps/gromacs/5.1.2/intel1
  • apps/gromacs/5.1.2/intel2
  • apps/gromacs/5.1.4/intel
  • apps/gromacs/5.1.4/intel1
  • apps/gromacs5.1.1
  • apps/java/1.8.0.112/precompiled
  • apps/lammps/16.02.2016/gpu
  • apps/lammps/16.02.2016/k20gpu
  • apps/lammps/cpu/lammps
  • apps/lammps/gpu
  • apps/lammps/gpu-mixed
  • apps/lammps/intel_phi/lammps
  • apps/leveldb/1.19/gnu
  • apps/lmdb/0.9.18/gnu
  • apps/matlab
  • apps/Matlab/r2014b/gnu
  • apps/Matlab/r2015b/gnu
  • apps/mpas/4.0/intel
  • apps/namd
  • apps/NAMD/2.10/gpu/gnu
  • apps/NAMD/2.10/mic/intel
  • apps/NAMD/2.11/intel
  • apps/NAMD/2.11/k20/intel
  • apps/ncl_ncarg/6.3.0/gnu
  • apps/nco
  • apps/netcdf/3.6/appvars
  • apps/netpbm/10.47.63/gnu
  • apps/omniorb/4.2.1/intel
  • apps/opencascade/6.9.1/intel
  • apps/opencascade/6.9.1/precompiled
  • apps/opencv2.3
  • apps/openfoam2.3.1
  • apps/openfoam3.0.0
  • apps/pkgconfig/0.29/gnu
  • apps/protobuf/3.1.0/gnu
  • apps/pyqt/4/4.11.4/gnu
  • apps/pythonpackages/2.7.10/freestream/1.0.1/gnu
  • apps/pythonpackages/2.7.10/scons/2.5.1/gnu
  • apps/redmd/2.3/gnu
  • apps/RegCM/4.4.5.10/pgi
  • apps/rnnlib/2013.08.20/gnu
  • apps/rstudio/0.98.1103/precompiled
  • apps/salome/gui/7.8.0/precompiled
  • apps/salome/kernel/7.8.0/intel
  • apps/salome/kernel/7.8.0/precompiled
  • apps/salome/yacs/7.8.0/precompiled
  • apps/sip/4.18.1/gnu
  • apps/snappy/1.1.3/gnu
  • apps/socat/1.7.3.0/gnu
  • apps/spcam2_0-cesm1_1_1
  • apps/tar/1.28/gnu
  • apps/tensorflow/0.11/gnu
  • apps/tensorflow/0.7/gnu
  • apps/test/gromacs-5.1.1
  • apps/test/openfoam-2.3.1
  • apps/test/openfoam-3.0.0
  • apps/theano/0.7.0
  • apps/theano/0.8.0.dev/04.04.2016/gpu
  • apps/uvcdat/2.2/gnu
  • apps/uvcdat/2.4/gnu
  • apps/valgrind/3.11.0/ompi
  • apps/visualization/grace/5.1.25/gnu
  • apps/visualization/grace5
  • apps/visualization/ncview
  • apps/visualization/paraview/3.12.0/gnu
  • apps/visualization/paraview/5.0.0RC4/bin
  • apps/visualization/paraview4
  • apps/visualization/uvcdat
  • apps/visualization/vmd
  • apps/vmd/appvars
  • apps/_vsap/5.4.1/gpu/intel
  • apps/wrf/3.6/appvars
  • apps/yasm/1.3.0/gnu

Compilers

  • compiler/cuda/6.0/compilervars
  • compiler/cuda/6.5/compilervars
  • compiler/cuda/7.0/compilervars
  • compiler/gcc/4.4.4/compilervars
  • compiler/gcc/4.9.3/compilervars
  • compiler/gcc/5.1.0/compilervars
  • compiler/intel/icsxe2013/compilervars
  • compiler/intel/psxe2015/compilervars
  • compiler/pgi/pgicdk-13.7/compilervars
  • compiler/python/2.7.10/compilervars
  • suite/intel/icsxe2013/icsxevars

Libraries

  • apps/rnnlib/2013.08.20/gnu
  • lib/blas/3.6.0/gnu
  • lib/boost/1.54.0/gnu
  • lib/boost/1.59.0/gnu
  • lib/boost/1.59.0/gnu_ucs2
  • lib/bzip2/1.0.6/gnu
  • lib/caffedeps/master/intel
  • lib/cgal/4.7/gnu
  • lib/cgns/3.3.0/intel
  • lib/cudnn/6.5.2.0/precompiled
  • lib/cudnn/7.0/precompiled
  • lib/cudnn/7.0.3.0/precompiled
  • lib/cudnn/7.0.4.0/precompiled
  • lib/cudnn/7.5.5.0/precompiled
  • lib/devil/1.7.8/gnu
  • lib/esmf/6.3.0.1/gnu
  • lib/esmf/6.3.0.1/intel
  • lib/ffi/3.2/gnu
  • lib/fftw/2.1.5/intel
  • lib/fftw/3.2.2/gnu
  • lib/fftw/3.2.2/intel
  • lib/fftw/3.3.4/gnu
  • lib/freetype/2.5.0/gnu
  • lib/ftgl/2.1.3/gnu
  • lib/g2clib/1.4.0/gnu
  • lib/gdal/2.0.1/gnu
  • lib/gflags/2.1.2/gnu
  • lib/glog/0.3.4/gnu
  • lib/gphoto/2/2.5.6/gnu
  • lib/graphicsmagick/1.3.24/gnu
  • lib/hdf/4/4.2.11/gnu
  • lib/hdf/4/4.2.11/intel
  • lib/hdf/5/1.8.16/gnu
  • lib/hdf/5/1.8.16/intel
  • lib/hdf5/1.8.15/gcc/hdf5
  • lib/hdf5/1.8.15/intel/hdf5
  • lib/hdf5/1.8.15/pgi/hdf5
  • lib/jasper/1.900.1/gnu
  • lib/jpeg/6b/gnu
  • lib/jpeg/6b/k20/gnu
  • lib/lapack/3.4.1/gnu
  • lib/lcms/1.19/gnu
  • lib/lcms/2/2.7/gnu
  • lib/libtool/2.4.6/k40/gnu
  • lib/med/3.2.0/intel
  • lib/mesa/7.5/gnu
  • lib/metis/5.1.0/intel
  • lib/mng/1.0.10/gnu
  • lib/netcdf/4.1/gnu
  • lib/netcdf/4.1-gcc/netcdf
  • lib/netcdf/4.1-gcc_cxx
  • lib/netcdf/4.1-pgi/netcdf
  • lib/netcdf/4.3-intel/netcdf
  • lib/netcdf/4.4.2f_4.3.3.1c/gnu
  • lib/netcdf/4.4.2f_4.3.3.1c/intel
  • lib/netcdf/4.4.2-fortran/netcdf
  • lib/netcdf/c/4.3.3.1/gnu
  • lib/netcdf/c/4.3.3.1/intel
  • lib/netcdf/cxx/4.2/gnu
  • lib/netcdf/cxx/4-4.2.1/gnu
  • lib/netcdf/fort/4.4.2/gnu
  • lib/netcdf/fort/4.4.2/intel
  • lib/netcdf/parallel-netcdf/1.6.0/pgi/pnetcdf
  • lib/netcdf/parallel-netcdf/1.6.1/intel/pntecdf
  • lib/opencv/2.3/gnu
  • lib/opencv/2.4.13/gnu
  • lib/opencv/3.0.0/gnu
  • lib/parmetis/4.0.3/intel
  • lib/phdf/5/1.8.16/intel
  • lib/phdf/5/1.8.16/ompi
  • lib/pio/1.7.1/parallel/intel
  • lib/ple/2.0.1/gnu
  • lib/pnetcdf/1.6.1/intel
  • lib/pnetcdf/1.6.1/ompi
  • lib/png/1.2.56/gnu
  • lib/png/1.6.19/gnu
  • lib/proj.4/master-31_12_15/gnu
  • lib/ptscotch/6.0.4/intel
  • lib/QT/4.6.4/gnu
  • lib/QT/4.8.7/gnu
  • lib/scotch/6.0.4/gnu
  • lib/ssh2/1.8.0/gnu
  • lib/szip/2.1/gcc/szip
  • lib/szip/2.1/gnu
  • lib/szip/2.1/intel/szip
  • lib/szip/2.1/pgi/szip
  • lib/tiff/3.8.2/gnu
  • lib/udunits/2.2.20/gnu
  • lib/x264/2016.11.30/gnu
  • lib/xml/2/2.9.4/gnu
  • lib/zlib/1.2.8/gcc/zlib
  • lib/zlib/1.2.8/gnu
  • lib/zlib/1.2.8/intel/zlib
  • lib/zlib/1.2.8/pgi/zlib
  • mkl/intel/psxe2015/mklvars

MPI

  • compiler/mpi/openmpi/1.10.0/gnu
  • compiler/mpi/openmpi/1.6.5/gnu
  • compiler/mpi/openmpi/1.8.4/gnu
  • compiler/mpi/openmpi/2.0.1/gnu
  • mpi/mpich/3.1.4/gcc/mpivars
  • mpi/mpich/3.1.4/intel/mpivars
  • mpi/mpich/3.1.4/pgi/mpivars
  • mpi/mvapich2/2.2a/gcc/mpivars
  • mpi/mvapich2/2.2a/intel/mpivars
  • mpi/mvapich2/2.2a/pgi/mpivars
  • mpi/openmpi/1.10.0/gcc/mpivars
  • mpi/openmpi/1.6.5/gcc/mpivars
  • mpi/openmpi/1.6.5/intel/mpivars
  • mpi/openmpi/1.6.5/pgi/mpivars
  • mpi/openmpi/1.8.4/gcc/mpivars
  • mpi/openmpi/1.8.4/intel/mpivars
  • mpi/openmpi/1.8.4/pgi/mpivars
  • suite/intel/parallelStudio

Under Testing

  • test/cp2k/2.6.0/gpu/cp2k
  • test/lammps/cpu/lammps
  • test/lammps/gpu/lammps
  • test/lammps/intel_phi/lammps
  • test/quantum_espresso/cpu/quantum_espresso
  • test/quantum_espresso/gpu/quantum_espresso
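
All of the names in the lists above (applications, compilers, libraries, MPI, and test builds) are environment modules. The following is a quick sketch of how they are typically searched, loaded, and inspected; the module names are taken from the lists above:

module avail apps/gromacs          # search the available modules for a package
module load apps/gromacs/4.6.2     # load a module (and its environment) into the current shell
module list                        # show the currently loaded modules
module unload apps/gromacs/4.6.2   # remove it again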

ANSYS FLUENT

Steps for preparing and submitting your job

  1. Case and data files
    Transfer your case and data files to your /home or /scratch directory. If you are generating large data (> 10 GB), please use /scratch.
  2. Journal file, e.g. journal.jou
    A "journal file" is needed to execute Fluent commands in batch mode, e.g.
    rcd case_and_data
    /solve/dual-time-iterate 240000 50
    wcd output
    exit ok
    
    This journal file will read case_and_data.cas and case_and_data.dat, run a dual-time iteration with 50 iterations per time step for 240000 time steps, and write the resulting case and data to "output".
  3. PBS submit file, e.g. pbssubmit.sh
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=20
    #PBS -o stdout_file
    #PBS -e stderr_file
    #PBS -l walltime=168:00:00
    #PBS -V
    #PBS -l fluent=1
    #PBS -l fluent_hpc=4
    #PBS -l software=ANSYS
    cd $PBS_O_WORKDIR
    module load apps/fluent
    #default version is 15.0.7 
    
    module load compiler/mpi/openmpi/1.10.0/gnu
    
    time -p fluent -g 2ddp -t $NCPUS -i journal.jou -ssh -mpi=openmpi -cnf=$PBS_NODEFILE -pinfiniband &> log
    
    The example submit file will request 20 CPUs for 168 hours and run a "2ddp" job over the InfiniBand interconnect using the journal file "journal.jou". It will use 1 base Fluent license (fluent=1) and 4 HPC licenses (fluent_hpc=4). The base license allows a 16-process parallel job.
    The following summarizes the Fluent command-line arguments:
    • 2ddp: Run a 2D simulation in double precision (remove dp to run in single precision)
    • -g: Run Fluent without the GUI
    • -ssh: Use ssh to log in to the available nodes (-rsh is the default)
    • -pinfiniband: Use the InfiniBand interconnect
    • -cnf=$PBS_NODEFILE: Provide the list of nodes allocated to the PBS job to the Fluent solver
    • &>: Redirect the Fluent output and error information to the "log" file
    • -mpi=openmpi: Specify the MPI type (intel, mpich2, and openmpi are currently supported on the IITD HPC). Please load the appropriate MPI module for the specified option.
  4. Job submission
    Submit the above job using "qsub pbssubmit.sh", as shown in the sketch below.
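
Once the journal and submit files are in place, submission and basic monitoring from the login node look like this (a minimal sketch; qsub and qstat are standard PBS commands, and "log" is the output file named in the submit script above):

qsub pbssubmit.sh          # submit the job
qstat -u $USER             # check the status of your jobs (Q = queued, R = running)
tail -f log                # follow the solver output once the job starts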

Checklist

  • Choose your licenses carefully: a large job may take a long time to start!
  • Do NOT use full paths for your files inside Fluent.
  • For the graphical interface, please install an X11 client.
  • For transferring files, you can use a graphical tool such as WinSCP, or a command-line tool such as scp (see the example below).
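
From a Linux or Mac terminal, a command-line transfer looks like this (a sketch; "username" and <hpc-login-node> are placeholders for your user name and the cluster's login address):

# copy case and data files from your local machine to your scratch directory
scp case_and_data.cas case_and_data.dat username@<hpc-login-node>:/scratch/username/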

MATLAB

Steps for preparing and submitting your job

You can load the MATLAB module via:
module load apps/matlab
For short jobs/runs you can start the MATLAB GUI via the
matlab
command. Long-running jobs should be submitted via the batch system using a script file, for example:
#!/bin/bash
#PBS -N jobname
#PBS -P department
#PBS -m bea
#PBS -M $USER@iitd.ac.in
#PBS -l select=1:ncpus=8
#PBS -o stdout_file
#PBS -e stderr_file
#PBS -l walltime=168:00:00
#PBS -l matlab=1
#PBS -V
#PBS -l software=MATLAB
cd $PBS_O_WORKDIR
module load apps/matlab
time -p matlab -nosplash -nodisplay < myprogram.m > matlab.log
Now submit the job using the qsub command.
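
If you prefer not to redirect the script through standard input, MATLAB's -r option is an alternative batch invocation (a sketch, assuming myprogram.m sits in the submission directory):

# run myprogram.m non-interactively and exit when it finishes
time -p matlab -nosplash -nodisplay -r "myprogram; exit" > matlab.log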

GROMACS

Steps for preparing and submitting your job

You can check for the available GROMACS modules via
 module avail apps/gromacs
------------------------------- /home/soft/modules ---------------------------------------
apps/gromacs/4.6.2         apps/gromacs/4.6.2_plumed  apps/gromacs/4.6.5_plumed  apps/gromacs5.1.1
apps/gromacs/4.6.2_noncuda apps/gromacs/4.6.5         apps/gromacs/5.1.1
You can load the GROMACS version of your choice via (e.g. GROMACS 4.6.2):
module load apps/gromacs/4.6.2
This loads and sets all the prerequisites for GROMACS 4.6.2 (CUDA). For short test runs you can run (say) the
mdrun_mpi
GROMACS command from the GPU login nodes. Long-running jobs should be submitted via the batch system using a script file, for example:
#!/bin/bash
#PBS -N jobname
#PBS -P department
#PBS -m bea
#PBS -M $USER@iitd.ac.in
#PBS -l select=4:ncpus=24:mpiprocs=2:ngpus=2
#PBS -o stdout_file
#PBS -e stderr_file
#PBS -l walltime=24:00:00
#PBS -V
#PBS -l software=GROMACS
echo "==============================="
echo $PBS_JOBID
cat $PBS_NODEFILE
echo "==============================="
cd $PBS_O_WORKDIR


module load apps/gromacs/4.6.2
time -p mpirun -np $PBS_NTASKS -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 12 mdrun_mpi < gromacs specific input files & parameters >
Now submit the job using the qsub command.
This script requests 4 nodes with 2 GPU cards per node (8 GPU cards in total) and 96 CPU cores (24 cores per node).
It also launches 8 processes in total (2 processes per node) on the requested nodes, and each process creates 12 threads (-genv OMP_NUM_THREADS 12), hence 24 threads per node.
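
For reference, the run input consumed by mdrun_mpi is usually prepared beforehand with grompp (a sketch with hypothetical file names; in GROMACS 4.6.x the tool is invoked as grompp):

# build the portable run input (.tpr) from hypothetical parameter/coordinate/topology files
grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
# inside the PBS script, mdrun_mpi then takes the .tpr, e.g.
# time -p mpirun -np $PBS_NTASKS -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 12 mdrun_mpi -s md.tpr -deffnm md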

NAMD (XEON PHI)

Steps for running your job on nodes having XEON PHI cards

    Add the nmics flag to the select statement in your PBS script as follows:
    For example:
    #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
    
    or
    
    #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards 
    
    #NOTE: Only the values 1 and 2 are allowed for the nmics flag
    
    #Add the following command to your PBS script.
    module load apps/NAMD/2.10/mic/intel    #To load the NAMD Xeon Phi binary and libraries
    
    #Command to execute NAMD on the Xeon Phi cards
    mpiexec.hydra -machinefile $PBS_NODEFILE -n 2 -perhost 2 namd2 +ppn 11 +commap 0,12 +pemap 1-11,13-23 +devices 0,1 ./stmv.namd
    
    
  • Explanation
  • mpiexec.hydra  			#Program to launch MPI
    -machinefile $PBS_NODEFILE  	#Nodes allotted to the current job
    -n 				#Number of MPI processes
    -perhost 			#MPI processes per host
    namd2				#NAMD binary name
    +ppn      			#Worker threads per process
    +commap 0,12			#Communication-thread mapping (threads 1-11 communicate with process 0; threads 13-23 with process 12)
    +pemap 1-11,13-23		#Worker thread (PE) mapping
    +devices 0,1			#MIC cards to use
    stmv.namd			#Input file
    If you face any issues or errors, please mail hpchelp@iitd.ac.in.
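
    Putting the pieces above together, a complete PBS script for a Xeon Phi NAMD run might look like the following sketch (jobname, department, the walltime, and the software resource value are placeholders or assumptions; stmv.namd and the mpiexec.hydra line are taken from the example above):

    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -l select=1:ncpus=24:nmics=2
    #PBS -l walltime=24:00:00
    #PBS -l software=NAMD
    #PBS -V
    cd $PBS_O_WORKDIR
    # load the NAMD Xeon Phi binary and libraries
    module load apps/NAMD/2.10/mic/intel
    # 2 MPI processes on the node, 11 worker threads each, one MIC card per process
    mpiexec.hydra -machinefile $PBS_NODEFILE -n 2 -perhost 2 namd2 +ppn 11 +commap 0,12 +pemap 1-11,13-23 +devices 0,1 ./stmv.namd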
    

LAMMPS (XEON PHI)

Steps for running your job on nodes having XEON PHI cards

    Add the nmics flag to the select statement in your PBS script as follows:
    For example:
    #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
    
    or
    
    #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards
    
    #NOTE: Only the values 1 and 2 are allowed for the nmics flag
    
    #Add the following command to your PBS script.
    module load apps/lammps/intel_phi/lammps    #To load the LAMMPS Xeon Phi binaries and libraries
    
    
  • Add the following lines to your LAMMPS input file:
  • package intel 2 mode mixed balance $b
    package omp 0
    suffix $s
    
    #Command to execute LAMMPS on the Xeon Phi cards
    mpiexec.hydra -np 24 -machinefile $PBS_NODEFILE  -genv OMP_NUM_THREADS 1 lmp_intel_phi -in in.intel.rhodo -log none -v b -1 -v s intel
    
  • Explanation
    -np  		         	#Number of MPI processes
    -genv OMP_NUM_THREADS  	        #Number of threads per process
    -in input_file_name		#Input file name
    -log 				#Where to send the log output
    -v s intel 			#Suffix value = intel
    -v b 				#Load balancing: 0 = do not use the MIC cards, -1 = balance the workload between host and cards automatically, 0.75 = give 75% of the workload to the MIC cards
    If you face any issues or errors, please mail hpchelp@iitd.ac.in.
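
    Similarly, a complete PBS script for the Xeon Phi LAMMPS example above might look like this sketch (jobname, department, the walltime, and the software resource value are placeholders or assumptions; the module and mpiexec.hydra line are taken from the example above):

    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -l select=1:ncpus=24:nmics=2
    #PBS -l walltime=24:00:00
    #PBS -l software=LAMMPS
    #PBS -V
    cd $PBS_O_WORKDIR
    # load the LAMMPS Xeon Phi binaries and libraries
    module load apps/lammps/intel_phi/lammps
    # 24 MPI ranks, 1 OpenMP thread each; -v b -1 balances work between host and MIC cards.
    # in.intel.rhodo must contain the package/suffix lines shown above.
    mpiexec.hydra -np 24 -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 1 lmp_intel_phi -in in.intel.rhodo -log none -v b -1 -v s intel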