Last modified: December 14, 2017.

hpchelp@iitd.ac.in

Software

For a list of available software, please check the available modules:
$ module avail
If the required software is not available/listed in the modules, users can
  • Install the software in their own account ($HOME), or
  • Request installation.

NOTE: Old software will be phased out by 18/06/2017.
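
A typical module workflow looks like the following (the gromacs module name here is only an example; use the names reported by "module avail" on the cluster):

$ module avail                        # list all available modules
$ module avail apps/gromacs           # search for a specific application
$ module load apps/gromacs/4.6.2      # load a module into your environment
$ module list                         # show currently loaded modules
$ module unload apps/gromacs/4.6.2    # remove a module from your environment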

Software Module Lists

Old:

	mpi/openmpi/1.10.0/gcc/mpivars
	mpi/openmpi/1.6.5/gcc/mpivars
	mpi/openmpi/1.8.4/gcc/mpivars
	compiler/python/2.7.10/compilervars
	apps/caffe
	apps/Caffe/master/27.01.2016/gnu
	apps/tensorflow/0.11/gnu
	apps/tensorflow/0.7/gnu
	apps/tensorflow/1.0/gpu
	apps/gromacs/4.6.2_noncuda
	apps/gromacs/4.6.2_plumed
	apps/gromacs/4.6.7/intel1
	apps/gromacs/4.6.7/intel2
	apps/gromacs/5.1.2/intel1
	apps/test/gromacs-5.1.1
	apps/gromacs/5.1.2/intel2
	apps/gromacs/5.1.4/intel1
	apps/matlab
	apps/namd
	apps/visualization/paraview4
	lib/cudnn/2.0/precompiled
	lib/cudnn/3.0/precompiled
	lib/cudnn/4.0/precompiled
	lib/cudnn/5.0/precompiled
	lib/cudnn/6.5.2.0/precompiled
	lib/cudnn/7.0/precompiled
	lib/cudnn/7.0.3.0/precompiled
	lib/cudnn/3.0.7/precompiled
	lib/cudnn/7.0.4.0/precompiled
	lib/cudnn/4.0.7/precompiled
	lib/cudnn/7.5.5.0/precompiled
	apps/anaconda/4.1.1/gnu
	apps/pythonpackages/2.7.10/freestream/1.0.1/gnu
	apps/pythonpackages/2.7.10/funcsigs/1.0.2/gnu
	apps/pythonpackages/2.7.10/keras/1.2.2/gnu
	apps/pythonpackages/2.7.10/mock/2.0.0/gnu
	apps/pythonpackages/2.7.10/pbr/1.10.0/gnu
	apps/pythonpackages/2.7.10/protobuf/3.1.0/gnu
	apps/pythonpackages/2.7.10/pyyaml/3.12/gnu
	apps/pythonpackages/2.7.10/scons/2.5.1/gnu
	lib/boost/1.54.0/gnu

New/Alternative:

	compiler/mpi/openmpi/1.10.0/gnu
	compiler/mpi/openmpi/1.8.4/gnu
	compiler/mpi/openmpi/1.6.5/gnu
	compiler/python/2.7.13/ucs4/gnu/447
	apps/Caffe/master/01.05.2017/gpu
	apps/Caffe/0.9999/gpu
	apps/tensorflow/1.1.0/gpu
	apps/Matlab/r2014b/gnu
	apps/Matlab/r2015b/gnu
	apps/Matlab/r2016b/precompiled
	apps/Matlab/r2017a/precompiled
	apps/visualization/paraview/5.0.0RC4/precompiled
	lib/cudnn/3.0.7/precompiled
	lib/cudnn/4.0.7/precompiled
	lib/cudnn/5.0.4/precompiled
	lib/boost/1.64.0/gnu_ucs4
	apps/NAMD/2.11/intel
	apps/NAMD/2.10/gpu/gnu
	apps/NAMD/2.10/mic/intel
	apps/NAMD/2.11/k20/intel


Installing Software in your own account

Users can install software in their own accounts. Superuser access cannot be provided for any installation. Please check the available modules before requesting any dependencies.
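
As a rough sketch, a typical autotools-style build into your home directory looks like the following (the package name and paths are illustrative only):

# unpack the source (the package name is hypothetical)
tar -xzf mypackage-1.0.tar.gz
cd mypackage-1.0
# install under $HOME instead of the system directories
./configure --prefix=$HOME/soft/mypackage-1.0
make && make install
# make the installation visible in your environment
export PATH=$HOME/soft/mypackage-1.0/bin:$PATH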

Requesting installation of software

If multiple users need access to a software package, the supervisor/Head/HPC representative can request central installation of the software. Please send an email to hpchelp@iitd.ac.in.

ANSYS FLUENT

Steps for preparing and submitting your job

  1. Case and data files
    Transfer your case and data files to your /home or /scratch directory. If you are generating large data (> 10 GB), please use /scratch.
  2. Journal file, e.g. journal.jou
    A "journal file" is needed to execute Fluent commands in batch mode, e.g.:
    rcd case_and_data
    /solve/dual-time-iterate 240000 50
    wcd output
    exit ok
    
    This journal file will read case_and_data.cas and case_and_data.dat, run a dual-time iteration with 50 iterations per time step for 240000 time steps, and write the output to "output".
    The following summarizes the Fluent command-line arguments:
    • 2ddp: Run a 2D simulation in double precision (remove dp to run in single precision)
    • -g: Run Fluent without the GUI
    • -ssh: Use ssh to log in to the available nodes (-rsh is the default)
    • -pinfiniband: Use the InfiniBand interconnect
    • -cnf=$PBS_NODEFILE: Provide the list of nodes allocated to the PBS job to the Fluent solver
    • &>: Redirect the Fluent output and error information to the "log" file
    • -mpi=openmpi: Specify the MPI type (intel, mpich2, and openmpi are currently supported on the IITD HPC). Please load the appropriate MPI module for the specified option.
  3. PBS submit file, e.g. pbssubmit.sh
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=20
    #PBS -l walltime=168:00:00
    #PBS -l fluent=1
    #PBS -l fluent_hpc=4
    #PBS -l software=ANSYS
    cd $PBS_O_WORKDIR
    module load apps/fluent
    #default version is 15.0.7 
    
    module load compiler/mpi/openmpi/1.10.0/gnu
    
    time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=openmpi -cnf=$PBS_NODEFILE -pinfiniband &> log
    
    This example submit file requests 20 CPUs for 168 hours and runs a "2ddp" job over the InfiniBand interconnect using the journal file "journal.jou". It will use 1 base fluent license (fluent=1) and 4 HPC licenses (fluent_hpc=4). The base license allows a 16-process parallel job.
  4. Job submission
    Submit the above job using "qsub pbssubmit.sh".
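
    After submission, you can monitor the job with the standard PBS commands, for example (the job id shown is illustrative):

    $ qsub pbssubmit.sh      # returns a job id, e.g. 12345.hpc
    $ qstat -u $USER         # list your queued and running jobs
    $ qdel 12345             # delete the job if needed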

Checklist

NOTE:

There are some compatibility issues between fluent 17.2 and openmpi (-mpi=openmpi). Please make the following modifications in the command section of your PBS job script:
module load apps/Fluent/17.2/precompiled
time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=intel -cnf=$PBS_NODEFILE -pinfiniband &> log

 

  • To check the status or graph of the ANSYS licenses, click here.
  • The PBS resources for ANSYS licenses are listed below:

    PBS Resource     License Name   Description
    ansys_aa_ds      AA_DS          ANSYS Academic Teaching DesignSpace
    ansys_aa_mcad    AA_MCAD        ANSYS Academic CAD Interface
    ansys_aa_r_cfd   AA_R_CFD       ANSYS Academic Research CFD
    ansys_aa_r_et    AA_R_ET        ANSYS Academic Research Electronics Thermal
    ansys_aa_r_hpc   AA_R_HPC       ANSYS Academic Research HPC
    ansys_aa_r_me    AA_R_ME        ANSYS Academic Research Mechanical
    ansys_aa_t_a     AA_T_A         ANSYS Academic Teaching Advanced
    ansys_aa_t_cfd   AA_T_CFD       ANSYS Academic Teaching CFD
    ansys_aunivres   AUNIVRES
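
    These resource names can be used in PBS license directives, in the same way as the fluent licenses in the submit script above. For example, a job needing one ANSYS Academic Research CFD license could request (a sketch, assuming these resources are enforced like fluent/fluent_hpc):

    #PBS -l ansys_aa_r_cfd=1
    #PBS -l software=ANSYS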

    MATLAB

    Steps for preparing and submitting your job

    You can load a matlab module via any one of the following:

    module load apps/Matlab/r2014b/gnu
    module load apps/Matlab/r2015b/gnu
    module load apps/Matlab/r2016b/precompiled
    module load apps/Matlab/r2017a/precompiled
    
    For short jobs/runs you can run the MATLAB GUI via the "matlab" command. Long-running jobs should be submitted via the batch system using a script file, for example:
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=8
    #PBS -l walltime=168:00:00
    #PBS -l matlab=1
    #PBS -l software=MATLAB
    cd $PBS_O_WORKDIR
    module load apps/Matlab/r2014b/gnu
    time -p matlab -nosplash -nodisplay < myprogram.m > matlab.log
    
    Now submit the job using the qsub command.

     

  • If you are using a toolbox in your script, you need to add an additional PBS resource directive to your job script. This ensures that your job starts only when a sufficient number of toolbox licenses is available. Example:
    #PBS -l matlab=1
    #PBS -l matlab_bioinformatics_toolbox=1
    
    Here is the list of PBS resource attributes for matlab toolboxes:
    PBS Resource                       Matlab License
    matlab                             Matlab
    matlab_bioinformatics_toolbox      Bioinformatics_Toolbox
    matlab_builder_for_java            MATLAB_Builder_for_Java
    matlab_communication_toolbox       Communication_Toolbox
    matlab_control_toolbox             Control_Toolbox
    matlab_curve_fitting_toolbox       Curve_Fitting_Toolbox
    matlab_data_acq_toolbox            Data_Acq_Toolbox
    matlab_distrib_comp_engine         MATLAB_Distrib_Comp_Engine
    matlab_distrib_computing_toolbox   Distrib_Computing_Toolbox
    matlab_financial_toolbox           Financial_Toolbox
    matlab_fixed_point_toolbox         Fixed_Point_Toolbox
    matlab_fuzzy_toolbox               Fuzzy_Toolbox
    matlab_image_acquisition_toolbox   Image_Acquisition_Toolbox
    matlab_image_toolbox               Image_Toolbox
    matlab_instr_control_toolbox       Instr_Control_Toolbox
    matlab_neural_network_toolbox      Neural_Network_Toolbox
    matlab_optimization_toolbox        Optimization_Toolbox
    matlab_power_system_blocks         Power_System_Blocks
    matlab_realtime_workshop           Real-Time_Workshop
    matlab_robust_toolbox              Robust_Toolbox
    matlab_rtw_embedded_coder          RTW_Embedded_Coder
    matlab_signal_blocks               Signal_Blocks
    matlab_signal_toolbox              Signal_Toolbox
    matlab_simulink_control_design     Simulink_Control_Design
    matlab_statistics_toolbox          Statistics_Toolbox
    matlab_symbolic_toolbox            Symbolic_Toolbox
    matlab_video_and_image_blockset    Video_and_Image_Blockset
    matlab_virtual_reality_toolbox     Virtual_Reality_Toolbox
    matlab_wavelet_toolbox             Wavelet_Toolbox
    matlab_gads_toolbox                GADS_Toolbox
    matlab_pde_toolbox                 PDE_Toolbox
    matlab_map_toolbox                 MAP_Toolbox
    matlab_rf_toolbox                  RF_Toolbox
    matlab_xpc_target                  XPC_Target
    matlab_stateflow.sh                Stateflow
    matlab_simulink                    SIMULINK
    matlab_simscape                    Simscape
    matlab_SimEvents                   SimEvents
    matlab_coder                       MATLAB_Coder
    matlab_compiler                    Compiler

    GROMACS

    Steps for preparing and submitting your job

    You can check the available gromacs modules via:
     module avail apps/gromacs
    ------------------------------- /home/soft/modules ---------------------------------------
    apps/gromacs/4.6.2         apps/gromacs/4.6.2_plumed  apps/gromacs/4.6.5_plumed  apps/gromacs5.1.1
    apps/gromacs/4.6.2_noncuda apps/gromacs/4.6.5         apps/gromacs/5.1.1
    
    You can load the gromacs version of your choice via (e.g. gromacs 4.6.2):
    module load apps/gromacs/4.6.2
    
    This loads and sets all the prerequisites for gromacs 4.6.2 (CUDA). For short test runs you can run a gromacs command (say mdrun_mpi) from the GPU login nodes. Long-running jobs should be submitted via the batch system using a script file, for example:
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=4:ncpus=24:mpiprocs=2:ngpus=2
    #PBS -o stdout_file
    #PBS -e stderr_file
    #PBS -l walltime=24:00:00
    #PBS -l software=GROMACS
    echo "==============================="
    echo $PBS_JOBID
    cat $PBS_NODEFILE
    echo "==============================="
    cd $PBS_O_WORKDIR
    
    
    module load apps/gromacs/4.6.2
    time -p mpirun -np $PBS_NTASKS -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 12 mdrun_mpi < gromacs specific input files & parameters >
    
    Now submit the job using the qsub command.
    This script requests 4 nodes with 2 GPU cards per node (8 GPU cards in total) and 96 CPU cores (24 cores per node).
    It also launches 8 processes in total (2 processes per node), and each process creates 12 threads (-genv OMP_NUM_THREADS 12), hence 24 threads per node.
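
    If you want to verify this layout from inside the job, a small sanity check can be added before the mpirun line (a sketch; the expected counts correspond to the select statement above):

    echo "MPI ranks: $PBS_NTASKS"                       # expect 8 (4 nodes x 2 mpiprocs)
    echo "Node file entries: $(wc -l < $PBS_NODEFILE)"  # expect 8
    sort -u $PBS_NODEFILE                               # the 4 distinct hosts allocated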

    NAMD (XEON PHI)

    Steps for running your job on nodes having XEON PHI cards

      Add the nmics flag to the select statement in your PBS script as follows:
      For example:
      #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
      
      or
      
      #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards 
      
      #NOTE: only values 1 and 2 are allowed for the nmics flag
      
      #Add the following command to your PBS script.
      module load apps/NAMD/2.10/mic/intel    #To load namd xeon phi binary and libraries
      
      #Command to execute NAMD on xeon phi
      mpiexec.hydra -machinefile $PBS_NODEFILE -n 2 -perhost 2 namd2 +ppn 11 +commap 0,12 +pemap 1-11,13-23 +devices 0,1 ./stmv.namd
      
      
    • Explanation
    • mpiexec.hydra               #Program to launch MPI
      -machinefile $PBS_NODEFILE  #Nodes allotted to the current job
      -n                          #Number of processes
      -perhost                    #MPI processes per host
      namd2                       #namd binary name
      +ppn                        #Worker threads per process
      +commap 0,12                #Communication thread to process mapping (threads 1-11 communicate with process 0; threads 13-23 with process 12)
      +pemap 1-11,13-23           #Worker thread mapping
      +devices 0,1                #MIC cards to use
      stmv.namd                   #Input file
      If you face any issues/errors, please mail hpchelp@iitd.ac.in. A complete job script assembled from these pieces is sketched below.
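
      A complete NAMD (Xeon Phi) job script might look like this (a sketch: the PBS header fields follow the same pattern as the other examples on this page, and stmv.namd is the sample input used above):

      #!/bin/bash
      #PBS -N namd_mic_job
      #PBS -P department
      #PBS -l select=1:ncpus=24:nmics=2
      #PBS -l walltime=24:00:00
      #PBS -l software=NAMD
      cd $PBS_O_WORKDIR
      # load the NAMD Xeon Phi binary and libraries
      module load apps/NAMD/2.10/mic/intel
      mpiexec.hydra -machinefile $PBS_NODEFILE -n 2 -perhost 2 namd2 +ppn 11 +commap 0,12 +pemap 1-11,13-23 +devices 0,1 ./stmv.namd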
      

    LAMMPS (XEON PHI)

    Steps for running your job on nodes having XEON PHI cards

      Add the nmics flag to the select statement in your PBS script as follows:
      For example:
      #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
      
      or
      
      #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards
      
      #NOTE: only values 1 and 2 are allowed for the nmics flag
      
      #Add the following command to your PBS script.
      module load apps/lammps/intel_phi/lammps    #To load LAMMPS xeon phi binaries and libraries
      
      
    • Add the following lines to the LAMMPS input file:
    • package intel 2 mode mixed balance $b
      package omp 0
      suffix $s
      
      #Command to execute LAMMPS on xeon phi
      mpiexec.hydra -np 24 -machinefile $PBS_NODEFILE  -genv OMP_NUM_THREADS 1 lmp_intel_phi -in in.intel.rhodo -log none -v b -1 -v s intel
      
    • Explanation
    • -np                      #Number of MPI processes
      -genv OMP_NUM_THREADS    #Number of threads per process
      -in input_file_name      #Input file name
      -log                     #Where to send log output
      -v s intel               #Suffix value = intel
      -v b                     #0 = no MIC cards used, -1 = balance the workload between host and MIC cards, 0.75 = give 75% of the workload to the MIC cards
      If you face any issues/errors, please mail hpchelp@iitd.ac.in. A complete job script assembled from these pieces is sketched below.
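
      A complete LAMMPS (Xeon Phi) job script might look like this (a sketch assembled from the fragments above; in.intel.rhodo is the sample input used above):

      #!/bin/bash
      #PBS -N lammps_mic_job
      #PBS -P department
      #PBS -l select=1:ncpus=24:nmics=2
      #PBS -l walltime=24:00:00
      #PBS -l software=LAMMPS
      cd $PBS_O_WORKDIR
      # load the LAMMPS Xeon Phi binaries and libraries
      module load apps/lammps/intel_phi/lammps
      mpiexec.hydra -np 24 -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 1 lmp_intel_phi -in in.intel.rhodo -log none -v b -1 -v s intel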
      

    PYTHON

    Python:

    To load python 2.7.13 in your environment, please use:

    module load compiler/python/2.7.13/ucs4/gnu/447

    Please note that after executing the previous command, python packages like numpy and scipy will not be available in your environment. After loading python, you need to explicitly load an entire python package suite:

    module load pythonpackages/2.7.13/ucs4/gnu/447/package_suite/1

    The following command will list the available python packages within a "package_suite":

    module help pythonpackages/2.7.13/ucs4/gnu/447/package_suite/1

    Or you can selectively load only the required modules from the output of the following command:

    module avail pythonpackages/2.7.13
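
    As a quick check that the package suite is visible after loading, you can try importing one of the packages (the numpy import is just an illustrative test):

    module load compiler/python/2.7.13/ucs4/gnu/447
    module load pythonpackages/2.7.13/ucs4/gnu/447/package_suite/1
    python -c "import numpy; print numpy.__version__"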