Last modified: April 04 2024.
Contact: hpchelp [at] iitd.ac.in

Software

Instructions for some commonly used software packages are given below.

For the full list of available software, please check the available modules:
$ module avail
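
Beyond module avail, a typical module workflow looks like the sketch below (the module name shown is only an example; pick any name listed by module avail):

$ module avail                      # list all available modules
$ module load compiler/gcc/9.1.0    # load a module into your environment
$ module list                       # show currently loaded modules
$ module unload compiler/gcc/9.1.0  # unload a single module
$ module purge                      # unload all loaded modules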
If the required software is not available/listed in the modules, users can
  • Install the software in their own account ($HOME), or
  • Request installation via their supervisor.

Installing Software in your own account

Users can install software in their own accounts. Superuser access cannot be provided for any installation. Please check the available modules before requesting any dependencies.

Requesting installation of software

If multiple users need access to a software package, the supervisor/Head/HPC representative can request a central installation. Please send an email to hpchelp.

Resources


Which compilers are available on HPC?

Different compilers are available on HPC as modules (refer to the Module Tutorials).

Here is a list of some of them.


GNU Compilers for Serial Code

C Compiler: gcc, C++ Compiler: g++, Fortran Compiler: gfortran

compiler/gcc/6.5.0/compilervars

compiler/gcc/7.1.0/compilervars

compiler/gcc/7.3.0/compilervars

compiler/gcc/7.4.0/compilervars

compiler/gcc/9.1.0
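
For example, a serial C program can be compiled with one of these GCC modules roughly as follows (hello.c is a placeholder source file; use g++ or gfortran for C++ or Fortran code):

$ module load compiler/gcc/9.1.0
$ gcc hello.c -o hello
$ ./hello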


Compilers for Parallel Code (OpenMPI/MPICH)

MPI C Compiler: mpicc, MPI C++ Compiler: mpic++, MPI Fortran Compiler: mpifort

compiler/gcc/6.5/openmpi/4.0.2

compiler/gcc/9.1/openmpi/4.0.2

compiler/gcc/9.1/mpich/3.3.1


Intel Compilers (Recommended)

Compilers for Serial Code :

C Compiler: icc, C++ Compiler: icpc, Fortran Compiler: ifort

Compilers for Parallel Code :

MPI C Compiler: mpiicc, MPI C++ Compiler: mpiicpc, MPI Fortran Compiler: mpiifort

suite/intel/parallelStudio/2015

suite/intel/parallelStudio/2018

suite/intel/parallelStudio/2019

suite/intel/parallelStudio/2020


Cuda Compiler (nvcc)

compiler/cuda/11.0/compilervars

compiler/cuda/10.1/compilervars

compiler/cuda/10.2/compilervars

compiler/cuda/10.0/compilervars

compiler/cuda/9.2/compilervars 

compiler/cuda/8.0/compilervars 

compiler/cuda/7.5/compilervars 

compiler/cuda/7.0/compilervars 

compiler/cuda/6.5/compilervars 

compiler/cuda/6.0/compilervars 
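
As a minimal sketch (saxpy.cu is a placeholder file name), a CUDA source file can be compiled with one of these modules and then run inside a GPU job obtained through PBS (e.g. with ngpus=1 in the select statement):

$ module load compiler/cuda/11.0/compilervars
$ nvcc saxpy.cu -o saxpy
$ ./saxpy        # run on an allocated GPU node, not on a login node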


NVIDIA HPC SDK

suite/nvidia-hpc-sdk/20.11/cuda11.0

suite/nvidia-hpc-sdk/20.7/cuda11.0

suite/nvidia-hpc-sdk/20.9/cuda11.0

suite/nvidia-hpc-sdk/21.7/cuda11.0



How to compile & run an MPI program on HPC?

The C, C++, and Fortran compiler wrappers for Intel MPI are mpiicc, mpiicpc, and mpiifort; for OpenMPI/MPICH/MVAPICH they are mpicc, mpic++, and mpifort, respectively. They become available once the corresponding module is loaded.


To compile code using Intel MPI (Recommended)

Load any Intel Parallel Studio module.

Ex.

module purge

module load suite/intel/parallelStudio/2018


General Compilation Command :

[compiler name] [name of file with MPI code] -o [name of the executable to generate]

e.g. MPI C code compilation :

mpiicc  matrix_multi.c  -o  intelmpi.exec

Successful compilation will create an executable file named:

intelmpi.exec

A sample MPI C program (matrix multiplication) is available at:

/home/apps/skeleton/examples/MPI/matrix_multi.c 

(Compilation steps are the same for C++ & Fortran code; simply change the compiler name, input file, etc.)


To run the generated executable on multiple cores via a PBS job, load the respective module & run the command:

mpirun -np $PBS_NTASKS  [path to the executable e.g. intelmpi.exec]


To compile code using OpenMPI/MPICH/MVAPICH, follow the same steps mentioned above with the respective modules & their compiler commands.

Sample batch job submission scripts are also available at:

/home/apps/skeleton/examples/MPI
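
A minimal batch script for the executable built above might look like the sketch below (the job name, project code, queue, and resource values are placeholders; adapt them to your project and refer to the PBS Tutorials):

#!/bin/bash
#PBS -N mpi_test
#PBS -P cc                              # placeholder project code
#PBS -q standard
#PBS -l select=2:ncpus=10:mpiprocs=10
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
module purge
module load suite/intel/parallelStudio/2018
mpirun -np $PBS_NTASKS ./intelmpi.exec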

Refer PBS Tutorials



How to check whether your job is using the allocated resources?

  • ssh to each allocated node of your job. (Use qstat -n to get the list of allocated nodes.)
  • Use the top -u $USER command (exit using Q) to check whether the job processes are running on the requested number of CPU cores and what their CPU utilization is on that node.
  • If you requested GPU nodes for your job, you can additionally check the GPU utilization on the allocated node using the nvidia-smi command, which shows GPU utilization as well as the processes running on the respective GPU cards.
    Using nvidia-smi -l 1 will refresh the output at an interval of 1 s (exit using Ctrl + C). A combined example is sketched below.
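
    For example, assuming job id 12345 and an allocated node chas101 (both placeholders):

    $ qstat -n 12345            # list the nodes allocated to the job
    $ ssh chas101               # log in to one of the allocated nodes
    $ top -u $USER              # check processes & CPU utilization (press Q to exit)
    $ nvidia-smi -l 1           # GPU jobs only: refresh GPU utilization every 1 s (Ctrl + C to exit)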


  • How to monitor the memory of a job while it is running?

    To avoid out-of-memory issues, a basic step is to submit the job with the #PBS -l place=scatter option, with a select value larger than the ncpus value, and with the centos=skylake option. (Skylake nodes have more memory than Haswell nodes, but this may increase your job's waiting time, as there are fewer Skylake nodes than Haswell nodes.)

    e.g., For high memory intensive job, 120 cores can be requested as
    #PBS -l select=12:ncpus=10:mpiprocs=10:centos=skylake
    #PBS -l place=scatter

    A more advanced step is to use the memory monitoring script: check the memory requirement of your job on each node by calling the script inside your batch script, get a clear idea of the memory requirement, and then use the mem option in the select statement.

    Usage :
    
    ### Put this before setting up environment for the job i.e., before  module load commands ####
    export NODES=`cat ${PBS_NODEFILE}|sort|uniq|tr '\n' ','|sed 's:,$::g'`
    echo ${NODES}
    pdsh -w ${NODES} "/home/apps/mem_monitor_script/monitorStats.sh ${PBS_O_WORKDIR}/MEM_MONITOR_${PBS_JOBID}" 2> /dev/null &
    export mem_check_pid=$!
    #####
    
    ## Execution commands here e.g. mpirun -np ...... etc ##
    
    
    ## Put this at the end of the script ##
    
    kill -9 $mem_check_pid
    
    ##################################
    
    

    Output : The script will create a folder named MEM_MONITOR_<PBS job id> containing one file per node with CPU RAM memory metrics. It also detects whether a GPU is allocated to the job; if so, it additionally creates a file containing GPU metrics.

    ------------

    Once you have an accurate idea of the memory requirement, you can use the mem option in the select statement. e.g., if the overall memory requirement for your job is 120 GB, the select statement can be #PBS -l select=12:ncpus=10:mpiprocs=10:mem=10gb, i.e. 12 * 10gb = 120gb.



    How to avoid unreliable connections, specifically for interactive jobs (screen command)?


    To avoid unreliable connections, specifically for interactive jobs, the screen command is one possible solution. Using screen, you can maintain a virtual session corresponding to your current terminal, provided you run your commands inside the screen session. The screen session remains available even if you get disconnected from your current login session for some reason.


    How to Use :

    Note : screen sessions are specific to a particular node; a screen session can be reattached/detached/removed only from the node on which it was created. Hence, keep a note of the login node where it was created.

    From any login node on HPC.

    1. Create a screen session

    screen -S [session name you want to give]

    2. Execute commands inside the screen session.

    Example : Submit your interactive job

    qsub -I -P [project code] -lselect=1:ncpus=4:mpiprocs=4 -lwalltime=01:00:00

    Execute the operations which you want to perform.

    The session will be available even after the connection is lost.

    You can also detach from the session using the following command inside the screen session.

    Ctrl A + Ctrl D OR screen -d

    Note : Executing the quit/exit command inside the screen session will terminate the session.

    Please make sure that you are not creating multiple screens inside the screen session.

    3. To reattach a particular screen session, use:

    screen -r [screen session name]

    4. If there are multiple screen sessions running on a particular node, you can get the list using:

    screen -ls

    5. To close/terminate a particular screen session, use:

    screen -XS [screen session name] quit

    Note : Using the screen command takes practice; it is the user's responsibility to make sure that all unwanted screen sessions are closed.

    The examples above cover basic use of the screen command; to learn more, check:

    man screen OR screen --help
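
    Putting the steps together, a typical workflow might look like the sketch below (the session name, project code, and resources are placeholders):

    # on a login node (remember which one, e.g. login03)
    screen -S mysession                                                  # 1. create a screen session
    qsub -I -P [project code] -lselect=1:ncpus=4 -lwalltime=01:00:00     # 2. work inside the session
    # ... detach with Ctrl A + Ctrl D, or the connection drops ...
    # later, from the SAME login node:
    screen -ls                                                           # list the sessions on this node
    screen -r mysession                                                  # reattach
    screen -XS mysession quit                                            # terminate when finished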



    Transfer of data from outside IIT Delhi to HPC :


    For small data size (* less than 100MB)

    • Use the Proxy Server to access the internet. Please follow the steps given here.

    For large data size but one time download (* 100MB to 1TB)

    • Use the IIT Delhi Download server. Steps are given here.
    • Use the Proxy Server to access the internet. Please follow the steps given here.

    For large data size but multiple time downloads (* greater than 1TB)

    • Use the Proxy Server to access the internet (Not Recommended). Please follow the steps given here.
    • Use the IIT Delhi Download server. Steps are given here.
    • Use of tunneling (Highly Recommended). Steps are given here.
    • * Approximate

    CESM

    Load Balancing Reference Links

  • Optimizing processor layout
  • Load balancing a case
  • Setting the case PE layout
  • Changing PE layout (Example)
  • Steps for preparing and submitting your job


    For CESM 1.2.2:


    Step 1: Load the module

    module load apps/CESM/cesm1_2_2/intel

     

    Step 2 : Create the CASE

    e.g.,
    export CASE=<path where the CASE will be created>
    create_newcase -case $CASE -res f09_f09 -compset F_2000_CAM5 -mach IITD1 -compiler intel

     

    Note : Set the machine to IITD1 and the compiler to intel.

     

    Step 3 : Setup and Build the case

  • ./cesm_setup
  • ./$CASE.build

    Step 4 : Submit the CASE

  • ./$CASE.submit

    Important Notes :
    1. Make sure a soft link to the user's scratch is present in your account.
    2. A cesm1_2_2_OUT directory will be created in your scratch. It contains two subdirectories, CASES and archives. The CASES directory will contain the CASE-wise run and bld directories; the archives directory will contain CASE-wise archive data.
    3. By default, the input directory is set to: /home/cas/faculty/dilipganguly/cesm/inputdata

     

    For CESM 2 :

    Use an interactive job to build the CESM case.

     

    Step 1 : Load the module

    For CESM 2.1.1

    module load apps/CESM/cesm2.1.1/intel

    For CESM 2.2.0

    module load apps/CESM/cesm2.2.0/intel2020

     

    Step 2 : Environment Setup

    Set the input data path, e.g. export CESMDATAROOT=/home/cas/faculty/dilipganguly/cesm/inputdata

     

    Step 3 : Create the CASE

    e.g.,
    export CASE=<path where the CASE will be created>
    create_newcase --case $CASE --compset FHIST --res f09_f09_mg17 --machine PADUM
    cd $CASE
    ./xmlchange --file env_run.xml --id DIN_LOC_ROOT --val $CESMDATAROOT

     

    Step 4 : Setup and Build the case

  • ./case.setup
  • ./case.build --skip-provenance-check

    Step 5 : Submit the CASE

  • ./case.submit

    Important Notes :
    1. Make sure a soft link to the user's scratch is present in your account.
    2. A cesm2.1.1_out or cesm2.2.0_out directory (depending on the version you are using) will be created in your scratch. It contains two subdirectories, CASES and archives. The CASES directory will contain the CASE-wise run and bld directories; the archives directory will contain CASE-wise archive data.

    WRF


    NEW TO WRF? Please refer to the WRF Online Tutorial.


    Step 1: WRF Dependencies

     

    Loading the module below will set all the dependencies, environment variables & paths required for WRF.


    For WRF 3.8.1 :

    module load apps/wrf/intel2015

    For WRF 4.3.3 :

    module load apps/wrf/deps/intel2019

     

    Note:

    Please remove environment variables related to hdf5, mpi, pnetcdf, netcdf, flex, bison, etc. from your .bashrc, and also unload any related modules; otherwise you may end up with multiple issues.

     

    Step 2: Several pre-compiled flavours of WRF are available at the following paths :

     

  • WRF 3.8.1 : /home/apps/skeleton/wrf/wrf_3.8.1.tar.gz
  • WRF-CHEM 3.8.1 : /home/apps/skeleton/wrf/wrf_chem_3.8.1.tar.gz
  • WRF-CHEM-KPP 3.8.1 : /home/apps/skeleton/wrf/wrf_chem_kpp_3.8.1.tar.gz
  • WRF 4.3.3 COMPLETE (WITH CHEM + KPP ) : /home/apps/skeleton/wrf/wrf_complete_4.3.3.tar.gz
  • Note:

    Each tar contains two pre-compiled folders, WPS & WRFV3 (WPS & WRF in the case of v4.3.3). Simply copy the tar file & extract it using tar -xzvf filename inside any folder. Make the input changes as per your requirement. A sample submit script named pbs_submit.sh is available in each WRFV3/run or WRF/run folder (make the necessary changes). Any changes made to the source code/files of a model component, such as physics or chem, require recompilation.

    Recompilation Steps (follow only when required): take an interactive job of at least 2 hours for the recompilation.

    Note:

    For WRF CHEM (without KPP): export WRF_CHEM=1

    Note:

    For WRF CHEM (with KPP): export WRF_CHEM=1 && export WRF_KPP=1

  • cd WRFV3 OR cd WRF
  • module load apps/wrf/intel2015 OR module load apps/wrf/deps/intel2019
  • cp configure.wrf configure.wrf.bkp
  • ./clean -a
  • cp configure.wrf.bkp configure.wrf
  • ./compile em_real

    For WPS & to run real.exe, take an interactive job, do module load apps/wrf/intel2015 OR module load apps/wrf/deps/intel2019, and run the binaries as per the WRF procedure.

     

    Some Useful Hints :

     

    To use the pnetcdf features of WRF, make the following changes in the namelist.input file present in either WRFV3/run or WRFV3/test/em_real/ (for 4.3.3: WRF/run or WRF/test/em_real/), i.e. wherever you prefer to submit the job from; also copy pbs_submit.sh to that folder.

  • io_form_history = 11,
  • io_form_restart = 11,
  • io_form_input = 11,
  • io_form_boundary = 11,
  • nocolons = .true.

    ANSYS

    Latest Available Modules

    apps/ANSYS/2024R1/precompiled
    apps/ANSYS/2024R1/CFX/precompiled
    apps/Fluent/2024R1/precompiled 
    

    Note:

    Ansys Mechanical is now available for use.

    To use Ansys Mechanical, please load this module: apps/ANSYS/2024R1/precompiled
    To open the GUI of Ansys Mechanical: mapdl -g
    To open the command line of Ansys Mechanical: mapdl
    Note: To open the ANSYS Workbench, please use: runwb2
    

    FLUENT

    Steps for preparing and submitting your job

    1. Case and Data files
      Transfer your case and data files to your /home or /scratch directory. If you are generating large data (> 10GB), please use /scratch.
    2. Journal File, e.g. journal.jou
      A "journal file" is needed to execute fluent commands in batch mode, e.g.
      rcd case_and_data
      /solve/dual-time-iterate 240000 50
      wcd output
      exit ok
      
      This journal file will read case_and_data.cas and case_and_data.dat. It will run a dual-time iteration with 50 iterations per time-step for 240000 time-steps, and write the output to "output".
      The following summarizes the fluent command line arguments:
      • 2ddp: Run a 2D simulation in double precision (remove dp to run in single precision)
      • -g: Run Fluent without the GUI
      • -ssh: Use ssh to log in to the allocated nodes (-rsh is the default)
      • -pinfiniband: Use the InfiniBand interconnect
      • -cnf=$PBS_NODEFILE: provide the list of nodes allocated to the PBS job to the fluent solver
      • &>: Redirect the Fluent output & error information to the "log" file
      • -mpi=intel: specifies the type of MPI (intel, mpich2, openmpi are currently supported on IITD HPC). Please load the appropriate MPI module as per the specified option.
    3. PBS submit file, e.g. pbssubmit.sh
      #!/bin/bash
      #PBS -N jobname
      #PBS -P department
      #PBS -m bea
      #PBS -M $USER@iitd.ac.in
      #PBS -l select=1:ncpus=20
      #PBS -l walltime=168:00:00
      #PBS -l fluent=1
      #PBS -l fluent_hpc=16
      #PBS -l software=ANSYS
      cd $PBS_O_WORKDIR
      module load apps/fluent
      #default version is 2020R1
       
      module load suite/intel/parallelStudio/2020
      
      
      time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=intel -cnf=$PBS_NODEFILE -pinfiniband &> log
      
      The example submit file will request 20 CPUs for 168 hours and run a "2ddp" job over the InfiniBand interconnect using the journal file "journal.jou". It will use 1 base fluent license (fluent=1) and 16 HPC licenses (fluent_hpc=16). The base license allows a 4-process parallel job.
      For any parallel simulation, only one base license will be used; the rest need to be HPC licenses, i.e. #PBS -l fluent=1 allows 4 processes & #PBS -l fluent_hpc=[remaining no. of processes].
    4. Job submission
      Submit the above job using "qsub pbssubmit.sh"

    Checklist

    NOTE:

    There are some compatibility issues with fluent 17.2 & openmpi (-mpi=openmpi). Please carry out the following modifications in the PBS job script's command section:
    apps/Fluent/17.2/precompiled
    time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=intel -cnf=$PBS_NODEFILE -pinfiniband &> log

     

  • To check the status or graph of the ANSYS licenses, click here

    PBS Resource       Ansys License Name   Description
    ansys_aa_ds        AA_DS Licenses       ANSYS Academic Teaching DesignSpace
    ansys_aa_mcad      AA_MCAD Licenses     ANSYS Academic CAD Interface
    ansys_aa_r_cfd     AA_R_CFD Licenses    ANSYS Academic Research CFD
    ansys_aa_r_et      AA_R_ET Licenses     ANSYS Academic Research Electronics Thermal
    ansys_aa_r_hpc     AA_R_HPC Licenses    ANSYS Academic Research HPC
    ansys_aa_r_me      AA_R_ME Licenses     ANSYS Academic Research Mechanical
    ansys_aa_t_a       AA_T_A Licenses      ANSYS Academic Teaching Advance
    ansys_aa_t_cfd     AA_T_CFD Licenses    ANSYS Academic Teaching CFD
    ansys_aunivres     AUNIVRES Licenses

    MATLAB


    If you are totally new to MATLAB, we recommend going through MATLAB Onramp first: Click Here


    Links for MATLAB installation on local system :

    MATLAB workshops





    Steps for preparing and submitting your job

    Please check the available modules with module avail apps/Matlab. You can load a MATLAB module via the module load command.

    Load any one module from the below list:

    apps/Matlab/r2016b/precompiled
    apps/Matlab/r2017a/precompiled
    apps/Matlab/r2017b/precompiled
    apps/Matlab/r2018b/precompiled
    apps/Matlab/r2019a/precompiled
    apps/Matlab/r2019b/precompiled
    apps/Matlab/r2020a/precompiled
    apps/Matlab/r2020b/precompiled
    apps/Matlab/r2021a/precompiled
    apps/Matlab/r2021b/precompiled
    apps/Matlab/r2022b/precompiled
    
    For short jobs/runs you can start the MATLAB GUI via the
    matlab
    command. Long-running jobs should be submitted via the batch system using a script file, for example:
    #!/bin/bash
    #PBS -N jobname
    #PBS -P project name
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=8
    #PBS -l walltime=20:00:00
    #PBS -l software=MATLAB
    cd $PBS_O_WORKDIR
    module load apps/Matlab/r2021b/precompiled
    time -p matlab -logfile matlab.log -batch  myprogram
    
    Now submit the job using the qsub command.

     

  • MATLAB licenses (including all toolboxes) are now available at the campus level, so your job will get a license whenever it requires one.
  • Instructions and documentation for running parallel jobs on a single node, as well as across nodes are available here (For MATLAB version 2020 & above):
    Getting Started with Parallel Computing using MATLAB on the PADUM HPC Cluster

    Necessary scripts (For remote job submission to HPC from local system): Linux (tar.gz) Windows (zip)

    GROMACS (2023.2)

    Description: Gromacs 2023.2 with Plumed 2.9 compiled using GCC 9.1 + CUDA 11.0

    Binaries Available: gmx_mpi, demux.pl, xplor2gmx.pl, plumed, plumed-config

    Use Module:

    module load apps/gromacs/2023.2/gnu
    Note:

    Please read each comment carefully. As per a recent benchmark for the above-mentioned version, we got the best efficiency with OMP_NUM_THREADS = 2. Make sure that the ncpus value is a multiple of the OMP_NUM_THREADS value. Change the project name, resources, queue, input file, etc. For GPU jobs there is no separate binary; gmx_mpi will work for any job, as GROMACS automatically detects whether a GPU is available.

    1. Interactive Job Submission
      CPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=12:mpiprocs=12 -lwalltime=00:30:00 -lsoftware=GROMACS
      NVIDIA K40 GPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=12:mpiprocs=12:ngpus=1:centos=haswell -lwalltime=00:30:00 -lsoftware=GROMACS
      NVIDIA V100 GPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=20:mpiprocs=20:ngpus=1:centos=skylake -lwalltime=00:30:00 -lsoftware=GROMACS
      NVIDIA A100 GPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=32:mpiprocs=32:ngpus=1:centos=icelake -lwalltime=00:30:00 -lsoftware=GROMACS

      Refer to any of the above qsub statements as per the type of job you want to submit. After getting the resources, use the execution commands below for whichever type of job you chose.

      module load apps/gromacs/2023.2/gnu
      export OMP_NUM_THREADS=2
      PROCS=$((PBS_NTASKS / OMP_NUM_THREADS))
      mpirun -np $PROCS  gmx_mpi mdrun  -ntomp $OMP_NUM_THREADS -nsteps 10000 -v -s TPRname.tpr

    2. Batch Job Submission
      Copy the sample batch job submission script, i.e. gromacsJobSubmit.sh, from /home/apps/skeleton/gromacsJobSubmit.sh

      Submit job using :
      qsub gromacsJobSubmit.sh
      ----------------------- gromacsJobSubmit.sh  -----------------------------
      #!/usr/bin/env bash
      #PBS -N Gromacs
      ## Change Project Name,Queue,Resources,Walltime, Input File Name etc.
      #PBS -P cc
      #PBS -q standard
      #PBS -M $USER@iitd.ac.in
      #PBS -m bea
      #########################################
      ## Refer to any of the below mentioned select statements
      ## as per the type of job you want to submit.
      ## Keep a single # before PBS for the line to be treated as a directive;
      ## more than one # before PBS is treated as a comment.
      ## Any command/statement other than PBS that starts with # is treated as a comment.
      ## Please comment/uncomment the portions as per your requirement before submitting the job
      
      ## CPU JOB
      #PBS -l select=1:ncpus=12:mpiprocs=12
      
      ## K40 GPU JOB
      ##PBS -lselect=1:ncpus=12:mpiprocs=12:ngpus=1:centos=haswell
      
      ## V100 GPU JOB
      ##PBS -lselect=1:ncpus=20:mpiprocs=20:ngpus=1:centos=skylake
      
      ## A100 GPU JOB
      ##PBS -lselect=1:ncpus=32:mpiprocs=32:ngpus=1:centos=icelake
      
      #PBS -l walltime=00:30:00
      #PBS -l software=GROMACS
      
      # Environment
      echo "==============================="
      echo $PBS_JOBID
      cat $PBS_NODEFILE
      echo "==============================="echo "==============================="
      cd $PBS_O_WORKDIR
      
      module purge
      module load apps/gromacs/2023.2/gnu
      
      export OMP_NUM_THREADS=2
      PROCS=$((PBS_NTASKS / OMP_NUM_THREADS))
      mpirun -np $PROCS  gmx_mpi mdrun  -ntomp $OMP_NUM_THREADS -nsteps 10000 -v -s TPRname.tpr

    NAMD (Version 2.13)

    Description : Two separate modules for the CPU & GPU versions; both are compiled with the Intel compiler + FFTW + TCL + CUDA 10.0
    Binaries Available : charmrun, flipbinpdb, flipdcd, namd2, psfgen, sortreplicas

    CPU version, Use Module:

     module load apps/NAMD/2.13/cpu/intel2015 

    GPU version, Use Module:

     module load apps/NAMD/2.13/gpu/intel2019
    Note:

    Please read each comment carefully. Change the project name, resources, queue, input file name, etc.

    1. Interactive Job Submission
      CPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=12 -lwalltime=00:30:00 -lsoftware=NAMD
      NVIDIA K40 GPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=12:ngpus=1:centos=haswell -lwalltime=00:30:00 -lsoftware=NAMD
      NVIDIA V100 GPU Job
      qsub -I -P cc -q standard -lselect=1:ncpus=12:ngpus=1:centos=skylake -lwalltime=00:30:00 -lsoftware=NAMD

      Refer to any of the above qsub statements as per the type of job you want to submit. After getting the resources, use the execution commands below for whichever type of job you chose.

      ------------ CPU JOB EXECUTION BLOCK ----------------
      module load apps/NAMD/2.13/cpu/intel2015
      mpirun -np $PBS_NTASKS namd2 +idlepoll stmv.namd
      -----------------------------------------------------
      -------------GPU JOB EXECUTION BLOCK ----------------
      module load apps/NAMD/2.13/gpu/intel2019

      NAMD GPU is built with SMP MPI CUDA; hence, you need to use OMP_NUM_THREADS greater than 1, otherwise it will give an error.

      export OMP_NUM_THREADS=2
      PROCS=$((PBS_NTASKS / OMP_NUM_THREADS))
      mpirun -np $PROCS namd2 +ppn $OMP_NUM_THREADS +idlepoll stmv.namd
    2. Batch Job Submission
      Copy the sample batch job submission script, i.e. namdSubmit.sh, from /home/apps/skeleton/namdSubmit.sh

      Submit job using :
      qsub namdSubmit.sh
      -------------- namdSubmit.sh ---------------------------------------------
      #!/bin/sh
      #PBS -N NAMD2.13
      ## Change Project Name,Queue,Resources,Walltime, Input file name etc.
      #PBS -P cc
      #PBS -q standard
      #PBS -M $USER@iitd.ac.in
      #PBS -m bea
      #########################################
      ## Refer to any of the below mentioned select statements
      ## as per the type of job you want to submit.
      ## Keep a single # before PBS for the line to be treated as a directive;
      ## more than one # before PBS is treated as a comment.
      ## Any command/statement other than PBS that starts with # is treated as a comment.
      ## Please comment/uncomment the portions as per your requirement before submitting the job
      
      ## CPU JOB
      #PBS -l select=1:ncpus=10
      
      ## K40 GPU JOB
      ##PBS -lselect=1:ncpus=10:ngpus=1:centos=haswell
      
      ## V100 GPU JOB
      ##PBS -lselect=1:ncpus=10:ngpus=1:centos=skylake
      
      #PBS -l walltime=00:30:00
      #PBS -l software=NAMD
      
      ## Environment
      echo "==============================="
      echo $PBS_JOBID
      cat $PBS_NODEFILE
      echo "==============================="
      module purge
      
      ##------------ CPU JOB EXECUTION BLOCK ----------------
      module load apps/NAMD/2.13/cpu/intel2015
      mpirun -np $PBS_NTASKS namd2 +idlepoll stmv.namd
      ##----------------------------------------------------
      
      ##------------ GPU JOB EXECUTION BLOCK ----------------
      # module load apps/NAMD/2.13/gpu/intel2019
      ## NAMD GPU built with SMP MPI CUDA
      ## Hence , need to use OMP_NUM_THREADS greater than 1 & 
      ## ncpus value multiple of OMP_NUM_THREADS,  otherwise will give error
      # export OMP_NUM_THREADS=2
      # PROCS=$((PBS_NTASKS / OMP_NUM_THREADS))
      # mpirun -np $PROCS namd2 +ppn $OMP_NUM_THREADS +idlepoll stmv.namd
      ##-----------------------------------------------------------------

    LAMMPS

    Description : LAMMPS CPM + all packages except vtk, compiled with Intel 2018 and CUDA 10.0


    Binaries Available:

    CPU: lmp_mpi_cpu
    Haswell GPU K40: lmp_mpi_gpu_k40
    Skylake GPU V100: lmp_mpi_gpu_v100


    Use Module:

    module load apps/lammps/intel/7Aug19

    Note:

    Please read each comment carefully. Change the project name, resources, queue, input file name, etc. For GPU jobs, the value after -pk gpu must be equal to the ngpus value. Refer to any of the below-mentioned qsub statements as per the type of job you want to submit.

    1. Interactive Job Submission
      CPU Job
      qsub -I -P cc -q standard -N LAMMPS_7AUG19 -lselect=1:ncpus=10 -lwalltime=00:30:00 -lsoftware=LAMMPS

      After getting the resources:

      module load apps/lammps/intel/7Aug19
      mpirun  -np $PBS_NTASKS  lmp_mpi_cpu -in in.lj
      NVIDIA K40 GPU Job
      qsub -I -P cc -q standard -N LAMMPS_7AUG19 -lselect=1:ncpus=10:ngpus=1:centos=haswell -lwalltime=00:30:00 -lsoftware=LAMMPS

      After getting the resources:

      module load apps/lammps/intel/7Aug19
      mpirun -np $PBS_NTASKS lmp_mpi_gpu_k40 -sf gpu -pk gpu 1 -in in.lj
      NVIDIA V100 GPU Job
      qsub -I -P cc -q standard -N LAMMPS_7AUG19 -lselect=1:ncpus=10:ngpus=1:centos=skylake -lwalltime=00:30:00 -lsoftware=LAMMPS

      After getting the resources:

      module load apps/lammps/intel/7Aug19
      mpirun -np $PBS_NTASKS lmp_mpi_gpu_v100 -sf gpu -pk gpu 1 -in in.lj

    2. Batch Job Submission
      Copy the sample batch job submission script, i.e. lammpsSubmit.sh, from /home/apps/skeleton/lammpsSubmit.sh

      Submit job using :
      qsub lammpsSubmit.sh
      ------------- lammpsSubmit.sh -------------------------------------------
      #!/bin/sh
      #PBS -N LAMMPS_7AUG19
      ## Change Project Name,Queue,Resources,Walltime,Input File Name etc.
      #PBS -P cc
      #PBS -q standard
      #PBS -M $USER@iitd.ac.in
      #PBS -m bea
      #########################################
      ## Refer to any of the below mentioned select statements
      ## as per the type of job you want to submit.
      ## Keep a single # before PBS for the line to be treated as a directive;
      ## more than one # before PBS is treated as a comment.
      ## Any command/statement other than PBS that starts with # is treated as a comment.
      ## Please comment/uncomment the portions as per your requirement before submitting the job
      
      
      ## CPU JOB
      #PBS -l select=1:ncpus=10
      
      ## K40 GPU JOB
      ##PBS -lselect=1:ncpus=10:ngpus=1:centos=haswell
      
      ## V100 GPU JOB
      ##PBS -lselect=1:ncpus=10:ngpus=1:centos=skylake
      
      #PBS -l walltime=00:30:00
      #PBS -l software=LAMMPS
      
      export OMP_NUM_THREADS=1
      
      ## Environment
      echo "==============================="
      echo $PBS_JOBID
      cat $PBS_NODEFILE
      echo "==============================="
      cd $PBS_O_WORKDIR
      
      module purge
      module load apps/lammps/intel/7Aug19
      
      ## CPU JOB
      mpirun -np $PBS_NTASKS lmp_mpi_cpu -in in.lj
      
      ## K40 GPU JOB
      #mpirun -np $PBS_NTASKS lmp_mpi_gpu_k40 -sf gpu -pk gpu 1 -in in.lj
      

      Explanation :

      -np  		         	#number of MPI processes
      -in input_file_name			#input file name

    PYTHON3

    Available Modules:

    Module Name:

    apps/anaconda/3
    Description: Has most of the packages used by IITD HPC users already installed; check with the conda list command. Not recommended for conda environment creation.

    Module Name:

    apps/anaconda/3EnvCreation
    Description: Dedicated to the creation of conda environments with the packages & versions required by the user; it also has basic Python packages installed.
    Module Names :
    compiler/intel/2019u5/intelpython3
    compiler/intel/2020u4/intelpython3.7
    Description : Intel distribution of Python, recommended for installing packages from source code (it has its own advantages).
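
    A minimal sketch of creating and using your own conda environment with the EnvCreation module (the environment path, Python version, and package names below are only examples):

    module load apps/anaconda/3EnvCreation
    # create the environment under your home directory
    conda create --prefix $HOME/condaenvs/myenv python=3.8 numpy scipy
    # activate it (depending on the conda version, "conda activate" may be needed instead)
    source activate $HOME/condaenvs/myenv
    python -c "import numpy; print(numpy.__version__)"   # quick sanity check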

    How to install python packages

    Load Python Compiler
    module load compiler/intel/2019u5/intelpython3

    There are two ways to install a Python package; choose either one. NOTE: Enable internet connectivity before installing a Python package with pip or before downloading source code; follow the steps at http://supercomputing.iitd.ac.in/?FAQ#internet


    • Using pip
    • export INST_DIR=/directory/to/install/package
      e.g  export INST_DIR=/home/cc/vfaculty/skapil.vfaculty/pythonpackages/python3/scipy
      OR
      export INST_DIR=$HOME/pythonpackages/python3/scipy
      NOTE: $HOME stores the path of the user's home directory; check with the echo $HOME command. In this case it is /home/cc/vfaculty/skapil.vfaculty
      
      mkdir -p $INST_DIR
      
      pip install package_name --prefix=$INST_DIR
      e.g., pip install scipy --prefix=$HOME/pythonpackages/python3/scipy
      
      ## Set the environment variable PYTHONPATH as below, or append the same line to your .bashrc
      
      export PYTHONPATH=$INST_DIR/python3.6/site-packages:$PYTHONPATH
      e.g., export PYTHONPATH=$HOME/pythonpackages/python3/scipy/python3.6/site-packages:$PYTHONPATH
      
      or
      
      pip install package_name --user
      ## This will install the python packages to hidden folder .local  in your home directory.
      export PYTHONPATH=$HOME/.local/lib/python3.6/site-packages:$PYTHONPATH  
      e.g., pip install scipy --user

    • From Source
    • Download the source file of the package. Example: scipy package installation, e.g., $ wget https://pypi.python.org/packages/d0/73/76fc6ea21818eed0de8dd38e1e9586725578864169a2b31acdeffb9131c8/scipy-1.0.0.tar.gz

      export INST_DIR=$HOME/pythonpackages/python3/scipy
      mkdir -p $INST_DIR
      tar xzf scipy-1.0.0.tar.gz
      cd scipy-1.0.0
      export PYTHONPATH=$INST_DIR/python3.6/site-packages:$PYTHONPATH
      python setup.py install --prefix=$INST_DIR
      

      To use the installed packages, set the PYTHONPATH variable correctly for each package every time, or add the same lines to the .bashrc file present in your home folder before using them.

      e.g.,
      export PYTHONPATH=/home/cc/vfaculty/skapil.vfaculty/pythonpackages/python3/scipy/python3.6/site-packages:$PYTHONPATH
      e.g.,
      export PYTHONPATH=/home/cc/vfaculty/skapil.vfaculty/pythonpackages/python3/numpy/python3.6/site-packages:$PYTHONPATH

    JUPYTER NOTEBOOK

    Login to the HPC

    Submit an Interactive Job (CPU or GPU job as per your requirement)

    e.g, qsub -I -P cc -q standard -lselect=1:ncpus=4 -lwalltime=00:30:00

    After getting the resources, you will land on one of the nodes,

    e.g, chas102

    Load any of the available anaconda modules as per your requirement :

    For Python 2.7 : module load apps/anaconda/2
    For Python 3 : module load apps/anaconda/3
    

    Copy the below-mentioned command exactly & run it (no changes required):

    jupyter notebook --ip=e$(hostname).hpc.iitd.ac.in --no-browser
    NOTE: Here 'e' indicates the ethernet

    It will show result similar to:

    To access the notebook, open this file in a browser:
    file:///run/user/85368/jupyter/nbserver-13953-open.html
    Or copy and paste one of the URL:
      http://echas102.hpc.iitd.ac.in:8888/?token=0a5da675d3174fda463d2bbc48edfb89ecbbf404a09b6985

    Copy the URL which contains hpc.iitd.ac.in into your desktop/laptop browser and press Enter :

    e.g., http://echas102.hpc.iitd.ac.in:8888/?token=0a5da675d3174fda463d2bbc48edfb89ecbbf404a09b6985

    NOTE : Here echas102.hpc.iitd.ac.in is taken as an example. Please use the actual node assigned to the job.
    The assigned node can be checked with qstat -n jobid .

    PARAVIEW

    Remote ParaView access using a forward connection over an SSH tunnel (client-server)


    Pre-requisites :
  • You need to be in the IITD network.
  • The ParaView version on the client, i.e. on your PC, should match the version you are using on HPC.

    Step 1: Server Setup "Run the pvserver on HPC using a batch or interactive job"

    Login to the HPC
  • Submit a PBS job (interactive/batch) with pvserver running on a node: copy the sample submission script from /home/apps/skeleton/paraviewServer.sh
  • Change the project name & resources as per your requirement.

    Batch Job Submission:

    Please read the given instructions carefully

    Script (paraviewServer.sh) :
    #!/usr/bin/env bash
    #PBS -N ParaviewServer
    #PBS -P cc
    #PBS -q standard
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=4
    #PBS -l walltime=00:30:00
    #PBS -l software=Paraview
    
    
    ## Client & Server both need to have same Paraview Version
    cd $PBS_O_WORKDIR
    
    module purge
    module load apps/visualization/paraview/4.4.0-Qt4-OpenGL2/precompiled
    module load suite/intel/parallelStudio/2018
    
    # Run the executable with mpirun;
    # note down the port no. you are using. Here we use the default ParaView port, i.e. 11111
    mpirun -np $PBS_NTASKS pvserver --server-port=11111
    
    
    Submit the batch job
    qsub paraviewServer.sh

    Interactive job submission
    qsub -I -P cc -l select=1:ncpus=4 -l walltime=00:30:00 -q standard
    You will land on a particular node. Refer to the above bash script for the meaning of the lines below.
    module purge
    module load apps/visualization/paraview/4.4.0-Qt4-OpenGL2/precompiled
    module load suite/intel/parallelStudio/2018
    mpirun -np $PBS_NTASKS pvserver --server-port=11111
    NOTE: Read the comments present in the batch job submission section; the meaning is the same here.

    Step 2: Port Forwarding

    PBS will allocate a node for you; note down the node name, e.g. chas112. Open another terminal and log in to HPC.

  • You can use any login node. Note: prefix the hostname allocated to you with e (the Ethernet interface).
  • Use the same port no. on which pvserver is running
  • Execute:
    ssh -L 11111:echas112:11111 login03 

    Step 3: Client Setup

  • One-time step : Install ParaView on your PC, i.e. the same version you are using on HPC.
  • Open ParaView on your machine/PC, which must be in the IITD network.
  • Go To : file --> connect --> add server

    Name : (Any e.x) IITD

    host : i.e echas112.hpc.iitd.ac.in

    port : 11111

    Configure --> Choose manual connection option --> save

    Note: This setting will change every time as the allocated node may be different every time.
  • Select the server when your job is in the running state, and click on connect.
  • Sometimes you may get a display warning; click OK.
  • To check whether you are successfully connected, check the output file of your job; it will show that the client is connected.
    MUMAX 3


    For Visualization:

    Login to the HPC

    Submit an interactive job for a GPU node as per your requirement.

    Please do not run on the login nodes.

    qsub -I -q standard -P cc -N mumax3 -lselect=1:ncpus=4:ngpus=1:mpiprocs=4 -lwalltime=00:30:00

    After getting the resources, you will land on one of the nodes,

    e.g, khas118

    Please run the below commands:


    module purge
    module load apps/mumax/3.10
    mumax3 -i

    It will show output like :

    //starting GUI at http://127.0.0.1:35367 //please open http://127.0.0.1:35367 in a browser //entering interactive mode

    Instead of 127.0.0.1 use e[hostname].hpc.iitd.ac.in

    e.g., if I am on the node named khas118, then I will type http://ekhas118.hpc.iitd.ac.in:35367 in my local (laptop/desktop) browser.

    NOTE: Here 'e' indicates the ethernet

    PYTORCH


    Note : A conda installation of PyTorch (GPU) above version 1.1.0 will work on V100 cards, i.e. vsky nodes (CUDA compute capability 70), but will not work on K40 cards, i.e. khas nodes (CUDA compute capability 35).

    To work on K40 cards, versions above 1.1.0 require installation from source; refer

    While installing from source, set the environment variable TORCH_CUDA_ARCH_LIST in the following way:

    export TORCH_CUDA_ARCH_LIST="3.5;7.0"


    PyTorch versions available on HPC that work on both K40 & V100 cards are :


    Pytorch 1.1.0, Torchvision 0.3.0 with cuda 10.0 : module load apps/anaconda/3

    Pytorch 1.5.0, Torchvision 0.6.0 with cuda 10.0 : module load apps/pytorch/1.5.0/gpu/anaconda3

    Pytorch 1.6.0, Torchvision 0.7.0 with cuda 10.0 : module load apps/pytorch/1.6.0/gpu/anaconda3

    Pytorch 1.9.0, Torchvision 0.10.0 with cuda 10.2 : module load apps/pytorch/1.9.0/gpu/intelpython3.7

    Pytorch 1.10.0, Torchvision 0.11.1, Torchaudio 0.10.0 with cuda 11.0 : module load apps/pytorch/1.10.0/gpu/intelpython3.7
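
    After loading one of these modules inside a GPU job, a quick check like the sketch below (run on the allocated GPU node, not on a login node) confirms that PyTorch can see the GPU:

    module load apps/pytorch/1.10.0/gpu/intelpython3.7
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
    # prints the PyTorch version and True if the GPU on the node is usable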


    GAUSSIAN 16


    Login to the HPC


    Method 1: Batch Job Submission:

    Please read the given instructions carefully

    Script (pbs_submit.sh) :

    --------------------
    #!/usr/bin/env bash
    #PBS -N g16
    #PBS -P cc
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    ### Use mem only if you know about memory requirement
    #PBS -l select=6:ncpus=4:mpiprocs=4:mem=32GB
    #PBS -l walltime=00:30:00
    #PBS -q standard
    # Environment
    echo "==============================="
    echo $PBS_JOBID
    cat $PBS_NODEFILE
    echo "==============================="
    cd $PBS_O_WORKDIR
    
    module purge
    module load apps/gaussian/16
    ##Specify the full path where Gaussian will put temporary files
    ##(always use a scratch location for it)
    
    export GAUSS_SCRDIR=${HOME/home/scratch}/gaussian16
    
    WLIST="$(sort -u ${PBS_NODEFILE} | awk -v ORS=, '{print $1}' | sed 's/,$//')"
    
    ##If you know how much memory your job requires,
    ##use mem in the select statement & the -m="" option,
    ##otherwise skip it.
    
    ##The input file needs to be in the directory from which you submit the job;
    ##otherwise use the full path to the input file.
    
    g16 -m="32gb" -p="$PBS_NTASKS" -w="$WLIST" test0397.com
    

    Submit the batch job

    qsub pbs_submit.sh

    Method 2: Interactive job submission

    qsub -I -q standard -P cc -N g16 -lselect=6:ncpus=4:mpiprocs=4 -lwalltime=00:30:00

    You will land on a particular node. Refer to the above bash script for the meaning of the lines below.

    module load apps/gaussian/16
    export GAUSS_SCRDIR=/scratch/cc/vfaculty/skapil.vfaculty/gaussian16
    WLIST="$(sort -u ${PBS_NODEFILE} | awk -v ORS=, '{print $1}' | sed 's/,$//')"
    g16 -m="32gb" -p="$PBS_NTASKS" -w="$WLIST" test0397.com

    List of unused modules

    apps/2.6.4/gnu
    apps/BEAST/1.10.1/precompiled
    apps/CFD-Post/18.0/precompiled
    apps/COMSOL/5.3a/precompiled
    apps/COMSOL/5.4/precompiled
    apps/Caffe/0.9999/gpu
    apps/Caffe/master/01.05.2017/gpu
    apps/FigTree/1.4.3/precompiled
    apps/NAMD/2.10/mic/intel
    apps/NAMD/2.10/mic/temp
    apps/NAMD/2.11/gpu/k20/intel
    apps/Tracer/1.7.1/precompiled
    apps/autoconf/2.69/gnu
    apps/autodyn/19.0/precompiled
    apps/automake/1.14/gnu
    apps/automake/1.15/gnu
    apps/avogadro/1.2.0/gnu
    apps/bazel/0.4.4/gnu1
    apps/bzip2/1.0.6/gnu
    apps/ccpem/171101/precompiled
    apps/cesm
    apps/cmake/2.8.12/gnu
    apps/codesaturne/4.0.6/intel
    apps/cpmd/appvars
    apps/cppunit/1.12.1/intel
    apps/ctffind/4.1.10/precompiled
    apps/date_utils/0.4.1/gnu
    apps/dssp/2.0.4/bin
    apps/dssp/2.2.1/gnu
    apps/ffmpeg/3.1.5/gnu
    apps/fluent/15.0/precompiled
    apps/fluent/17.2/precompiled
    apps/freesteam/2.1/gnu
    apps/gawk/4.1.4/gnu
    apps/gctf/gpu/precompiled
    apps/gphoto/2/2.5.14/gnu
    apps/gphoto/2/2.5.6/gnu
    apps/gradle/3.2/gnu
    apps/graphicsmagick/1.3.25/gnu
    apps/graphviz/2.38.0/intel
    apps/gromacs/4.6.7/intel
    apps/gromacs/5.1.4/gnu
    apps/grpc-java/1.3.0/gnu
    apps/grpc/1.1.2/gnu
    apps/imagic/precompiled
    apps/imod/4.9.6/gnu
    apps/lammps/11.08.2017/gpu1
    apps/lammps/16.02.2016/k20gpu
    apps/lammps/31.03.2017/gpu
    apps/lammps/gpu
    apps/lammps/gpu-mixed
    apps/lua/5.3.4/gnu
    apps/luajit/2.0.4/gnu
    apps/luarocks/2.4.2/gnu
    apps/modeller/9.19/precompiled
    apps/motioncor2/1.0.5/precompiled
    apps/mpas/4.0/intel
    apps/nasm/2.12.02/gnu
    apps/omniorb/4.2.1/intel
    apps/opencascade/6.9.1/intel
    apps/opencascade/6.9.1/precompiled
    apps/openfoam/4.1/intel
    apps/openfoam2.3.1
    apps/phylip/3.697/gnu
    apps/pyqt/4/4.11.4/gnu
    apps/pythonpackages/2.7.10/funcsigs/1.0.2/gnu
    apps/pythonpackages/2.7.10/mock/2.0.0/gnu
    apps/pythonpackages/2.7.10/pbr/1.10.0/gnu
    apps/pythonpackages/2.7.10/pyyaml/3.12/gnu
    apps/pythonpackages/2.7.10/scons/2.5.1/gnu
    apps/pythonpackages/2.7.13/tensorflow/1.3.1/gpu
    apps/pythonpackages/3.6.0/graph-tool/2.27/gnu
    apps/pytorch/0.3.1/gpu
    apps/redmd/2.3/gnu
    apps/resmap/1.1.4/gnu
    apps/rings/1.3.1/intel
    apps/rnnlib/2013.08.20/gnu
    apps/rstudio/0.98.1103/precompiled
    apps/salome/gui/7.8.0/precompiled
    apps/salome/kernel/7.8.0/intel
    apps/salome/kernel/7.8.0/precompiled
    apps/salome/yacs/7.8.0/precompiled
    apps/sip/4.18.1/gnu
    apps/socat/1.7.3.0/gnu
    apps/spcam2_0-cesm1_1_1
    apps/tar/1.28/gnu
    apps/tempy/1.1/gnu
    apps/test/openfoam-2.3.1
    apps/theano/0.8.0.dev/04.04.2016/gpu
    apps/torch/7/gpu
    apps/uvcdat/2.2/gnu
    apps/valgrind/3.11.0/ompi
    apps/visualization/paraview/4.4.0-Qt4/precompiled
    apps/visualization/uvcdat
    apps/wrf/3.6/appvars
    compiler/R/3.2.3/gnu
    compiler/pgi-community-edition/16.10/PrgEnv-pgi/16.10
    lib/QT/4.6.4/gnu
    lib/agg/2.5/gnu
    lib/atlas/3.10.2/gnu
    lib/beagle/3.1.0/gnu
    lib/blas/netlib/3.7.0/gnu
    lib/boost/1.59.0/gnu_ucs2
    lib/boost/1.64.0/gnu_ucs71
    lib/bzip2/1.0.6/gnu
    lib/caffedeps/master/intel
    lib/cgal/4.10.1/gnu_ucs71
    lib/cgal/4.7/gnu
    lib/cgns/3.3.0/intel
    lib/cudnn/5.0.4/precompiled
    lib/cudnn/5.1.10/precompiled
    lib/devil/1.7.8/gnu
    lib/eigen/2.0.17/gnu
    lib/eigen/3.2.8/gnu
    lib/esmf/6.3.0.1/gnu
    lib/esmf/6.3.0.1/intel
    lib/fftw/2.1.5/intel
    lib/fftw/3.2.2/gnu
    lib/fftw/3.3.7/gnu1
    lib/fltk/1.3.0/gnu
    lib/freeglut/3.0.0/gnu
    lib/freetype/2.8.1/gnu
    lib/ftgl/2.1.3/gnu
    lib/g2clib/1.4.0/gnu
    lib/gdal/2.0.1/gnu
    lib/glew/2.0.0/gnu
    lib/gphoto/2/2.5.14/gnu
    lib/gphoto/2/2.5.6/gnu
    lib/graphicsmagick/1.3.24/gnu
    lib/graphicsmagick/1.3.29/gnu
    lib/gtkglext/1.2.0/gnu
    lib/hdf/4/4.2.11/intel
    lib/imagemagick/7.0.7/gnu
    lib/jpeg/6b/k20/gnu
    lib/jpeg_turbo/1.5.1/gnu
    lib/lcms/1.19/gnu
    lib/libtool/2.4.6/k20/gnu
    lib/med/3.2.0/intel
    lib/metis/5.1.0/intel
    lib/mng/1.0.10/gnu
    lib/mpir/3.0.0/gnu
    lib/ntl/10.3.0/gnu
    lib/openbabel/2.3.2/gnu
    lib/openbabel/2.4.1/gnu
    lib/opencv/2.3/gnu
    lib/opencv/2.4.13/gnu
    lib/parmetis/4.0.3/intel
    lib/pcre2/10.23/gnu
    lib/phdf/5/1.10.2/intel
    lib/phdf/5/1.8.16/ompi
    lib/phdf5/1.8.20/gnu
    lib/pio/1.7.1/parallel/intel
    lib/ple/2.0.1/gnu
    lib/ptscotch/6.0.4/gnu1
    lib/ptscotch/6.0.4/intel
    lib/readline/6.3/gnu
    lib/rs/3.1.0/gnu
    lib/scotch/6.0.4/gnu
    lib/ssh2/1.8.0/gnu
    lib/szip/2.1/gcc/szip
    lib/tcl/8.6.7/gnu
    lib/trilions/12.12.1/gnu1
    lib/wxWidgets/3.1.0/gnu
    lib/x264/2016.11.30/gnu
    lib/xz/5.2.3/gnu
    lib/yaml/0.1.7/gnu
    lib/yasm/1.3.0/gnu
    pythonpackages/2.7.13/ASE/3.16.2/gnu
    pythonpackages/2.7.13/Babel/2.6.0/gnu
    pythonpackages/2.7.13/JINJA2/2.10/gnu
    pythonpackages/2.7.13/Werkzeug/0.14.1/gnu
    pythonpackages/2.7.13/biopython/1.70/gnu
    pythonpackages/2.7.13/catmap/0.3.0/gnu
    pythonpackages/2.7.13/click/7.0/gnu
    pythonpackages/2.7.13/flask/1.0.2/gnu
    pythonpackages/2.7.13/futures/3.2.0/gnu
    pythonpackages/2.7.13/gmpy/1.17/gnu
    pythonpackages/2.7.13/graphviz/0.10.1/gnu
    pythonpackages/2.7.13/itsdangerous/1.1.0/gnu
    pythonpackages/2.7.13/mpmath/1.1.0/gnu
    pythonpackages/2.7.13/tensorflow_tensorboard/0.1.2/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/csv/1.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/genshi/0.7/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/inflection/0.3.1/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/ipykernel/4.6.1/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/more-itertools/3.0.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/nose/1.3.7/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/quandl/3.1.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/requests/2.13.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/tornado_xstatic/0.2/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/xstatic/1.0.1/gnu
    pythonpackages/3.6.0/PyWavelets/0.5.2/gnu
    pythonpackages/3.6.0/enum34/1.1.6/gnu
    pythonpackages/3.6.0/mako/1.0.7/gnu
    pythonpackages/3.6.0/pydot-ng/1.0.0/gnu
    pythonpackages/3.6.0/ucs4/gnu/447/mock/2.0.0/gnu
    pythonpackages/3.6.0/ucs4/gnu/447/pbr/2.0.0/gnu
    r_packages/3.4.0/gnu/raster/2.5-8/gnu
    r_packages/3.4.0/gnu/rcpp/0.12.13/gnu
    r_packages/3.4.0/gnu/rgdal/1.2-15/gnu
    r_packages/3.4.0/gnu/rgeos/0.3-26/gnu
    r_packages/3.4.0/gnu/sp/1.2-5/gnu
    test/cp2k/2.6.0/gpu/cp2k
    test/quantum_espresso/gpu/quantum_espresso