Last modified: July 20 2020.
Contact: hpchelp [at] iitd.ac.in

Software

For a list of available software, please check the available modules:
$ module avail
If the required software is not available/listed in the modules, users can either:
  • install the software in their own account ($HOME), or
  • request central installation, via their supervisor.

Installing Software in your own account

Users can install software in their own accounts. Superuser access cannot be provided for any installation. Please check the modules before requesting any dependencies.
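As a sketch, a typical from-source install into your own account looks like the following (the package name "mytool" and all paths are illustrative placeholders; most autotools-based packages accept --prefix):

```shell
# "mytool" is a hypothetical package name; substitute your actual software.
PREFIX="$HOME/software/mytool"
mkdir -p "$PREFIX"
# Typical build-and-install sequence (commented out, since the tarball is hypothetical):
# tar -xzf mytool.tar.gz && cd mytool
# ./configure --prefix="$PREFIX"
# make && make install
# Afterwards, make the install visible to your shell (e.g. by adding to ~/.bashrc):
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"
```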

Requesting installation of software

If multiple users need access to a software package, the supervisor/Head/HPC representative can request central installation of the software. Please send email(s) to hpchelp.

CESM

Steps for preparing and submitting your job


For cesm1_2_2:


Step 1: Load the module

module load apps/CESM/cesm1_2_2/intel

 

Step 2 : Create the CASE

e.g.,
export CASE=<path where the CASE will be created>
create_newcase -case $CASE -res f09_f09 -compset F_2000_CAM5 -mach IITD1 -compiler intel

 

Note : Set the machine to IITD1 and the compiler to intel.

 

Step 3 : Setup and Build the case

  • ./cesm_setup
  • ./CASE.build

Step 4 : Submit the CASE

  • ./CASE.submit

Important Notes :
1. Make sure a soft link to your scratch is present in your account.
2. A cesm1_2_2_OUT directory will be created in your scratch, containing two subdirectories, CASES and archives. CASES holds the per-CASE run and bld directories; archives holds the per-CASE archived data.
3. By default, the input directory is set to /home/cas/faculty/dilipganguly/cesm/inputdata
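Note 1 above can be satisfied with a soft link in your home directory; a sketch (the scratch path is an assumed example, use your actual scratch location):

```shell
# Assumed example path; replace /scratch/$USER with your actual scratch directory.
SCRATCH_DIR="/scratch/$USER"
if [ ! -e "$HOME/scratch" ] && [ ! -L "$HOME/scratch" ]; then
    ln -s "$SCRATCH_DIR" "$HOME/scratch"   # create the soft link
fi
ls -ld "$HOME/scratch"
```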

     

For cesm2_1_1 :

Use login04 or an interactive job on the CentOS nodes.

     

Step 1 : Load the module

    module load apps/CESM/cesm2.1.1/intel

     

    Step 2 : Environment Setup

Set the input data path, e.g. export CESMDATAROOT=${SCRATCH}/cesm2_inputdata

     

    Step 3 : Create the CASE

    e.g.,
export CASE=<path where the CASE will be created>
    create_newcase --case $CASE --compset FHIST --res f09_f09_mg17 --machine PADUM
    cd $CASE
    ./xmlchange --file env_run.xml --id DIN_LOC_ROOT --val $CESMDATAROOT

     

Step 4 : Setup and Build the case

  • ./case.setup
  • ./case.build --skip-provenance-check

Step 5 : Submit the CASE

  • ./case.submit

Important Notes :
1. Make sure a soft link to your scratch is present in your account.
2. A cesm2.1.1_out directory will be created in your scratch, containing two subdirectories, CASES and archives. CASES holds the per-CASE run and bld directories; archives holds the per-CASE archived data.

    WRF

    Step 1: WRF Dependencies

     

Loading the below module will set all the dependencies, environment variables & paths required for wrf.

    module load apps/wrf/intel2015

     

    Note:

Please remove from your .bashrc any environment variables related to hdf5, mpi, pnetcdf, netcdf, flex, bison, etc., and also unload any loaded modules related to the same; otherwise this causes problems.

     

Step 2: Three different compiled flavours of wrf 3.8.1 are available at the following paths :

     

  • WRF 3.8.1 : /home/apps/skeleton/wrf/wrf_3.8.1.tar.gz
  • WRF-CHEM 3.8.1 : /home/apps/skeleton/wrf/wrf_chem_3.8.1.tar.gz
  • WRF-CHEM-KPP 3.8.1 : /home/apps/skeleton/wrf/wrf_chem_kpp_3.8.1.tar.gz
Note:

    Each tar contains two folders, WPS & WRFV3 (already compiled). You simply need to copy the tar file & extract it using tar -xzvf filename inside any folder. Make the changes for input as per your requirement. A sample submit script named pbs_submit.sh is available in each WRFV3/run folder (make the necessary changes). Changes made in the source code/files of any model component (physics, chem, etc.) need recompilation.

Recompilation Steps : (take an interactive job of at least 2 hours for recompilation)

    Note:

    For WRF CHEM (without KPP): export WRF_CHEM=1

    Note:

    For WRF CHEM (with KPP): export WRF_CHEM=1 && export WRF_KPP=1

  • cd WRFV3
  • module load apps/wrf/intel2015
  • cp configure.wrf configure.wrf.bkp
  • ./clean -a
  • cp configure.wrf.bkp configure.wrf
  • ./compile em_real

    For WPS & to run real.exe, take an interactive job, do module load apps/wrf/intel2015, and run the binaries as per the usual WRF procedure.

     

    Some Useful Hints :

     

    To use the pnetcdf features of wrf, make the following changes in the namelist.input file present in either WRFV3/run or WRFV3/test/em_real/ (i.e., wherever you prefer to submit the job from); also copy pbs_submit.sh to that folder.

  • io_form_history = 11,
  • io_form_restart = 11,
  • io_form_input = 11,
  • io_form_boundary = 11,
  • nocolons = .true.
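Collected together, the namelist.input fragment would look like the sketch below (placing these io_form_* and nocolons settings in the &time_control section is the usual WRF convention; verify against your own namelist):

```
&time_control
 ! ... existing settings unchanged ...
 io_form_history  = 11,
 io_form_restart  = 11,
 io_form_input    = 11,
 io_form_boundary = 11,
 nocolons         = .true.,
/
```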

    ANSYS FLUENT

    Steps for preparing and submitting your job

    1. Case and Data files
    2. Transfer your case and data files to your /home or /scratch directory. If you are generating large data (> 10GB), please use /scratch.
    3. Journal File, e.g. journal.jou
    4. A "journal file" is needed to execute fluent commands in batch mode, e.g.
      rcd case_and_data
      /solve/dual-time-iterate 240000 50
      wcd output
      exit ok
      
      This journal file will read case_and_data.cas and case_and_data.dat, run a dual-time iteration with 50 iterations per time-step for 240000 time-steps, and write the output to "output".
      Following summarizes the fluent command line arguments:
      • 2ddp: Run a 2D simulation in double precision (remove dp to run in single precision)
      • -g: Run Fluent without the GUI
      • -ssh: Use ssh to login to the available nodes(-rsh is default)
      • -pinfiniband: Use infiniband interconnect
      • -cnf=$PBS_NODEFILE: the list of nodes allocated to the PBS job is also provided to fluent solver.
      • &> : Redirect the Fluent output & error information to "log" file.
      • -mpi=openmpi: specifies the type of MPI (intel,mpich2,openmpi are currently supported on IITD HPC). Please load appropriate mpi module as per specified option.
    5. PBS submit file, e.g. pbssubmit.sh
    6. #!/bin/bash
      #PBS -N jobname
      #PBS -P department
      #PBS -m bea
      #PBS -M $USER@iitd.ac.in
      #PBS -l select=1:ncpus=20
      #PBS -l walltime=168:00:00
      #PBS -l fluent=1
      #PBS -l fluent_hpc=4
      #PBS -l software=ANSYS
      cd $PBS_O_WORKDIR
      module load apps/fluent
      #default version is 15.0.7 
      
      module load compiler/mpi/openmpi/1.10.0/gnu
      
      time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=openmpi -cnf=$PBS_NODEFILE -pinfiniband &> log
      
      The example submit file will request 20 cpus for 168 hours and run a "2ddp" job over infiniband interconnect using the journal file "journal.jou". It will use 1 base fluent license (fluent=1) and 4 HPC licenses (fluent_hpc=4). The base license allows a 16 process parallel job.
    7. Job submission
    8. Submit the above job using "qsub pbssubmit.sh"
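The license counts follow from the base license covering a 16-process parallel job, with one fluent_hpc license per additional process. A small sketch of that arithmetic (the 16-process rule is taken from the note above):

```shell
# For a 20-cpu job: base license covers 16 processes, HPC licenses cover the rest.
ncpus=20
base_covers=16
fluent_hpc=$(( ncpus > base_covers ? ncpus - base_covers : 0 ))
echo "#PBS -l fluent=1"
echo "#PBS -l fluent_hpc=$fluent_hpc"   # prints fluent_hpc=4 for ncpus=20
```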

    Checklist

    NOTE:

    There are some compatibility issues with fluent 17.2 & openmpi (-mpi=openmpi). Please carry out the following modifications in the PBS job script's command section:
    module load apps/Fluent/17.2/precompiled
    time -p fluent -g 2ddp -t $PBS_NTASKS -i journal.jou -ssh -mpi=intel -cnf=$PBS_NODEFILE -pinfiniband &> log

     


    PBS Resource     ANSYS License Name   Description
    ansys_aa_ds      AA_DS                ANSYS Academic Teaching DesignSpace
    ansys_aa_mcad    AA_MCAD              ANSYS Academic CAD interface
    ansys_aa_r_cfd   AA_R_CFD             ANSYS Academic Research CFD
    ansys_aa_r_et    AA_R_ET              ANSYS Academic Research Electronics Thermal
    ansys_aa_r_hpc   AA_R_HPC             ANSYS Academic Research HPC
    ansys_aa_r_me    AA_R_ME              ANSYS Academic Research Mechanical
    ansys_aa_t_a     AA_T_A               ANSYS Academic Teaching Advance
    ansys_aa_t_cfd   AA_T_CFD             ANSYS Academic Teaching CFD
    ansys_aunivres   AUNIVRES

    MATLAB

    Steps for preparing and submitting your job

    Please check the available modules with module avail apps/Matlab. You can load a matlab
    module via:

    Load any one module from the below:

    module load apps/Matlab/r2016b/precompiled
    module load apps/Matlab/r2017a/precompiled
    module load apps/Matlab/r2017b/precompiled
    
    For short jobs/runs you can run the matlab GUI via the
    matlab
    command. Long-running jobs should be submitted via the batch system using a script file, for example:
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=8
    #PBS -l walltime=168:00:00
    #PBS -l software=MATLAB
    cd $PBS_O_WORKDIR
    module load apps/Matlab/r2014b/gnu
    time -p matlab -nosplash -nodisplay < myprogram.m > matlab.log
    
    Now submit the job using the qsub command.

     

  • Matlab licenses (including all toolboxes) are now available at campus level, so your job will get a license whenever it is required.
  • Instructions and documentation for running parallel jobs on a single node, as well as across nodes are available here:
    Documentation for shared storage (on HPC)
    Documentation for remote submission
    Necessary scripts: Linux (tar.gz) Windows (zip)

    GROMACS

    Steps for preparing and submitting your job

    You can check for available gromacs modules via
     module avail apps/gromacs
    ------------------------------- /home/soft/modules ---------------------------------------
    apps/gromacs/4.6.2         apps/gromacs/4.6.2_plumed  apps/gromacs/4.6.5_plumed  apps/gromacs5.1.1
    apps/gromacs/4.6.2_noncuda apps/gromacs/4.6.5         apps/gromacs/5.1.1
    
    You can load the gromacs version of your choice via (e.g. gromacs 4.6.2):
    module load apps/gromacs/4.6.2
    
    This loads & sets all the prerequisites for gromacs 4.6.2 (cuda). For short test runs you can run (say) the
    mdrun_mpi
    gromacs command from the gpu login nodes. Long-running jobs should be submitted via the batch system using a script file, for example:
    #!/bin/bash
    #PBS -N jobname
    #PBS -P department
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=4:ncpus=24:mpiprocs=2:ngpus=2
    #PBS -o stdout_file
    #PBS -e stderr_file
    #PBS -l walltime=24:00:00
    #PBS -l software=GROMACS
    echo "==============================="
    echo $PBS_JOBID
    cat $PBS_NODEFILE
    echo "==============================="
    cd $PBS_O_WORKDIR
    
    
    module load apps/gromacs/4.6.2
    time -p mpirun -np $PBS_NTASKS -machinefile $PBS_NODEFILE -genv OMP_NUM_THREADS 12 mdrun_mpi < gromacs specific input files & parameters >
    
    Now submit the job using the
    qsub
    command.
    This script will request 4 nodes with 2 gpu cards per node (8 gpu cards in total) & 96 cpu cores (24 cores per node).
    It will also launch 8 processes in total (2 processes per node); each process creates 12 threads (-genv OMP_NUM_THREADS 12), hence 24 threads per node.
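The process/thread accounting described above can be checked with simple shell arithmetic, using the numbers from the select statement:

```shell
# select=4:ncpus=24:mpiprocs=2:ngpus=2 with -genv OMP_NUM_THREADS 12
nodes=4; mpiprocs_per_node=2; ncpus_per_node=24; ngpus_per_node=2; omp_threads=12
total_ranks=$(( nodes * mpiprocs_per_node ))              # the value mpirun receives via -np: 8
threads_per_node=$(( mpiprocs_per_node * omp_threads ))   # 24, one thread per core on a node
total_gpus=$(( nodes * ngpus_per_node ))                  # 8
total_cores=$(( nodes * ncpus_per_node ))                 # 96
echo "$total_ranks ranks, $threads_per_node threads/node, $total_gpus gpus, $total_cores cores"
```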

    NAMD (XEON PHI)

    Steps for running your job on nodes having XEON PHI cards

      Add the nmics flag to the select statement in your pbs script as follows:
      For ex.
      #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
      
      or
      
      #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards 
      
      #NOTE : ONLY VALUE 1 and 2 are allowed for nmics flag
      
      #Add following command in your pbs script.
      module load apps/NAMD/2.10/mic/intel    #To load namd xeon phi binary and libraries
      
      #Command to execute NAMD on xeon phi
      mpiexec.hydra -machinefile $PBS_NODEFILE -n 2 -perhost 2 namd2 +ppn 11 +commap 0,12 +pemap 1-11,13-23 +devices 0,1 ./stmv.namd
      
      
    • Explanation
    • mpiexec.hydra  			#Program to launch MPI
      -machinefile $PBS_NODEFILE  	#Nodes allotted to the current job
      -n 				#Number Of Processes
      -perhost 			#MPI processes per host
      namd2				#namd binary name
      +ppn      			#Worker processes (threads) per MPI process
      +commap 0,12			#Communication thread to process mapping (threads 1-11 communicate with process 0; threads 13-23 with process 12)
      +pemap 1-11,13-23		#Worker thread (PE) mapping
      +devices 0,1			#MIC cards to use
      stmv.namd			#input file
      For any issues/errors, please mail hpchelp@iitd.ac.in
      

    LAMMPS (XEON PHI)

    Steps for running your job on nodes having XEON PHI cards

      Add the nmics flag to the select statement in your pbs script as follows:
      For ex.
      #PBS -l select=1:ncpus=24:nmics=1       #if you want to use one mic card only
      
      or
      
      #PBS -l select=1:ncpus=24:nmics=2       #if you want to use two mic cards
      
      #NOTE : ONLY VALUE 1 and 2 are allowed for nmics flag
      
      #Add following command in your pbs script.
      module load apps/lammps/intel_phi/lammps    #To load LAMMPS xeon phi binaries and libraries
      
      
    • Add following Lines to LAMMPS input file
    • package intel 2 mode mixed balance $b
      package omp 0
      suffix $s
      
      #Command to execute LAMMPS on xeon phi
      mpiexec.hydra -np 24 -machinefile $PBS_NODEFILE  -genv OMP_NUM_THREADS 1 lmp_intel_phi -in in.intel.rhodo -log none -v b -1 -v s intel
      
    • Explanation
      -np  		         	#number of MPI processes
      -genv OMP_NUM_THREADS  	        #number of threads per process
      -in input_file_name		#input file name
      -log 				#where to send log output
      -v s intel 			#suffix value = intel
      -v b 				#0 = MIC cards not used, -1 = balance the workload between host and cards, 0.75 = give 75% of the workload to the MIC cards
      For any issues/errors, please mail hpchelp@iitd.ac.in
      

    PYTHON

    Python:

    To load python 2.7.13 in your environment, please use:

    module load compiler/python/2.7.13/ucs4/gnu/447

    Please note that after executing this command, python packages like numpy and scipy will not yet be available in your environment. After loading python, you need to explicitly load the entire python package suite:

    module load pythonpackages/2.7.13/ucs4/gnu/447/package_suite/1

    The following command lists the available python packages within a "package_suite":

    module help pythonpackages/2.7.13/ucs4/gnu/447/package_suite/1

    Or, you could selectively load only the required modules from the output of the following command:

    module avail pythonpackages/2.7.13

    JUPYTER NOTEBOOK

    Login to the HPC

    Submit an Interactive Job (CPU or GPU job as per your requirement)

    e.g, qsub -I -P cc -q standard -lselect=1:ncpus=4 -lwalltime=00:30:00

    After getting the resources, you will land on one of the nodes,

    e.g, chas102

    Load any of available anaconda module as per your requirement :

    For Python 2.7 : module load apps/anaconda/2
    For Python 3 : module load apps/anaconda/3
    

    Copy exactly the below mentioned command & run it (no changes required):

    jupyter notebook --ip=e$(hostname).hpc.iitd.ac.in --no-browser
    NOTE: Here the 'e' prefix indicates the ethernet interface of the node

    It will show result similar to:

    To access the notebook, open this file in a browser:
    file:///run/user/85368/jupyter/nbserver-13953-open.html
    Or copy and paste one of the URLs:
      http://echas102.hpc.iitd.ac.in:8888/?token=0a5da675d3174fda463d2bbc48edfb89ecbbf404a09b6985

    Copy the url which contains hpc.iitd.ac.in to your desktop/laptop browser and press enter :

    e.g., http://echas102.hpc.iitd.ac.in:8888/?token=0a5da675d3174fda463d2bbc48edfb89ecbbf404a09b6985

    NOTE : Here echas102.hpc.iitd.ac.in is taken as an example. Please use the actual node assigned to the job.
    The assigned node can be checked with qstat -n jobid .

    PARAVIEW

    Remote paraview access using Forward Connection Over SSH Tunnel (Client- Server)


    Pre-requisites :
  • Need to be in IITD network
  • The Paraview version on the client, i.e. on your PC, should match the version you are using on the HPC.
    Step 1: Server Setup (run the pvserver on HPC using a batch or interactive job)

    Login to the HPC
  • Submit a PBS job (interactive/batch) with pvserver running on a node : copy the sample submission script from : /home/apps/skeleton/paraviewServer.sh
  • Change the project name & resources as per your requirement.

    Batch Job Submission:

    Please read the given instructions carefully

    Script (paraviewServer.sh) :
    #!/usr/bin/env bash
    #PBS -N ParaviewServer
    #PBS -P cc
    #PBS -q standard
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    #PBS -l select=1:ncpus=4
    #PBS -l walltime=00:30:00
    #PBS -l software=Paraview
    
    
    ## Client & Server both need to have same Paraview Version
    cd $PBS_O_WORKDIR
    
    module purge
    module load apps/visualization/paraview/4.4.0-Qt4-OpenGL2/precompiled
    module load suite/intel/parallelStudio/2018
    
    # Run the executable with mpirun.
    # Note down the port number you are using; here we use Paraview's default port, 11111.
    mpirun -np $PBS_NTASKS pvserver --server-port=11111
    
    
    Submit the batch job
    qsub paraviewServer.sh

    Interactive job submission
    qsub -I -P cc -l select=1:ncpus=4 -l walltime=00:30:00 -q standard
    You will land on the allocated node. The lines below have the same meaning as in the batch script above.
    module purge
    module load apps/visualization/paraview/4.4.0-Qt4-OpenGL2/precompiled
    module load suite/intel/parallelStudio/2018
    mpirun -np $PBS_NTASKS pvserver --server-port=11111
    NOTE: Read the comments present in the batch job submission section; the meaning is the same here.

    Step 2: Port Forwarding

    PBS will allocate a node for you; note down the node name, e.g. chas112. Then open another terminal and log in to the HPC.

  • You can use any login node. Note: prefix the hostname allocated to you with e.
  • Use the same port no. on which pvserver is running
  • Execute:
    ssh -L 11111:echas112:11111 login03 

    Step 3: Client Setup

  • One-time step : Install Paraview on your PC, i.e. the same version you are using on the HPC
  • Open Paraview on your machine/PC, which must be in the IITD network
  • Go To : file --> connect --> add server

    Name : (Any e.x) IITD

    host : e.g. echas112.hpc.iitd.ac.in

    port : 11111

    Configure --> Choose manual connection option --> save

    Note: This setting will need to be redone every time, as the allocated node may be different each time.
  • Select the server when your job is in running state, and click on connect.
  • Sometimes you may get display warning, click ok.
  • To check whether you are successfully connected, check the output file of your job; it will show that the client is connected.

    MUMAX 3


    Login to the HPC

    Submit Interactive Job for GPU node as per your requirement

    Please do not run on the login nodes

    qsub -I -q standard -P cc -N mumax3 -lselect=1:ncpus=4:ngpus=1:mpiprocs=4 -lwalltime=00:30:00

    After getting the resources, you will land on one of the nodes,

    e.g, khas118

    Please run the below commands:


    module purge
    module load apps/mumax/3.10Beta
    mumax3

    This will show output like :

    //starting GUI at http://127.0.0.1:35367 //please open http://127.0.0.1:35367 in a browser //entering interactive mode

    Instead of 127.0.0.1, use e[hostname].hpc.iitd.ac.in

    e.g., if I am on the node khas118, then I will type http://ekhas118.hpc.iitd.ac.in:35367 in my local (laptop/desktop) browser.

    NOTE: Here 'e' indicates the ethernet

    GAUSSIAN 16


    Login to the HPC


    Method 1: Batch Job Submission:

    Please read the given instructions carefully

    Script (pbs_submit.sh) :

    --------------------
    #!/usr/bin/env bash
    #PBS -N g16
    #PBS -P cc
    #PBS -m bea
    #PBS -M $USER@iitd.ac.in
    ### Use mem only if you know about memory requirement
    #PBS -l select=6:ncpus=4:mpiprocs=4:mem=32GB
    #PBS -l walltime=00:30:00
    #PBS -q standard
    # Environment
    echo "==============================="
    echo $PBS_JOBID
    cat $PBS_NODEFILE
    echo "==============================="
    cd $PBS_O_WORKDIR
    
    module purge
    module load apps/gaussian/16
    ##Specify the full  path where gaussian will put temporary files 
    ##(Always Use scratch location for it.)
    
    export GAUSS_SCRDIR=/scratch/cc/vfaculty/skapil.vfaculty/gaussian16
    
    WLIST="$(sort -u ${PBS_NODEFILE} | awk -v ORS=, '{print $1}' | sed 's/,$//')"
    
    ##If you know how much memory your job requires,
    ##use mem in the select statement & -m="";
    ##otherwise skip it.
    
    ##The input file must be in the directory from which you submit the job;
    ##otherwise use the full path to the input file.
    
    g16 -m="32gb" -p="$PBS_NTASKS" -w="$WLIST" test0397.com
    

    Submit the batch job

    qsub pbs_submit.sh
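The WLIST line in the script condenses $PBS_NODEFILE (which lists each host once per MPI rank) into the comma-separated unique host list that g16's -w flag expects. A standalone demonstration with hypothetical node names:

```shell
# Hypothetical node file contents; PBS writes one line per allocated rank.
printf 'node1\nnode1\nnode2\nnode2\n' > nodefile.txt
# sort -u keeps each host once; awk joins with commas; sed strips the trailing comma.
WLIST="$(sort -u nodefile.txt | awk -v ORS=, '{print $1}' | sed 's/,$//')"
echo "$WLIST"   # node1,node2
```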

    

    Method 2: Interactive job submission

    qsub -I -q standard -P cc -N g16 -lselect=6:ncpus=4:mpiprocs=4 -lwalltime=00:30:00

    You will land on the allocated node. The lines below have the same meaning as in the batch script above.

    module load apps/gaussian/16
    export GAUSS_SCRDIR=/scratch/cc/vfaculty/skapil.vfaculty/gaussian16
    WLIST="$(sort -u ${PBS_NODEFILE} | awk -v ORS=, '{print $1}' | sed 's/,$//')"
    g16 -m="32gb" -p="$PBS_NTASKS" -w="$WLIST" test0397.com

    List of unused modules

    apps/2.6.4/gnu
    apps/BEAST/1.10.1/precompiled
    apps/CFD-Post/18.0/precompiled
    apps/COMSOL/5.3a/precompiled
    apps/COMSOL/5.4/precompiled
    apps/Caffe/0.9999/gpu
    apps/Caffe/master/01.05.2017/gpu
    apps/FigTree/1.4.3/precompiled
    apps/NAMD/2.10/mic/intel
    apps/NAMD/2.10/mic/temp
    apps/NAMD/2.11/gpu/k20/intel
    apps/Tracer/1.7.1/precompiled
    apps/autoconf/2.69/gnu
    apps/autodyn/19.0/precompiled
    apps/automake/1.14/gnu
    apps/automake/1.15/gnu
    apps/avogadro/1.2.0/gnu
    apps/bazel/0.4.4/gnu1
    apps/bzip2/1.0.6/gnu
    apps/ccpem/171101/precompiled
    apps/cesm
    apps/cmake/2.8.12/gnu
    apps/codesaturne/4.0.6/intel
    apps/cpmd/appvars
    apps/cppunit/1.12.1/intel
    apps/ctffind/4.1.10/precompiled
    apps/date_utils/0.4.1/gnu
    apps/dssp/2.0.4/bin
    apps/dssp/2.2.1/gnu
    apps/ffmpeg/3.1.5/gnu
    apps/fluent/15.0/precompiled
    apps/fluent/17.2/precompiled
    apps/freesteam/2.1/gnu
    apps/gawk/4.1.4/gnu
    apps/gctf/gpu/precompiled
    apps/gphoto/2/2.5.14/gnu
    apps/gphoto/2/2.5.6/gnu
    apps/gradle/3.2/gnu
    apps/graphicsmagick/1.3.25/gnu
    apps/graphviz/2.38.0/intel
    apps/gromacs/4.6.7/intel
    apps/gromacs/5.1.4/gnu
    apps/grpc-java/1.3.0/gnu
    apps/grpc/1.1.2/gnu
    apps/imagic/precompiled
    apps/imod/4.9.6/gnu
    apps/lammps/11.08.2017/gpu1
    apps/lammps/16.02.2016/k20gpu
    apps/lammps/31.03.2017/gpu
    apps/lammps/gpu
    apps/lammps/gpu-mixed
    apps/lua/5.3.4/gnu
    apps/luajit/2.0.4/gnu
    apps/luarocks/2.4.2/gnu
    apps/modeller/9.19/precompiled
    apps/motioncor2/1.0.5/precompiled
    apps/mpas/4.0/intel
    apps/nasm/2.12.02/gnu
    apps/omniorb/4.2.1/intel
    apps/opencascade/6.9.1/intel
    apps/opencascade/6.9.1/precompiled
    apps/openfoam/4.1/intel
    apps/openfoam2.3.1
    apps/phylip/3.697/gnu
    apps/pyqt/4/4.11.4/gnu
    apps/pythonpackages/2.7.10/funcsigs/1.0.2/gnu
    apps/pythonpackages/2.7.10/mock/2.0.0/gnu
    apps/pythonpackages/2.7.10/pbr/1.10.0/gnu
    apps/pythonpackages/2.7.10/pyyaml/3.12/gnu
    apps/pythonpackages/2.7.10/scons/2.5.1/gnu
    apps/pythonpackages/2.7.13/tensorflow/1.3.1/gpu
    apps/pythonpackages/3.6.0/graph-tool/2.27/gnu
    apps/pytorch/0.3.1/gpu
    apps/redmd/2.3/gnu
    apps/resmap/1.1.4/gnu
    apps/rings/1.3.1/intel
    apps/rnnlib/2013.08.20/gnu
    apps/rstudio/0.98.1103/precompiled
    apps/salome/gui/7.8.0/precompiled
    apps/salome/kernel/7.8.0/intel
    apps/salome/kernel/7.8.0/precompiled
    apps/salome/yacs/7.8.0/precompiled
    apps/sip/4.18.1/gnu
    apps/socat/1.7.3.0/gnu
    apps/spcam2_0-cesm1_1_1
    apps/tar/1.28/gnu
    apps/tempy/1.1/gnu
    apps/test/openfoam-2.3.1
    apps/theano/0.8.0.dev/04.04.2016/gpu
    apps/torch/7/gpu
    apps/uvcdat/2.2/gnu
    apps/valgrind/3.11.0/ompi
    apps/visualization/paraview/4.4.0-Qt4/precompiled
    apps/visualization/uvcdat
    apps/wrf/3.6/appvars
    compiler/R/3.2.3/gnu
    compiler/pgi-community-edition/16.10/PrgEnv-pgi/16.10
    lib/QT/4.6.4/gnu
    lib/agg/2.5/gnu
    lib/atlas/3.10.2/gnu
    lib/beagle/3.1.0/gnu
    lib/blas/netlib/3.7.0/gnu
    lib/boost/1.59.0/gnu_ucs2
    lib/boost/1.64.0/gnu_ucs71
    lib/bzip2/1.0.6/gnu
    lib/caffedeps/master/intel
    lib/cgal/4.10.1/gnu_ucs71
    lib/cgal/4.7/gnu
    lib/cgns/3.3.0/intel
    lib/cudnn/5.0.4/precompiled
    lib/cudnn/5.1.10/precompiled
    lib/devil/1.7.8/gnu
    lib/eigen/2.0.17/gnu
    lib/eigen/3.2.8/gnu
    lib/esmf/6.3.0.1/gnu
    lib/esmf/6.3.0.1/intel
    lib/fftw/2.1.5/intel
    lib/fftw/3.2.2/gnu
    lib/fftw/3.3.7/gnu1
    lib/fltk/1.3.0/gnu
    lib/freeglut/3.0.0/gnu
    lib/freetype/2.8.1/gnu
    lib/ftgl/2.1.3/gnu
    lib/g2clib/1.4.0/gnu
    lib/gdal/2.0.1/gnu
    lib/glew/2.0.0/gnu
    lib/gphoto/2/2.5.14/gnu
    lib/gphoto/2/2.5.6/gnu
    lib/graphicsmagick/1.3.24/gnu
    lib/graphicsmagick/1.3.29/gnu
    lib/gtkglext/1.2.0/gnu
    lib/hdf/4/4.2.11/intel
    lib/imagemagick/7.0.7/gnu
    lib/jpeg/6b/k20/gnu
    lib/jpeg_turbo/1.5.1/gnu
    lib/lcms/1.19/gnu
    lib/libtool/2.4.6/k20/gnu
    lib/med/3.2.0/intel
    lib/metis/5.1.0/intel
    lib/mng/1.0.10/gnu
    lib/mpir/3.0.0/gnu
    lib/ntl/10.3.0/gnu
    lib/openbabel/2.3.2/gnu
    lib/openbabel/2.4.1/gnu
    lib/opencv/2.3/gnu
    lib/opencv/2.4.13/gnu
    lib/parmetis/4.0.3/intel
    lib/pcre2/10.23/gnu
    lib/phdf/5/1.10.2/intel
    lib/phdf/5/1.8.16/ompi
    lib/phdf5/1.8.20/gnu
    lib/pio/1.7.1/parallel/intel
    lib/ple/2.0.1/gnu
    lib/ptscotch/6.0.4/gnu1
    lib/ptscotch/6.0.4/intel
    lib/readline/6.3/gnu
    lib/rs/3.1.0/gnu
    lib/scotch/6.0.4/gnu
    lib/ssh2/1.8.0/gnu
    lib/szip/2.1/gcc/szip
    lib/tcl/8.6.7/gnu
    lib/trilions/12.12.1/gnu1
    lib/wxWidgets/3.1.0/gnu
    lib/x264/2016.11.30/gnu
    lib/xz/5.2.3/gnu
    lib/yaml/0.1.7/gnu
    lib/yasm/1.3.0/gnu
    pythonpackages/2.7.13/ASE/3.16.2/gnu
    pythonpackages/2.7.13/Babel/2.6.0/gnu
    pythonpackages/2.7.13/JINJA2/2.10/gnu
    pythonpackages/2.7.13/Werkzeug/0.14.1/gnu
    pythonpackages/2.7.13/biopython/1.70/gnu
    pythonpackages/2.7.13/catmap/0.3.0/gnu
    pythonpackages/2.7.13/click/7.0/gnu
    pythonpackages/2.7.13/flask/1.0.2/gnu
    pythonpackages/2.7.13/futures/3.2.0/gnu
    pythonpackages/2.7.13/gmpy/1.17/gnu
    pythonpackages/2.7.13/graphviz/0.10.1/gnu
    pythonpackages/2.7.13/itsdangerous/1.1.0/gnu
    pythonpackages/2.7.13/mpmath/1.1.0/gnu
    pythonpackages/2.7.13/tensorflow_tensorboard/0.1.2/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/csv/1.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/genshi/0.7/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/inflection/0.3.1/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/ipykernel/4.6.1/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/more-itertools/3.0.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/nose/1.3.7/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/quandl/3.1.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/requests/2.13.0/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/tornado_xstatic/0.2/gnu
    pythonpackages/2.7.13/ucs4/gnu/447/xstatic/1.0.1/gnu
    pythonpackages/3.6.0/PyWavelets/0.5.2/gnu
    pythonpackages/3.6.0/enum34/1.1.6/gnu
    pythonpackages/3.6.0/mako/1.0.7/gnu
    pythonpackages/3.6.0/pydot-ng/1.0.0/gnu
    pythonpackages/3.6.0/ucs4/gnu/447/mock/2.0.0/gnu
    pythonpackages/3.6.0/ucs4/gnu/447/pbr/2.0.0/gnu
    r_packages/3.4.0/gnu/raster/2.5-8/gnu
    r_packages/3.4.0/gnu/rcpp/0.12.13/gnu
    r_packages/3.4.0/gnu/rgdal/1.2-15/gnu
    r_packages/3.4.0/gnu/rgeos/0.3-26/gnu
    r_packages/3.4.0/gnu/sp/1.2-5/gnu
    test/cp2k/2.6.0/gpu/cp2k
    test/quantum_espresso/gpu/quantum_espresso