
ICHEC Software

Information about software packages installed on the ICHEC systems.

GROMACS

Versions Installed

Stoney: 4.5.5 / 4.6

Fionn: 4.5.7 / 4.6.3 / 4.6.5 (Interfaced with PLUMED 2.0.2)

Description

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but as GROMACS is fast at calculating the nonbonded interactions there is also much research on non-biological systems, e.g. polymers.

On ICHEC's machines, GROMACS is installed with MPI and double-precision support enabled for general usage. In addition, a GPU-enabled version is available on Stoney. This version supports only sequential computation (i.e. no MPI or multi-threading) and only single precision, and only a subset of the features have been ported so far. The GPU version of GROMACS 4.5.5 relies on the OpenMM library, while the latest version (4.6) has native GPGPU support.
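The builds above are provided as environment modules, so the quickest way to see which GROMACS variants are available on the system you are logged in to is to query the module system before writing a job script. A minimal sketch (module names follow the versions listed above; exact names and output depend on the system):

# List the GROMACS modules installed on the current system
module avail gromacs

# On Fionn, load the molecular-modelling bundle first, then the build you need
module load molmodel
module load gromacs/intel/4.6.3

# Confirm which binaries the module has put on your PATH
which grompp_mpi mdrun_mpi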

License

GROMACS is available under the GNU General Public License.

Benchmarks

Version 4.6 on Stoney

Taking a 94,124-atom system (membrane protein, explicit solvent), the following figure shows the number of nanoseconds per day achieved by the CPU-only build (blue) compared with the CPU-GPGPU build (red) of GROMACS v4.6 on Stoney. Using 64 cores (8 nodes) with 16 GPGPUs yields an additional 17 ns/day on top of what the CPU-only version can compute.

Version 4.6.3 on Fionn

The following figure shows the same problem as on Stoney, comparing CPU-only runs on the ICE X partition (red), CPU-only runs on the GPGPU partition (green) and CPU-GPGPU runs (blue) with GROMACS v4.6.3 on Fionn. Using up to 80 cores (4 nodes of the GPGPU partition) with 8 GPGPUs, an additional 10 ns/day can be achieved over the CPU-only version.

Job Submission Example

To run the code in parallel on Fionn, please follow the example below.

#!/bin/bash
#PBS -N MyJobName
#PBS -j oe
#PBS -r n
#PBS -A MyProjectCode
#PBS -l nodes=2:ppn=24
#PBS -l walltime=00:10:00

module load molmodel
# v4.6.3/v4.6.5 for single precision and GPU on Fionn
module load gromacs/intel/4.6.3
# Or v4.5.7 for double precision on Fionn:
# module load gromacs/intel/4.5.7

cd $PBS_O_WORKDIR

# Pre-process the input. Here we use the single-precision build across multiple
# processes, hence the "_mpi" suffix; for the double-precision version use grompp_d.

# grompp_mpi -f grompp.mdp -p topol.top -c conf.gro -o water.tpr
grompp_mpi MyOptions and Files

# start the md

# When restarting jobs you must pass the '-noappend' flag to mdrun in order to obtain correct results
# mpiexec mdrun_mpi -s water.tpr -o water.trr -c water.out -g water.log
mpiexec mdrun_mpi MyOptions and files

# Or, with the v4.5.7 double-precision build:
# mpiexec mdrun_mpi_d -s water.tpr -o water.trr -c water.out -g water.log
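As the comment above notes, restarted jobs must pass -noappend to mdrun. A minimal restart sketch, assuming the previous run left its checkpoint in mdrun's default state.cpt file (file names are illustrative):

# Continue a previous run from its checkpoint; -noappend writes new, numbered
# output files instead of appending to the existing ones
# mpiexec mdrun_mpi -s water.tpr -cpi state.cpt -noappend -o water.trr -c water.out -g water.log
mpiexec mdrun_mpi -cpi state.cpt -noappend MyOptions and files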

When using GPGPUs you must specify which GPU each MPI process should use (via the -gpu_id option to mdrun). Best performance is seen if the number of MPI processes per node (set with the -ppn option to mpiexec) is similar to the number of GPUs per node; the remaining cores on each node can then be filled with OpenMP threads (via the -ntomp option to mdrun). The mdrun command in the example below is the one used to produce the benchmark data above; an alternative rank-to-GPU mapping is sketched after the script. To run the GPU version on Fionn, the above example becomes:

#!/bin/bash
#PBS -N MyJobName
#PBS -j oe
#PBS -r n
#PBS -A MyProjectCode
#PBS -l nodes=2:ppn=20
#PBS -l walltime=00:10:00
#PBS -q GpuQ

module load molmodel
# v4.6.3
module load gromacs/intel/4.6.3

cd $PBS_O_WORKDIR

# Pre-process the input

# grompp_mpi_gpu -f grompp.mdp -p topol.top -c conf.gro -o water.tpr
grompp_mpi_gpu MyOptions and Files

# start the md

# When restarting jobs you must pass the '-noappend' flag to mdrun in order to obtain correct results
# mpiexec -ppn 4 mdrun_mpi_gpu -ntomp 5 -gpu_id 0011 -s water.tpr -o water.trr -c water.out -g water.log
mpiexec -ppn 4 mdrun_mpi_gpu -ntomp 5 -gpu_id 0011 MyOptions and files
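In the command above, -ppn 4 places four MPI processes on each 20-core GPU node, -ntomp 5 gives each process five OpenMP threads (4 x 5 = 20 cores), and the -gpu_id string 0011 maps the first two ranks on each node to GPU 0 and the last two to GPU 1. A minimal variant, if you prefer one MPI rank per GPU (not a tested configuration; file names are illustrative):

# One MPI rank per GPU: 2 ranks per node, 10 OpenMP threads each
# mpiexec -ppn 2 mdrun_mpi_gpu -ntomp 10 -gpu_id 01 -s water.tpr -o water.trr -c water.out -g water.log
mpiexec -ppn 2 mdrun_mpi_gpu -ntomp 10 -gpu_id 01 MyOptions and files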

For running the code on Stoney using the GPU version:

#!/bin/bash
#PBS -N MyJobName
#PBS -j oe
#PBS -r n
#PBS -A MyProjectCode
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:10:00
#PBS -q GpuQ

module load gromacs/4.5.5
# Or the v4.6 build:
# module load gromacs/4.6

cd $PBS_O_WORKDIR

# Pre-process the input

# grompp_d -f grompp.mdp -p topol.top -c conf.gro -o water.tpr
grompp_d MyOptions and Files

# start the md

# mdrun-gpu -s water.tpr -o water.trr -c water.out -g water.log
mdrun-gpu MyOptions and files

# Or, with the v4.6 build:
# mpiexec mdrun_mpi MyOptions and files
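With the v4.6 build, the same rank-to-GPU mapping options apply on Stoney as on Fionn. A minimal sketch, assuming two GPGPUs per 8-core Stoney node (as in the benchmark above) and that the build supports OpenMP threads; options and file names are illustrative:

# Two MPI ranks per node (one per GPU), 4 OpenMP threads each
# mpiexec -ppn 2 mdrun_mpi -ntomp 4 -gpu_id 01 -s water.tpr -o water.trr -c water.out -g water.log
mpiexec -ppn 2 mdrun_mpi -ntomp 4 -gpu_id 01 MyOptions and files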

Additional Notes

Further information can be obtained at www.gromacs.org.
