Ireland's High-Performance Computing Centre | ICHEC

ICHEC Software

Information about software packages installed on the ICHEC systems.

NAMD

Versions Installed

Stoney: 2.8 / 2.9 / 2.10

Fionn: 2.9 / 2.10

Description

NAMD is a molecular dynamics application designed specifically for the simulation of large biomolecular systems using modern parallel architectures. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign using object-oriented techniques. NAMD can read X-PLOR, CHARMM, AMBER, and GROMACS input files.

NAMD uses the GPU only for nonbonded force evaluation; energy evaluation is done on the CPU. To benefit from GPU acceleration, set outputEnergies to 100 or higher in the simulation configuration file. Some features are unavailable in CUDA builds, including alchemical free energy perturbation. As GPU acceleration is a relatively new feature, you are encouraged to test all simulations before beginning production runs. Forces evaluated on the GPU differ slightly from a CPU-only calculation, an effect that is more visible in reported scalar pressure values than in energies.
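As an illustration, the relevant keyword appears in the simulation configuration file as follows (a minimal sketch; the surrounding values are placeholders, only outputEnergies is the setting discussed above):

```
# Excerpt from a NAMD simulation configuration file (placeholder values).
# Computing energies requires work on the CPU, so reducing how often
# they are reported avoids stalling the GPU every step.
outputEnergies    100      ;# report energies every 100 steps, not every step
outputTiming      100
timestep          2.0
numsteps          50000
```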

VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.

License

The NAMD package is available for use by all ICHEC users. Contact the Helpdesk to gain access to this package.

Benchmarks

2.9 on Stoney:

The figure below shows the scaling performance of both the MPI version and the GPU-enabled version of NAMD (v2.9) on Stoney. The benchmark system used to obtain these results was ApoA1 (92,224 atoms). For the performance data, please refer to the NAMD Benchmarks site.

The figure shows that harnessing the two additional GPGPU cards on each node results in at least a 50% reduction in wall-clock time at all node counts. In particular, the GPU-enabled version running on one node with two M2090s performs as well as the CPU version running on four nodes with 32 cores. However, the overall speedup of the GPU-enabled version is limited by the volume of data transferred between the CPU and the GPU. For the problem investigated, the optimal number of nodes is four.

2.10 on Fionn:

The figure below shows the scaling performance of both the MPI version and the GPU-enabled version of NAMD (v2.10) on Fionn. The benchmark system used to obtain these results was Test Case A (8,533,024 atoms) obtained from the PRACE Unified European Applications Benchmark Suite.

The figure shows that harnessing the two additional GPGPU cards on each node results in at least a 50% reduction in wall-clock time at all node counts. In particular, the GPU-enabled version running on one node with two K20s and 18 cores performs as well as the CPU version running on five nodes with 120 cores. However, the overall speedup of the GPU-enabled version is limited by the volume of data transferred between the CPU and the GPU. For the problem investigated, the optimal number of nodes is nine.

Additional Notes

To use a version of NAMD on Stoney load the relevant environment module:

module load namd/2.9

To use a version of NAMD on Fionn load the relevant environment module:

module load molmodel namd/intel/2.10

Job Submission Example on Stoney

#!/bin/bash
#PBS -l nodes=2:ppn=8
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the NAMD module
module load namd/2.9

cd $PBS_O_WORKDIR

mpiexec -n 16 namd2_mpi InpFile > OutFile

Job Submission Example on Stoney using GPGPUs

#!/bin/bash
#PBS -l nodes=2:ppn=8
#PBS -l walltime=30:00:00
#PBS -q GpuQ
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the NAMD module
module load namd/2.9

cd $PBS_O_WORKDIR

mpiexec -n 4 -npernode 2 namd2 +idlepoll InpFile > OutFile
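The +idlepoll option keeps host threads polling for GPU results rather than sleeping, which improves GPU utilisation. CUDA builds of NAMD also accept a +devices option for binding ranks to specific GPUs; the variant below is a hypothetical sketch for Stoney's two-GPU nodes, not a required setting:

```
# Hypothetical variant: list the two CUDA device indices explicitly.
# +devices takes a comma-separated list of device IDs; ranks on a node
# are assigned to the listed devices in round-robin order.
mpiexec -n 4 -npernode 2 namd2 +idlepoll +devices 0,1 InpFile > OutFile
```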

Job Submission Example on Fionn

To run NAMD follow this example:

#!/bin/bash
#PBS -l nodes=2:ppn=24
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the NAMD module
module load molmodel namd/intel/2.10

cd $PBS_O_WORKDIR

mpiexec -n 48 -ppn 24 namd2_mpi InpFile > OutFile

For memory optimized configurations, use namd2_mpi_memopt.
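The memory-optimized binary does not read a plain PSF structure file directly; it expects a compressed structure generated beforehand. As a rough sketch (the genCompressedPsf and useCompressedPsf keywords are from the NAMD documentation; the file names and the .inter extension are illustrative):

```
# Step 1: a short preparatory run with the standard binary writes the
# compressed structure (e.g. MySystem.psf.inter):
#   genCompressedPsf    on
# Step 2: production runs with namd2_mpi_memopt then read it back:
#   useCompressedPsf    on
#   structure           MySystem.psf.inter
```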

Job Submission Example on Fionn using GPGPUs

To run NAMD with CUDA support on Fionn follow this example (Note the change in the value of ppn from above):

#!/bin/bash
#PBS -l nodes=10:ppn=20
#PBS -l walltime=30:00:00
#PBS -q GpuQ
#PBS -N MyJobName
#PBS -A MyProjectName

cd $PBS_O_WORKDIR

#Load the NAMD module
module load molmodel namd/intel/2.10

mpiexec -n 20 -ppn 2 namd2_cuda +idlepoll InpFile > OutFile

For memory optimized configurations, use namd2_cuda_memopt.

Further Information

More information is available in the NAMD User's Guide, which can be obtained from the NAMD webpage.
