Fionn: 2.9 / 2.10 / 2.11
NAMD is a molecular dynamics application designed specifically for the simulation of large biomolecular systems using modern parallel architectures. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign using object-oriented techniques. NAMD can read X-PLOR, CHARMM, AMBER, and GROMACS input files.
NAMD only uses the GPU for nonbonded force evaluation; energy evaluation is done on the CPU. To benefit from GPU acceleration you should set outputEnergies to 100 or higher in the simulation configuration file. Some features are unavailable in CUDA builds, including alchemical free energy perturbation. As GPU acceleration is a relatively new feature, you are encouraged to test all simulations before beginning production runs. Forces evaluated on the GPU differ slightly from those of a CPU-only calculation, an effect that is more visible in reported scalar pressure values than in energies.
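For example, the energy output interval can be raised in the NAMD configuration file so that CPU-side energy evaluation does not throttle the GPU (a sketch; outputEnergies and outputTiming are standard NAMD keywords, but the values shown are only a starting point):

```
# NAMD configuration file (fragment)
# Report energies only every 100 steps so CPU-side energy
# evaluation does not limit GPU throughput.
outputEnergies    100

# Reporting timing information less frequently also helps:
outputTiming      100
```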
VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
The NAMD package is available for use by all ICHEC users. Contact the Helpdesk to gain access to this package.
2.10 on Fionn:
The figure below shows the scaling performance of both the MPI version and the GPU-enabled version of NAMD (v2.10) on Fionn. The benchmark system used to obtain these results was Test Case A (8,533,024 atoms) obtained from the PRACE Unified European Applications Benchmark Suite.
The figure shows that harnessing the two additional GPGPU cards on each node yields at least a 50% reduction in wall-clock time at all node counts. In particular, the GPU-enabled version running on one node with two K20s and 20 cores performs as well as the CPU version running on five nodes with 120 cores. However, the overall speedup of the GPU-enabled version is limited by the amount of data transfer between the CPU and the GPU. For the problem investigated, the optimal number of nodes is nine.
To use a version of NAMD on Fionn load the relevant environment module:
module load molmodel namd/intel/latest

Job Submission Example on Fionn
To run NAMD follow this example:
#!/bin/bash
#PBS -l nodes=2:ppn=24
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

# Load the NAMD module
module load molmodel namd/intel/latest

cd $PBS_O_WORKDIR

mpirun namd2_mpi InpFile > OutFile
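As a sketch, the script above can be saved to a file and submitted with the standard PBS commands (the file name namd_job.pbs is arbitrary):

```shell
#!/bin/bash
# Write the NAMD job script shown above to a file
# (a sketch; the file name namd_job.pbs is arbitrary).
cat > namd_job.pbs << 'EOF'
#!/bin/bash
#PBS -l nodes=2:ppn=24
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName
module load molmodel namd/intel/latest
cd $PBS_O_WORKDIR
mpirun namd2_mpi InpFile > OutFile
EOF

# Submit it to the scheduler and check its status:
#   qsub namd_job.pbs
#   qstat -u $USER
echo "Wrote namd_job.pbs"
```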
For memory optimized configurations, use namd2_mpi_memopt.
Job Submission Example on Fionn using GPGPUs
To use the GPGPU version of the code, please use version 2.11b1 or later.
The GPU-accelerated version of NAMD comes in two flavours: a single-node SMP version and a multi-node MPI version. The former can be accessed with namd2_cuda_smp and the latter with namd2_cuda_mpi.
In both cases, the code will use all GPUs found on the nodes, provided there are at least as many MPI processes or threads as there are GPUs (a single MPI process or thread can manage only one GPU, so using all GPUs requires multiple processes/threads). Moreover, maximum performance is usually obtained by using as many MPI processes or threads per node as is sensible. On Fionn, this means using 20 MPI processes per GPU node, or 20 threads in the case of the SMP version.
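The rank/GPU bookkeeping above can be sketched in a few lines of shell (the per-node figures are Fionn's values from the text; the node count of 4 is just an example):

```shell
#!/bin/bash
# Sketch of the rank-to-GPU arithmetic described above,
# using Fionn's figures: 20 cores and 2 K20 GPUs per GPU node.
NODES=4            # nodes requested in the PBS job (example value)
CORES_PER_NODE=20  # MPI processes (or threads) per node
GPUS_PER_NODE=2    # GPU cards per node

TOTAL_RANKS=$((NODES * CORES_PER_NODE))
TOTAL_GPUS=$((NODES * GPUS_PER_NODE))

# Each MPI process/thread can drive at most one GPU, so all cards
# are used only if there are at least as many ranks as GPUs:
if [ "$TOTAL_RANKS" -ge "$TOTAL_GPUS" ]; then
    echo "$TOTAL_RANKS ranks share $TOTAL_GPUS GPUs: all GPUs used"
else
    echo "only $TOTAL_RANKS of $TOTAL_GPUS GPUs can be used"
fi
```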
Here are two sample PBS scripts to that effect:
#!/bin/bash
#PBS -l nodes=1:ppn=20
#PBS -l walltime=30:00:00
#PBS -q GpuQ
#PBS -N MyJobName
#PBS -A MyProjectName

cd $PBS_O_WORKDIR

# Load the NAMD module
module load molmodel namd/intel/2.11b1

namd2_cuda_smp +p20 +idlepoll InpFile > OutFile
#!/bin/bash
#PBS -l nodes=4:ppn=20
#PBS -l walltime=30:00:00
#PBS -q GpuQ
#PBS -N MyJobName
#PBS -A MyProjectName

cd $PBS_O_WORKDIR

# Load the NAMD module
module load molmodel namd/intel/2.11b1

mpirun namd2_cuda_mpi +idlepoll InpFile > OutFile
For memory optimized configurations, use namd2_cuda_smp_memopt or namd2_cuda_mpi_memopt.
More information is available in the NAMD user guide on the NAMD webpage.