
ICHEC Software

Information about software packages installed on the ICHEC systems.

Amber

Versions Installed

Fionn: 11, 12

Stoney: 11

Description

Amber (Assisted Model Building with Energy Refinement) is a general-purpose molecular mechanics/dynamics suite that uses analytic potential energy functions, derived from experimental and ab initio data, to refine macromolecular conformations.

License

ICHEC has acquired a site license for the Amber 11 (AmberTools 1.5) and Amber 12 (AmberTools 12) packages. Users should register their interest with the Helpdesk to gain access to the executables. GPU support is available for Amber on Fionn, but only for version 12.

Benchmarks

The Amber benchmarks from Mike Wu and Ross Walker were run on the Fionn cluster.

Additional Notes

To use a version of Amber, load the relevant environment module:

Fionn

module load molmodel amber/11
module load molmodel amber/12

Stoney

module load amber/11
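
The installed versions can be confirmed with the standard environment-modules commands. A quick sketch; on Fionn the molmodel module may need to be loaded before the Amber modules become visible:

module avail amber   # list the Amber versions installed on the system
module list          # confirm which modules are currently loaded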

Job submission examples follow. On Fionn, Amber 11 is used in the same way as Amber 12; the only difference is which module is loaded. Assuming a PBS script is saved as amber.pbs, the job can be submitted to the queue by running the following command in the same directory:

qsub amber.pbs

Job Submission Example on Fionn

#!/bin/bash
#PBS -l nodes=2:ppn=24
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the Amber module
module load molmodel amber/12

cd $PBS_O_WORKDIR

mpiexec $AMBERHOME/exe/pmemd.MPI -O \
-i mdin -p prmtop -c inpcrd -ref inpcrd -suffix outSuffix
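
Once the job is queued, progress can be followed with the usual PBS commands. A minimal sketch; the output format varies between PBS flavours:

qstat -u $USER    # list your queued and running jobs
qstat -f JobID    # full details for a single job (replace JobID)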

Job Submission Example on Fionn 1 node with 2 K20 GPUs

This will use as many GPUs as it can find; on one node of Fionn that is two. Note that the number of MPI processes must not exceed the number of GPUs available.

#!/bin/bash
#PBS -l nodes=1:ppn=20
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName
#PBS -q GpuQ

#Load the Amber module
module load molmodel amber/12

cd $PBS_O_WORKDIR

export CUDA_VISIBLE_DEVICES=0,1
mpiexec -ppn 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -o mdout.2K20 \
-inf mdinfo.2K20 -x mdcrd.2K20 -r restrt.2K20 -ref inpcrd
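
Rather than hard-coding the device list and process count, both values can be derived at run time. A minimal sketch, assuming nvidia-smi is available on the GPU nodes; the pmemd flags are unchanged from the script above:

# Count the GPUs on the node and expose all of them
NGPU=$(nvidia-smi -L | wc -l)                      # one output line per GPU
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NGPU-1)))

# Launch one MPI rank per GPU
mpiexec -ppn $NGPU $AMBERHOME/bin/pmemd.cuda.MPI -O -o mdout.2K20 \
-inf mdinfo.2K20 -x mdcrd.2K20 -r restrt.2K20 -ref inpcrd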

Job Submission Example on Stoney

#!/bin/bash
#PBS -l nodes=2:ppn=8
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the Amber module
module load amber/11

cd $PBS_O_WORKDIR

mpiexec.hydra -n 16 -rmk pbs $AMBERHOME/exe/pmemd.MPI -O -i mdin \
-p prmtop -c inpcrd -ref inpcrd -suffix outSuffix
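
The process count of 16 above is simply nodes x ppn. To avoid updating -n by hand when the node request changes, the count can be read from the PBS node file, which lists each node once per allocated core. A sketch under that assumption:

NPROCS=$(wc -l < $PBS_NODEFILE)   # nodes * ppn, e.g. 2 * 8 = 16
mpiexec.hydra -n $NPROCS -rmk pbs $AMBERHOME/exe/pmemd.MPI -O -i mdin \
-p prmtop -c inpcrd -ref inpcrd -suffix outSuffix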

Job Submission Example on Stoney using a single GPGPU

The following script will run on a single node but only use one NVIDIA M2090 card.

#!/bin/bash
#PBS -l nodes=1:ppn=8
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the Amber module
module load amber/11

cd $PBS_O_WORKDIR

$AMBERHOME/exe/pmemd.cuda -O -i mdin -p prmtop -c inpcrd \
-ref inpcrd -suffix outSuffix -gpu 1
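
If a particular card should be used, the device can instead be pinned with CUDA_VISIBLE_DEVICES, as in the Fionn GPU example above. A sketch; the device index 1 is only an illustration:

export CUDA_VISIBLE_DEVICES=1   # expose only the second M2090 to pmemd
$AMBERHOME/exe/pmemd.cuda -O -i mdin -p prmtop -c inpcrd \
-ref inpcrd -suffix outSuffix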

Job Submission Example on Stoney using Multiple GPGPUs

The following script will run on a single node but use two NVIDIA M2090 cards.

#!/bin/bash
#PBS -l nodes=1:ppn=8
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the Amber module
module load amber/11

cd $PBS_O_WORKDIR

# Create a hosts file: uniq collapses the per-core entries in $PBS_NODEFILE,
# leaving the single allocated node listed once (both MPI ranks land on it)
for i in {1..1}; do cat $PBS_NODEFILE | uniq >> hosts.1.2; done;

mpiexec.hydra -machine hosts.1.2 -n 2 -rmk pbs $AMBERHOME/bin/pmemd.cuda.MPI \
-O -i mdin -p prmtop -c inpcrd -ref inpcrd -suffix outSuffix
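
For reference, the resulting hosts file simply lists the allocated node once, and mpiexec.hydra wraps around the list to place both ranks on it (the node name below is hypothetical):

$ cat hosts.1.2
stoney-n001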

Job Submission Example on Stoney using Multiple Nodes/GPGPUs

The following script will run on two nodes and use four NVIDIA M2090 cards.

#!/bin/bash
#PBS -l nodes=2:ppn=8
#
#PBS -l walltime=30:00:00
#PBS -N MyJobName
#PBS -A MyProjectName

#Load the Amber module
module load amber/11

cd $PBS_O_WORKDIR

# Create a hosts file: append the unique node list twice, so each of the
# two nodes appears twice (one MPI rank per GPU, two GPUs per node)
for i in {1..2}; do cat $PBS_NODEFILE | uniq >> hosts.2.4; done;

mpiexec.hydra -machine hosts.2.4 -n 4 -rmk pbs $AMBERHOME/bin/pmemd.cuda.MPI \
-O -i mdin -p prmtop -c inpcrd -ref inpcrd -suffix outSuffix
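
The hosts-file pattern generalises: repeat the unique node list once per GPU per node, then launch nodes x GPUs ranks in total. A sketch with the per-node GPU count as a variable; the file name hosts is arbitrary:

GPUS_PER_NODE=2
NODES=$(cat $PBS_NODEFILE | uniq | wc -l)
for i in $(seq 1 $GPUS_PER_NODE); do cat $PBS_NODEFILE | uniq >> hosts; done
mpiexec.hydra -machine hosts -n $((NODES * GPUS_PER_NODE)) -rmk pbs \
$AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd \
-ref inpcrd -suffix outSuffix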
