
The ARCHER Service is now closed and has been superseded by ARCHER2.



NAMD

Useful links

  • NAMD Webpage
  • NAMD 2.9 User Guide

Licensing and access

NAMD is licensed software that is free for non-commercial use; the licence is available from the NAMD website. All ARCHER users have access to the NAMD binaries.

Running

Running on ARCHER

To run NAMD you need to add the correct module to your environment:

module add namd

This gives you access to the NAMD executable, called namd2.

Specifying process/thread placement

When running NAMD on ARCHER you will need to specify how to place the threads and processes. You should always aim to use NAMD on ARCHER with both MPI processes and OpenMP threads.

A good starting point for benchmarking is 1 MPI process per node with 48 OpenMP threads per process. (Remember that an ARCHER node has 24 physical cores with 2 hyperthreads available per core.)

This would give 47 worker threads per MPI process and 1 control thread. To use NAMD in this mode we also need to specify the binding of the threads.
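The arithmetic behind this split can be sketched in the shell (the node figures are taken from the description above; the variable names are illustrative only):

```shell
# Derive the placement numbers for 1 MPI process per node on an ARCHER node:
# 24 physical cores, 2 hyperthreads per core.
physical_cores=24
hyperthreads=2
logical_cpus=$((physical_cores * hyperthreads))   # 48 logical CPUs per node
threads_per_process=$logical_cpus                 # the aprun -d value
workers=$((threads_per_process - 1))              # one CPU is kept for the control thread
echo "+ppn $workers +pemap 1-$workers +commap 0"
# prints: +ppn 47 +pemap 1-47 +commap 0
```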

For example, to use 128 ARCHER nodes we would have 128 MPI processes (1 per node) and 48 OpenMP threads per process and the launch line for such a setup would be:

aprun -n 128 -N 1 -d 48 -j 2 -cc none namd2 +ppn 47 +pemap 1-47 +commap 0 input.namd > output.log

The aprun options tell the Cray system how to distribute the processes and threads and then the namd2 options specify the binding.

aprun options explanation:

  • -n 128 = 128 MPI processes in total
  • -N 1 = 1 MPI process per node
  • -d 48 = 48 threads per MPI process
  • -j 2 = 2 hyperthreads per core
  • -cc none = Allows placement to be controlled by the NAMD application

namd2 options explanation:

  • +ppn 47 = 47 worker threads per MPI process
  • +pemap 1-47 = use core IDs 1-47 for worker threads
  • +commap 0 = use core ID 0 for the control thread
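As an illustration, a small shell helper (not part of NAMD or the Cray tools; the function name is made up here) can generate these maps for a given layout, assuming each process gets a contiguous block of logical CPUs and the first CPU of each block is reserved for the control thread:

```shell
# make_maps PROCS_PER_NODE THREADS_PER_PROC
# Builds the +pemap/+commap strings under the contiguous-block assumption.
make_maps() {
  local ppn=$1 tpp=$2 pemap="" commap="" p lo hi
  for ((p = 0; p < ppn; p++)); do
    lo=$((p * tpp))                           # first logical CPU of this process
    hi=$((lo + tpp - 1))                      # last logical CPU of this process
    commap="${commap:+$commap,}$lo"           # control thread on the first CPU
    pemap="${pemap:+$pemap,}$((lo + 1))-$hi"  # worker threads on the rest
  done
  echo "+pemap $pemap +commap $commap"
}

make_maps 1 48   # prints: +pemap 1-47 +commap 0
make_maps 2 24   # prints: +pemap 1-23,25-47 +commap 0,24
```

The second call reproduces the 2-processes-per-node example shown below.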

The full job submission script for such a setup would look like:

#!/bin/bash --login
#PBS -N namd_apoa1
#PBS -l select=128
#PBS -l walltime=0:20:0
# Change this to your budget code
#PBS -A t01

module load namd

# Move to directory that script was submitted from
export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
cd $PBS_O_WORKDIR

# you should replace "input.namd" in the line below with your input filename
aprun -n 128 -N 1 -d 48 -j 2 -cc none namd2 +ppn 47 +pemap 1-47 +commap 0 input.namd > output.log

Example: 2 processes per node, 24 threads per process

When you have multiple MPI processes per node you need to specify the binding of the NAMD worker and control threads for each process. For example, using 128 nodes, 256 MPI processes, 2 MPI processes per node and 24 threads per process:

aprun -n 256 -N 2 -d 24 -j 2 -cc none namd2 +ppn 23 +pemap 1-23,25-47 +commap 0,24 input.namd > output.log

The rest of the job submission script would be identical to that above.
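The aprun figures in this example follow from the node size; a quick shell check of the arithmetic (variable names are illustrative only):

```shell
# Check the -n/-N/-d/+ppn figures for the 2-processes-per-node example.
nodes=128
procs_per_node=2
logical_cpus=48                                       # 24 cores x 2 hyperthreads
total_procs=$((nodes * procs_per_node))               # aprun -n
threads_per_proc=$((logical_cpus / procs_per_node))   # aprun -d
workers=$((threads_per_proc - 1))                     # namd2 +ppn
echo "-n $total_procs -N $procs_per_node -d $threads_per_proc / +ppn $workers"
# prints: -n 256 -N 2 -d 24 / +ppn 23
```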

Running on ARCHER KNL

Instructions for running NAMD on the ARCHER KNL system can be found on GitHub at:

  • Running NAMD 2.11 on ARCHER KNL

Compiling

  • Compiling NAMD for ARCHER (on GitHub)
  • Compiling NAMD 2.9 on ARCHER

Copyright © Design and Content 2013-2019 EPCC. All rights reserved.
