



ARCHER News

Wednesday 22nd April 2020

  • Introduction to the ARCHER2 hardware and software - Webinar
  • LAMMPS course
  • Message-passing programming with MPI course
  • Reproducible computational environments using containers course
  • Other upcoming ARCHER2 training
  • HPC-Europa3 Transnational Access programme

Please note: from this week we have moved the regular user mailing from Tuesday afternoons to Wednesday mornings.

Introduction to the ARCHER2 hardware and software

Wednesday 22nd April 2020 11:00-12:00 BST

Andy Turner, EPCC/ARCHER2

In this webinar we will give an overview of the technology that will make up the ARCHER2 national supercomputing service: the new Cray Shasta platform, with AMD EPYC processors and the Cray Slingshot interconnect. This will cover the main Cray Shasta system itself as well as other associated parts of the service. We will also provide information on the software environment that will be available on the system, including the Cray software environment and the initial plans for software provided by the ARCHER2 CSE service at EPCC. There will be plenty of opportunity for questions, although we will not necessarily be able to answer all of them as we do not yet have access to the system.

This online session is open to all.

Join link and full details: https://www.archer2.ac.uk/training/courses/200422-archer2-hardware/

LAMMPS course

4 - 18 May 2020

Julien Sindt, EPCC

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a widely used classical molecular dynamics (MD) code. This C++ code is easy to use, incredibly versatile, and parallelised to run efficiently on systems ranging from small personal computers to CPU-based, GPU-based, and hybrid CPU/GPU HPC clusters. As of 2018, LAMMPS had been used, to some degree, in over 14,000 publications in fields as varied as chemistry, physics, materials science, and granular and lubricated-granular flow.

The course will be run over three 2.5-hour sessions.

The first session will be an introduction to setting up and running an MD simulation using LAMMPS. We will begin by running a simulation of a Lennard-Jones fluid before delving deeper into how simulations can be set up and run in LAMMPS.

In the second session, we will discuss how to download and install LAMMPS, with a more in-depth discussion of the various packages LAMMPS offers and how to use them efficiently.

The third session will be a follow-up session for exercises, discussion and questions.
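To give a flavour of the first-session exercise described above, here is a minimal sketch (not course material) of a Lennard-Jones fluid run driven through the LAMMPS Python module. It assumes a LAMMPS build with the Python interface installed; the input commands follow the standard "melt" example distributed with LAMMPS.

    # Minimal Lennard-Jones fluid simulation via the LAMMPS Python interface.
    # Illustrative sketch only; assumes the "lammps" Python module is installed.
    from lammps import lammps

    lmp = lammps()

    commands = [
        "units lj",                        # reduced Lennard-Jones units
        "atom_style atomic",
        "lattice fcc 0.8442",              # reduced density 0.8442
        "region box block 0 10 0 10 0 10",
        "create_box 1 box",
        "create_atoms 1 box",
        "mass 1 1.0",
        "velocity all create 1.44 87287",  # initial reduced temperature 1.44
        "pair_style lj/cut 2.5",           # LJ potential with 2.5 sigma cutoff
        "pair_coeff 1 1 1.0 1.0 2.5",
        "fix 1 all nve",                   # constant-energy time integration
        "thermo 100",                      # report thermodynamics every 100 steps
        "run 1000",
    ]

    for cmd in commands:
        lmp.command(cmd)

    print("Final potential energy:", lmp.get_thermo("pe"))

The same commands could equally be placed in a plain LAMMPS input script and run with the LAMMPS executable; the Python route is shown here only to keep the example self-contained.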

Details and registration: https://www.archer2.ac.uk/training/courses/200504-lammps/

Message-passing programming with MPI

14 - 22 May 2020

David Henty, EPCC

The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
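As a short, hedged illustration of these ideas (not an excerpt from the course), the sketch below uses the mpi4py Python bindings to show point-to-point communication and a collective reduction; the same patterns map directly onto the MPI interfaces for compiled languages such as C and Fortran.

    # Point-to-point and collective MPI operations illustrated with mpi4py.
    # Run with at least two processes, e.g.: mpirun -n 4 python ring.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # identifier of this process
    size = comm.Get_size()   # total number of processes

    # Point-to-point: pass a token around a ring, each rank adding its number.
    if rank == 0:
        comm.send(0, dest=1, tag=0)
        token = comm.recv(source=size - 1, tag=0)
    else:
        token = comm.recv(source=rank - 1, tag=0)
        comm.send(token + rank, dest=(rank + 1) % size, tag=0)

    # Collective: sum each rank's number onto rank 0 in a single call.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Token after the ring: {token}, reduction result: {total}")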

The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

Details and registration: https://www.archer2.ac.uk/training/courses/200514-mpi/

Reproducible computational environments using containers

13-14 July 2020 - Online

This course aims to introduce the use of containers with the goal of creating reproducible computational environments. Such environments are useful for ensuring reproducible research outputs and for simplifying the setup of complex software dependencies across different systems. The course will mostly be based around the use of Docker containers, but the material will be of use whatever container technology you plan to use, or end up using. We will also briefly introduce the Singularity container environment, which is compatible with Docker and designed for use on multi-user systems (such as HPC resources).

On completion of this course attendees should:

  • Understand what containers are and what they are used for
  • Understand how to manage and create Docker containers
  • Appreciate decisions that need to be made around containerising research workflows
  • Understand the differences between Docker and Singularity containers and why Singularity is more suitable for multi-user systems (e.g. HPC)
  • Understand how to manage and create Singularity containers
  • Appreciate how containers can be used to enable and improve reproducibility in research
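As a small, hedged illustration of the reproducibility idea (not part of the course material), the sketch below uses the Docker SDK for Python to run a command inside a container started from a pinned image tag, so the same software environment is recreated wherever it runs. The choice of image tag here is purely an example.

    # Run a command inside a container from a pinned image tag using the
    # Docker SDK for Python ("pip install docker"). Requires a local Docker
    # daemon; illustrative sketch only, not taken from the course.
    import docker

    client = docker.from_env()

    # Pinning an exact tag (or digest) is what makes the environment
    # reproducible across different machines.
    output = client.containers.run(
        "python:3.8.2-slim",    # example pinned image tag
        ["python", "-c", "import sys; print(sys.version)"],
        remove=True,            # delete the container once it exits
    )
    print(output.decode())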

Registration and full details: https://www.archer2.ac.uk/training/courses/200713-containers/

Other upcoming ARCHER2 Training

  • Oracle Cloud HPC benchmarking results, Online, Wednesday 29th April 2020 11:00-12:00 BST
  • ARCHER2 Spectrum of Support, Online, Wednesday 13th May 2020 15:00-16:00 BST
  • The determination of cluster structures combining infrared spectroscopy and density functional theory calculations, Online, Wednesday 20th May 2020 15:00-16:00 BST
  • Reproducible computational environments using containers, Online, 13-14 July 2020

Further details: https://www.archer2.ac.uk/training/#upcoming-training

HPC-Europa3 Transnational Access programme

Collaborative research visits using High Performance Computing

Call for applications: next closing date 14th May 2020, for visits during the second half of 2020.

HPC-Europa3 funds research visits for computational scientists in any discipline that can make use of High Performance Computing (HPC).

Visits can be made to research institutes in Finland, Germany, Greece, Ireland, Italy, the Netherlands, Spain or the UK.

UK-based researchers can benefit in two ways: either by visiting a research group elsewhere in Europe, or by hosting a research visitor from another country.

Coronavirus travel restrictions: Applicants will be notified of the selection panel's decisions around the end of June. We hope that the travel situation will have returned to normal by then. Visits should be planned for July to December, but there can be some flexibility if travel restrictions are still in place at that stage.

What does HPC-Europa3 provide?

  • Funding for travel, living and accommodation expenses for visits of up to 13 weeks.
  • Access to world-class High Performance Computing (HPC) facilities.
  • Technical support to help you make best use of the HPC systems.
  • Collaborative environment with an expert in your field of research.

Who can apply?

  • Researchers at all levels, from postgraduate students to full professors.
  • Researchers from academia or industry.
  • Researchers currently working in a European Union country or Associated State (see http://bit.ly/AssociatedStates for full list of Associated States).
  • Researchers may not visit a group in the country where they currently work.
  • A small number of places may be available for researchers working outside these countries - please contact staff@hpc-europa.org for more information.

How do I apply?

Apply online at http://www.hpc-europa.org

The next closing date is 14th May 2020. There are four closing dates per year, and applications can be submitted at any time. You should receive a decision approximately six weeks after the closing date.

For more information and to apply online, visit: http://www.hpc-europa.org/
Follow us on Twitter for project news: https://twitter.com/HPCEuropa3
