
Hybridisation: Adding OpenMP to MPI for the plasma simulation code GS2

Adrian Jackson, EPCC

GS2 is a gyrokinetic plasma code used to simulate a range of plasmas, including the plasma in fusion reactors. The code can perform different simulations, from basic linear gyrokinetic calculations to full non-linear plasma behaviour with collisions between particles, and it has been parallelised using MPI. However, utilising the full functionality of GS2 requires a large amount of communication between processes when running in parallel, particularly at large core counts.

To address this problem, and to enable the code to utilise large numbers of cores without becoming dominated by the cost of sending MPI messages between processes, we have added OpenMP functionality to GS2, augmenting the original MPI parallelisation and enabling the code to use larger numbers of cores with a smaller number of MPI processes. Because the communication cost grows with the number of MPI processes (the simulation domain is split across more processes, which therefore require more communication to undertake the simulation), this hybrid approach should give better scaling at large core counts.
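
As a concrete illustration of the hybrid model, the sketch below shows the basic structure (a minimal, generic example, not code taken from GS2 itself): MPI is initialised with MPI_Init_thread, requesting the MPI_THREAD_FUNNELED support level, which is sufficient when only the master thread of each process makes MPI calls, and an OpenMP parallel region is then opened inside each MPI process.

    program hybrid_hello
      use mpi
      use omp_lib
      implicit none
      integer :: ierr, rank, nranks, provided, nthreads

      ! Request FUNNELED support: only the master thread will call MPI.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

      ! Each MPI process spawns a team of OpenMP threads for compute work.
      !$omp parallel
      !$omp master
      nthreads = omp_get_num_threads()
      write(*,'(a,i0,a,i0,a,i0,a)') 'Rank ', rank, ' of ', nranks, &
           ' running with ', nthreads, ' OpenMP threads'
      !$omp end master
      !$omp end parallel

      call MPI_Finalize(ierr)
    end program hybrid_hello

Running this with, for example, 2 MPI processes per 24-core ARCHER node and OMP_NUM_THREADS=12 uses the same cores as 24 MPI processes per node, but with a twelfth of the communicating MPI ranks.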

In this presentation we will discuss the benefits and challenges of adding OpenMP to a Fortran MPI code to create a hybrid parallelisation. We will highlight the performance data that led us to consider this hybridisation, describe some of the code modifications that were required, and look at the performance of the code with the new functionality.
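
To give a flavour of the kind of loop-level modification involved (a hypothetical sketch, with illustrative routine and variable names rather than GS2's own), hybridising an existing MPI code typically means adding OpenMP directives to compute loops, deciding which variables must be private to each thread, and protecting shared accumulators with reductions:

    subroutine accumulate_field(phi, source, n, total)
      implicit none
      integer, intent(in) :: n
      real(kind=8), intent(in) :: source(n)
      real(kind=8), intent(inout) :: phi(n)
      real(kind=8), intent(out) :: total
      integer :: i
      real(kind=8) :: contrib

      total = 0.0d0
      ! Temporaries must be private to each thread; the reduction
      ! avoids a race on the shared accumulator.
      !$omp parallel do default(shared) private(i, contrib) reduction(+:total)
      do i = 1, n
         contrib = 2.0d0 * source(i)
         phi(i) = phi(i) + contrib
         total = total + contrib
      end do
      !$omp end parallel do
    end subroutine accumulate_field

Getting the data scoping right in loops like this, and ensuring that any libraries called inside parallel regions are thread-safe, is where much of the hybridisation effort typically goes.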
