Scalable and interoperable I/O for Fluidity

eCSE01-009

Key Personnel

PI/Co-I: Dr Gerard Gorman - Imperial College, London; Dr Jon Hill - Imperial College, London

Technical: Dr Michael Lange - Imperial College, London

Relevant Documents

eCSE Final Report: Scalable and interoperable I/O for Fluidity

Unstructured Overlapping Mesh Distribution in Parallel - M. Knepley, M. Lange, G. Gorman, 2015. Submitted to ACM Transactions on Mathematical Software.

Flexible, Scalable Mesh and Data Management using PETSc DMPlex - M. Lange, M. Knepley, G. Gorman, 2015. Published in the EASC2015 conference proceedings.

Project summary

Scalable file I/O and efficient domain topology management present important challenges for many scientific applications if they are to fully utilise future exascale computing resources. Designing a scientific software stack to meet next-generation simulation demands not only requires scalable and efficient algorithms to perform data I/O and mesh management at scale, but also an abstraction layer that allows a wide variety of application codes to utilise them and thus promotes code reuse and interoperability. PETSc provides such an abstraction of mesh topology in the form of the DMPlex data management API.
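To make the role of this abstraction concrete, the following is a minimal, illustrative sketch (not code from the project) of loading a mesh into a DMPlex object and querying its topology through the format-independent API. The file name mesh.msh and the choice of the Gmsh reader are assumptions made for the example.

    /* Minimal sketch: load a mesh into DMPlex and query its topology.
       The file name "mesh.msh" is a placeholder. */
    #include <petscdmplex.h>

    int main(int argc, char **argv)
    {
      DM             dm;
      PetscInt       dim, cStart, cEnd, vStart, vEnd;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

      /* Read a Gmsh mesh; PETSC_TRUE asks DMPlex to interpolate, i.e. to
         construct the intermediate edges and faces as well. */
      ierr = DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm); CHKERRQ(ierr);

      /* Topology queries go through the same API regardless of file format. */
      ierr = DMGetDimension(dm, &dim); CHKERRQ(ierr);
      ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd); CHKERRQ(ierr); /* cells    */
      ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd); CHKERRQ(ierr);  /* vertices */
      ierr = PetscPrintf(PETSC_COMM_WORLD, "dim %d: %d cells, %d vertices\n",
                         (int)dim, (int)(cEnd - cStart), (int)(vEnd - vStart)); CHKERRQ(ierr);

      ierr = DMDestroy(&dm); CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }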

During the course of this project, PETSc's DMPlexDistribute API has been optimised and extended to provide scalable generation of arbitrarily sized domain overlaps, as well as efficient mesh and data distribution, contributing directly to the flexible and scalable domain data management capabilities of the library. The ability to perform parallel load balancing and re-distribution of already distributed meshes was added, which enables further I/O and mesh management optimisations in the future. Moreover, additional mesh input formats have been added to DMPlex, including readers for binary Gmsh and Fluent-CAS files, which improves the interoperability of DMPlex and all its dependent user codes. A key aspect of this work was to maximise impact by adding features and applying optimisations at the library level in PETSc, resulting in benefits for several application codes.
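As an illustration of the distribution capabilities described above (a sketch only, with an assumed mesh file name, not the project's own code), the example below shows DMPlexDistribute generating a cell overlap during partitioning and DMPlexDistributeOverlap growing the overlap of an already-distributed mesh.

    /* Sketch: distribute a DMPlex mesh with a one-cell overlap, then grow the
       overlap of the already-distributed mesh by a further layer. */
    #include <petscdmplex.h>
    #include <petscsf.h>

    int main(int argc, char **argv)
    {
      DM             dm, dmDist, dmOverlap;
      PetscSF        sf;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
      ierr = DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm); CHKERRQ(ierr);

      /* Partition and migrate the mesh; the second argument is the overlap
         depth, so an arbitrarily sized overlap can be built during distribution. */
      ierr = DMPlexDistribute(dm, 1, &sf, &dmDist); CHKERRQ(ierr);
      if (dmDist) { ierr = DMDestroy(&dm); CHKERRQ(ierr); dm = dmDist; }
      if (sf)     { ierr = PetscSFDestroy(&sf); CHKERRQ(ierr); }

      /* Alternatively, add another layer of overlap to a mesh that is
         already distributed. */
      ierr = DMPlexDistributeOverlap(dm, 1, &sf, &dmOverlap); CHKERRQ(ierr);
      if (dmOverlap) { ierr = DMDestroy(&dm); CHKERRQ(ierr); dm = dmOverlap; }
      if (sf)        { ierr = PetscSFDestroy(&sf); CHKERRQ(ierr); }

      ierr = DMDestroy(&dm); CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

The guards after each call handle the case where no new DM is produced (for example, a single-process run).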

The main focus of this project, however, was the integration of DMPlex into the Fluidity mesh initialisation routines. The new Fluidity version utilises DMPlex to perform on-the-fly domain decomposition, significantly improving simulation start-up performance by eliminating a costly I/O-bound pre-processing step and improving data migration (see Illustration 1).

Illustration 1: Fluidity start-up improvement through domain decomposition via DMPlex.

Moreover, additional mesh input formats have been added to the model via the new reader routines available in the public DMPlex API. Due to the resulting close integration with DMPlex, mesh renumbering capabilities, such as the Reverse Cuthill-McKee (RCM) algorithm provided by DMPlex, can now be leveraged to improve the cache locality of Fluidity simulations. The performance benefits of RCM mesh renumbering for velocity assembly and the pressure solve are shown in Illustrations 2 and 3, followed by a short code sketch of the renumbering step.

Illustration 2: Fluidity performance increase for velocity assembly from RCM mesh renumbering.
Illustration 3: Fluidity performance increase for pressure assembly from RCM mesh renumbering.
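The sketch below illustrates how such an RCM renumbering can be requested through the DMPlex API. It is illustrative only; the mesh file name is a placeholder and the details of the Fluidity integration are omitted.

    /* Sketch: compute a Reverse Cuthill-McKee ordering of the mesh points and
       build a permuted DM whose layout follows that ordering. */
    #include <petscdmplex.h>
    #include <petscmat.h>   /* for MATORDERINGRCM */

    int main(int argc, char **argv)
    {
      DM             dm, dmPerm;
      IS             perm;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
      ierr = DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm); CHKERRQ(ierr);

      /* RCM ordering of the mesh points ... */
      ierr = DMPlexGetOrdering(dm, MATORDERINGRCM, NULL, &perm); CHKERRQ(ierr);

      /* ... and a permuted DM; data laid out on dmPerm is traversed in an
         order with better cache locality during assembly and solves. */
      ierr = DMPlexPermute(dm, perm, &dmPerm); CHKERRQ(ierr);

      ierr = ISDestroy(&perm); CHKERRQ(ierr);
      ierr = DMDestroy(&dmPerm); CHKERRQ(ierr);
      ierr = DMDestroy(&dm); CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }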

Summary of the software

Most of the relevant DMPlex additions and optimisations are available in the current master branch of the PETSc development version, as well as in the latest release, version 3.6. Access to a central developer package tracking petsc-master on ARCHER has been provided by the support team and is maintained by the lead developer until the required features and fixes are available in the latest cray-petsc packages.

The current implementation of the DMPlex-based version of the Fluidity CFD code is available through a public feature branch.

Efforts to provide the full set of Fluidity features through DMPlex are ongoing, and integration of this new functionality into the next Fluidity release is actively being prepared.
