
TPLS: Optimised Parallel I/O and Visualisation

eCSE01-008

Key Personnel

PI: Dr Prashant Valluri - The University of Edinburgh

Technical: Mr Iain Bethune and Dr Toni Collis - EPCC, The University of Edinburgh

Relevant documents

Paper presented at ParCo 2015: "Developing a scalable and flexible high-resolution DNS code for two-phase flows". An open-access pre-print version of this paper is available to download.

Project summary

TPLS is a freely available Computational Fluid Dynamics code which solves the Navier-Stokes equations for an incompressible two-phase flow. The interface is modelled using either a level-set or a diffuse-interface method, which means that TPLS can simulate flows with complicated interfacial topology involving a range of time- and length-scales. Such problems are common in industrial applications. Indeed, TPLS has already been used to understand the flow-pattern map in a falling-film reactor; the immediate application there was post-combustion carbon capture, yet such reactors are ubiquitous in the chemical engineering industries. In the same way, TPLS can be used to simulate, and hence control, multiphase flow in the oil and gas industries, thereby contributing to flow assurance in the transport of hydrocarbons in pipelines. TPLS is also being used to simulate processes in microelectronic cooling that involve phase change, such as refrigerant droplet evaporation and flow boiling in microchannels.
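As a rough illustration of the kind of system involved (shown here in its standard textbook form; the exact formulation implemented in TPLS may differ in details such as the surface-tension and gravity terms), the incompressible Navier-Stokes equations together with the level-set interface equation read:

    \nabla \cdot \mathbf{u} = 0
    \rho(\phi)\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \nabla\cdot\left[\mu(\phi)\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}\right)\right] + \mathbf{F}_{\sigma} + \rho(\phi)\,\mathbf{g}
    \partial_t \phi + \mathbf{u}\cdot\nabla\phi = 0

where \phi is the level-set function whose zero contour marks the interface, \mathbf{F}_{\sigma} is the surface-tension force, and the density \rho and viscosity \mu take different values on either side of the interface.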

[Image: Local hot spots in a semiconductor device]

On the scientific side, TPLS has been used to simulate the genesis and evolution of three-dimensional waves in two-phase laminar flows. In these investigations, TPLS has revealed how three-dimensional waves form in systems where two-dimensional disturbances dominate, thereby contributing to the resolution of a paradox in the scientific literature on two-phase flows. TPLS has also revealed in detail the flow features responsible for the operational limits of distillation, reaction and absorption columns used widely in the chemical industry. Furthermore, TPLS has helped explain the nature of thermocapillary instabilities that can alter the efficiency of refrigerants and other heat transfer fluids.

[Image: Distillation/absorption columns in chemical plants]

Computational Achievements

By moving from a simple master-gather I/O algorithm to parallel-NetCDF for all file output and checkpointing, the code now scales to over 3000 cores, with excellent strong scaling up to 1536 cores; comparing the same simulation on one node (24 cores) and on 36 nodes (864 cores) gives a speedup of 26x at an efficiency of 72%.
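As a minimal sketch of the technique (not taken from the TPLS source; the grid sizes, slab decomposition and file/variable names are invented for this example), a collective parallel-NetCDF write in C, in which every MPI rank outputs only its own portion of a 3D field, looks roughly like this:

    /* Sketch: collective parallel-NetCDF output of one 3D field, with each
     * MPI rank writing only its own z-slab into a single shared file.
     * Grid sizes and names are assumptions; error checking is omitted. */
    #include <mpi.h>
    #include <pnetcdf.h>
    #include <stdlib.h>

    #define NX 256
    #define NY 128
    #define NZ 512                      /* global grid dimensions (assumed) */

    int main(int argc, char **argv)
    {
        int rank, nprocs, ncid, dimids[3], varid;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each rank owns a contiguous block of NZ/nprocs planes
         * (assume nprocs divides NZ exactly for this sketch). */
        MPI_Offset nz_local = NZ / nprocs;
        MPI_Offset start[3] = { rank * nz_local, 0, 0 };
        MPI_Offset count[3] = { nz_local, NY, NX };

        /* Dummy data standing in for a real field such as the level-set phi. */
        double *phi = malloc((size_t)(nz_local * NY * NX) * sizeof(double));
        for (MPI_Offset i = 0; i < nz_local * NY * NX; i++)
            phi[i] = (double)rank;

        /* All ranks define the file, dimensions and variable collectively... */
        ncmpi_create(MPI_COMM_WORLD, "output.nc", NC_CLOBBER | NC_64BIT_DATA,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "z", NZ, &dimids[0]);
        ncmpi_def_dim(ncid, "y", NY, &dimids[1]);
        ncmpi_def_dim(ncid, "x", NX, &dimids[2]);
        ncmpi_def_var(ncid, "phi", NC_DOUBLE, 3, dimids, &varid);
        ncmpi_enddef(ncid);

        /* ...then each rank writes its slab in one collective call, instead
         * of gathering the whole field onto a single master process. */
        ncmpi_put_vara_double_all(ncid, varid, start, count, phi);
        ncmpi_close(ncid);

        free(phi);
        MPI_Finalize();
        return 0;
    }

The key point is that no process ever holds the whole field: the start/count arguments describe each rank's portion, and the parallel-NetCDF and MPI-IO layers assemble the shared file.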

Previously, using an ASCII format for the output data, a job on a grid of the size used above generated 1.5 GB per output operation (i.e. writing the (x,y,z) coordinates, the (u,v,w) velocity components and the phi level-set function every 1,000 iterations). Using parallel I/O and the NetCDF format, the code generates 225 MB per output operation, a reduction in size by a factor of 6.67. For the same output volume as before the eCSE work, this gives us the opportunity to reach significantly higher Reynolds numbers and to include additional physics such as phase change, both of which require more grid points.

Reducing the space required to store these output files by a factor of more than six also reduces the likelihood that data will have to be discarded to stay within storage limits.

The code has also been refactored so that future development work will be significantly quicker and easier to debug, and it now provides more runtime options instead of relying on extensive use of hard-coded parameters.
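As a purely illustrative sketch of what replacing hard-coded parameters with runtime options can look like (the file name, keys and struct below are invented for this example and are not TPLS's actual input mechanism):

    /* Sketch: read key=value options such as "reynolds=200.0" from a small
     * text file at start-up, rather than compiling the values into the code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct params { double reynolds; int nx; };

    static void read_params(const char *path, struct params *p)
    {
        char key[64], val[64];
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); exit(EXIT_FAILURE); }
        /* One "key=value" pair per line. */
        while (fscanf(f, " %63[^=]=%63s", key, val) == 2) {
            if      (strcmp(key, "reynolds") == 0) p->reynolds = atof(val);
            else if (strcmp(key, "nx") == 0)       p->nx = atoi(val);
        }
        fclose(f);
    }

    int main(void)
    {
        struct params p = { 100.0, 256 };      /* defaults */
        read_params("tpls.in", &p);            /* hypothetical input file */
        printf("Re = %g, nx = %d\n", p.reynolds, p.nx);
        return 0;
    }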

Financial Benefits

Before this eCSE work, the speedup for large jobs was sub-linear (exponent = 0.6) up to 1024 cores, a limit caused purely by the serial I/O.

After the eCSE work, which implemented parallel I/O, we have recorded super-linear speedup (exponent = 1.2) up to at least 576 cores and linear speedup up to 1152 cores. This represents a substantial financial saving, enabling us to run significantly larger jobs for much lower AU consumption than before. As an example, for a typical run (1200 tasks, 22 million grid points) we estimate AU savings of around 30% compared with what was used before this eCSE work. This means that we can now afford runs with more complex physics than ever before, accelerating the ongoing research.
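To put the exponents in concrete terms, they can be read as the power alpha in a speedup law fitted to the measurements; this power-law reading, and the single-node baseline of 24 cores used below, are our interpretation of the figures quoted above:

    S(p) \approx (p/p_0)^{\alpha}, \quad p_0 = 24 \text{ (one node)}
    \alpha = 0.6:\quad S(1024) \approx (1024/24)^{0.6} \approx 9.5
    \alpha = 1.0:\quad S(1152) \approx 1152/24 = 48
    \alpha = 1.2:\quad S(576) \approx (576/24)^{1.2} \approx 45

Higher speedup at the same core count means that each allocation unit buys more simulated time, which is where the AU savings quoted above come from.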

Over the next year we expect to spend 25 MAU on TPLS and related calculations, at a cost of £14,000. The savings also carry through to staff time (post-doc/PhD student): whereas a typical droplet evaporation simulation previously took at least two months of work from set-up to complete analysis, we estimate that it would now take just over a month, a time-saving of just under 50%.

Scientific Benefits

This improvement not only reduces the resources required for current simulations, but also enables simulations on a much larger scale than was previously possible, both in spatial extent and in resolution. Indeed, the improvements in the code's performance and scalability mean that TPLS edges closer to being able to simulate fully turbulent flow regimes, precisely the flow regimes of interest in the oil and gas industries. Accurate modelling and simulation of such regimes and their stability is crucial to flow assurance, and TPLS will be able to make a contribution here.

[Image: Oil/gas pipelines overland]

Also of great interest to the microelectronics industry is accurate quantification of the cooling required, which is essential to extend the lifetimes of faster processors. This can now be estimated by rigorous TPLS calculations, made possible by the optimisation carried out under the eCSE programme. Furthermore, following the eCSE project, we are able to study phase change under both terrestrial and microgravity conditions. This capability is unprecedented for complex gas-liquid systems that simultaneously undergo phase change; until now, estimates were based on purely empirical calculations. The current simulation results will be discussed directly with companies such as Selex Galileo and Thermacore, who develop cooling systems for space and ground applications.
