Developing Massively Parallel ISPH with Complex Boundary Geometries

eCSE06-09

Key Personnel

PI/Co-Is: Dr. Benedict Rogers, Prof. Peter Stansby, Dr. Mike Ashworth - University of Manchester, Dr. Xiaohu Guo - STFC

Technical: Dr. Xiaohu Guo - STFC

Relevant documents

eCSE Technical Report: Developing Massively Parallel ISPH with Complex Boundary Geometries

Project summary

Project description

The incompressible Smoothed Particle Hydrodynamics (ISPH) method with projection-based pressure correction has become very attractive due to its high accuracy and stability for both internal and free-surface flows. The main aim of this project is to develop new functionality that enables the simulation of multiple floating bodies with very large numbers of particles using tens of thousands of cores. Floating bodies/objects in real applications normally have complex geometries. Building on recent theoretical developments at the University of Manchester, these problems require new, robust boundary methods to be implemented in the current parallel ISPH framework. The following list highlights the major developments:

1. Preconditioning the particle data to exploit data locality and improve the extensibility of the ISPH software for new functionality. In practice, an SPH simulation generally involves several types of particles, such as fluid particles and various kinds of boundary particles. This work package redesigned the data structure used for the particle nearest-neighbor list search, reduced the memory footprint by not storing the neighbor list, and reduced the neighbor loops to a minimum of four loops. Significant development effort was spent removing conditional branching in the main loop by using the newly implemented preconditioned dynamic vector (see the technical report for details; a minimal sketch of the underlying type-grouping idea is given after this list).

2. Parallel implementation of the multiple boundary tangent (MBT) solid boundary method. This involves the implementation of three major new algorithms:

a. Triangle intersection to detect whether particles lie inside or outside the solid geometry.

b. Validation kernel for the normal direction of solid-surface particles.

c. Ray-tracing algorithm for solid-object surface triangulation.

3. Parallel implementation of the local uniform stencil (LUST) method.

a. Implemented the point-in-polygon algorithm to decide whether a particle lies inside, outside, or on the surface of the solid object.

b. Reduced the memory footprint of the LUST boundary kernel by dynamically generating and calculating LUST boundary particles instead of storing them for each fluid particle.

4. The code has been benchmarked on the UK National Supercomputing Platform ARCHER with up to 12,288 cores.
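To make the role of the preconditioned dynamic vector in point 1 concrete, the following C++ fragment is a minimal, hypothetical sketch (Particle, groupByType and kNumTypes are illustrative names, not the project's actual code) of grouping particles by type into contiguous ranges with a start-offset array; it is this layout that allows per-particle type branches to be lifted out of the main loop.

    #include <array>
    #include <cstddef>
    #include <vector>

    // Hypothetical particle record; the real ISPH code carries many more fields.
    struct Particle {
        int type;          // e.g. 0 = fluid, 1 = solid boundary, 2 = mirror
        double x, y, z;
    };

    constexpr int kNumTypes = 3;   // illustrative only

    // Reorder particles so that each type occupies a contiguous range and
    // return the starting index of each range (offset[t] .. offset[t + 1]).
    std::array<std::size_t, kNumTypes + 1>
    groupByType(std::vector<Particle>& particles)
    {
        std::array<std::size_t, kNumTypes + 1> offset{};
        for (const Particle& p : particles) ++offset[p.type + 1];
        for (int t = 0; t < kNumTypes; ++t) offset[t + 1] += offset[t];

        std::vector<Particle> sorted(particles.size());
        std::array<std::size_t, kNumTypes> next{};
        for (int t = 0; t < kNumTypes; ++t) next[t] = offset[t];
        for (const Particle& p : particles) sorted[next[p.type]++] = p;

        particles.swap(sorted);
        return offset;   // offset[t] is the "starting address" of particle type t
    }

With such offsets available, a loop over fluid particles only runs over the contiguous range offset[0] .. offset[1], with no per-particle type test inside the inner loop.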

Achievement of objectives

Task 1: Preconditioning the particle data to exploit data locality and improve ISPH software extensibility. (Total 2 months' effort, spent over a 3.5-month period.) In this task, we implemented the following functions:

  • Particle preconditioning kernel: divides the different types of particles into cells; compared with the dynamic vector approach, we added one additional array to record the starting address of each particle type.
  • Improved the nearest-neighbor list search kernel by looping through each cell's neighboring cells instead of storing each particle's neighbor list, which resolves the large memory footprint issue (a minimal sketch is given after this list).
  • Updated and optimized the halo exchange kernel to account for the rearranged data layout introduced above.
  • Updated the momentum equation solver, removed unnecessary “IF branches” used for the special treatment of boundary particles and for velocity updates applied only to particular particle types, and verified the results with dam-break and still-water cases with up to 100 million particles using up to 12,288 MPI partitions.
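The cell-based search in the second bullet can be sketched as follows; this is a hypothetical illustration (CellGrid, forEachNeighbor and the flat cell indexing are assumptions, not the project's actual kernel). Particles are binned into cells whose edge equals the kernel support radius, and candidate neighbors are found by visiting the 27 surrounding cells on the fly, so no per-particle neighbor list is ever stored.

    #include <array>
    #include <cstddef>
    #include <vector>

    // Hypothetical flat cell grid: particles are presorted by cell so that
    // cellStart[c] .. cellStart[c + 1] indexes the particles living in cell c.
    struct CellGrid {
        double h;                              // cell edge = kernel support radius
        int nx, ny, nz;                        // number of cells per direction
        std::vector<std::size_t> cellStart;    // size nx*ny*nz + 1
        std::vector<std::size_t> particleIdx;  // particle ids sorted by cell
    };

    // Visit every particle within distance h of point p by scanning the 27
    // surrounding cells; no neighbor list is ever built or stored.
    template <class Visitor>
    void forEachNeighbor(const CellGrid& g,
                         const std::array<double, 3>& p,
                         const std::vector<std::array<double, 3>>& pos,
                         Visitor visit)
    {
        const int cx = static_cast<int>(p[0] / g.h);
        const int cy = static_cast<int>(p[1] / g.h);
        const int cz = static_cast<int>(p[2] / g.h);
        for (int dz = -1; dz <= 1; ++dz)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    const int ix = cx + dx, iy = cy + dy, iz = cz + dz;
                    if (ix < 0 || iy < 0 || iz < 0 ||
                        ix >= g.nx || iy >= g.ny || iz >= g.nz)
                        continue;
                    const std::size_t c =
                        static_cast<std::size_t>((iz * g.ny + iy) * g.nx + ix);
                    for (std::size_t k = g.cellStart[c]; k < g.cellStart[c + 1]; ++k) {
                        const std::size_t j = g.particleIdx[k];
                        const double rx = pos[j][0] - p[0];
                        const double ry = pos[j][1] - p[1];
                        const double rz = pos[j][2] - p[2];
                        if (rx * rx + ry * ry + rz * rz <= g.h * g.h)
                            visit(j);   // j is within the support radius of p
                    }
                }
    }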

Task 2: Parallel implementation of the multiple boundary tangent method (MBT) for complex boundary geometries. (Total 5 months' effort, spent over an 8.5-month period.)

  1. Implemented the triangle intersection kernel, which is used to identify fluid particles near the solid-object surface (one possible inside/outside test is sketched after this list).
  2. Normal direction calculation kernel: as the STL file already provides the normal direction, instead of implementing a kernel to calculate the normal direction we implemented a kernel to validate the normal direction of solid-surface particles.
  3. Implemented the mirror-particle generation kernel for solid objects and tested it with the dam-break-with-cylinder test cases.
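One common way to implement a triangle-intersection inside/outside test against the STL facets is ray casting with the Moeller-Trumbore ray/triangle test; the C++ sketch below is a hypothetical illustration under that assumption (rayHitsTriangle and insideSolid are made-up names), not the project's implementation.

    #include <array>
    #include <cmath>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    static Vec3 sub(const Vec3& a, const Vec3& b) {
        return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]};
    }
    static double dot(const Vec3& a, const Vec3& b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    struct Triangle { Vec3 v0, v1, v2; };   // one facet of the STL surface

    // Moeller-Trumbore test: does the ray (origin, dir) hit the triangle at t > 0?
    bool rayHitsTriangle(const Vec3& origin, const Vec3& dir, const Triangle& tri)
    {
        const double eps = 1e-12;
        const Vec3 e1 = sub(tri.v1, tri.v0);
        const Vec3 e2 = sub(tri.v2, tri.v0);
        const Vec3 pvec = cross(dir, e2);
        const double det = dot(e1, pvec);
        if (std::fabs(det) < eps) return false;      // ray parallel to triangle plane
        const double invDet = 1.0 / det;
        const Vec3 tvec = sub(origin, tri.v0);
        const double u = dot(tvec, pvec) * invDet;
        if (u < 0.0 || u > 1.0) return false;
        const Vec3 qvec = cross(tvec, e1);
        const double v = dot(dir, qvec) * invDet;
        if (v < 0.0 || u + v > 1.0) return false;
        return dot(e2, qvec) * invDet > eps;         // intersection in front of origin
    }

    // Parity (ray-casting) test: a particle is inside a closed triangulated
    // surface if a ray cast from it crosses the surface an odd number of times.
    bool insideSolid(const Vec3& particle, const std::vector<Triangle>& surface)
    {
        const Vec3 dir = {1.0, 0.0, 0.0};   // arbitrary fixed ray direction
        int crossings = 0;
        for (const Triangle& tri : surface)
            if (rayHitsTriangle(particle, dir, tri)) ++crossings;
        return (crossings % 2) == 1;
    }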
Task 3: Parallel implementation of the local uniform stencil method (LUST). (Total 4 months' effort, spent over a 6.5-month period.)

  1. Implemented the surface triangulation kernel.
  2. Implemented the point-in-polygon algorithm to decide whether a particle lies inside, outside, or on the surface of the solid object (a minimal sketch of this test is given below, after Task 4).
  3. Reduced the memory footprint of the LUST boundary kernel by dynamically generating and calculating LUST boundary particles instead of storing them for each fluid particle.

Task 4: Application enabling and demonstration. We tested the code with the dam-break-with-cylinder-obstacle test case, carried out benchmarking, wrote the reports, and submitted two conference papers and one journal paper.
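As a minimal illustration of the point-in-polygon idea, the following hypothetical 2D crossing-number sketch (pointInPolygon and Location are illustrative names; the project's kernel applies the idea to the triangulated solid surface rather than a simple 2D polygon) classifies a point as inside, outside, or on the boundary.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point2 { double x, y; };

    enum class Location { Inside, Outside, OnBoundary };

    // Crossing-number (even-odd) test for a point against a simple polygon
    // given as an ordered list of vertices.
    Location pointInPolygon(const Point2& p, const std::vector<Point2>& poly,
                            double tol = 1e-12)
    {
        int crossings = 0;
        const std::size_t n = poly.size();
        for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
            const Point2& a = poly[j];
            const Point2& b = poly[i];
            // On-boundary check: p is collinear with edge a-b and inside its box.
            const double crossAB =
                (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
            if (std::fabs(crossAB) < tol &&
                p.x >= std::fmin(a.x, b.x) - tol && p.x <= std::fmax(a.x, b.x) + tol &&
                p.y >= std::fmin(a.y, b.y) - tol && p.y <= std::fmax(a.y, b.y) + tol)
                return Location::OnBoundary;
            // Does the horizontal ray from p towards +x cross edge a-b?
            if ((a.y > p.y) != (b.y > p.y)) {
                const double xCross = a.x + (p.y - a.y) * (b.x - a.x) / (b.y - a.y);
                if (xCross > p.x) ++crossings;
            }
        }
        return (crossings % 2) ? Location::Inside : Location::Outside;
    }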

Summary of the Software

The ISPH software packages use Git for software revision control. The ISPH software development repositories are currently held on Bitbucket. Once granted access permission, ARCHER users can easily use Git to check out the master branch. We have created a wiki page for the ISPH project describing how to build and contribute to the ISPH software packages.
