Scaling the Dissipative Particle Dynamics (DPD) code, DL_MESO, on large multi-GPGPU architectures

Project reference: 2016

DPD is a stochastic particle method for mesoscale simulations of complex fluids. It is based on a coarse-grained approximation of the molecular structure of soft materials, with beads that can represent large agglomerates or unions of simpler molecules such as water. Compared to classical Molecular Dynamics solvers, this approach avoids extremely small time and length scales while retaining the intrinsically discrete nature of matter. However, realistic applications often require a very large number of beads to correctly capture the physics involved, hence the need to scale to very large systems on the latest hybrid CPU-GPU architectures.
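
To illustrate the method, the sketch below shows the standard DPD pairwise interaction, in which each bead pair within a cut-off radius exchanges a conservative, a dissipative and a random force. It is a minimal CUDA kernel written for clarity, not an excerpt of DL_MESO: the all-pairs loop, the bead structure and the pre-generated symmetric noise array are illustrative assumptions, whereas a production code would use cell/neighbour lists and an on-the-fly random number generator.

```cuda
#include <cuda_runtime.h>
#include <math.h>

struct Bead { float x, y, z, vx, vy, vz; };

// Pairwise DPD forces: F_ij = F^C + F^D + F^R inside the cut-off rc.
// All-pairs O(N^2) loop kept for readability; theta is assumed to hold
// symmetric, pre-generated Gaussian noise (theta_ij == theta_ji).
__global__ void dpd_forces(const Bead* b, float3* f, int n,
                           float a, float gam, float sigma,
                           float rc, float inv_sqrt_dt, const float* theta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = b[i].x - b[j].x;
        float dy = b[i].y - b[j].y;
        float dz = b[i].z - b[j].z;
        float r  = sqrtf(dx*dx + dy*dy + dz*dz);
        if (r >= rc || r < 1e-6f) continue;

        float ex = dx / r, ey = dy / r, ez = dz / r;          // unit vector e_ij
        float w  = 1.f - r / rc;                              // weight w^R(r); w^D = w^2
        float ev = ex*(b[i].vx - b[j].vx)                     // e_ij . v_ij
                 + ey*(b[i].vy - b[j].vy)
                 + ez*(b[i].vz - b[j].vz);

        float fc = a * w;                                     // conservative force
        float fd = -gam * w * w * ev;                         // dissipative force
        float fr = sigma * w * theta[i*n + j] * inv_sqrt_dt;  // random force

        float fpair = fc + fd + fr;
        fx += fpair * ex; fy += fpair * ey; fz += fpair * ez;
    }
    f[i] = make_float3(fx, fy, fz);
}
```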

The focus of this work will be on benchmarking and optimizing, on novel supercomputers, an existing multi-GPU version of the DL_MESO (DPD) code, a discrete particle solver for mesoscale simulation of complex fluids. The code has currently been tested on up to 4096 GPUs and needs further development to scale well on larger systems and to improve its performance on a single GPU. So far it has been tested only on cases of moderate complexity, such as flow between surfaces, vapour-liquid systems and water droplet phenomena.

As a minimum, the student will benchmark the current version, address the factors that currently limit scaling on large supercomputers, and run performance analyses to identify possible bottlenecks and corresponding solutions for speedup. Depending on their experience, further improvements on the HPC side as well as new features for more complex physics could be added. In particular, the student will focus on porting the constant-pressure barostat options to the multi-GPU CUDA version of DL_MESO.
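
As a rough orientation for the barostat task, the sketch below shows one possible building block of an isotropic constant-pressure scheme: a Berendsen-style rescaling of bead coordinates by a factor computed from the measured and target pressures. It is a hypothetical illustration, not DL_MESO's actual barostat; the kernel name, arguments and host-side formula are assumptions.

```cuda
#include <cuda_runtime.h>

// Isotropic Berendsen-style rescaling: every bead coordinate (and, on the
// host, the box lengths) is multiplied by the same factor mu each timestep.
__global__ void rescale_positions(float3* r, int n, float mu)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        r[i].x *= mu;
        r[i].y *= mu;
        r[i].z *= mu;
    }
}

// Host side (illustrative): mu = cbrtf(1.0f - beta * dt / tau_p * (P0 - P)),
// where P is the measured virial pressure, P0 the target pressure, tau_p the
// coupling time and beta the isothermal compressibility. In a multi-GPU run
// every rank must apply the identical mu so the decomposed domains stay consistent.
```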

Project Mentor: Jony Castagna

Project Co-mentor: Vassil Alexandrov

Site Co-ordinator: Luke Mason

Participants: Davide Crisante, Nursima ÇELİK

Learning Outcomes:
The student will learn to benchmark, profile and modify multi-GPU code written mainly in Fortran and CUDA, following the typical domain decomposition implemented with MPI libraries. They will also gain a basic understanding of the DPD methodology and its impact on mesoscale simulations, as well as familiarity with proper software development practice using version control software, an IDE and tools for parallel profiling on GPUs.
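
As an example of the kind of domain-decomposition code involved, the sketch below outlines a halo exchange of boundary-bead coordinates between neighbouring ranks in a simple 1-D slab decomposition, assuming a CUDA-aware MPI build so that device pointers can be passed to MPI directly. The function and buffer names are illustrative and do not correspond to DL_MESO's actual routines.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Exchange n_halo boundary-bead coordinates with the left and right
// neighbouring ranks. The buffers live in device memory; a non-CUDA-aware
// MPI build would instead stage them through host buffers with cudaMemcpy.
void exchange_halo(float3* d_send_left,  float3* d_recv_right,
                   float3* d_send_right, float3* d_recv_left,
                   int n_halo, int left, int right, MPI_Comm comm)
{
    // Send my left boundary to the left neighbour, receive the right halo
    // (the left boundary of the right neighbour).
    MPI_Sendrecv(d_send_left,  3 * n_halo, MPI_FLOAT, left,  0,
                 d_recv_right, 3 * n_halo, MPI_FLOAT, right, 0,
                 comm, MPI_STATUS_IGNORE);

    // Send my right boundary to the right neighbour, receive the left halo.
    MPI_Sendrecv(d_send_right, 3 * n_halo, MPI_FLOAT, right, 1,
                 d_recv_left,  3 * n_halo, MPI_FLOAT, left,  1,
                 comm, MPI_STATUS_IGNORE);
}
```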

Student Prerequisites (compulsory):
A good knowledge of Fortran, MPI and CUDA programming is required, as well as of parallel programming for distributed memory.

Student Prerequisites (desirable):
Some skill in developing mixed Fortran/CUDA code will be an advantage, as will experience in multi-GPU programming using CUDA/MPI.

Training Materials:
These can be tailored to the student once he/she is selected.

Workplan:

  • Week 1: Training week
  • Week 2: Literature review and preliminary report (plan writing)
  • Weeks 3–7: Project development
  • Week 8: Final report write-up

Final Product Description:
The final product will be an internal report, convertible into a conference or, better, a journal paper, together with an improved version of the multi-GPU DL_MESO code and benchmarks comparing it against the current version.

Adapting the Project: Increasing the Difficulty:
The project is at an appropriate level of difficulty, taking into account the timeframe and the need to deliver a final working product and one report.

Adapting the Project: Decreasing the Difficulty:
The topic will be researched and the final product designed in full, but some features may be left undeveloped in order to ensure a working product, with a reduced feature set, at the end of the project.

Resources:
The student will need access to a multi-GPU machine and standard computing resources (laptop, internet connection).

Organisation:
Hartree Centre – STFC
