Investigating Scalability and Performance of MPAS Atmosphere Model

Model for Prediction Across Scales (MPAS)

Project reference: 2113

Exploiting renewable energy sources is critical for addressing climate change in a timely manner and for transitioning towards a sustainable zero-carbon economy. The MPAS Atmosphere model, developed by the US National Center for Atmospheric Research, can be used to model the atmosphere given a set of initial and boundary conditions, both globally and for a particular geographic region of interest. One important use case is estimating the available renewable resources, e.g. assessing the suitability of a specific region for wind or solar farm deployment.
MPAS is designed with parallelism in mind, and the aim of this project is to assess its scalability and performance on ARCHER2, the new UK National Supercomputer. To this end, the participants will devise and perform a set of experiments using increasingly fine input meshes for different geographical regions of interest and increasing numbers of processes. Process placement is also likely to influence performance due to the Non-Uniform Memory Access (NUMA) architecture of the system, and will be investigated as well. Ultimately, the project aims to explore the scalability limits of MPAS, and the results will identify the main factors that constrain scaling.
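At its core, the scalability assessment above amounts to measuring wall-clock times at increasing process counts and deriving speedup and parallel efficiency from them. As a minimal illustrative sketch (the helper function and all timing values below are invented for illustration, not project results), strong-scaling metrics could be computed like this:

```python
# Sketch: strong-scaling speedup and parallel efficiency from
# wall-clock timings. All timing values are hypothetical.

def scaling_metrics(timings):
    """timings: dict mapping process count -> runtime in seconds.

    Returns a dict mapping process count -> (speedup, efficiency),
    relative to the smallest process count measured (a common choice
    when a single-process run is infeasible for large meshes)."""
    base_procs = min(timings)
    base_time = timings[base_procs]
    metrics = {}
    for procs, runtime in sorted(timings.items()):
        speedup = base_time / runtime
        # Efficiency compares achieved speedup to the ideal linear
        # speedup for that increase in process count.
        efficiency = speedup / (procs / base_procs)
        metrics[procs] = (speedup, efficiency)
    return metrics

# Hypothetical timings for one mesh (seconds per simulated day)
timings = {128: 1000.0, 256: 520.0, 512: 280.0, 1024: 170.0}
for procs, (s, e) in scaling_metrics(timings).items():
    print(f"{procs:5d} procs: speedup {s:5.2f}, efficiency {e:4.2f}")
```

Plotting efficiency against process count for each mesh resolution would then make the scaling limit visible as the point where efficiency drops off.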
The results will be communicated via a project report and a blog post.

Project Mentor: Dr. Evgenij Belikov

Project Co-mentor: Dr. Mario Antonioletti

Site Co-ordinator: Catherine Inglis

Participants: Jonas Alexander Eschenfelder, Carla Nicolin Schoder

Learning Outcomes:
The student(s) will gain hands-on experience with running MPAS simulations; gain proficiency in using an HPC system to assess the scalability and performance of a production code, including the use of profiling tools to identify potential bottlenecks; and improve their communication skills by visualising the results, writing up a report, and summarising the outcomes in a blog post. This project is also a great opportunity to establish contacts with Earth System Science researchers and HPC experts at the University of Edinburgh, the Edinburgh Centre for Carbon Innovation, and EPCC.

Student Prerequisites (compulsory):

  • familiarity with one programming language
  • working knowledge in a Linux environment
  • commitment to complete the ARCHER2 Driving Test before the start of the project

Student Prerequisites (desirable):

  • proficiency in Fortran/C programming
  • proficiency in bash/Python scripting
  • knowledge of parallel programming with MPI
  • knowledge of HPC systems and architectures
  • familiarity with profiling and debugging tools
  • domain knowledge in Earth System Science, Renewable Energy and/or Atmosphere Modelling
  • experience in visualisation

Training Materials:

Project timeline by week:

  1. Training week
  2. Planning and experimental design
  3. Test runs and project plan submission
  4. Initial runs (small and medium inputs)
  5. Further runs (large inputs)
  6. Further runs (contd) and post-processing
  7. Visualisation and report draft
  8. Report completion and submission

The work plan will be adapted for multiple students during week two by splitting the experiments to be performed accordingly (see below). In particular, we may compare two different HPC systems, investigate different compilers and their settings, or use meshes centred on different geographical regions.

Final Product Description:
The results will provide insight into MPAS scalability and performance on ARCHER2, which is of particular interest given the system's deep NUMA hierarchy and runs using over 10,000 cores. Depending on progress, the simulation outcomes may also be useful for predicting the suitability of a chosen region for wind or solar farm deployment.

Adapting the Project: Increasing the Difficulty:
To increase the difficulty, more in-depth experiments can be performed, including varying compilers and compiler flags (e.g. vectorisation), using additional meshes (centred on different regions), profiling using hardware performance counters, and investigating experimental GPU support.

Adapting the Project: Decreasing the Difficulty:
To decrease the difficulty, some of the existing model setups could be re-used. Additionally, the number of experiments could be limited, e.g. to a single compiler and HPC system, using a single mesh and/or focusing on only one geographical region.

Resources Needed:

  • laptop and stable internet connection
  • HPC cluster access and budget
