Eigenvalue Problem in an HPC Environment? That sounds easy!

[Figure: Simulation of the same problem using two different time steps – the first simulation is stable, whereas the second (with a too-large time step) quickly becomes unstable.]

In my previous blog post, I gave you a quick crash course on how to simulate seismic wave propagation in an HPC environment. This is done through semi-discretization, meaning that we discretize the spatial variables using a form of the Finite Element Method called the Spectral Element Method (SEM), which reduces the problem to a (simpler) ordinary differential equation in time.

To solve the ODE, we use a time-stepping scheme, where we encounter some problems with stability. This issue can be solved by calculating the largest eigenvalue of the matrix M⁻¹K obtained from the SEM formulation. This may sound straightforward, but as always when it comes to HPC, there are a few important aspects we need to take into account, which I will discuss in this blog post!
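(To make this concrete: for an explicit central-difference scheme, for example, the stable time step typically satisfies Δt ≤ 2/√λmax, where λmax is the largest eigenvalue of M⁻¹K; the exact constant depends on the chosen scheme. So one well-chosen eigenvalue tells us how small the time step has to be.)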

The Problem with Matrix Representations

There exist several numerical methods that solve the eigenvalue problem (power iteration, the QR algorithm, the Jacobi eigenvalue algorithm). In practice, however, we would typically like to use an existing library such as LAPACK, Eigen, or SciPy. Not only will these spare us tedious hours of implementing our own method, but they have also been highly optimized to an extent that our homemade implementations won’t be able to compete with. Unfortunately, the above-mentioned libraries aren’t very appropriate for HPC applications.
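Just to give an idea of what these iterative methods do under the hood, here is a bare-bones sketch of the simplest one, power iteration, which approximates the largest-magnitude eigenvalue of a dense matrix. It is for intuition only; as argued above, a real application should rely on an optimized library instead.

    ! For illustration only: approximate the largest-magnitude eigenvalue
    ! of a dense matrix A by repeatedly applying it to a vector.
    function power_iteration(n, a, niter) result(lambda)
       implicit none
       integer, intent(in)          :: n, niter
       double precision, intent(in) :: a(n, n)
       double precision             :: lambda
       double precision             :: x(n), y(n)
       integer :: k
       x = 1.0d0
       x = x / sqrt(sum(x*x))            ! start from a normalized guess
       do k = 1, niter
          y = matmul(a, x)               ! apply the matrix
          lambda = sqrt(sum(y*y))        ! ||A x|| -> |lambda_max| as x converges
          x = y / lambda                 ! renormalize for the next iteration
       end do
    end function power_iteration

Note that the only thing the algorithm really needs from the matrix is the ability to multiply it with a vector – a point that will become important below.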

The problem with most existing libraries is that they require an explicitly defined matrix: either a 2D array or a library-specific data structure. It might sound strange that we'd even consider storing our matrices any other way, but in HPC it's not very common to store full matrices. One of the biggest reasons for this is that most HPC applications work with large sparse matrices, meaning that most matrix elements are zeros. If we're working with a 5×5 matrix, storing a few zeros might not be a problem, but for larger matrices (say 100 000×100 000), those zeros represent a lot of useless data and wasted memory. Memory usage is a very important aspect of HPC, as memory I/O is one of the slowest operations and greatest bottlenecks in most programs, so never ask an HPC coder to switch to a full matrix representation!
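As a generic illustration of what "not storing the zeros" can look like, here is a sketch of the widely used compressed sparse row (CSR) format and the corresponding matrix-vector product. This is a textbook example, not the representation SEM3D actually uses.

    ! Generic CSR storage: only the nnz nonzero entries are kept.
    !   val(nnz)    - nonzero values, stored row by row
    !   col(nnz)    - column index of each nonzero
    !   rowptr(n+1) - position in val/col where each row starts
    subroutine csr_matvec(n, rowptr, col, val, x, y)
       implicit none
       integer, intent(in)           :: n
       integer, intent(in)           :: rowptr(n+1), col(*)
       double precision, intent(in)  :: val(*), x(n)
       double precision, intent(out) :: y(n)
       integer :: i, j
       do i = 1, n
          y(i) = 0.0d0
          do j = rowptr(i), rowptr(i+1) - 1
             y(i) = y(i) + val(j) * x(col(j))   ! only nonzeros contribute
          end do
       end do
    end subroutine csr_matvec

For a 100 000×100 000 matrix with, say, 50 nonzeros per row, this stores about 5 million values (plus the index arrays) instead of 10 billion.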

Does this really matter to us? Of course!

The wave propagation solver that my project revolves around, SEM3D, is no exception when it comes to unconventional matrix representations. When choosing an eigenvalue library for my project, there were two important aspects to take into consideration:

  1. SEM3D is a large software project with over 650 000 lines of highly optimized HPC code. Adapting the existing implementation is out of the question, so my solution to this problem must be compatible with the existing code (mainly the data structures and communication interfaces).
  2. All the essential code is written in Fortran 90, so both my solution and the chosen library must be usable from Fortran.

[Figure: Example of alternative matrix representations in HPC applications, specifically in the wave propagation software SEM3D.]

FORTRAN fun fact for electronics nerds like myself: FORTRAN has been around since 1957 and was first run on vacuum-tube computers – the predecessors of the modern-day transistor-based computers we know today!


Not All Heroes Wear Capes: Introducing ARPACK!

The magic component in this entire project is ARPACK, the amazing library that fulfills both of our criteria in an elegant way. ARPACK is written in FORTRAN 77 and was developed throughout the 90s to solve large-scale eigenvalue problems without requiring an explicit matrix representation. The library implements the Implicitly Restarted Arnoldi Method and is used under the hood by many popular software packages and computing environments such as SciPy, MATLAB, and R.

One of the most important features of the ARPACK library is its Reverse Communication Interface, which means that the library never acts directly on the matrix. Instead, it asks the user to implement a matrix-vector product that is called at each iteration step, using data from the previous step:

[Figure: Pseudocode of the reverse communication interface used by the ARPACK eigenvalue solver.]
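To give a flavour of what that loop looks like in practice, here is a minimal sketch built around ARPACK's symmetric driver dsaupd, assuming the library is linked in. A toy diagonal operator stands in for the real matrix-vector product, and the workspace sizes follow the ARPACK example drivers; this is not SEM3D code.

    ! Minimal sketch of ARPACK's reverse communication loop (symmetric
    ! driver dsaupd). The operator here is a toy diagonal matrix; in a
    ! real solver the callback would apply M⁻¹K.
    program arpack_rci_sketch
       implicit none
       integer, parameter :: n = 100, nev = 4, ncv = 20, ldv = n
       integer            :: ido, info, ierr, lworkl
       integer            :: iparam(11), ipntr(11)
       logical            :: select(ncv)
       double precision   :: tol, resid(n), v(ldv, ncv)
       double precision   :: workd(3*n), workl(ncv*(ncv+8)), d(ncv, 2)

       lworkl    = ncv*(ncv + 8)
       tol       = 0.0d0      ! 0 means "use machine precision"
       ido       = 0
       info      = 0          ! let ARPACK pick a random starting vector
       iparam(1) = 1          ! use exact shifts
       iparam(3) = 300        ! maximum number of Arnoldi iterations
       iparam(7) = 1          ! mode 1: standard eigenvalue problem

       do
          call dsaupd(ido, 'I', n, 'LM', nev, tol, resid, ncv, v, ldv, &
                      iparam, ipntr, workd, workl, lworkl, info)
          if (ido == -1 .or. ido == 1) then
             ! ARPACK hands us x and asks for y = OP*x;
             ! it never sees the matrix itself
             call apply_operator(n, workd(ipntr(1)), workd(ipntr(2)))
          else
             exit              ! converged or failed: info tells us which
          end if
       end do

       ! Extract the converged eigenvalues (eigenvectors not requested here)
       call dseupd(.false., 'A', select, d, v, ldv, 0.0d0, 'I', n, 'LM', &
                   nev, tol, resid, ncv, v, ldv, iparam, ipntr, workd,   &
                   workl, lworkl, ierr)
       print *, 'largest eigenvalues:', d(1:nev, 1)

    contains

       subroutine apply_operator(m, x, y)
          integer, intent(in)           :: m
          double precision, intent(in)  :: x(m)
          double precision, intent(out) :: y(m)
          integer :: i
          do i = 1, m
             y(i) = dble(i) * x(i)   ! toy diagonal operator, eigenvalues 1..m
          end do
       end subroutine apply_operator

    end program arpack_rci_sketch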

At first glance, this might not seem so remarkable, but it means that the library doesn't care how (or even if) the matrix is stored. Everything related to the matrix is kept far from the library! In SEM3D, where the diagonal mass matrix M is stored as a vector and the stiffness matrix K is not stored at all (it is instead implemented as a function that calculates the internal forces Fint = Ku), the reverse communication interface lets us pass the relevant data to the ARPACK solver without modifying the existing data structures or matrix representation!
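As a rough sketch of what such a callback could look like on the SEM3D side – the module sem_data, the mass vector and the routine compute_internal_forces are hypothetical placeholders for illustration, not the actual SEM3D API:

    ! Hypothetical callback computing y = M⁻¹K x without ever forming K.
    subroutine apply_minv_k(n, x, y)
       use sem_data, only: mass          ! placeholder module holding the diagonal of M
       implicit none
       integer, intent(in)           :: n
       double precision, intent(in)  :: x(n)
       double precision, intent(out) :: y(n)
       double precision              :: fint(n)
       call compute_internal_forces(x, fint)   ! placeholder for Fint = K*u
       y = fint / mass(1:n)                    ! M is diagonal, so "inverting" it is just a division
    end subroutine apply_minv_k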


That’s it for this blog post! Although each problem is unique, I hope that this gave some insight into the special type of obstacles one may face when working with software for HPC applications!
