Passing a library-defined matrix over MPI

The communication of data between processes is a fundamental concept of MPI. In algorithm development one usually relies on external, dedicated linear algebra libraries (I chose Armadillo for my summer project), mainly because they are well tested, perform well and provide optimized data structures. At first there may not seem to be a straightforward way to pass a library-defined matrix over MPI, but it turns out not to be complicated at all. From here on I will work with the Armadillo C++ library.

An MPI buffer is a pointer to a block of memory, and MPI expects the elements you want to communicate to lie in contiguous memory. If you use a high-quality linear algebra library, this is already the case (Armadillo, Eigen and Blitz++ are just a few examples). You then only need to pass the MPI function a pointer to the base of the object, which can be obtained through an iterator referring to the first element (.begin() in Armadillo). For instance, to broadcast an Armadillo matrix, one would call MPI_Bcast the following way:
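A minimal sketch of such a broadcast (assuming every rank constructs the matrix with matching dimensions beforehand, and only the root fills it with data):

```cpp
#include <mpi.h>
#include <armadillo>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Every rank constructs the matrix with the same dimensions;
    // only the root needs to fill it before the broadcast.
    arma::mat A(4, 4);
    if (rank == 0)
        A.randu();

    // Armadillo stores its elements contiguously (column-major), so an
    // iterator to the first element is a valid MPI buffer.
    MPI_Bcast(A.begin(), static_cast<int>(A.n_elem), MPI_DOUBLE,
              0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

The same pattern applies to arma::vec and arma::cube, whose storage is contiguous as well.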

Listing 1: Broadcasting an Armadillo matrix

Now what if, for whatever reason, your matrix object is not stored contiguously in memory? One solution is to write the matrix into a one-dimensional array and send that over. Here, however, I would like to present a solution involving a 2D array, which also shows how to allocate a multidimensional array on the heap. What we need is an array of pointers into a single block: one new for an array of doubles holding all the elements, and another new for an array of pointers to doubles. Each pointer is then set to point to the first item of its row, so the element block stays contiguous and its base can be handed to MPI directly. A function implementing such an allocation and MPI broadcast is sketched below; the allocation is the part of interest here.
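A minimal sketch of this technique (the helper names alloc_matrix and bcast_matrix are illustrative, not from a particular library):

```cpp
#include <mpi.h>

// Illustrative helper: allocate a rows-by-cols matrix on the heap as
// one contiguous block of doubles plus an array of row pointers, so
// that matrix[i][j] indexing still works.
double** alloc_matrix(int rows, int cols)
{
    double*  data   = new double[rows * cols]; // one new for the elements
    double** matrix = new double*[rows];       // one new for the row pointers
    for (int i = 0; i < rows; ++i)
        matrix[i] = &data[i * cols];           // point each row pointer at its row
    return matrix;
}

// Broadcast such a matrix from root: because the element block is
// contiguous, its base pointer (matrix[0]) is a valid MPI buffer.
void bcast_matrix(double** matrix, int rows, int cols,
                  int root, MPI_Comm comm)
{
    MPI_Bcast(matrix[0], rows * cols, MPI_DOUBLE, root, comm);
}
```

Freeing mirrors the allocation: delete[] matrix[0]; for the element block, followed by delete[] matrix; for the row pointers.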

Listing 2: Allocating a 2D array on the heap
