Hybrid AI Enhanced Monte Carlo Methods for Matrix Computation on Advanced Architectures

Project reference: 2124

This project focuses on further enhancing hybrid (stochastic/deterministic) methods for Linear Algebra using advanced AI approaches to accelerate the computations. The emphasis is on hybrid Monte Carlo methods and algorithms for matrix inversion and for solving Systems of Linear Algebraic Equations (SLAE). Recent developments have led to efficient approaches based on building a stochastic preconditioner and then solving the corresponding SLAE with an iterative method. The preconditioner is a Monte Carlo preconditioner based on Markov Chain Monte Carlo (MCMC) methods, which first computes a rough approximate matrix inverse. This Monte Carlo preconditioner is then used to solve systems of linear algebraic equations, thus delivering hybrid stochastic/deterministic algorithms. The advantage of the proposed approach is that the sparse Monte Carlo matrix inversion has computational complexity linear in the size of the matrix. Current implementations are either pure MPI, mixed MPI/OpenMP, or GPU based. The efficiency of the approach is typically tested on a variety of test matrices from several Matrix Market collections.
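
The following is a minimal illustrative sketch (not the project's production MPI/OpenMP/CUDA codes) of the hybrid idea: a rough approximate inverse is estimated with Neumann-series random walks and then used to precondition a deterministic Krylov solver. The function name mc_approx_inverse, its parameters, and the small test matrix are hypothetical choices made for illustration only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def mc_approx_inverse(A, n_walks=200, walk_len=20, seed=0):
    """Estimate A^{-1} via random walks on the Neumann series
    A^{-1} = sum_k B^k with B = I - A (assumes spectral radius of B < 1)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    B = np.eye(n) - A
    # Markov chain transition probabilities proportional to |B_ij|.
    absB = np.abs(B)
    P = absB / absB.sum(axis=1)[:, None]
    M = np.zeros((n, n))
    for i in range(n):                       # one set of chains per row of A^{-1}
        for _ in range(n_walks):
            state, weight = i, 1.0
            M[i, state] += weight            # k = 0 (identity) term
            for _ in range(walk_len):
                nxt = rng.choice(n, p=P[state])
                weight *= B[state, nxt] / P[state, nxt]
                state = nxt
                M[i, state] += weight        # contribution of B^k term
    return M / n_walks

# Small diagonally dominant test system (hypothetical example data).
n = 50
rng = np.random.default_rng(1)
A = np.eye(n) + 0.3 * rng.random((n, n)) / n
b = rng.random(n)

M = mc_approx_inverse(A)                     # rough stochastic preconditioner
prec = LinearOperator((n, n), matvec=lambda v: M @ v)
x, info = gmres(A, b, M=prec)                # deterministic iterative solve
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```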

The intern will take the existing codes, integrate AI approaches based on Deep Learning methods into the hybrid method, and test the efficiency of the new method on a variety of matrices as well as on systems of linear equations sharing the same matrix but with different right-hand sides (see the sketch below).
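
Because the stochastic preconditioner depends only on the matrix, it can be built once and reused across right-hand sides. Continuing the hypothetical sketch above (reusing A, mc_approx_inverse, and the SciPy imports), one possible workflow is:

```python
# Build the rough Monte Carlo approximate inverse once ...
M = mc_approx_inverse(A)
prec = LinearOperator(A.shape, matvec=lambda v: M @ v)

# ... and reuse it for several right-hand sides with the same matrix.
rhs_batch = np.random.default_rng(2).random((A.shape[0], 4))
for k in range(rhs_batch.shape[1]):
    x_k, info = gmres(A, rhs_batch[:, k], M=prec)
    print(f"rhs {k}: converged={info == 0}, "
          f"residual={np.linalg.norm(A @ x_k - rhs_batch[:, k]):.2e}")
```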

Project Mentor: Vassil Alexandrov

Project Co-mentor: Anton Lebedev

Site Co-ordinator: Luke Mason

Participants: Iakov Kharitonov, Adrian Lundell

Learning Outcomes:
The student will learn to design parallel hybrid Monte Carlo methods as well as how to use advanced ML and Deep Learning techniques.
The student will learn how to implement these methods on modern computer architectures with the latest GPU accelerators, as well as how to design and develop mixed MPI/CUDA and/or MPI/OpenMP code.

Student Prerequisites (compulsory):
An introductory level of Linear Algebra, some parallel algorithm design and implementation concepts, and parallel programming using MPI and CUDA.

Student Prerequisites (desirable):
Some experience developing mixed code such as MPI/OpenMP or MPI/CUDA will be an advantage.

Training Materials:
These can be tailored to the student once he/she is selected.

Workplan:

Week 1: Training week
Week 2: Literature review and preliminary report (plan writing)
Weeks 3–7: Project development
Week 8: Final report write-up

Final Product Description:
The final product will be a parallel application that can be executed on hybrid architectures with GPU accelerators or on multicore systems. Ideally, we would like to publish the results in a conference or workshop paper.

Adapting the Project: Increasing the Difficulty:
The project is at an appropriate cognitive level, taking into account the timeframe and the need to submit a final working product and two reports.

Adapting the Project: Decreasing the Difficulty:
The topic will be researched and the final product will be designed in full, but some features may be left out of the implementation to ensure a working product, with a limited feature set, at the end of the project.

Resources:
The student will need access to GPU and/or multicore-based machines, as well as standard computing resources (laptop, internet connection).

Organisation:

Hartree Centre – STFC
