Among High Performance Persons
- eat a lot of pasta and pizza
- drink beer in a very cheap bar next to our hotel
- learn about and meet most of the High Performance Computers (HPC) of CINECA
- learn about and meet most of the High Performance Persons (HPP) of SoHPC
A (relatively) bold statement
During the training sessions in Bologna, we learned that there are two popular ways to get the most out of an HPC cluster. Here, I will try to prove that the same two ways apply to an HPP cluster.
In order to support such a claim, I have to define a few things.
An HPC cluster is an aggregation of computer components which, when interconnected and run in parallel, can deliver much higher performance than the standalone components. — Arsenios Chatzigeorgiou, after the training week, i.e. week #1.
Interconnected computers form what is called a cluster, and the units that are interconnected are called nodes. Each node is roughly a single computer, just like the one on which you are probably reading this post (unless you have printed it on paper, which I would discourage you from doing again for environmental reasons).
An HPP cluster is a group of people who have a shared target and are willing to work together in order to achieve this goal. — Arsenios Chatzigeorgiou, after the thinking-of-the-training-week week, i.e. week #2.
Interconnected HPPs form what is called a group, and the units that are interconnected are called the participants of this group. Each participant is a human organism just like you (unless you are some kind of replicant, which is fine, but then the HPP would be HPC again).
The two popular ways to run an HPC cluster are message passing, which is implemented with the MPI standard (Message Passing Interface), and shared memory, which is implemented with the OpenMP API (Open Multi-Processing).
MPI and OpenMP in HPCs
Using MPI, each node performs a different task independently (a heterogeneous system), and the nodes communicate by explicitly exchanging messages. It is a flexible and expressive method.
With OpenMP, when a complex task arrives, the execution can fork: the task is divided into many similar subtasks, each assigned to a thread (a homogeneous system). Communication is achieved through memory shared among the threads, which typically run on the cores of a single node. It is easier to program and debug.
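And here is the shared-memory counterpart, again as a toy Python sketch rather than real OpenMP code (real OpenMP would be compiler directives in C/C++ or Fortran; the summing task below is invented for illustration). One task is forked into identical subtasks, and the threads communicate by writing into a data structure they all share:

```python
# Toy shared-memory sketch (an analogy for OpenMP, not actual OpenMP code).
# One complex task is forked into similar subtasks; threads communicate
# through a shared array instead of sending messages.
from threading import Thread

N_THREADS = 4
numbers = list(range(100))      # the "complex task": sum these numbers
partial = [0] * N_THREADS       # shared memory all threads can see

def subtask(tid):
    # Homogeneous system: every thread runs the *same* code on its own slice.
    chunk = numbers[tid::N_THREADS]
    partial[tid] = sum(chunk)   # write the result into the shared array

# Fork: spawn the threads...
threads = [Thread(target=subtask, args=(t,)) for t in range(N_THREADS)]
for t in threads:
    t.start()
# ...and join: wait until all subtasks are done.
for t in threads:
    t.join()

total = sum(partial)
print(total)  # 4950
```

Because every thread sees the same `partial` array, no explicit messages are needed, which is what makes this model easier to program (as long as the threads don't write to the same place at once).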
MPI and OpenMP in HPPs
Some groups perform better when each participant has independent tasks (heterogeneous) and works in their own space. In this setup, however, it is crucial to have an efficient message-passing system to support communication between the participants.
This method demands people who can take initiative and adapt to dynamically changing tasks.
This is MPI in HPPs.
Other groups perform better with a leader who organizes all the participants. A difficult task is split among many participants, each performing a similar job (homogeneous), and what each one does is directed by the central participant who coordinates them.
This method is easier to manage for an organized person. However, flexibility is limited, and many participants are needed for better performance.
This is OpenMP in HPPs.
Each SoHPC participant is going to work on an HPC cluster, on a specific project, using one of the aforementioned methods. In my case, at the LECAD lab at the University of Ljubljana, this first week we started using the MPI method for visualizing a gyrokinetic simulation, both for the HPC application and for the HPP group organization.
So, each SoHPC participant is going to operate using their own method, but we are all going to have the shared memories we gained in these 5 days in Bologna.
I hope I helped you learn something today. In the next post I am going to describe what a gyrokinetic simulation is, what I am trying to do here at LECAD, and how I plan to do it (hopefully I will have figured it out by then).