What is HPC? And why should you care?

Summer of HPC, right? It’s in the name of the program. So maybe it is time to explain what it is about!
If you are following this journey because you have an HPC class and want to join the program, you may already know about it, but if you’re curious and want to know why you should apply too, follow me!
First of all, HPC stands for… High Performance Computing. It is the use of multiple computers/processors in order to run advanced and complex applications or process large amounts of data reliably, quickly and efficiently.
Notice that I never said it is a “computer science field”. And I will never say that, because it is not true: it is not only for computer scientists, it is everywhere!
Data processing, computational chemistry, biotechnology, physics… and so much more! So you really should read this!
Distributing the work across different processors means that your program is parallel. If the work is not distributed, it is a serial code.
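To make this a bit more concrete, here is a minimal sketch of the difference (my own toy example, not code from the program): the same loop written serially and then distributed across processors with OpenMP, one of the tools mentioned at the end of this post. The array size and the work done per element are just placeholders.

```c
#include <stdio.h>
#include <omp.h>   /* only needed once you start using OpenMP runtime calls */

#define N 1000000

int main(void)
{
    static double a[N];

    /* Serial version: a single processor walks through the whole array. */
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Parallel version: OpenMP splits the iterations among the available
       threads, so each one handles its own chunk of the array. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    printf("Last element: %f\n", a[N - 1]);
    return 0;
}
```

Compiled with something like gcc -fopenmp, the second loop runs on as many threads as your machine offers; without the pragma (or the flag), it stays serial.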

Yes, but why am I saying “the best we can hope for is Ts/4”? If your serial program takes a time Ts (think of washing a pile of dishes alone) and you share the work between 4 processors (you and three friends), the ideal would be to finish in Ts/4: the serial time divided by the number of processors.
This is actually where we have to introduce the communication time. It is one of the main reasons we usually don’t get exactly that ideal “serial time divided by the number of processors”.
If your friends start washing the dishes at the same time as you, they are probably doing it at another sink. So you have to move the plates to where they are and bring them (cleaned, hopefully) back at the end. These steps take a bit of time, so even if they finish their task at the EXACT same time as you, this “communication” process will make you lose some time, and you end up with a final time greater than Ts/4.
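Putting made-up numbers on it (purely illustrative, assuming 4 processors, with Ts the serial time and Tcomm the time spent moving the plates around):

$$
T_p = \frac{T_s}{4} + T_{comm}, \qquad \text{e.g. } T_s = 100\,\text{s} \;\Rightarrow\; T_p = 25\,\text{s} + 3\,\text{s} = 28\,\text{s} > \frac{T_s}{4}.
$$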
And now you have just understood what the speedup is! It is what you gain in time by parallelizing your program: the serial time divided by the parallel time (Ts/Tp). So in the figure shown below, the maximum should be 4. Maybe you can get a speedup around 3.9, which is not bad at all.
The speedup will help you calculate something else: The efficiency.
Efficiency = Speedup/Number of processors
So efficiency is a parameter ranging between 0 and 1. The best you can get is 1. If you are close to 0, it means that either the other processors are barely doing any work or you should completely review your program.
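As a toy illustration, here is the arithmetic in a few lines of C, reusing the same made-up numbers as above (100 s serially, 28 s on 4 processors):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical timings, only to illustrate the formulas. */
    double ts = 100.0;  /* serial run time, in seconds       */
    double tp = 28.0;   /* parallel run time on 4 processors */
    int    p  = 4;

    double speedup    = ts / tp;      /* ~3.6, below the ideal of 4 */
    double efficiency = speedup / p;  /* ~0.89                      */

    printf("speedup = %.2f, efficiency = %.2f\n", speedup, efficiency);
    return 0;
}
```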
When you measure this efficiency, you get some information about how well your program is scaling.
The procedure is pretty simple: launch your serial code, then execute the parallel version on 2, 4, 8, 16, 32 … processors (the “steps” depend on you and on how many processors you are limited to). If at each step you obtain an efficiency that stays near 1 and does not drop sharply at some point, then you’re probably on the right track!
But if, for example, you get an efficiency of 1 for 2 and 4 processors, then 0.9 for 16, then a sudden drop to 0.7 when using 64, there is probably something to improve! In many cases, it is a communication problem: if the processors communicate too much, those exchanges end up consuming too much time and you lose a lot of what you gained by parallelizing your application.
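If you want to try this at home, here is a rough sketch of such a scaling test (again my own toy example, assuming OpenMP and a machine with up to 16 cores; the workload is just a big sum, only there to have something to time):

```c
#include <stdio.h>
#include <omp.h>

#define N 100000000L

/* Toy workload: a big reduction over N terms. */
static double work(int nthreads)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum) num_threads(nthreads)
    for (long i = 0; i < N; i++)
        sum += i * 0.5;
    return sum;
}

int main(void)
{
    /* Serial reference: the same loop on a single thread. */
    double t0 = omp_get_wtime();
    double check = work(1);
    double ts = omp_get_wtime() - t0;
    printf("serial: %.3f s (checksum %.3e)\n", ts, check);

    /* Parallel runs: double the processor count each time. */
    for (int p = 2; p <= 16; p *= 2) {
        t0 = omp_get_wtime();
        work(p);
        double tp = omp_get_wtime() - t0;
        double speedup = ts / tp;
        printf("%2d processors: speedup = %.2f, efficiency = %.2f\n",
               p, speedup, speedup / p);
    }
    return 0;
}
```

If the printed efficiency stays close to 1 as the processor count grows, the toy loop scales well; a sudden drop is your cue to look at communication and synchronization costs.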
Alright, maybe that is enough! This was a really quick introduction to HPC that, I hope, makes you want to dig into it. I can advise you to check these resources: https://insidehpc.com/, https://epcced.github.io/hpc-intro/010-hpc-concepts/ and, of course, check out some courses on MPI and OpenMP to practice!
See you soon for the (probably last) post!