No internet? Let the game begin
I know it has been only a few days since my last post, and I am also aware that you are not used to hearing (or rather, reading!) from me so often. However, here at BSC we are dealing with some minor maintenance work that, although it “de-energized” part of the network, gave me the chance to reflect on and report what has been going on over the last few days.
First and foremost, this week’s highlight was my visit to MareNostrum 4, BSC’s supercomputer. Its efficiency and technical characteristics are of course indisputable, but what makes this machine particularly special is also its location. The first thing a visitor faces is a 19th-century chapel facade that betrays very little of what is hidden inside. That’s right! The huge supercomputer, framed by a glass enclosure for extra protection, sits in the interior of this chapel, inspiring admiration in anyone who visits, no matter how deep an understanding of its purpose they have. This “oxymoron” of a view, which so smoothly combines history with technology, can only partially be conveyed by the photo below.
It is exactly the technology housed in this room that allowed me to experiment with the tools and applications mentioned in my previous post. So, I would say it is about time to jump from this short MareNostrum excursion to the harsh, yet equally interesting, reality!
This reality involves enormous amounts of data that require processing, and these data may come from diverse applications, such as those that simulate the movement of billions of air particles or those that attempt to solve rather demanding mathematical problems with computational methods. The behaviour of such applications resembles that of a much wider range of scientific codes, so the conclusions extracted from them can be readily generalized; that is exactly the nature of the programs I analyzed!
But what do I mean by “analyzed”? Well, to make that clear, we have to take a deeper dive into the different application profiling techniques. The internet is brimming with tools that offer an easy (though not always intuitive) way to gain insight into an application’s performance and determine its limiting factors. Valgrind is an open-source tool that many of you might know as a memory-leak detector, but, believe me, you would be amazed by the variety of add-on tools that come with it. One of these tools can track down every memory access and determine the number of last-level cache misses. It was this exact tool that was modified to perform sufficient sampling rather than report exact memory-access counts.
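For the curious, the standard workflow of that cache-tracking Valgrind tool (Cachegrind, Valgrind’s cache simulator, which is presumably the tool being described here) looks roughly like the sketch below. The binary name `./my_app` is a placeholder, and the commands are printed as a dry run so the snippet stands alone without Valgrind installed:

```shell
# Cachegrind simulates the cache hierarchy while the program runs and
# reports, among other things, last-level data cache misses
# (the DLmr/DLmw columns in its annotated output).
APP=./my_app                            # placeholder for the benchmark binary
CMD="valgrind --tool=cachegrind $APP"

# Afterwards, cg_annotate pretty-prints the generated output file;
# the numeric suffix is the PID of the run (12345 here is illustrative).
POST="cg_annotate cachegrind.out.12345"

# Dry run: print the planned commands instead of executing them.
echo "$CMD"
echo "$POST"
```

The modification mentioned in the post would replace the “count every access” behaviour with periodic sampling, trading exactness for a much shorter experiment.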
Sufficient is the keyword! Last week I succeeded in determining the ideal application-specific sampling period: one that minimizes the experiment’s execution time, yet still provides results that lead to a final, performance-enhancing memory data distribution! This wouldn’t have been possible had I not automated the experimentation process with bash scripts that finally did the work for me! What is not clearly stated here is that I had to dust off my bash programming skills, which had lain idle for the last couple of years.
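A minimal sketch of the kind of driver script I mean: it sweeps a set of candidate sampling periods and times each instrumented run. All the names here (`PERIODS`, `./benchmark`, the sampling flag) are illustrative, not the actual ones I used, and the Valgrind invocation is echoed rather than executed so the sketch is self-contained:

```shell
#!/bin/bash
# Sweep candidate sampling periods, timing each instrumented run.
# The run itself is a dry-run echo here; in the real experiment it would
# launch the modified Valgrind tool and collect its sampled miss counts.

APP=./benchmark                         # illustrative benchmark binary
PERIODS=(1000 10000 100000 1000000)     # accesses between samples (illustrative)

for period in "${PERIODS[@]}"; do
    start=$SECONDS
    # Real experiment (the sampling flag below is hypothetical):
    #   valgrind --tool=cachegrind --sampling-period="$period" "$APP"
    echo "would run: period=$period app=$APP"
    elapsed=$(( SECONDS - start ))
    echo "period=$period elapsed=${elapsed}s"
done
```

Picking the largest period whose results still produce the same memory distribution as the exact run is what keeps the experiments short without sacrificing the final speedup.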
Having obtained results from both benchmark applications, what remains is to compare them and find a connection between them, as well as to correlate the resulting memory distribution and speedup with those achieved by hardware-implemented profiling. If you are interested in keeping up with the sequel of this story (sorry, but not with the Kardashians!), please stay tuned. To be continued…
Don’t forget to check my LinkedIn account for more information.