1. Muhammad Omer
  2. Cem Oran
  3. Neli Sedej
  4. George Katsikas
  5. Seán McEntee
  6. Cathal Maguire
  7. Aisling Paterson
  8. Stefan Popov
  9. Nathan Byford
  10. Roberto Rocco
  11. Petar Đekanović
  12. Jerónimo Sánchez García
  13. Irem Kaya
  14. Alexander Julian Pfleger
  15. Antonios-Kyrillos Chatzimichail
  16. Busenur Aktilav
  17. Berker Demirel
  18. Andres Vicente Arevalo
  19. Marco Mattia
  20. Denizhan Tutar
  21. Aitor López Sánchez
  22. Anssi Tapani Manninen
  23. Josip Bobinac
  24. Igor Abramov
  25. Benedict Braunsfeld
  26. Sara Duarri Redondo
  27. Pablo Antonio Martínez Sánchez
  28. Theresa Vock
  29. Francesca Schiavello
  30. Ömer Faruk Karadaş
  31. Davide Crisante
  32. Nursima ÇELİK
  33. Elman Hamdi
  34. Jesús Molina Rodríguez de Vera
  35. Cathal Corbett
  36. Joemah Magenya
  37. Paddy Cahalane
  38. Víctor González Tabernero
  39. Shyam Mohan Subbiah Pillai
  40. İrem Naz Çoçan
  41. Carlos Alejandro Munar Raimundo
  42. Shiva Dinesh Chamarthy
  43. Matthew William Asker
  44. Rafał Felczyński
  45. Ömer Bora Zeybek
  46. Clément Richefort
  47. Kevin Mato
  48. Sanath Keshav
  49. Federico Sossai
  50. Federico Julian Camerota Verdù

About me

Hi, my name is David Mulero and I am a Multimedia Engineering graduate from Elche, Spain. I finished my degree in June 2021 and obtained an honourable mention for my bachelor's thesis, which developed a system for interacting with objects in virtual environments using hand tracking.

Ever since I was a child I have been a very curious person who could not stop until I understood how things worked. For me, playing with my toys meant taking them apart to see what mechanisms they had inside.

When I was in high school, I discovered Arduino and decided to learn how to program in C so I could build the projects that came to my mind. During those years, I didn't stop learning, until I finally decided to study Multimedia Engineering. Over the course of my degree, I have learned important concepts in web programming, application development, and image and audio processing. But the best thing has been all the projects I have developed, both in teams and on my own. I have built mobile applications, web pages and even several videogames. I have also had the opportunity to create virtual reality environments.

Summer of HPC

The stars must have really aligned for me to be participating in this PRACE programme. To begin with, I only found out about it because my tutor told me at the last moment. I filled in the application form and handed it in. The projects seemed impressive to me and I really had no hope of being chosen; I was convinced it was almost impossible. The day the winners were announced came and I had no email, and I didn't find that strange: I just thought, "They didn't choose me". But a couple of days later, I received a text message asking why I hadn't replied to the email, and I went crazy. So, yes, that great email was in my spam folder. It was a nice plot twist.

Digital twin of a datacentre

These months, I will be working on the 2112 project ‘Combining Big-data, AI and 3D visualization for data centre optimization’ at CINECA. It is a privilege to work with such a new and powerful supercomputer, Marconi-100. I will do my best to carry out the project and I will keep you updated.

MY ORIGINS

My name is Benet Eiximeno Franch and I come from a beautiful city in Catalunya called Reus. You may not have heard of it, but you might be familiar with some of the illustrious people who were born there: Antoni Gaudí (the architect of La Sagrada Família), General Prim (a former Prime Minister of Spain) and Sergi Roberto (a footballer at FC Barcelona). If you ever have the chance, I do encourage you to visit: you will be able to see some of the most important Art Nouveau buildings in Spain, enjoy the tastiest vermouth and watch a great match of roller hockey.

MY PASSION

I have always been passionate about racing cars and cycling, and since I was a child my life has revolved around both. My ambition is to become a Formula 1 aerodynamicist, which is why I studied aerospace engineering at the Universitat Politècnica de Catalunya (near Barcelona). I am now doing a master's degree in aeronautical and aerospace engineering at the same university while collaborating with a computational aerodynamics research group there.

Moreover, thanks to my passion for racing cars, I work at the Circuit de Barcelona-Catalunya as a technical scrutineer. My role there is to ensure that all cars are legal and safe to race. To do that, I take part in the pre-race checks, then stay in the teams' garages to check their procedures (and, depending on the race, I also go to the starting grid), and finally we verify that the top cars were fully legal. It is absolutely magical to live my passion from such a close vantage point, especially during the Spanish Formula 1 Grand Prix.

Besides that, over the last few years I have also been highly involved in EUROAVIA, the European association of aerospace engineering students; right now I am the vice-president of its International Board. This experience has allowed me to get to know people from all around Europe!

MY PROJECT IN PRACE

Having introduced myself, I would just like to thank PRACE for giving me the chance to be part of this programme, which I am so excited to start! I will be working on one of the projects mentored by the University of Luxembourg, specifically on CFD simulation of the DrivAer reference car. As I have mentioned throughout this post, I love both aerodynamics and cars, so this will be a brilliant opportunity to learn about both.

From this project I expect to extract a huge amount of knowledge about computational aerodynamics (numerical methods, turbulence models, mesh sizes…) and about the coherent aerodynamic structures around a car. Understanding the drag generation mechanisms is going to be key to reducing fuel consumption and helping the industry design greener cars.

Besides all the knowledge I will gain, I am extremely excited to start research on this topic, as aerodynamic improvements to cars will surely have an impact on the mobility of the future.

I just can't wait to get hands-on and start updating you with the first results of the project!!

Getting solid research done surprisingly needs patience and time. Therefore, you try to turn every screw to improve your processes. For me, getting faster results (good or bad) from the simulations we run is one way to tackle it. And besides improving the code itself, if you run the same loop or simulation over and over again, parallelizing it and distributing the tasks is a promising path!
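As a hedged illustration of that last point: when the repeated runs are independent (for example, simulations over different seeds or parameter values), Python's standard multiprocessing module can distribute them over cores. The simulation function below is a hypothetical stand-in for an expensive model run, not actual project code.

```python
# Sketch: distributing independent "simulation" runs over CPU cores
# with the standard library's multiprocessing.Pool. run_simulation is
# a made-up placeholder for a real, expensive model evaluation.
from multiprocessing import Pool

def run_simulation(seed):
    # Stand-in workload: a real simulation would integrate a model
    # forward from this seed or parameter value.
    total = 0.0
    for i in range(1000):
        total += (seed * i) % 7
    return seed, total

if __name__ == "__main__":
    seeds = range(8)
    # Four worker processes each pick up runs as they become free.
    with Pool(processes=4) as pool:
        results = pool.map(run_simulation, seeds)
    print(len(results))  # 8 (seed, total) pairs, computed in parallel
```

On an HPC system the same idea scales further with MPI (e.g. mpi4py), which the training week covers; the structure of the code stays the same: one worker per independent run.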

Last year I started as a PhD candidate in the DAFINET team at the University of Limerick, combining modelling, network analysis and social psychology to gain insight into how attitudes are represented in us and how they are led by our salient social identities. My role is to back up new experiments with simulations, mainly in Python.

I started to look out for learning opportunities to improve my coding and the speed at which it delivers results. I ran into the homepage of PRACE, which offers various training events for my needs (and maybe yours as well). After attending a couple of them, I found out about the Summer of HPC and told myself that if I had time in the coming summer, I would definitely apply! It would give me the chance to dig into practical programming experience. In the end, I applied for "Performance of Parallel Python Programs on ARCHER2" … and I got IN!

Group photo: All of us getting ready!

So now, we (see us in the group photo, all motivated in the HPC home experience workshop) are doing a preparation week, learning how to use HPC, how to combine it with Python, C and Fortran, and how to run parallel programs with tools such as Open Multi-Processing (OpenMP).

I can't wait for the next step: starting the project at EPCC in Edinburgh (get some details here)!!!! WAIT, there was something special about this year: COVID-19… Well, great, I will do it from home and enjoy the flexibility. The project is still the same: improving the performance of a parallel Python version of a Computational Fluid Dynamics simulation of fluid flow in a cavity. Although at first we will just focus on the speed and not the outcome, I will try to bring it into my research once we have managed to improve the code.

My goals for the Summer of HPC are:

  • do some good teamwork with my project partner Jiahua
  • enjoy the help of the EPCC team
  • improve the speed and options of the Python code
  • learn a new language like C (it has been on my TODO list for a long time now)
  • use HPC for research

Goals are set, we are ready to run.

Talk to you soon – here in the BLOG!

Alejandro

–> leave some comments on what you want to hear about & what your interests are. Happy to answer your questions!

I have to say that this is my unconditional favourite "meme" of the year so far because of its relatability. Even though we can probably all say we feel a bit guilty laughing at disaster, sometimes its sheer magnitude and our absolute impotence make it the only logical way to react, and we feel like that worker in the digger.

A not so metaphoric metaphor: the container ship blocking the Suez Canal.
Source: https://ichef.bbci.co.uk/news/976/cpsprodpb/182D6/production/_117703099_5a98eeaa-a480-4cfa-a2c2-88d08336718b.jpg BBC & EPA

This is, more or less, also the challenge HPC systems face when dealing with huge chunks of "big" data. They get to work on it with one or multiple diggers (likely CPUs), bit by bit, until it is done. Legacy and mainstream HPC systems are not well suited to dealing with Big Data, a memory-bandwidth-bound problem, because they have historically been tailored to relieve a different bottleneck in science: how to deal with large, expensive computations, or processor-bound tasks.

For example, when simulating an evolving system of, say, differential equations, like fluid flow or electromagnetic particle interactions, the initial conditions are generally well understood and not very "large" in size compared with the amount of data generated and periodically saved as the simulation evolves. In a fluid simulation, each time step you are interested in generates roughly the same amount of data as your initial conditions; you save it, and "restart" the simulation with these new "initial" conditions for the next time step.

The task of processing credit card registers from millions of customers while hunting for fraud is very different. There is a lot of data that needs to be read into memory and processed continuously, sometimes in real time, and usually the output "conclusions" of the analysis are much simpler than the input dataset: we are looking at fraud or no fraud, that simple. Mainstream HPC systems are designed to produce fine wine from fine grapes, not to distill vodka from a stinking mix of crushed potatoes, corn and seeds.
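The compute-bound versus memory-bound distinction can be made concrete with a back-of-the-envelope "arithmetic intensity" estimate: floating-point operations per byte moved. The workload sizes below are illustrative assumptions of my own, not measurements.

```python
# Rough arithmetic intensity (flops per byte moved) for two workloads,
# assuming 8-byte double-precision values. High intensity -> compute
# bound (classic HPC); low intensity -> memory-bandwidth bound (Big Data).

def intensity(flops, bytes_moved):
    return flops / bytes_moved

# Dense n x n matrix multiply: ~2*n^3 flops over ~3*n^2 values moved.
n = 4096
matmul = intensity(2 * n**3, 3 * n**2 * 8)

# Streaming scan of m records: ~1 op (a comparison) per 8-byte value.
m = 10**9
scan = intensity(1 * m, 8 * m)

print(f"matmul: {matmul:.0f} flops/byte, scan: {scan:.3f} flops/byte")
```

The matrix multiply does hundreds of operations for every byte it touches, so fast processors pay off; the fraud-style scan does a fraction of an operation per byte, so memory bandwidth, not compute, is the digger doing the work.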

This type of Big Data challenge is becoming ever more common as the user base of HPC systems expands. No longer will the main characters in "who gets to use the supercomputer" arguments just be quantum physicists and aerodynamicists, but also epidemiologists studying human interactions to fight COVID-19 and social scientists unwrapping the spread of digger memes on Twitter. Nor will all HPC systems be CPU systems with low memory bandwidth; there will likely be many more GPU-based supercomputers better suited to machine/deep learning workflows.

This untapped growth, and the potential to learn a bit more about what promises to be a new paradigm in HPC use, is what makes me so excited to join "Project Reference: 2133 The convergence of HPC and Big Data/HPDA", led by Giovanna Roda, Dieter Kvasnicka and Claudia Blaas-Schenner, collaborating with Liana Akobian, Petra Krasnic and my fellow PRACE Summer of HPC student Rajani Kumar at the Vienna Scientific Cluster – TU Wien in Austria.

I think I was also supposed to chat a bit about myself! I love espresso coffee and "paying it forward", I want to leave my grain of sand in the world by helping fight climate change, and you can always tempt me to try new things, like working with HPC! I recently graduated from a master's in Aeronautical Engineering at the University of Glasgow, supervised by a fantastic lecturer and researcher, Dr Kiran Ramesh, who kindly sent me across to the PRACE Summer of HPC webpage, and well, here we are.

I am getting really excited to start my own project, but also to get to know my fellow PRACE Summer of HPC students and hear about your amazing projects. Feel free to throw me a request on GitHub or LinkedIn so I can follow you and keep in touch!

Who am I?

Hello! My name is Oliver Legg, I'm 22 years old and I'm from Cheshire, England. I recently graduated from the University of Liverpool with a first-class honours degree in Computer Science with Software Development. Other than being interested in technology, I enjoy powerlifting, bouldering and playing video games.

I first began my computer science endeavour at a young age from enjoying playing video games. The desire to make them led me to programming and bamboozled me with a lot of information I didn’t understand at that age. Fortunately, over the years I have learnt more and more from the world of computer science, which led me to where I am now.

Why SoHPC?

The Summer of HPC was introduced to me by a lecturer who spoke very highly of the programme. Having enjoyed studying HPC at university, I knew I wanted to expand my knowledge in this area, and the chance to work on ICHEC's supercomputer, 'Kay', was too good to pass up. Without hesitation, I applied for the programme and requested to be assigned to the project titled "Parallel anytime branch and bound algorithm for finding the treewidth of graphs".

What is "Parallel anytime branch and bound algorithm for finding the treewidth of graphs"?

I don't plan to go into too much detail about the project in this post, so a harsh oversimplification would be to call it "a clever algorithm for computing the treewidth of a graph". The treewidth of a graph measures how far the graph is from being a tree. With this treewidth value, you can speed up solving certain computational problems on the graph, such as pathfinding or checking connectivity (does there exist a path from A to B?), and more. You can read more about the project here.

A Graph Drawn from the GraphPlot package in Julia

Despite enjoying the challenge of reciting the lengthy project title, I chose this project because I have previously enjoyed solving graph theory problems. I wrote my final year project dissertation on accelerating a graph drawing algorithm with a GPU.

Teammate, Mentor and ICHEC

The project is split between myself and my teammate, Valentin Trophime – please have a read of his latest post. I have also briefly met my project mentor, Niall Moran and have been working with my project co-mentor, John Brennan. Many thanks to them and everyone at ICHEC as they have been exceptionally welcoming and helpful.

What have we done so far?

The first week of the PRACE programme was a training week, which involved:

  • An introduction to SoHPC
  • Python Programming on HPC systems
  • OpenMP
  • MPI

For this project, we plan to use Julia (a script-like programming language for HPC), so the first week with our mentor was spent learning Julia and treewidth theory. We needed to implement the min-fill and min-width heuristics from Gogate & Dechter's paper "A Complete Anytime Algorithm for Treewidth". This week's plan is to implement a depth-first search on a graph with an algorithm from the same paper.
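To give a rough idea of how such a heuristic works (the project itself is in Julia, so this is only a Python sketch of my own, not the project's code): min-width repeatedly eliminates a minimum-degree vertex, turning its neighbourhood into a clique, and the largest degree seen at elimination time gives an upper bound on the treewidth.

```python
# Sketch of the min-width elimination heuristic: the maximum degree of a
# vertex at the moment it is eliminated upper-bounds the treewidth.
from itertools import combinations

def min_width_upper_bound(adj):
    """adj: dict mapping each vertex to a set of its neighbours (undirected)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum-degree vertex
        width = max(width, len(adj[v]))
        for a, b in combinations(adj[v], 2):      # clique-ify the neighbourhood
            adj[a].add(b)
            adj[b].add(a)
        for u in adj[v]:                          # remove v from the graph
            adj[u].discard(v)
        del adj[v]
    return width

# A path graph (a tree) has treewidth 1; a 4-cycle has treewidth 2.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(min_width_upper_bound(path), min_width_upper_bound(cycle))  # 1 2
```

Min-fill is the sibling heuristic: instead of picking the minimum-degree vertex, it picks the vertex whose elimination adds the fewest clique edges.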

Conclusion

Thank you for taking the time to read this post; I intend to write more in-depth posts about the project in the future. If you have any questions, please leave a comment.

Oliver Legg

Hello World! My name is Jenay Patel and I am thrilled to be part of this year’s PRACE Summer of HPC (SoHPC). More specifically, I am super excited to get involved with my project, focusing on molecular dynamics on quantum computers – yes, I’m a nerd.

I'm 23 years old and I was born in Leicester, England. I studied Chemical Engineering at the University of Nottingham, during which I completed a 12-month placement at IBM as a Software Developer. This is where my interest in computing was sparked. I started to realise how much we rely on technology as a society, and that the solutions to most of the problems facing humanity are found within this field, such as how we're going to tackle climate change and cure life-changing diseases!

After this realisation, I wanted to use my knowledge in both engineering and computing to contribute to the next digital era. One day, I fell into a YouTube rabbit hole where I found myself watching short videos on quantum computing, and I immediately became obsessed with the idea. I experimented with IBM’s Quantum Experience between my university lectures, and the more I learnt, the more inquisitive I became about how quantum is used across STEM. I was eager to get real experience in this field that I felt so strongly about, which is how I ended up at PRACE SoHPC this summer!

Starting my Quantum Journey

My natural strengths have always lain in maths and the sciences, subjects which I enjoy and appreciate. This led me to pursue a Master's degree in Engineering, where I was able to use these subjects while nurturing creativity and learning to tackle real-world problems. As I was coming to the end of full-time education, I was looking to expend my energy on something I value and, in parallel, steer my career in the quantum direction.

So, I applied to the SoHPC “Molecular Dynamics on Quantum Computers” project, which seemed like the perfect way to kickstart my quantum career. As you can imagine, when I received the acceptance email for the project, I was over the moon. I was being offered the chance to carry out scientific and computing research with Europe’s top researchers – you just can’t say no!

As a result, I am working with the IT4Innovations National Supercomputing Centre, which is located in Ostrava, Czech Republic. Although I am disappointed that I will not be able to explore the site and country this summer, I have enjoyed meeting my mentor and site co-ordinator virtually, who have offered support during my time at IT4Innovations.

I have already loved being part of a collaborative environment and working with people from different countries and backgrounds. Having diversity in the scientific and computing community means more points of view, each shaped by individual experiences. Before starting SoHPC, I attended a conference called "Diversity in Quantum Computing", which highlighted the need for diversity if we, as scientists and researchers, want to expedite the quantum revolution!

Bootcamp

So, what happens when you start the programme? Well, today I finished my first week of SoHPC, which was a training week or "bootcamp" consisting of lectures and tutorials. This short training programme covered topics such as Python in HPC, OpenMP and MPI, and was an excellent way to bring everyone up to speed with fundamental HPC concepts. However, the highlight of my week was joining a Zoom call with my mentor and fellow intern, as everyone was so welcoming and instantly made me feel at ease.

A photo of my peers and me during training week.

Next Steps

In the next week, I will keep myself busy with Qiskit, NumPy and SciPy tutorials that my mentor recommended. We’ll need these to understand how to implement linear algebra and numerical methods when working with quantum chemistry. I’m looking forward to reading and understanding them, before meeting my mentor again next week, and diving further into the project!
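To give a flavour of the linear algebra those tutorials build towards (a sketch of my own, not from the project's materials): qubit Hamiltonians are assembled from Pauli matrices, and for tiny systems NumPy can diagonalize them exactly, giving the reference ground-state energy a quantum algorithm would aim to reproduce. The Hamiltonian and its coefficients below are toy assumptions, not a real molecule.

```python
# Exact diagonalization of a toy two-qubit Hamiltonian with NumPy:
# the classical reference value a variational quantum algorithm targets.
import numpy as np

# Pauli matrices, the building blocks of qubit Hamiltonians.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

# Toy Hamiltonian H = Z(x)Z + 0.5 * (X(x)I + I(x)X), where (x) is the
# Kronecker (tensor) product; coefficients chosen purely for illustration.
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

ground_energy = np.linalg.eigvalsh(H)[0]  # smallest eigenvalue
print(round(ground_energy, 6))  # -1.414214, i.e. -sqrt(2)
```

For real molecules the Hamiltonian matrix grows as 2^n in the number of qubits, which is exactly why quantum computers (and algorithms like VQE, built with tools such as Qiskit) become interesting.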

Hello everyone, my name is Sepideh Shamsizadeh; I’m 29 years old and come from Iran. I am studying for my master’s in computer engineering, Artificial Intelligence and Robotics, at the University of Padova.
This semester, I took a course on big data computing, through which I discovered an interest in high performance computing. Since I am passionate about deep learning and have some industrial experience, I know how essential and valuable it is to work with HPC.
When my professor emailed about the PRACE programme, I got so excited and tried my best to apply, but I was doubtful that my application would make the chosen list. So, when I received the admission email, my dream came true, and I am so pleased to be here.

My friends know me as a persistent person who sets firm goals and works hard to complete tasks and achieve the best results. So, I am going to work hard to fulfil my duties in the PRACE programme.

My project in PRACE

I will work on the 'Combining Big-data, AI and 3D visualization for data center optimization' project. You can find more information about it here. I selected this project as my first option because I was fascinated by the idea of learning about 3D visualization and live data collection. I gained some experience with data visualization and ML data analytics solutions during my work, but I wanted to improve those skills and learn new things in these fields. The project's learning outcomes excited me, and I will try my best to achieve them. I will post about my experience during the work. Feel free to contact me with any questions.

Hi everyone! My name is Carola and I'm a physicist! I come from Italy and I'm a master's student in condensed matter physics at the Università degli Studi dell'Aquila. Something about me: I love eating (yes I know, I'm really Italian!), especially sushi, spaghetti, pizza and chips, and I also love watching TV series, listening to rock music, going to concerts and playing sports! I love animals, especially my two cockatiels: Cotton and Thanos.

As for my interests within physics, my greatest passions are quantum computation, the study of topological materials and topological quantum computation, and I really hope this project can help me become more practised in computation and acquire the necessary tools!

I'm so excited to start this wonderful experience and so grateful to have been selected for it together with my colleagues! I will work on project 2118 about molecular dynamics on quantum computers, and I will certainly talk about it extensively in future posts! The host organisation is the IT4Innovations National Supercomputing Center at VSB – Technical University of Ostrava, Czech Republic. I selected this project as my first choice because I'm very fascinated by the subject, and it will certainly be a training opportunity that will be useful during my advanced studies. Being able to work with supercomputers is a huge opportunity, and I can't be more excited about it! My goal is to become very proficient in programming with Python and to reinforce my knowledge of quantum chemistry. The main focus of the project is quantum computing, and I hope to gain practical knowledge of the various algorithms and methods. I'm really motivated and very willing, and I'm sure that together with my colleague (and friend!) we'll be able to do a very good job… and I'm sure I'll be able to make you fall in love with this topic in my next posts!

I met my mentors on the first day of training and they all seem so nice and helpful, but most of all I met my project colleague, and I couldn’t have been luckier than that! I’m sure we’ll have a lot of fun together…I really can’t wait to start!

I wish everyone good luck!

Hello everyone! My name is María Menéndez Herrero. This is my first post, and in it I'm going to give a brief introduction to who I am and the path that has brought me to the project I will be doing thanks to the PRACE Summer of HPC.

Introducing Myself

To begin with, I am 24 years old and I grew up in a small city in Northern Spain called Oviedo, capital of the Principality of Asturias. After a very enriching experience in high school, I decided to pursue a degree in Chemistry, where I fell in love with Theoretical and Computational Chemistry. Since finishing in 2019, I have been studying an International Master in Theoretical Chemistry and Computational Modeling, which I have combined with my PhD thesis in a programme of the same name.

Llanes, Principado de Asturias. España (2021)

In the Theoretical and Computational Chemistry Group (QTCOVI) of the University of Oviedo, where I am currently working, we analyse chemical descriptors in real space, with the aim of relating the concepts usually handled in chemistry to the complexity of the results provided by theoretical chemistry. For this reason, during the last year my main job has been working with the stochastic quantum chemistry methods known as Quantum Monte Carlo in order to analyse the maxima of the square of the wave function, which, according to the Copenhagen interpretation of quantum mechanics, is a probability density and therefore gives us information about the most probable arrangement of electrons in real space.

Cubic atom corresponding to the Neon formed by two interpenetrating tetrahedra of opposite spin electrons. The centre corresponds to the nucleus and the red and blue dots represent the alpha and beta spin electrons in the L shell respectively. This result is obtained by analysing the maxima of the square of the wave function and is in agreement with the cubical atom proposed by Lewis in 1916.

How have I got here?

As I have described above, my current work is mainly focused on Theoretical and Computational Chemistry, which is why at this point in my introduction you may ask yourself: why participate in an HPC-related project? Well, as I imagine all participants in these projects know, HPC is applicable across many branches of science. Throughout my learning process, mainly in the Master's, I was introduced to this subject, and it generated great interest in me given its applicability in the field I am working in. A professor from the University of Oviedo suggested that participating in one of the proposed projects could be interesting, and I saw it as an opportunity to continue my training in this area.

This is where the PRACE Summer of HPC has given me a great opportunity to continue my training in this field by participating in one of its projects. While it is true that I feel really sad that I will not be able to enjoy the full experience of travelling and meeting interesting people in this field because of the global pandemic caused by COVID-19, I do feel really motivated to start learning!

Hopes and expectations for the Summer of HPC

The project I have chosen for this experience is '2109. Benchmarking HEP workloads on HPC facilities'. Its aim is to standardize different workloads with a container-based benchmarking suite that has recently started to be tested on heterogeneous HPC systems.

What I expect from this project is to contribute my knowledge to the results already achieved by the entities we will be collaborating with, both CERN and SURF, and above all to learn from what both my colleagues and the project mentors can teach me. Furthermore, it will clearly be an honour for me to work with an organization as renowned in the world of science as CERN.

And with this I would like to say goodbye. Last but not least, I hope that, despite the sanitary conditions we have had to live through due to the COVID-19 restrictions, all participants of the PRACE Summer of HPC can enjoy this intellectually and culturally enriching experience to the fullest, learn as a team, and inspire others who are considering participating in a project in the coming years.

See you soon on this Blog!

Regards,
Maria.

Who even is this guy??

Hello! The person writing this is Brian O'Sullivan; that would probably be a good place to start. (That's me in the photo!) I am 21 years old and recently graduated from the National University of Ireland, Galway with a bachelor's in physics. Physics is still close to my heart, but now I'm planning on moving into data science with the hope of applying that background to climate modelling and meteorology! Given the looming threat of climate change and extreme weather, it's an area I've always had a huge interest in, so I'm really excited to go down this new path.

Note: I’m on the left

Summer of HPC

Of course, I am incredibly grateful to PRACE for giving me this opportunity, and I hope to learn as much as I possibly can from this internship. Naturally, I had to apply to the Summer of HPC the minute I heard about it: the exciting possibilities we can achieve with advanced and high performance computing are honestly pretty mind-blowing. No doubt, it was an area I needed to explore for myself. My project is with the Barcelona Supercomputing Center (BSC-CNS), where I will be working on improving the spatial interpolation of NMMB/MONARCH, a state-of-the-art atmospheric chemistry/dust model over Europe and Northern Africa. Attentive readers may notice this project is a perfect pairing: a combination of atmospheric modelling and data science! I believe it will be an excellent way to engage with my area of interest, and I'm looking forward to getting started.

Summer of Dust

During my final undergraduate year, I worked on a year-long project in spatial interpolation, trying to find effective ways to fuse satellite and in-situ NO2 air pollution datasets over Western Europe. I finished the project with varying degrees of success, but I can't wait to see how the experts at BSC have created similar tools and models. What especially interests me is the many different scales and products they are focusing on (dust!). Being able to create flexible software that can be applied to various quantities in the atmosphere is a must, so I'm eager to discover the intricacies of MONARCH for myself.

A Small Bio to Finish

Working from Galway in Ireland, I like to think of this internship as a sort of virtual "trip" to Barcelona. Summers here are definitely much colder than those in Catalonia, and there is much more rain (so much rain…), so I hope to live vicariously through my work and experience the sunshine through my laptop screen. Of course, one must allow oneself some leisure time. I typically like to practise the piano or play board games in my free time, but I don't think anything can really beat spending time with friends. Oh, also, here is a photo of my dog; people tend to like those.

My brother Charlie

The award adjudication panel, consisting of members of the PRACE Management Board, the PRACE Board of Directors, the Scientific Steering Committee, PRACE Communications and the SoHPC team, has announced the winners of the awards for the Summer of HPC 2020 edition.

PRACE Summer of HPC 2020 Best Performance Award
Igor Abramov & Josip Bobinac

Both Igor and Josip had to deal with a very complicated HPC problem, requiring extensive mathematical knowledge and advanced C++ skills to write templated code that speeds up the most critical part of their project. Most importantly, both worked very hard on their project, and did so as a team, presenting their work in the final video in a way that, despite its complexity, can be understood by most audiences.

PRACE Summer of HPC 2020 HPC Ambassador Award
Antonios-Kyrillos Chatzimichail

Antonios is well aware of the responsibilities of an HPC Ambassador in presenting, disseminating and discussing his HPC experience with others, encouraging and inspiring his peers to follow a similar path to his in the world of HPC. He has done this through his well-explained and comprehensible blog posts and his final project video.

Summer of HPC is a PRACE programme that offers summer placements at HPC centres across Europe to late-stage undergraduate and master’s students. Up to 66 top applicants from across Europe will be selected to participate in pairs on 33 projects supported and mentored online from 14 PRACE hosting sites. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and video of their results.


Participants will spend two months working on projects related to PRACE technical or industrial work to produce a visualisation or video. PRACE will financially support the selected participants during the programme, which will run from 1 July to 30 August 2021, with €1300 per student for the summer.

Late-stage undergraduate and master’s students are invited to apply for the PRACE Summer of HPC 2021 programme, to be held in July & August 2021. Consisting of a training week and two months of online participation at top HPC centres around Europe, the programme offers participants the opportunity to share their experience and learn more about PRACE and HPC.

Due to the Covid-19 pandemic, the summer 2021 edition of the programme will exceptionally run fully online for as long as Europe remains under strong mobility restrictions. Two prizes will be awarded to the participants who produce the best project and best embody the outreach spirit of the programme.

Applications are open until 12 April 2021. Applications are welcome from all disciplines. Previous experience in HPC is not required as training will be provided. Some coding knowledge is a prerequisite, but the most important attribute is a desire to learn and to share experiences with HPC. A visual flair and an interest in blogging, video blogging or social media are desirable.

The programme will run from 1 July to 30 August 2021. It will begin with a kick-off online training week organised by the Irish Centre for High End Computing (ICHEC), to be attended by all participants.

Project reference: 2125

A reasonable part of the data analytics and statistical modelling community relies on the Stan language and software to implement particular models based on Bayesian approaches. This language allows users to specify stochastic models that are then translated into C++ code and, upon execution, will return the estimates for the parameters of the model (stochastic parameter fitting).
This project will focus on a more efficient implementation of the Hamiltonian Monte Carlo (HMC) method that is used for sampling from probability distributions in Stan, with performance, parallelism and portability as goals.
To this end, the method will be implemented using MPI+OpenMP or potentially SYCL.
Our experience has shown that a reduction in the complexity of the code and an introduction of hybrid parallelism can achieve significant (> ~2x) runtime reductions even for models of moderate complexity.
The successful applicant will work on a reimplementation of HMC with different sampling strategies in C++ and on its integration into Stan. The project will involve recurring benchmarking and scalability analyses.
As a stretch goal, a hybrid Python/C++ or a purely Python implementation can be considered that will allow for execution on the GPU.
The expected outcome will be a highly parallel and easily understandable implementation of the method, to be submitted to the Stan software package.
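
To make the method concrete, here is a minimal sketch of one HMC transition in Python for a one-dimensional standard normal target. Stan’s actual sampler is a far more sophisticated C++ implementation (adaptive step sizes, the NUTS termination criterion); the target, step size and trajectory length below are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob_and_grad(q):
    # Illustrative target: standard normal, log p(q) = -q^2/2 + const.
    return -0.5 * q * q, -q

def hmc_step(q, step=0.2, n_leap=10):
    """One HMC transition: draw a momentum, integrate Hamiltonian
    dynamics with the leapfrog scheme, then accept/reject."""
    p = rng.normal()
    logp, grad = log_prob_and_grad(q)
    h0 = logp - 0.5 * p * p          # joint log-density at the start
    q_new, p_new, grad_new, logp_new = q, p, grad, logp
    p_new += 0.5 * step * grad_new   # initial half momentum step
    for i in range(n_leap):
        q_new += step * p_new        # full position step
        logp_new, grad_new = log_prob_and_grad(q_new)
        if i < n_leap - 1:
            p_new += step * grad_new # full momentum step between positions
    p_new += 0.5 * step * grad_new   # final half momentum step
    h1 = logp_new - 0.5 * p_new * p_new
    # Metropolis correction for the leapfrog integration error.
    return q_new if np.log(rng.uniform()) < h1 - h0 else q

samples = []
q = 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
```

For realistic models, the gradient evaluation inside the leapfrog loop dominates the cost, which is where the hybrid MPI+OpenMP (or SYCL) parallelism would be applied.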

Project Mentor: Anton Lebedev

Project Co-mentor: Vassil Alexandrov

Site Co-ordinator: Luke Mason

Participants: Tiziano Barbari, Morten Holm

Learning Outcomes:

The student will acquire key skills such as:
– Familiarity with software development for academic software.
– Familiarity with concepts of Bayesian inference and its applications.
– Fundamentals of statistical physics.
The student will also learn to benchmark, profile and modify CPU and multi-GPU code written mainly in C++ and CUDA, and will acquire the skills to efficiently implement hybrid programming approaches using MPI/OpenMP.

Student Prerequisites (compulsory):
Necessary: (exclusion constraints)
– Working knowledge of C++

Student Prerequisites (desirable):
Highly desirable (any two): (necessary, but could be acquired before starting the project)
– Familiarity with fundamental concepts of stochastics (PDF, CDF, Bayes’ rule).
– Hamiltonian mechanics or statistical physics.
– MPI/OpenMP programming.

Training Materials:
These can be tailored to the student once he/she is selected.

Workplan:

Week 1: Training week
Week 2: Literature review, preliminary report (plan writing)
Week 3-7: Project development
Week 8: Final report write-up

Final Product Description:
The final product will be an efficient parallel HMC implementation together with the corresponding internal report, convertible to a conference or journal paper.

Adapting the Project: Increasing the Difficulty:
The project is at the appropriate cognitive level, taking into account the timeframe and the need to submit a final working product and a report.

Adapting the Project: Decreasing the Difficulty:
The topic will be researched and the final product will be designed in full but some of the features may not be developed to ensure working product with some limited features at the end of the project.

Resources:
The student will need access to multi-CPU and multi-GPU machines, plus standard computing resources (laptop, internet connection).

Organisation:

Hartree Centre – STFC

Project reference: 2124

The focus of this project will be on further enhancing hybrid (e.g. stochastic/deterministic) methods for Linear Algebra, using advanced AI approaches to accelerate the computations. The focus is on hybrid Monte Carlo methods and algorithms for matrix inversion and for solving systems of linear algebraic equations. Recent developments led to efficient approaches based on building an efficient stochastic preconditioner and then solving the corresponding System of Linear Algebraic Equations (SLAE) by applying an iterative method. The preconditioner is a Monte Carlo preconditioner based on Markov Chain Monte Carlo (MCMC) methods, which computes a rough approximate matrix inverse first. This Monte Carlo preconditioner is then used to solve systems of linear algebraic equations, thus delivering hybrid stochastic/deterministic algorithms. The advantage of the proposed approach is that the sparse Monte Carlo matrix inversion has a computational complexity that is linear in the size of the matrix. Current implementations are either pure MPI, mixed MPI/OpenMP, or GPU-based. The efficiency of the approach is typically tested on a set of different test matrices from several matrix market collections.
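
As a rough illustration of the hybrid structure (a cheap approximate inverse used as a preconditioner, followed by a deterministic iterative stage), here is a small Python sketch. For brevity it builds the approximate inverse from a truncated Neumann series instead of Markov-chain random walks; the project’s MCMC preconditioner estimates these same Neumann terms stochastically:

```python
import numpy as np

def rough_inverse(A, terms=3):
    """Cheap approximate inverse M ≈ A^{-1} from a truncated Neumann
    series: with C = I - D^{-1} A, A^{-1} = (I + C + C^2 + ...) D^{-1}.
    An MCMC preconditioner would estimate these terms with random
    walks rather than explicit matrix powers."""
    n = A.shape[0]
    D_inv = np.diag(1.0 / np.diag(A))
    C = np.eye(n) - D_inv @ A
    S, Ck = np.eye(n), np.eye(n)
    for _ in range(terms):
        Ck = Ck @ C
        S += Ck
    return S @ D_inv

def hybrid_solve(A, b, tol=1e-10, max_iter=500):
    # Deterministic refinement stage: preconditioned Richardson
    # iteration x <- x + M (b - A x), which converges when the
    # spectral radius of (I - M A) is below one.
    M = rough_inverse(A)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + M @ r
    return x

# Small diagonally dominant test system (illustrative only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = hybrid_solve(A, b)
```

The split matters for reuse: once the preconditioner is built, it can be applied to many right-hand sides with the same matrix, exactly the scenario named above.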

The intern will take the existing codes and integrate AI approaches based on Deep Learning methods into the hybrid method, testing the efficiency of the new method on a variety of matrices as well as on systems of linear equations with the same matrix but different right-hand sides.

Project Mentor: Vassil Alexandrov

Project Co-mentor: Anton Lebedev

Site Co-ordinator: Luke Mason

Participants: Iakov Kharitonov, Adrian Lundell

Learning Outcomes:
The student will learn to design parallel hybrid  Monte Carlo methods as well as how to use advanced ML and Deep Learning techniques.
The student will learn how to implement these methods on modern computer architectures with latest GPU accelerators as well as how to design and develop mixed MPI/CUDA and/or MPI/OpenMP code.

Student Prerequisites (compulsory):
Introductory level of Linear Algebra, some parallel algorithms design and implementation concepts, parallel programming using MPI and CUDA.

Student Prerequisites (desirable):
Some skills in developing mixed code such as MPI/OpenMP or MPI/CUDA will be an advantage.

Training Materials:
These can be tailored to the student once he/she is selected.

Workplan:

Week 1: Training week
Week 2: Literature review, preliminary report (plan writing)
Week 3-7: Project development
Week 8: Final report write-up

Final Product Description:
The final product will be a parallel application that can be executed on hybrid architectures with GPU accelerators or on a multicore machine. Ideally, we would like to publish the results in a paper at a conference or workshop.

Adapting the Project: Increasing the Difficulty:
The project is at the appropriate cognitive level, taking into account the timeframe and the need to submit a final working product and two reports.

Adapting the Project: Decreasing the Difficulty:
The topic will be researched and the final product will be designed in full but some of the features may not be developed to ensure working product with some limited features at the end of the project.

Resources:
The student will need access to GPU and/or multicore machines, plus standard computing resources (laptop, internet connection).

Organisation:

Hartree Centre – STFC

Project reference: 2129

The main objective of this SoHPC project is to test how our existing code for energy-consumption prediction scales from a local server to a supercomputer.

We have developed Python and R scripts to retrieve data, store it in MongoDB and load it back when needed. Additionally, based on the historical data, we have developed scripts that build prediction models.

Using deep neural networks, building each data model takes approx. 2 minutes and approx. 8 MB of memory.

Therefore, the main goal of this project is to test how we can adapt the existing R and Python scripts so that we can build 10,000 models and predictions within a time limit of approximately one hour, using a supercomputer with state-of-the-art computing nodes and storage.
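
A quick sizing estimate shows why a supercomputer is needed: 10,000 models at roughly 2 minutes each is about 20,000 CPU-minutes of serial work, so finishing within an hour requires on the order of 334 concurrent workers. The sketch below, with a placeholder standing in for the real R/Python training step, shows the embarrassingly parallel structure:

```python
from multiprocessing import Pool

MODELS = 10_000
MINUTES_PER_MODEL = 2   # per-model training time from the text
TIME_LIMIT_MIN = 60

# Total serial work is 20,000 CPU-minutes, so about 334 workers are
# needed to finish within one hour (ceiling division).
workers_needed = -(-MODELS * MINUTES_PER_MODEL // TIME_LIMIT_MIN)

def build_model(device_id):
    # Placeholder for the real training step
    # (~2 minutes and ~8 MB per model in the actual project).
    return device_id, f"model-{device_id}"

if __name__ == "__main__":
    # On the supercomputer this pool would span many nodes (e.g. via
    # job arrays); a local process pool shows the same structure.
    with Pool(processes=4) as pool:
        results = pool.map(build_model, range(100))
```

Since each model is independent, the problem shards cleanly across nodes; the harder part is the I/O to MongoDB/Hadoop, which the workplan addresses separately.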

The workflow of the project

Project Mentor: Prof. Janez Povh, PhD

Project Co-mentor: Matic Rogar

Site Co-ordinator: Leon Kos

Participants: Irem Dundar, Omar Patricio Perez Znakar

Learning Outcomes:

  • Student will master R and parallelization in R using RStudio for creating computing jobs;
  • Student will master Hadoop and RHadoop and MongoDB for big data management;
  • Student will master basic analytics methods using RHadoop;

Student Prerequisites (compulsory):
R, Python
Basics from regression and classification

Student Prerequisites (desirable):
Basics from Hadoop.
Basics from data management (NoSQL data bases – MongoDB)

Training Materials:
The candidate should go through PRACE MOOC:
https://www.futurelearn.com/admin/courses/big-data-r-hadoop/7

Workplan:

W1: introductory week;
W2: efficient I/O management of industrial big data files on local HPC
W3-4: studying existing scripts for data management and predictions and parallelizing them;
W5: testing scripts on real data
W6: preparing materials for MOOC entitled MANAGING BIG DATA WITH R AND HADOOP.
W7: final report
W8: wrap up

Final Product Description:

  • Developed scripts to retrieve industrial big data files and store them in Hadoop;
  • Created RHadoop scripts for parallel analysis and computing new prediction models;

Adapting the Project: Increasing the Difficulty:
We can increase the size of data or add more demanding visualization task.

Adapting the Project: Decreasing the Difficulty:
We can decrease the size of the data or simplify the prediction models.

Resources:
Hadoop, R, RStudio, MongoDB and RHadoop installations at the University of Ljubljana, Faculty of Mechanical Engineering


Organisation:

UL-University of Ljubljana

Project reference: 2117

Geographic information science and remote sensing deal with data that are unique in the sense that they carry spatial information with them. Unlike other data, geospatial data possesses a unique footprint on the surface of the earth and it also has a time dimension to it. Although HPC can be easily implemented in other areas of natural sciences, it is not straightforward in the case of earth observation data because of the complexity attributed to their geospatial information. Also, one snapshot from the satellite instrument orbiting around the earth captures multiple layers of data at several spectral wavelengths which turns satellite data into complex big data. The traditional approach of image manipulation is thus inadequate for these earth observation acquisitions. The need for HPC is very clear due to the very nature of these data sets, but the implementation has several bottlenecks due to the ever-expanding nature of geospatial data. The Parallel Earth project aims to adopt the traditional MPI implementation on the earth observation workflow so that the time- and resource-consuming tasks can be executed in parallel without losing any associated information. For the project, the case study will involve processing satellite images from Sentinel-2 mission to compute Normalized Difference Vegetation Index that will give an indication of the presence of vegetation on the ground or water. The computation workflow will involve Python based processing together with Java based Graph Processing Toolkit present in the SNAP software used to handle imagery. The MPI for Python will be implemented to optimize the workflow. At the end of the project, the optimal, yet scalable approach will be identified where the technique can be replicated for similar processing.
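
The core NDVI computation itself is simple per-pixel band arithmetic, NDVI = (NIR − Red) / (NIR + Red), which is exactly the kind of operation that can be split into tiles and distributed with MPI for Python. A minimal NumPy sketch with toy reflectance values (not real Sentinel-2 data):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense vegetation; bare soil and water sit
    near zero or below."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    denom = nir + red
    with np.errstate(invalid="ignore", divide="ignore"):
        # Guard against zero-reflectance pixels (e.g. no-data areas).
        return np.where(denom == 0.0, 0.0, (nir - red) / denom)

# Toy tiles standing in for Sentinel-2 bands B4 (red) and B8 (NIR);
# in the parallel workflow each MPI rank would process one such tile.
red = np.array([[0.1, 0.3], [0.2, 0.0]])
nir = np.array([[0.5, 0.3], [0.6, 0.0]])
index = ndvi(red, nir)
```

Because each pixel is independent, tiling the scene and scattering tiles over ranks involves no halo exchange, making NDVI a good first case study for the MPI port of the SNAP-based workflow.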

[Graphic in Section 9]
Image from European Space Agency’s Sentinel-2 mission showing the magnitude of Normalized Difference Vegetation Index along the coastal waters of Dublin Bay in Ireland. The image from June 2019 shows the presence of vegetation indicated by the higher values of the index shown in green.

Project Mentor: Sita Karki

Project Co-mentor: Manuel Fernandez

Site Co-ordinator: Simon Wong

Participants: Niels Hvidberg, Rabia Özdoğan

Learning Outcomes:
Students will be able to work with earth observation data and optimize the workflow to deal with the big data coming from satellite missions. They will gain experience in manipulating high-resolution imagery, taking advantage of high-performance computing.

Student Prerequisites (compulsory):
Basic knowledge of Linux, Python Programming, Background in natural or physical science.

Student Prerequisites (desirable):
Familiarity with or interest in geographic information system, remote sensing and natural science.
Familiarity with QGIS and SNAP software.
Experience working with geospatial data, C or Fortran.

Training Materials:
General overview of the Earth Observation Application (Week 1 and 2)
https://www.esaopticaleomooc.org/course-overview

Introduction to SNAP Software:
https://www.youtube.com/channel/UCR_W9FE-AxRHeOQr-hocHqg/playlists

Workplan:

Week 1: Training week
Week 2: Introduction to project case studies and HPC implementation.
Week 3: Submission of work plan, introduction to geographic information system and remote sensing data.
Week 4: Hands on with geospatial data.
Week 5-7: Image processing and HPC implementation.
Week 8: Report preparation, submission, presentation.

Final Product Description:
The expected result will demonstrate the successful adoption of parallelization for computing earth observation data sets. The final result will show the best approach for the basic case study.

Adapting the Project: Increasing the Difficulty:
The project will involve the basic workflow involving earth observation data sets and the workflow can easily be replaced with multiple data inputs with varied resolution to increase the complexity level.

Adapting the Project: Decreasing the Difficulty:
The basic workflow will be replaced with simple image manipulation with single input and output to decrease the difficulty level. Also, the resolution and multi-band approach associated with multi-spectral imagery can be dropped in the case of workflow simplification.

Resources:
Only open-source software will be used in the project and any supplementary material or training activities can be completed on a personal computer.

Organisation:

ICHEC-Irish Centre for High-End Computing

Project reference: 2116

In this project, students will develop an efficient parallel algorithm for determining good upper bounds, if not the exact value, of a quantity called treewidth for arbitrary graphs. While computing the treewidth of an arbitrary graph is known to be an NP hard problem, being able to find good upper bounds for the treewidth is very useful for many important applications. One area where this is particularly important is in simulating quantum computers. Notably, in 2019 Google demonstrated the capabilities of their quantum processing unit, known as the sycamore chip. To validate the correctness of the chip’s output, simulations of the chip had to be carried out, a task which can easily become infeasible if not done efficiently. A serial algorithm was used to compute an upper bound for the treewidth of a graph associated with the chip and the result was used to determine an efficient simulation scheme for the device. Better upper bounds lead to better simulation schemes. As such, a tool for utilising parallel computing to find good upper bounds for the treewidth of a graph would be valuable for the development of such devices.

Computing approximations of the treewidth has important applications in many other areas, including extracting information from social networks (https://arxiv.org/abs/1411.1546) and inference in probabilistic graphical models (https://arxiv.org/abs/1206.3240), and it was one of the topics of the PACE 2017 challenge.

More concretely, the algorithm to be developed by the student will use the branch and bound paradigm to search through the set of elimination orderings for a graph provided by the user. An elimination ordering of a graph is an ordering of the graph’s vertices for elimination, where eliminating a vertex means connecting all of its neighbours together before removing it from the graph entirely. The width of an elimination ordering is the maximum number of neighbours any one vertex has when it is eliminated according to that ordering. The treewidth of a graph can be defined as the minimum width over all possible elimination orders.
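
The definition above translates directly into code. The following sketch (in Python with plain adjacency sets, though the project itself would use Julia) computes the width of a given elimination ordering:

```python
def width_of_ordering(adj, order):
    """Width of an elimination ordering: the largest neighbourhood
    size seen when each vertex is eliminated in turn."""
    # Work on a copy: elimination mutates the graph.
    g = {v: set(ns) for v, ns in adj.items()}
    width = 0
    for v in order:
        nbrs = g[v]
        width = max(width, len(nbrs))
        # Connect all of v's neighbours pairwise, then remove v.
        for a in nbrs:
            g[a] |= nbrs - {a}
            g[a].discard(v)
        del g[v]
    return width

# 4-cycle a-b-c-d: its treewidth is 2, and any elimination
# ordering of a cycle achieves width 2.
adj = {"a": {"b", "d"}, "b": {"a", "c"},
       "c": {"b", "d"}, "d": {"a", "c"}}
w = width_of_ordering(adj, ["a", "b", "c", "d"])
```

The branch and bound search described next explores the space of such orderings, pruning any partial ordering whose width already exceeds the best upper bound found so far.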

Finding the elimination ordering (EO) with optimal width involves searching over all possible EOs of the graph. Since the number of EOs grows extremely fast as the size of the graph increases, searching all possibilities quickly becomes intractable even for modest-sized graphs. The branch and bound algorithm works by maintaining upper and lower bounds for the treewidth and using these to avoid whole ranges of EOs which we can tell in advance will not have optimal width. Implementing this algorithm in parallel presents some interesting trade-offs. It is possible to implement it as an embarrassingly parallel problem with no communication between processes, which avoids synchronisation issues and communication overhead but means additional work for each process.

Julia is the preferred language for this challenge as it provides the high-level abstractions of interpreted languages like Python with the speed of compiled languages like C and Fortran. It is also possible to use Python or C++ if the student has a strong preference.

[Graphic in Section 9]
An example of a tree decomposition. Image taken from Adcock et al “Tree decompositions and social graphs” https://arxiv.org/pdf/2012.13349

Project Mentor: Niall Moran

Project Co-mentor: John Brennan

Site Co-ordinator: Simon Wong

Participants: Oliver Legg, Valentin Trophime

Learning Outcomes:
The students will gain experience of developing, debugging and profiling parallel programs which use MPI as well as experience with graph algorithms. They will also be exposed to the branch and bound programming paradigm which can be applied to many problems. They will also improve their general programming skills and gain experience of working as part of a team.

Student Prerequisites (compulsory):
Some experience and exposure to scientific programming and graph theory would be beneficial. Ability to read and understand technical literature containing equations and algorithm outlines.

Student Prerequisites (desirable):
N/A

Training Materials:

Workplan:

Week 1: Training week

Week 2: Introduction to Julia, treewidth theory, relevant graph packages.

Week 3: (Plan due) Read the Gogate and Yuan papers; understand the basic branch and bound algorithm without sophisticated pruning heuristics.

Week 4: Implement basic anytime algorithm, run several instances in parallel with random search orders.

Week 5: Get different processes to search different branches of the search space. Include Yuan’s lower bound pruning technique (requires inter process communication).

Week 6: Include Yuan’s similar group heuristic. Profile and optimise the implementation.

Week 7: Apply the developed algorithms to example problem instances from the PACE 2017 challenge and to examples used as part of the QuantEx project.

Week 8: Write report and prepare presentation.

Final Product Description:

  1. A program to compute an elimination ordering of minimal width for an arbitrary graph within a time limit specified by the user.
  2. Benchmark results showing the quality of solutions found and the scaling with different numbers of cores and nodes.

Adapting the Project: Increasing the Difficulty:
There are several non-trivial heuristics described in the papers by Gogate and Yuan for improving the efficiency of the algorithm which could be included.

Adapting the Project: Decreasing the Difficulty:
Inter-process communication can be reduced by removing the need to assign branches of the search space to each process; instead, branches could be selected at random on each process. Additional optional heuristics can be left out, and sample code can be provided to accelerate progress.

Resources:
The students will require access to a laptop/workstation on which they can install development tools and compilers. They will also be provided with access to cluster resources for developing, profiling and benchmarking runs.

Organisation:

ICHEC-Irish Centre for High-End Computing

Project reference: 2131

CFD involves solving time-dependent partial differential equations (PDEs), which are handled by numerical approximation. This yields systems of equations (matrices and vectors) that must be solved by either direct or iterative solvers. These solvers need huge computational power depending on the chosen problem. Pre-processing and post-processing (of time-step solutions) also require a lot of computational power to visualise results or produce animations.

Efficient car modelling is essential for fuel efficiency and environmental friendliness, particularly with regard to CO2 emissions and electricity consumption. Whether it is a passenger car or a race car, lift and drag are the main parameters that need to be studied for efficient car modelling.

To understand car aerodynamics, simplified models such as the Ahmed body or the SAE body have been studied extensively and validated against experimental and numerical results. But to understand the flow phenomena around a real car, a more representative model should be investigated. The DrivAer model is a generic car model close to a standard present-day production vehicle. Understanding the aerodynamics of the DrivAer model will open up many challenges in the design and understanding of a standard car’s aerodynamic forces.
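
The lift and drag forces obtained from a simulation are usually reported as non-dimensional coefficients so that they can be compared across speeds, scales and wind-tunnel data. A small sketch with purely illustrative numbers (not DrivAer reference values):

```python
RHO_AIR = 1.225  # sea-level air density in kg/m^3

def force_coefficient(force, speed, frontal_area, rho=RHO_AIR):
    """Non-dimensional force coefficient C = F / (0.5 * rho * v^2 * A).
    With the drag force it gives C_d, with the lift force C_l."""
    return force / (0.5 * rho * speed ** 2 * frontal_area)

# Illustrative numbers only: 300 N of drag at 30 m/s
# with a 2.2 m^2 frontal area.
cd = force_coefficient(300.0, 30.0, 2.2)
```

Normalising by dynamic pressure and frontal area in this way is what makes a comparison between the CFD results and the TUM wind-tunnel measurements meaningful.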

DrivAer

Project Mentor: Dr. Ezhilmathi Krishnasamy

Project Co-mentor: Dr. Sebastien Varrette

Site Co-ordinator: Dr. Ezhilmathi Krishnasamy

Participants: Benet Eiximeno Franch, Paolo Scuderi

Learning Outcomes:
* Computational fluid dynamics using either open-source or commercial tools to solve the given problem in an HPC setting, and more insight into aerodynamics.
* Pre-processing, computation and post-processing techniques (data analysis).
* Parallel visualization.
* Optimization (load balancing in an HPC setting and design parameters for the DrivAer model).

Student Prerequisites (compulsory):
* Fluid mechanics and basic programming skills.

Student Prerequisites (desirable):
* Familiarity with any open-source or commercial CFD software and with computational mathematics.

Training Materials:
* Paraview : https://www.paraview.org/hpc/
* VisIt : https://wci.llnl.gov/simulation/computer-codes/visit/
* OpenFOAM : https://www.openfoam.com/
* Geometry detail : https://www.mw.tum.de/en/aer/research-groups/automotive/drivaer/geometry/
* Experimental results : https://www.mw.tum.de/en/aer/research-groups/automotive/drivaer/

Workplan:

Week 1: HPC training
Week 2: Project preparation
Week 3: Preprocessing
Week 4: Simulation
Week 5: Simulation
Week 6: Post processing
Week 7: Results analysis
Week 8: Report writing

Final Product Description:
* Compare the simulation results for any one of the model configurations against its wind-tunnel experimental results.

Adapting the Project: Increasing the Difficulty:
* The DrivAer model has up to 18 configurations, so the baseline is to study only 1 or 2 of them. Considering more configurations would make the project more difficult.

Adapting the Project: Decreasing the Difficulty:
* Dropping the comparison with wind-tunnel experimental results would make the project easier.
* Considering just a few simulation parameters makes it simpler still.

Resources:
ANSYS and OpenFOAM are available. Other open source tools (both for simulation and pre-processing) can be installed upon student request.

Paraview and VisIt (data processing and visualization) are also available.

Organisation:

ULux-University of Luxembourg

Project reference: 2133

While High-Performance Computing (HPC) traditionally focuses on compute-intensive tasks, Big Data frameworks are more focused on data-intensive tasks. Our project aims at investigating the intersection of HPC and Big Data on the basis of case studies arising from the two fields. How can Big Data frameworks be ported to a traditional HPC architecture? What are the challenges and limitations? How can HPC enhance Big Data analysis? What tools and projects are already available to facilitate the interoperability of Big Data and HPC platforms for what is also known as HPDA (High Performance Data Analysis)? On the one hand, existing Big Data solutions will be run on the local supercomputer and make efficient use of its resources. Performance bottlenecks for Big Data users on HPC include the queuing system, waiting times, and access restrictions to data and compute resources. On the other hand, we will use methods taken from Big Data software stacks to solve pre- and post-processing tasks of High-Performance applications. An important topic is to identify and classify tasks that can profit from new implementation ideas, where the interplay of HPC and Big Data may lead to easily implementable solutions to advanced problems. Bringing together HPC and Big Data solutions poses a challenge in terms of programming paradigms for computation (“data parallel” versus “automatic parallelization”) and even in terms of common programming languages. This may be overcome by focusing on Python as a common denominator, without excluding other languages. Our project will develop prototypical applications to illustrate the integration of Hadoop and HPC tasks, leveraging the Vienna Scientific Cluster (VSC) as well as the Apache Hadoop Little Big Data (LBD) cluster of the Technical University of Vienna (TU Wien).
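
As a tiny illustration of why Python can serve as the common denominator, the classic word count splits into a map step and a reduce step in plain Python; the same two functions could back a Hadoop streaming job or be distributed over MPI ranks (the shard data below is made up):

```python
from collections import Counter
from functools import reduce

def map_count(lines):
    # Map step: each worker (a Hadoop mapper or an MPI rank)
    # counts the words in its own shard of the input.
    return Counter(word for line in lines for word in line.split())

def reduce_counts(parts):
    # Reduce step: merge the partial counts from all workers.
    return reduce(lambda a, b: a + b, parts, Counter())

shards = [["big data on hpc"], ["hpc meets big data"]]
total = reduce_counts(map_count(s) for s in shards)
```

Only the transport layer changes between the two worlds (HDFS shuffle versus MPI gather); the computation expressed in Python stays the same, which is the portability argument made above.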

Person looking at data (source: https://unsplash.com/photos/WiCvC9u7OpE)

Project Mentor: Giovanna Roda

Project Co-mentor: Dieter Kvasnicka

Site Co-ordinator: Claudia Blaas-Schenner

Participants: Pedro Hernandez Gelado, Rajani Kumar Pradhan

Learning Outcomes:
The student will get to know both HPC and Big Data ecosystems and will be able to leverage these technologies for their computational needs.

Student Prerequisites (compulsory):
Familiarity with the Linux shell.

Student Prerequisites (desirable):
Experience with  Hadoop and/or HPC.

Training Materials:
Will be provided at a later time

Workplan:

  • Week 1: training
  • Week 2-3: Introduction to Big Data
  • Week 3-4:  Introduction to HPC
  • Week 5-6: Running Big Data applications on HPC
  • Week 7-8: Further experiments and report.

Final Product Description:
The expected project result consists of a report and software prototypes illustrating the work done.

Adapting the Project: Increasing the Difficulty:
Packaging the applications in containers for portability

Adapting the Project: Decreasing the Difficulty:
Get familiar with Big Data and HPC but run applications just on Hadoop.

Resources:
A client machine for connecting to the clusters and fast Internet connection.

Organisation:

VSC Research Center, TU Wien
