I am finally participating in the PRACE Summer of HPC, a programme I have wanted to be part of for a long time! I followed a parallel computing course during my last semester at university, and the teacher of this course shared the application details of this programme with us. But I only saw them on the last day of applications, with just a few hours left. I was very excited and rushed to apply. After submitting the application form, I knew there were some fields I hadn’t filled in. The next day, I received an email from Leon: “You didn’t provide your professor’s name, surname, email and institution, so that we can ask him for your recommendation letter. Please send this right away!”. This email made me believe that I could still be accepted to the programme, so I quickly sent Leon all the required information.

Three students from my university had applied to the programme, and I was impatiently waiting for the results of my application. The initial communication informed me that I was not accepted to the programme, which disappointed me. However, one of my friends was accepted and, of course, I was so happy for him.

About a month later, I received another email from Leon. The selected candidate for Project 1810 could not participate, and I had been identified as the alternative candidate for Bologna, CINECA, Italy. When I saw this email, I was climbing the walls. The project was also a great fit for me. Everything was so perfect. After this news, I guess I could not sleep for a week :). In no time, I completed the visa process.

The training week of the programme was in Edinburgh. We were trained in parallel programming every day for 5 days. We tried to solve big problems using parallel programming on supercomputers, accessed remotely. I had experience connecting to remote computers before. We learned the advantages, disadvantages of and differences between MPI and OpenMP. MPI, which uses a distributed-memory model, was the best fit for us, so we worked through nearly all of our exercises and functions using MPI.
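To give a flavour of the style those exercises followed, here is a minimal sketch of my own (not one of the actual course exercises): in MPI’s distributed-memory model, every process owns its own slice of the data, and results are combined only through explicit messages.

```cpp
// Minimal MPI sketch: each rank sums its own slice of the work, then
// explicit communication combines the partial results on rank 0.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process owns (and works on) only its own slice: no shared memory.
    long local = 0;
    for (long i = rank; i < 1000000; i += size) local += i;

    // Distributed memory means results meet only via messages.
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("sum = %ld\n", total);
    MPI_Finalize();
    return 0;
}
```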

At the end of each training day, we all tried to spend time together so we could get better acquainted. We took a sightseeing tour of Edinburgh and climbed Arthur’s Seat. Of course, we had breakfast, lunch and dinner together. Everyone is wonderful and creative, and I am very glad to have met them all. I believe they will all be successful in their projects.

With this, my first post, my project begins, and I hope it will be a successful one.
See you in future posts about my project, Web Visualization and Data Analysis of Energy Load of an HPC System.


I am an Electronics and Telecommunications (i.e. Networking) undergraduate at AGH University of Science and Technology in Cracow, Poland. My current research interests revolve around machine learning, Big Data architectures and spintronics. Although my background is in engineering, I hope for a future research career in an exotic mixture of artificial intelligence and physics (quantum machine learning anyone?).

As much as I enjoy myself, there is always a certain discomfort in sharing a personal note. Therefore, to indirectly evade this problem (avoiding problems and responsibilities has always worked for me), I have put up a quick doodle to highlight the key aspects of my inexplicably complex personality.

So far, I’ve had a chance to participate in a training week which is a part of the PRACE Summer of HPC programme – over 20 potential future researchers who share a passion for and belief in science (so lofty) have met in Edinburgh to learn the craft of HPC parallel programming.

I think this is a perfect moment and place to list my expectations about my participation in the PRACE Summer of HPC programme. Defining clear goals and hopes helps in focusing on completing tasks effectively. I plan to make the most of the next two months, so here we go:

  • work on the newest HPC technologies and participate in trending research topics
  • meet interesting and inspiring people
  • have the opportunity to work with top-notch researchers
  • experience a vibrant city for 2 months, possibly with lots of sightseeing

In the last post I will attempt to compare the outcomes of my PRACE SoHPC experience against this list above, but for now I’m left with an itching curiosity and positive attitude.

That’s all folks, see you in the next post!

*footnote was removed due to copyright infringement*

Successful man’s bio


At the top of Kriváň, the most famous peak in Slovakia.

Hi there! My name is Filip Kuklis. I am a 25-year-old IT guy and I come from the Slovak “city of dreams” – Piestany. Piestany is also known as the “Little Amsterdam” of Slovakia because of its bikes. But for now, I live in the second-largest city of Czechia, also called the “Silicon Valley” of central Europe.

I decided to follow the Slovak tradition of conquering Czech universities, so I live and study in Brno at the Brno University of Technology, Faculty of Information Technology, where I actually graduated three weeks ago with a Master’s degree.

I first got to know HPC during my Bachelor thesis, “Fast Reconstruction of Photoacoustic Images“, which involved accelerating Matlab code using C++/OpenMP as part of the k-Wave project. k-Wave focuses on medical applications of high intensity focused ultrasound. This was the first time I worked with supercomputers (using the IT4Innovations Anselm and Salomon clusters). This year, I finished my Master‘s degree in a branch of Bioinformatics which has a strong connection with supercomputing. The title of my Master’s thesis was “Acceleration of Axisymmetric Ultrasound Simulations“, which is also a C++/OpenMP implementation of Matlab code as part of the k-Wave project. This implementation was also carried out on the Anselm and Salomon supercomputers.
After summer, I am going to enroll in the Ph.D. programme, where I would like to continue working on the k-Wave project. My research is supposed to include many optimization techniques such as evolutionary algorithms, neural networks, deep learning etc. Furthermore, it should use extensive ultrasound models implemented on supercomputers using OpenMP, MPI, and CUDA.

In my opinion, the PRACE Summer of HPC programme is a great opportunity for me to gain a lot of experience. First of all, I would like to improve my HPC skills and learn new HPC approaches. I would also like to improve my language skills, as this will be my first experience working abroad. Because I am very interested in HPC, I am really looking forward to meeting new people who are specialists in HPC and who can teach and motivate me.
I am also very interested in quantum physics, so I am happy that I was selected for the Summer of HPC project on quantum computing at SURFsara in the Netherlands.

When I am not doing stuff on the computer, I really like hiking – especially in the High Tatras (the smaller Slovak Alps). In Brno, of course, I like to enjoy a good Czech beer. I also love mountain biking on my beautiful bike “Shrek” (actually it is a TREK). In the winter, I like to ski in the Slovak mountains with my best ski instructor, Daniela. All year round, I enjoy walking around the city and the parks in Piestany. I enjoy all these activities most with my partner Daniela and all of my friends.

Sitting in a lab at Jaume I University in Castellón de la Plana, Spain, working on some C++ code, I thought: why not break the ice and write my first blog post to introduce myself? I am Sukhminder Singh, an MSc student in Computational Engineering at Friedrich-Alexander-University, Erlangen, Germany. Before beginning my Master’s studies in Germany, I studied Mechanical Engineering and worked for an automotive company in India. Since my childhood, I have been very passionate about simulation technologies which can help us simulate the real world on computers – for instance, crashing a car in the digital world and predicting what would happen to the occupants in the real world. So, after working for 3 years in manufacturing, I decided to change my career and study Computational Mechanics and High Performance Computing (HPC). Now, two years into my Master’s studies, I have already started my two-month journey with the PRACE Summer of HPC 2018 programme.

Last week, I attended a training week at the University of Edinburgh, organised by PRACE at EPCC. The training helped me refresh my basics of parallel programming with MPI. I also got a chance to run codes on ARCHER (Advanced Research Computing High End Resource), the UK’s national high-end supercomputing system. Although the training was quite intensive, I got some time in the evenings to see the beautiful city of Edinburgh. The city reminded me of the epic Harry Potter movies I used to watch when I was a kid. It is really a dreamy place for Hogwarts fans, including me. The architecture and the design of the city attracted my close attention. In the middle of the week, I had a hiking trip to Arthur’s Seat. The views from the peak were phenomenal: I was able to have 360° views of the city, including the North Sea. With my earphones in, listening to Classic FM (as far as I remember, it was 101.7 FM) and watching the beautiful sunset, I would say that was the most wonderful and soulful experience I have ever had in my life.

Photograph taken at Arthur’s Seat

I should mention one of my hobbies, which I fell in love with recently: I learned to dance basic Salsa this semester and hope to perfect it next semester. I will try to find some clubs in Castellón to practice it. Sometimes, I also like to just do nothing, which I think is important too.

Now, I am going to work on my project for the next 7 weeks. The goal is to make LAMMPS, a classical molecular dynamics code, malleable. In other words, I am working to enable LAMMPS to be resized, in terms of the number of processes, during its execution. In the coming days, I will write more about my project and share my experiences in Spain with you. Until then, have a nice time!
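P.S. For the technically curious, here is a rough, hypothetical sketch (plain MPI, not LAMMPS code, and not necessarily the mechanism we will end up using) of how an MPI job can grow at runtime via dynamic process management:

```cpp
// Hedged sketch: growing an MPI job at runtime with dynamic process
// management. A truly malleable code would also redistribute its
// simulation data across the enlarged communicator.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    MPI_Comm world;  // the resized communicator everyone ends up in
    if (parent == MPI_COMM_NULL) {
        // Original processes: spawn two extra copies of this binary.
        MPI_Comm children;
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(children, 0, &world);
    } else {
        // Newly spawned workers: join the merged communicator.
        MPI_Intercomm_merge(parent, 1, &world);
    }

    int rank, size;
    MPI_Comm_rank(world, &rank);
    MPI_Comm_size(world, &size);
    if (rank == 0) std::printf("job now has %d processes\n", size);

    MPI_Comm_free(&world);
    MPI_Finalize();
    return 0;
}
```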

Hello everyone. My name is George and I am a 24-year-old student from Greece, studying for my Master’s in Computational Physics at the Aristotle University of Thessaloniki.

During my Bachelor studies in Physics, I had a strong interest in every kind of programming, so it naturally became a hobby of mine in my free time. Low-level programming like Assembly or GPU shader programming looked really exotic and appealing to me. Digging for information across the Internet, I took my first steps into the huge world of programming. Along the way, I met my supervisor, a computational solid state physicist, who guided me and gave me the opportunity to get involved in programming in a more serious way. Doing my BSc thesis on the visualization and manipulation of crystals, I built my first big program from scratch – the initial big step that kept me going and gave me a reason to make coding part of my studies. The thesis turned out to be a big success and was presented at the International Materials Research Society Conference in Mexico in 2018. A year after developing my thesis and finishing my Bachelor, we keep building and improving the code and hope to release it publicly one day.

Combining physics and coding was a “dream come true” for me, so I immediately jumped into my Master’s studies in Computational Physics. There, I learned various numerical algorithms applied to fields such as quantum mechanics, electromagnetics, solid state physics, data analysis, etc., but more importantly, it was an opportunity for me to meet new people, work and interact with them, and make new friends. I learned that working on what you like is fun, but working with good friends and nice company is even better. One of those friends mentioned the PRACE Summer of HPC programme and insisted that it was a great opportunity for me. So I followed his advice and applied!

My experience in Edinburgh with the PRACE Summer of HPC programme was really great. In just one week, I met a lot of people from all around the world and made some friends. Hanging around with them for almost the whole of each day that week created some very joyful moments.

I wish everyone good luck with their projects.

George

The Guesthouse is home to visiting students and researchers.

The Jülich Forschungszentrum lends bikes to visiting students!

Moving countries is always a bit of a hassle, even if you are only coming for a summer internship. Things can, and often will, go slightly wrong, but I don’t think it should stop anyone from signing up to new challenges! Here’s a couple of impressive blunders you can maybe avoid though:

1: Arriving at the airport, grab a train with very tight transfer margins, rush to the wrong platform, and end up at a station called Kuhbrücke (literally “Cowbridge”). Bonus points are given for running with heavy luggage in 30 °C, and accidentally smudging your face with dirt at some point, so that you can make a great first impression when meeting your colleague an hour late at the station.

2: Grabbing an ice cream when you go grocery shopping is actually a good idea. However, avoid eating it on a bench where an ant army invades your grocery bag, providing a nice surprise (for you and your new flatmates!) when unpacking in the kitchen.

3: Big research institutes, like the Jülich Forschungszentrum (= research centre), can easily get you confused and/or lost in the first few days. Don’t be surprised if the staircase you took just an hour before suddenly teleports you to the opposite side of the building. These are high-tech research facilities after all – who knows what kind of quantum tunneling experiments they’re conducting?

Slight challenges aside, my first week in Germany has gone really well! My supervisor, Dr. Stefan Krieg, encouraged me to do as much background reading on graphene as I like before we start tackling the computational problem. So, days at work have consisted of reading a textbook and papers on modeling quantum fields on a lattice. When I don’t understand something, I can just knock on someone’s door and chat about it; there’s lots of friendly help available. Hopefully in my next post I’ll have some news on my actual project!

I’ve also been getting to know other students working on their MScs, summer projects, and PhDs at the Jülich Supercomputing Centre, as well as my flatmates at the “Guesthouse”, an 11-storey building housing students and visitors to the Forschungszentrum. It’s actually very nice, even though my expectations weren’t that high after we were warned that the accommodation is “designed for a student budget”. I have a very large room on the 8th floor, with a window facing the fortress in the middle of the town. I share the kitchen with my three flatmates, but each of us has our own mini-fridge, just like you’d find in a hotel! And downstairs there’s a TV, where we’ve been gathering to watch the football World Cup, and a bookshelf I’ve already started to explore.

On my first day, I was lent a bike by the research institute, and I’ve really loved riding to work each day through the field and woods bordering the Forschungszentrum. I’m looking forward to exploring my surroundings by bike this weekend, and will try to remember to take a few pictures for the blog as well!


Hello world! I am Marius Neumann, a 22-year-old physics student from Germany. Currently I’m studying for my Master’s degree at Bielefeld University, Germany. Bielefeld is located close to the border between North Rhine-Westphalia and Lower Saxony (and, despite certain conspiracies, does indeed exist).

During my studies I haven’t managed to travel much yet, so I see my PRACE Summer of HPC opportunity as a chance to visit some parts of Europe – in my case Scotland and Cyprus, where I expect insight into a culture quite different from Germany’s, hot weather, little rain and much to learn.

While studying physics, I somehow got into the theoretical department by writing my Bachelor’s thesis on Lattice QCD, and since these calculations tend to be quite computationally expensive, I got into high performance computing as well. I like to describe QCD with a quote from Goethe’s Faust as “what holds the world together at its innermost”, since it is the theory of the force which binds quarks together to form protons and neutrons – and these make up the world as we know it.

I see High Performance Computing not only as a way to speed up my calculations, but also as the door to new and interesting fields such as Big Data and Artificial Intelligence, which may – and probably will – have an impact on our future. I therefore consider HPC, beyond physics, an important field to be trained in.

During the Summer of HPC programme I will spend two months in Nicosia at the Cyprus Institute, where I will try to enable lattice QCD simulations on GPUs by optimizing solver performance on Piz Daint, currently the sixth-fastest supercomputer in the world.

When I’m not debugging code, I enjoy playing chess, grabbing a beer or sometimes even hiking around.

See you around Europe!

Hello everyone!

I am writing this post from Ljubljana, the capital of Slovenia, where my PRACE Summer of HPC project is taking place. The training week has been fabulous, but now it’s time to pack and start a new individual adventure!

Although we have had a lot of fun together, and we have learnt so much from each other, the past week’s main task has been to significantly improve our HPC knowledge! Each of us has a different background, so from now on, I will talk about myself.

I study Physics, I’m twenty years old, and next year I will finish my degree.

It’s me!

My interest in HPC comes from my interest in Computational Physics. I have always included different kinds of visualizations and simulations in my university projects, and as the problem size grows, things become more and more interesting! That’s the reason why I applied for the PRACE SoHPC programme: combining computation and physics is pretty, pretty good!

Although I am quite interested in HPC, I had never built complex programs with MPI, nor had I ever had access to supercomputers like ARCHER. As for MPI, I needed a few extra hours in my room to properly understand the lectures, but the outcome has been great, and I am very happy with that.

As for ARCHER, now that I know how to use its resources, it will be great to have the possibility of getting an account and using them in the future (remember, by passing the “driving test” available here). The fact that we have the opportunity to access these kinds of supercomputers is awesome.

To sum up, the lectures have been considerably useful and very well organized. The people from EPCC have done a great job. On the other hand, I have been wondering whether I will someday see the people I have met this week again. I really hope we can meet again some day. We have spent some very fun moments together (such as the evening on the beach, the one on Holyrood’s hill, or the last night in the common room, among many more), but if I had to choose one, I would definitely select this one!:


Even though the ascent was not easy, we had such a great time together on the hill in Holyrood Park. I will never forget that day.

Meanwhile, I have already started my project at the University of Ljubljana. I am currently learning a huge amount of interesting things which will form the basis of my work, related to the organized storage of data via schemas and its visualization in 3D, and I am getting very familiar with some powerful clusters around Europe. Also, my project has an amazing physics background, which is awesome for me. But we will leave that for the next post!

Thank you for reading!

Check out my video on YouTube, which is related to this post:

Stay happy!

Mario

Me at Virginia Beach, USA

Hi there, my name is Marc, I’m 24 years old, and I’m in the first year of my PhD at the Universitat de Barcelona, in Catalonia. I’ve done all my studies in Barcelona, starting with a Bachelor’s degree in physics, then a Master’s in particle physics, and now a PhD in nuclear physics.

Don’t be fooled by the picture I have posted – it was the only recent one I had (I don’t like taking pictures, especially of myself). I’m not a beach person; I prefer the mountains. Maybe this is because I’m from a small town in the centre of Catalonia, called “les Masies de Voltregà”, where one can go for a walk without needing to take the car. Or maybe because I get sunburned very easily. One of the two (or both).

My research area, as I’ve said before, is nuclear physics. In later posts I’ll talk more about it, but for now it’s enough to say that I try to simulate the most fundamental particles of matter (quarks and gluons) and see whether what we get matches what we have in our “real” world. In this way we can study systems that are very difficult to test experimentally. For that, we need powerful computers to do all the calculations, and that’s why I applied for the Summer of HPC project at CaSToRC, The Cyprus Institute, in Nicosia, where they research exactly the same field of particle physics as I am studying. These two months are going to help me get familiar with supercomputers and with writing parallel code, so that in the future I can use MareNostrum – the Spanish national supercomputer hosted at the Barcelona Supercomputing Center (BSC). Furthermore, being able to work with one of the leading groups in my research area is going to be very challenging, but at the same time very enriching.

When I’m not doing physics, I like to listen to music, read or go hiking, and now I’m travelling quite a lot for my PhD (as you can see from my picture), killing two birds with one stone: I learn about new areas of physics that I’ve never heard of before and meet new people at the same time. To end, I’ll leave you with a cool time-lapse I took during one of my flights.

Airplane flying through clouds

I hope you like it and see you in Cyprus! (and remember to bring sunscreen, lots of sunscreen!!)


This is a great experience, I am very honoured to participate in the PRACE Summer of HPC 2018 programme.

Not only do I have access to the knowledge of high-performance parallel computing, but I have also met people and made friends from all over the world. Through various events during a brief training week – composed of study and life lessons – we really got to know each other.

My name is Zheqi Yu, I am from China and I am studying in the UK. Currently, I am completing my PhD in electrical engineering at the University of Wolverhampton. Electronic engineering is an area I dabbled in as early as the beginning of my college years. To enhance my practical skills, I joined the electronic laboratory of the University of Wolverhampton to conduct research on hardware development – including software development for embedded systems. Exposed to the dynamic environment which characterises the University of Wolverhampton, I have gained an in-depth understanding of computer science, software tools and research methods in computing. This eventually gave me the opportunity for PhD studies in my intended research field. During my three years of PhD research, I have developed an embedded system for pedestrian detection. The algorithm and the integrated hardware and software design have been implemented on Xilinx ZYNQ hardware. Moreover, I improved and enhanced this project as a demo that was recognised as a European finalist at the Xilinx OpenHW 2016 competition.

As for my future academic and career objectives, I wish to achieve something in frontier research in electronic engineering and fulfil my research aims. After that, I may continue with post-doctoral research, in the hope of keeping up with technological innovation and becoming a pioneer in the field of electronic engineering. In the meantime, I have already launched a roadshow for my projects – to gain venture investment, with a sincere wish of contributing to society.

I am very happy to participate in the “Automatic Frequency Scaling for Embedded Co-processor Acceleration” Summer of HPC project. For the next two months, I will be studying at the Barcelona Supercomputing Center (BSC). This project provides different experimental methods for my PhD research, allowing me to study hardware energy optimisation more deeply. I am ready to face any challenges posed by my project at BSC this summer. With the support of the academic foundation built up during my current research, I am optimistic that I will smoothly adapt to a new way of studying in a different culture.

“Hey, thought this might be of interest…” A one-line email with a link from my supervisor this spring. The website was, of course, summerofhpc.prace-ri.eu, and I did in fact find it of interest. The opportunity to work in a European research institute, learning more about how we can use powerful computers to model the world around us? I could not wait for the application period to start!

Therefore, when in April I was invited by PRACE to work on graphene modeling at the Jülich Supercomputing Centre (JSC) in Germany over the summer, I was extremely excited! As a University of Manchester student, I had already had the chance to hear about graphene from the very best working in the field of nanomaterials – including Nobel laureate Sir Andre Geim himself. Now I would have the chance to explore how its electronic properties are modeled. On the other hand, I suddenly grew nervous. What if all the other participants were computer geniuses, talking gibberish Linux jargon, while I only have a bit of experience with C++ programming?

The Summer of HPC 2018 students.

Well, on Sunday 1 July 2018, after arriving in beautiful Edinburgh, I finally met the other PRACE Summer of HPC students. Everyone turned out to be very friendly and actually, there were plenty of other non-computer scientists in our group – physicists, mathematicians, engineers… The first night was reserved for getting to know each other in the traditional British way, i.e. in the pub. Sure, we talked a bit about our projects, but mostly it was like any other night out with a group of students, talking about our home countries and universities, struggling to choose from the long list of burgers…

The first day of training seemed to go well for everyone. We learned about Edinburgh’s ARCHER supercomputer, a group of 4920 computers which can co-operate to solve large problems. Although I’d never even logged on to a computer using remote access, with the training team’s clear instructions and hands-on exercises, I was soon running my first program on ARCHER. I learned a lot, and after a long day involving lots of new terminology (did you know a program can be “embarrassingly parallel”?) and screen-time, I even had time for a jog around Arthur’s Seat, the prominent and beautiful hill just next to our accommodation.

Our second training day was cut short, as we left for a bus tour around the town. Despite the traffic, we had the chance to see Edinburgh castle, and maybe even a glimpse of some royal visitors in the distance (or at least a very large hat). We finished off the day at Illegal Jack’s Mexican restaurant. I have to say, vegetarian haggis actually makes for a great burrito topping!

We still have a couple of days left in Edinburgh, and then each of us will fly off to get started on our research projects. It’s great to know that we will be keeping in touch with each other though, both to hear about everyone’s interesting projects, and of course to maintain and strengthen the friendships already forming. I for one am very much looking forward to this Summer of HPC!

The 23 projects for the Summer of HPC 2018 will be carried out by the selected students listed on the application page. The programme will start with the training week at EPCC in Edinburgh and then continue directly at 11 PRACE hosting sites across Europe.

Early-stage postgraduate and late-stage undergraduate students are invited to apply for the PRACE Summer of HPC 2018 programme, to be held in July & August 2018. Consisting of a training week and two months on placement at top HPC centres around Europe, the programme offers participants the opportunity to learn and share more about PRACE and HPC, and includes accommodation, a stipend and travel to their HPC centre. Applications open 11 January 2018.

Current Application Deadline: 25 February 2018

About the PRACE Summer of HPC Programme

PRACE Summer of HPC is a PRACE outreach and training programme that offers summer placements at top HPC centres across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualisation or video of their results.

PRACE SoHPC 2017 participants and trainers during the training week in Ostrava, Czech Republic. The photo was taken during the trip to Dolní oblast Vítkovice, Hlubina a Landek, an old steel factory in Ostrava.

The programme will run from 2 July to 31 August 2018. It will begin with a kick-off training week at EPCC Supercomputing Centre in Edinburgh – to be attended by all participants.

Flights, accommodation and a stipend will be provided to all successful applicants. Two prizes will be awarded to the participants who produce the best project and best embody the outreach spirit of the programme.

PRACE Summer of HPC 2017 Award winners Mahmoud Elbattah and Arnau Miro Jane

Participating in the PRACE Summer of HPC Programme

Applications are welcome from all disciplines. Previous experience in HPC is not required as training will be provided. Some coding knowledge is a prerequisite, but the most important attribute is a desire to learn, and share experiences with HPC. A strong visual flair and an interest in blogging, video blogging or social media are desirable.

Project Descriptions

Project descriptions with more detailed prerequisites and more information on applying are available on the PRACE Summer of HPC website www.summerofhpc.prace-ri.eu.

Applications

Applications open on 11 January 2018 and applications can be submitted on the Summer of HPC website:
www.summerofhpc.prace-ri.eu/apply

Follow Us

For the latest information on the programme follow us on Twitter @summerofhpc or visit us on Facebook http://www.facebook.com/SummerOfHPC

About PRACE
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913. For more information, see www.prace-ri.eu.

Do you want more information? Do you want to subscribe to our mailing lists?
Please visit the PRACE website: http://www.prace-ri.eu

Project reference: 1823

ParaView is ubiquitously used in HPC to provide insight into complex data structures, often stored in custom data formats that require transformation before they can be visualised. Furthermore, information is dispersed across many places and needs to be extracted from “databases”. This project aims to provide a visualization schema (visualization description) for data used by HPC codes, together with a description of the data.

The ParaView plugin framework shown in the figure above uses various components to produce the final visualization. The plugin mainly consists of two parts: client-side features, which include the GUI and the Properties window for visualization, and server-side features, such as the VTK algorithm and the UAL protocol, which add to the algorithmic capabilities of ParaView for fusion visualization. The client-side features are implemented with a Qt-based GUI, using a ServerManager XML file to expose the parameters of the plugin GUI to the user. The parameters for the plugin are specified within the XML file, which activates the corresponding input fields within the Properties window, seen on the left side of the ParaView tool.
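The server-side half of such a plugin is, in essence, a VTK algorithm subclass. As a rough, hypothetical sketch (the class name, the `TimeSlice` parameter and the reader logic here are all illustrative, not the actual project code), it could look like this:

```cpp
// Hypothetical server-side piece of a ParaView plugin: a VTK source
// whose parameters are exposed in the Properties panel via the
// ServerManager XML.
#include <vtkUnstructuredGridAlgorithm.h>
#include <vtkUnstructuredGrid.h>
#include <vtkInformation.h>
#include <vtkInformationVector.h>
#include <vtkObjectFactory.h>

class vtkFusionReader : public vtkUnstructuredGridAlgorithm {
public:
  static vtkFusionReader* New();
  vtkTypeMacro(vtkFusionReader, vtkUnstructuredGridAlgorithm);

  // Setter/getter pair referenced by the ServerManager XML.
  vtkSetMacro(TimeSlice, int);
  vtkGetMacro(TimeSlice, int);

protected:
  vtkFusionReader() { this->SetNumberOfInputPorts(0); }  // pure source

  int RequestData(vtkInformation*, vtkInformationVector**,
                  vtkInformationVector* outputVector) override {
    vtkUnstructuredGrid* output = vtkUnstructuredGrid::GetData(outputVector);
    // ...fetch the requested time slice from the data store and
    // build VTK points/cells into `output` here...
    (void)output;
    return 1;
  }

private:
  int TimeSlice = 0;
};
vtkStandardNewMacro(vtkFusionReader);
```

On the client side, the matching ServerManager XML would typically expose `TimeSlice` through an `IntVectorProperty` bound to `SetTimeSlice`, which is what makes the corresponding input field appear in the Properties window.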

Project Mentor: Dejan Penko, MSc

Project Co-mentor: Prof. Leon Kos, PhD

Site Co-ordinator: Prof. Leon Kos, PhD

Learning Outcomes:

  • Student will master:
    • XML
    • ParaView internals
  • Student will obtain/improve their skills and knowledge in the use of:
    • Linux OS
    • GIT version control system
    • Makefiles
    • visualization utilities
    • HPC

Student Prerequisites (compulsory):

  • C++
  • Basic programming skills in Python
  • Familiar with basic concepts of grid geometry and its components (point, line, 2D cell…)

Student Prerequisites (desirable):

  • Experience in 2D and 3D visualization
  • Familiar with:
    • Linux OS
    • GIT version control system
    • XML and the XSLT programming language, and the usage of Makefiles

Training Materials:

ParaView material:

ParaView Custom Plugin material:

VTK material:

Using VTK with Python material:

Workplan:

  • W1: Introductory week;
  • W2: Creating and describing schema with XSL Translation;
  • W3-5: Coding and evaluating 2D representations in ParaView;
  • W6: ParaView plugin finalization;
  • W7: Final report and video recording;
  • W8: Wrap up

Final Product Description:

  • XML language for Visualisation representations
  • ParaView Plugin in C++

Adapting the Project – Increasing the Difficulty:
We can increase the size of data or add an additional IDS to be implemented.

Adapting the Project – Decreasing the Difficulty:
We can decrease the size of data.

Resources:
HPC cluster at University of Ljubljana, Faculty of Mechanical Engineering, and other available HPCs.

Organisation:
University of Ljubljana

31 October 2017, Ostrava

Mr. Arnau Miro Jane – Best Visualisation Award
Arnau’s video is understandable for a lay audience and links to current events such as climate change. The video will be an excellent example to show students who do not yet have a connection to HPC what powerful tools are currently available, without becoming too technical.

Mr. Mahmoud Elbattah – HPC Ambassador Award
Mahmoud has shown himself to be a hard worker with a keen interest in various aspects of HPC. Mahmoud showed his enthusiasm for HPC by going above and beyond what was expected from him in the project. He created an extra website (http://top500centre.apphb.com) with a visualisation that can be used for several purposes. He was an active blogger, and we hope to see more of him in the near future.

Project reference: 1820

There is a lot of research around scaling out convolutional neural networks to a large number of compute nodes, as the computational requirements when training complex networks on large-scale datasets become prohibitive. However, most if not all of these works employ data-parallel training techniques, where a batch of image samples is evenly split across the workers. Each worker then processes its samples independently in the forward propagation stage, with gradient communication among workers being performed in the backward propagation pass. Although this technique has proved quite successful, it has its drawbacks. One of the most important is that scaling out to a very large number of nodes implies increasing the batch size, which makes the SGD optimization more difficult. The second drawback is that data-parallel training works only if the model fits in memory; model parallelism avoids this, but involves much more communication – particularly in the forward propagation pass. As part of our research as an Intel Parallel Computing Center, we have done quite some research on deep neural network training, and particularly on data-parallel scaling, presented here: https://arxiv.org/abs/1711.04291.
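To make the gradient communication concrete, here is a minimal, hypothetical sketch (plain MPI, not MLSL or Intel Caffe code) of the exchange performed in the backward pass of data-parallel training: each worker computes gradients on its share of the batch, and an allreduce averages them so that every model replica applies the same update.

```cpp
// Hedged sketch of data-parallel SGD: every worker computes gradients
// on its share of the batch, then an allreduce averages them so all
// replicas stay in sync.
#include <mpi.h>
#include <vector>
#include <cstddef>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int size = 1;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const float lr = 0.01f;
    std::vector<float> weights(1000, 0.0f);

    // Stand-in for the real backward pass: in practice each worker
    // computes gradients on its own slice of the global batch.
    std::vector<float> grad(weights.size(), 1.0f);

    // Sum the gradients across all workers, in place on every rank.
    MPI_Allreduce(MPI_IN_PLACE, grad.data(), (int)grad.size(),
                  MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    // Average and apply the same SGD update on every model replica.
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] -= lr * grad[i] / (float)size;

    MPI_Finalize();
    return 0;
}
```

Scaling this out is then a matter of how large the summed batch (and hence the allreduce) gets – exactly the batch-size tension described above.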

For our data-parallel research, we have used Intel’s fork of the Caffe framework in combination with the Intel Machine Learning Scaling Library (MLSL), and managed to scale the training of a single neural network to up to 1536 Knights Landing compute nodes. We propose to make use of the same software (and hardware) infrastructure, as it allows for model parallelism as well. We envision the hybrid model in the following fashion: in a multi-socket system, or in a Knights Landing system configured with sub-NUMA clustering, each separate NUMA domain will work on training part of the model – thus employing model parallelism within the compute node. When going across compute nodes, we will integrate our data-parallel approach, as the interconnect (InfiniBand, OPA) usually does not support the communication requirements of model parallelism. This has the potential to lower the total batch size, while also increasing the throughput achieved per node.

The student is expected to make use of the functionality already present in MLSL for model parallelism, and to evaluate several schemes of the hybrid approach (model-parallel within node, data-parallel across nodes). Profiling will be necessary in order to maximize the intra-node bandwidth. The techniques will be tested on both Intel Skylake clusters and Intel Knights Landing clusters.

Project Mentor: Valeriu Codreanu

Project Co-mentor: Damian Podareanu

Site Co-ordinator: Zheng Meyer-Zhao

Learning Outcomes:
The student will learn how to perform large-scale neural network training, how to balance the trade-offs between data- and model-parallel training, as well as how to profile and optimize code running on state-of-the-art hardware platforms such as the Intel Skylake and Knights Landing architectures.

Student Prerequisites (compulsory):

  • Basic Knowledge of Machine Learning, particularly Convolutional Neural Networks
  • Knowledge of C++/MPI

Student Prerequisites (desirable):
Some skill in developing mixed code such as MPI/OpenMP will be an advantage.

Training Materials:

  • Some general materials on C/C++ and Python from the web should be read to be up to date prior to arrival.
  • Machine-learning wise, a brief read through this book would be great: http://www.deeplearningbook.org/

Workplan:

  • Week 1: Training week
  • Week 2: Literature review and preliminary report (plan writing)
  • Week 3–7: Project development
  • Week 8: Final report write-up

Adapting the Project: Increasing the Difficulty:
In order to increase the project difficulty, one can think of adapting this hybrid parallelism approach to clusters of multi-GPU servers.

Adapting the Project: Decreasing the Difficulty
The topic will be researched and the final product will be designed in full, but some of the features may not be developed, to ensure a working product with somewhat limited features at the end of the project (e.g. excluding the Knights Landing architecture).

Resources:
The student will need access to a cluster with Intel Skylake and Intel Knights Landing systems (provided by us), standard computing resources (laptop) as well as an account on the Cartesius supercomputer (provided by us).

Organisation:
SURFsara

Project reference: 1822

The project will consist of:

  • Getting to know Hadoop and RHadoop;
  • Creating and storing big data files (BD);
  • Preparing BD for clustering task;
  • Writing RHadoop code for performing clustering tasks with two clustering algorithms (e.g. k-means and some variant of local-density-based algorithms);
  • Evaluating the clustering algorithms.

The student will create a big data file, store it in a distributed file system (DFS), and perform and evaluate two clustering algorithms on this data with RHadoop.
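The scripts themselves will be written in R on top of RHadoop, but to show the algorithmic core being distributed, here is a compact, purely illustrative sketch of one k-means (Lloyd) iteration in C++, with the map/reduce split indicated in comments:

```cpp
// Hedged sketch of the k-means (Lloyd) iteration that the RHadoop
// scripts will distribute: assign each point to its nearest centre,
// then recompute each centre as the mean of its assigned points.
#include <vector>
#include <cstddef>

using Point = std::vector<double>;

static double sqdist(const Point& a, const Point& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// One k-means iteration; returns the updated centres.
std::vector<Point> kmeans_step(const std::vector<Point>& pts,
                               const std::vector<Point>& centres) {
    const std::size_t k = centres.size(), dim = pts[0].size();
    std::vector<Point> sums(k, Point(dim, 0.0));
    std::vector<std::size_t> counts(k, 0);

    // Assignment step (the "map" phase in a MapReduce formulation).
    for (const Point& p : pts) {
        std::size_t best = 0;
        for (std::size_t c = 1; c < k; ++c)
            if (sqdist(p, centres[c]) < sqdist(p, centres[best])) best = c;
        for (std::size_t i = 0; i < dim; ++i) sums[best][i] += p[i];
        ++counts[best];
    }

    // Update step (the "reduce" phase): mean of each cluster.
    // (An empty cluster keeps a zero centre here; real code would
    // keep the old centre or re-seed it.)
    for (std::size_t c = 0; c < k; ++c)
        if (counts[c] > 0)
            for (std::size_t i = 0; i < dim; ++i) sums[c][i] /= counts[c];
    return sums;
}

int main() {
    std::vector<Point> pts = {{0.0, 0.0}, {0.0, 1.0}, {10.0, 10.0}, {10.0, 11.0}};
    std::vector<Point> centres = {{0.0, 0.0}, {10.0, 10.0}};
    centres = kmeans_step(pts, centres);  // one Lloyd iteration
    return 0;
}
```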

Project Mentor: Prof. Janez Povh, PhD

Project Co-mentor: Prof. Leon Kos, PhD

Site Co-ordinator: Prof. Leon Kos, PhD

Learning Outcomes:

  • Student will master Hadoop and RHadoop;
  • Student will master at least two clustering algorithms

Student Prerequisites (compulsory):

  • Basics from data management;
  • R;
  • Basics from clustering.

Student Prerequisites (desirable):

  • Basics from Hadoop.

Training Materials:
The candidate should go through the following PRACE MOOC:
https://www.futurelearn.com/courses/big-data-r-hadoop/2/todo/17356

Workplan:

  • W1: Introductory week;
  • W2: Creating and storing the big data file;
  • W3-5: Coding and evaluating 2 clustering algorithms;
  • W6: Creating final video presentations;
  • W7: Final report;
  • W8: Wrap up.

Final Product Description:

  • Big data files created and stored in Hadoop;
  • RHadoop scripts created for clustering;
  • A video created about this example.

Adapting the Project: Increasing the Difficulty:
We can increase the size of data or add an additional clustering algorithm to be implemented.

Adapting the Project: Decreasing the Difficulty
We can decrease the size of data or remove one clustering algorithm.

Resources:
RHadoop installation at the University of Ljubljana, Faculty of Mechanical Engineering

Organisation:
University of Ljubljana

Project reference: 1821

Although the creation of a scalable, efficient quantum simulator has been actively pursued by multiple entities in recent years, the complexity of the required solution (for example, multiple types of gates and formalisms) means it is still very much an open and interesting problem, at least for the HPC, physics, chemistry and machine learning communities. At the moment, there are multiple solutions available, varying in quality and completeness. One of the more interesting debates is the suitability of GPUs for offloading the dense kernels, given the overarching issue of steep memory requirements resulting from the use of multiple qubits (memory usage grows exponentially with the number of qubits).
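To see why memory is the binding constraint, consider a plain state-vector simulator (a generic sketch, not any of the specific simulators this project targets): n qubits require 2^n complex amplitudes, and every single-qubit gate must stream through the entire vector.

```cpp
// Hedged sketch: state-vector simulation of one single-qubit gate.
// n qubits require 2^n complex amplitudes, so memory (and the work
// per gate) grows exponentially with the qubit count.
#include <complex>
#include <vector>
#include <cmath>
#include <cstdio>
#include <cstddef>

using Amp = std::complex<double>;

// Apply a 2x2 gate g to qubit `q` of an n-qubit state vector.
void apply_single_qubit_gate(std::vector<Amp>& state,
                             const Amp g[2][2], int q) {
    const std::size_t stride = std::size_t(1) << q;
    for (std::size_t i = 0; i < state.size(); i += 2 * stride)
        for (std::size_t j = i; j < i + stride; ++j) {
            const Amp a0 = state[j], a1 = state[j + stride];
            state[j]          = g[0][0] * a0 + g[0][1] * a1;
            state[j + stride] = g[1][0] * a0 + g[1][1] * a1;
        }
}

int main() {
    const int n = 20;                              // 2^20 amplitudes = 16 MiB
    std::vector<Amp> state(std::size_t(1) << n);
    state[0] = 1.0;                                // |00...0>

    const double s = 1.0 / std::sqrt(2.0);
    const Amp h[2][2] = {{s, s}, {s, -s}};         // Hadamard gate
    apply_single_qubit_gate(state, h, 0);          // qubit 0 in superposition

    std::printf("amp(..0)=%f amp(..1)=%f\n", state[0].real(), state[1].real());
    return 0;
}
```

At 16 bytes per amplitude, 30 qubits already need 16 GiB, and every additional qubit doubles that – which is exactly what pushes such simulators towards distribution and accelerator offloading.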

In this project, we plan to investigate the validity of offloading parts of the computation on two different compute clusters: one capable of RDMA with a high-throughput, low-latency interconnect, and the other without RDMA capability, served by a high-throughput, high-latency interconnect.

The project can benefit from multiple areas of expertise and is adjustable to a range of possible final solutions. These can go from simple shared-memory compiler offloading of existing simulator code, to more efficient CUDA-based implementations, to distributed RDMA-aware variants.

The main focus is thus on existing distributed simulators that do not yet benefit from accelerators.

Project Mentor: Damian Podareanu

Site Co-ordinator: Zheng Meyer-Zhao

Learning Outcomes:
The student will learn more about various HPC topics like accelerator offloading, distributed programming in a heterogeneous compute cluster, and performance monitoring. Another learning outcome is related to quantum computing and the challenges posed by simulating quantum processes.

Student Prerequisites (compulsory):

  • Basic knowledge about accelerators and accelerator programming (at least basic CUDA)
  • Basic physics knowledge (in order to grasp the minimum of quantum computing required)
  • Knowledge of C++/MPI

Student Prerequisites (desirable):
Some skill in developing mixed code such as MPI/OpenMP will be an advantage.

Training Materials:

Workplan:

  • Week 1: Training week
  • Week 2: Discussion about the project / fine-tuning the goals and outcomes together with the student / plan writing
  • Week 3 – 7: Project Development (Accelerator versions and benchmarks)
  • Week 8: Final report write-up

Adapting the Project: Increasing the Difficulty:
A natural extension (and a much more difficult one) is to fully extend the implementation to use both MPI and GPU offloading.

Adapting the Project: Decreasing the Difficulty
The simplest version of this project would be to make smart use of modern compiler offloading capabilities (in OpenMP 4.0+ or PGI + OpenACC) to simply experiment with existing simulators in a GPU-enabled, shared-memory environment.

Resources:
We will provide access to the two computer clusters described:

  • Cartesius (a true supercomputer – InfiniBand-connected)
  • LISA (a cluster computer – 40G Ethernet-connected)

We will provide access to the source code (a baseline without accelerator capabilities).
In addition, the student will need his/her own laptop.

Organisation:
SURFsara

Project reference: 1819

Today’s multi-core, many-core and accelerator hardware provides a tremendous amount of floating point operations (FLOPs). However, CPU and GPU FLOPs cannot be harvested in the same manner. While the CPU is designed to minimize the latency of a stream of individual operations, the GPU tries to maximize the throughput. Even worse, porting modern C++ codes to GPUs via CUDA limits oneself to a single vendor. However, there exist other powerful accelerators that would need yet another code path and thus code duplication.
The great diversity and short life cycle of today’s HPC hardware no longer allows for hand-written, well-optimized assembly code. This begs the question whether we can obtain portability and performance from high-level languages with greater abstraction possibilities, like C++.

Wouldn’t it be ideal if we had a single code base capable of running on all devices – CPUs, GPUs (Nvidia and AMD), accelerators (like the Intel Xeon Phi) or even FPGAs?

With the help of the Khronos open SYCL standard, it is possible to develop one high-level C++ code that can run on all hardware platforms alike.

In this project, we turn our efforts towards a performance-portable fast multipole method (FMM). Depending on the interest of the student, we will pursue different goals. First, the already available CPU version of the FMM can be adapted to support SYCL kernels for the construction and efficient use of multiple sparse/dense octree data structures. Second, a special (compute-intensive) kernel for more advanced algorithmic computations can be adapted to SYCL.

The challenge of both assignments is to embed/extend the code in a performance-portable way. This ensures minimized adaptation efforts when changing from one HPC platform to another.

What is the fast multipole method? The FMM is a Coulomb solver that makes it possible to compute long-range forces for molecular dynamics codes, e.g. GROMACS. A straightforward approach is limited to small particle numbers N due to its O(N^2) scaling. Fast summation methods like PME, multigrid or the FMM are capable of reducing the algorithmic complexity to O(N log N) or even O(N). However, each fast summation method has auxiliary parameters, data structures and memory requirements which need to be provided. The layout and implementation of such algorithms on modern hardware depends strongly on the available features of the underlying architecture.
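To illustrate both points at once – the O(N^2) cost that fast summation methods remove, and the single-source SYCL style this project targets – here is a hedged sketch of the direct Coulomb sum as a SYCL 2020 kernel (illustrative only, not the project’s FMM code):

```cpp
// Hedged sketch: the O(N^2) direct Coulomb sum that the FMM avoids,
// written as a single-source SYCL kernel that can target CPUs, GPUs
// or other accelerators from one C++ code base.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    const std::size_t N = 4096;
    std::vector<float> x(N), y(N, 0.0f), z(N, 0.0f), q(N, 1.0f);
    for (std::size_t i = 0; i < N; ++i) x[i] = float(i);  // toy positions
    std::vector<float> phi(N, 0.0f);                      // potentials

    sycl::queue queue;  // picks a default device: CPU, GPU, ...
    {
        sycl::buffer bx(x), by(y), bz(z), bq(q), bphi(phi);
        queue.submit([&](sycl::handler& h) {
            sycl::accessor X(bx, h, sycl::read_only);
            sycl::accessor Y(by, h, sycl::read_only);
            sycl::accessor Z(bz, h, sycl::read_only);
            sycl::accessor Q(bq, h, sycl::read_only);
            sycl::accessor P(bphi, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                float p = 0.0f;
                for (std::size_t j = 0; j < N; ++j) {  // O(N) per particle
                    if (j == i[0]) continue;
                    const float dx = X[i] - X[j], dy = Y[i] - Y[j],
                                dz = Z[i] - Z[j];
                    p += Q[j] / sycl::sqrt(dx * dx + dy * dy + dz * dz);
                }
                P[i] = p;  // O(N^2) work in total
            });
        });
    }   // buffer destructors copy the potentials back to the host
    return 0;
}
```

The FMM replaces exactly this all-pairs loop with a hierarchical octree expansion – which is where the sparse/dense tree data structures mentioned above come in.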

Assumed workplace of a 2018 PRACE student at JSC

Project Mentor: Andreas Beckmann

Project Co-mentor: Ivo Kabadshow

Site Co-ordinator: Ivo Kabadshow

Learning Outcomes:
The student will familiarize themselves with current state-of-the-art CPUs (e.g. Intel Skylake) and accelerators (Nvidia P100/AMD R9 Fury). He/she will learn how the GPU/accelerator functions at a low level and use this knowledge to optimize scientific software for CPUs/GPUs and accelerators in a unified code base. He/she will use state-of-the-art benchmarking tools to achieve optimal performance portability for the kernels and kernel drivers which are time-critical in the application.

Student Prerequisites (compulsory):

  • At least 5 years of programming knowledge in C++
  • Basic understanding of template metaprogramming
  • “Extra-mile” mentality

Student Prerequisites (desirable):

  • CUDA or general GPU knowledge desirable, but not required
  • C++ template metaprogramming
  • Interest in C++11/14/17 features
  • Interest in low-level performance optimizations
  • Ideally student of computer science, mathematics, but not required
  • Basic knowledge on benchmarking, numerical methods
  • Mild coffee addiction
  • Basic knowledge of git, LaTeX, TikZ

Training Materials:
https://developer.codeplay.com/computecppce/latest/computecpp-for-cuda-developers

Just send an email … training material strongly depends on your personal level of knowledge. We can provide early access to the GPU cluster as well as technical reports from former students on the topic. If you feel unsure about the requirements, but do like the project send an email to the mentor and ask for a small programming exercise.

Workplan:
Week:

  1. Training and introduction to FMMs and hardware
  2. Benchmarking of kernel variants on the CPU/GPU/accelerator
  3. Initial port to SYCL of one or two kernel variants
  4. Unifying the octree data structure for the CPU/GPU/accelerator
  5. Performance tuning of the code
  6. Optimization and benchmarking, documentation
  7. Optimization and benchmarking, documentation
  8. Generation of final performance results. Preparation of plots/figures. Submission of results.

Final Product Description:
The end product will be an extended FMM code with SYCL support for CPUs/GPUs/accelerators. The benchmarking results, especially the gain in performance, can be easily illustrated in appropriate figures, as is routinely done by PRACE and HPC vendors. Such plots could be used by PRACE.

Adapting the Project: Increasing the Difficulty:
The kernels are used differently in many places in the code. For example, it may or may not be required to use a certain implementation of an FMM operator. A particularly able student may also port multiple kernels. Depending on the knowledge level, a larger number of access/storage strategies can be ported/extended, or performance optimization within SYCL can be intensified.

Adapting the Project: Decreasing the Difficulty
As explained above, a student who finds the task of porting/optimizing a full kernel too challenging could very well restrict themselves to a simpler model or a partial port.

Resources:
The student will have their own desk in an air-conditioned open-plan office (12 desks in total) or in a separate office (2-3 desks in total), will get access (and computation time) on the required HPC resources for the project, and will have their own workplace with a fully equipped workstation for the duration of the programme. A range of performance and benchmarking tools are available on site and can be used within the project. No further resources are required.
Hint: We do have experts on all advanced topics, e.g. C++11/14/17, CUDA in house. Hence, the student will be supported when battling with ‘bleeding-edge’ technology.

Organisation:
Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH
JULICH

Project reference: 1818

Simulations of classical or quantum field theories often rely on a lattice discretized version of the underlying theory. For example, simulations of Lattice Quantum Chromodynamics (QCD, the theory of quarks and gluons) are used to study properties of strongly interacting matter and can, e.g., be used to calculate properties of the quark-gluon plasma, a phase of matter that existed a few milliseconds after the Big Bang (at temperatures larger than a trillion degrees Celsius). Such simulations take up a large fraction of the available supercomputing resources worldwide.

Other theories have a lattice structure already “built in”, as is the case for graphene, with its famous honeycomb structure. Simulations studying this material can build on the experience gathered in Lattice QCD. These simulations require, e.g., the repeated solution of extremely sparse linear systems, and update their degrees of freedom using symplectic integrators.
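For readers unfamiliar with the second ingredient, here is a minimal sketch of leapfrog, the simplest symplectic integrator, applied to a toy harmonic oscillator (a generic illustration, not the project’s code):

```cpp
// Hedged sketch: leapfrog (kick-drift-kick), the simplest symplectic
// integrator. Used, in far more elaborate form, to update the degrees
// of freedom in lattice simulations; this toy integrates a harmonic
// oscillator with H = p^2/2 + x^2/2.
#include <cstdio>

int main() {
    double x = 1.0, p = 0.0;        // position and conjugate momentum
    const double dt = 0.1;          // integration step size
    const int steps = 100;

    for (int s = 0; s < steps; ++s) {
        p -= 0.5 * dt * x;          // half-step momentum kick (force = -x)
        x += dt * p;                // full-step position drift
        p -= 0.5 * dt * x;          // second half-step kick
    }

    // Energy is conserved up to O(dt^2) - a hallmark of symplecticity.
    std::printf("x=%f p=%f H=%f\n", x, p, 0.5 * (p * p + x * x));
    return 0;
}
```

The real simulations apply the same kick-drift-kick pattern to whole field configurations rather than a single coordinate.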

Depending on personal preference, the student can decide to work on graphene or on Lattice QCD. He/she will be involved in tuning and scaling the most critical parts of a specific method, or attempt to optimize for a specific architecture in the algorithm space.

In the former case, the student can select among different target architectures, ranging from Intel Xeon Phi (KNL) and Intel Xeon (Haswell/Skylake) to GPUs (OpenPOWER), which are available in different installations at the institute. To that end, he/she will benchmark the method and identify the relevant kernels. He/she will analyse the performance of the kernels, identify performance bottlenecks, and develop strategies to solve these – if possible taking similarities between the target architectures (such as SIMD vectors) into account. He/she will optimize the kernels and document the steps taken in the optimization as well as the performance results achieved.

In the latter case, the student will, after getting familiar with the architectures, explore different methods by either implementing them or using those that have already been implemented. He/she will explore how the algorithmic properties match the hardware capabilities. He/she will test the achieved total performance and study bottlenecks, e.g. using profiling tools. He/she will then test the method at different scales and document the findings.

In any case, the student is embedded in an extended infrastructure of hardware, computing, and benchmarking experts at the institute.

Project Mentor: Dr. Stefan Krieg

Project Co-mentor: Dr. Eric Gregory

Site Co-ordinator: Ivo Kabadshow

Learning Outcomes:
The student will familiarize themselves with important new HPC architectures: Intel Xeon Phi, Intel Xeon, and OpenPOWER. He/she will learn how the hardware functions at a low level and use this knowledge to devise optimal software and algorithms. He/she will use state-of-the-art benchmarking tools to achieve optimal performance.

Student Prerequisites (compulsory):

  • Programming experience in C/C++

Student Prerequisites (desirable):

  • Knowledge of computer architectures
  • Basic knowledge on numerical methods
  • Basic knowledge on benchmarking
  • Computer science, mathematics, or physics background

Training Materials:
Supercomputers @ JSC

Architectures

Paper on MG with an introduction to LQCD from the mathematician’s point of view:

Introductory text for LQCD:

Introduction to simulations of graphene:

Workplan:
Week:

  1. Training and introduction
  2. Introduction to architectures
  3. Introductory problems
  4. Introduction to methods
  5. Optimization and benchmarking, documentation
  6. Optimization and benchmarking, documentation
  7. Optimization and benchmarking, documentation

Generation of final performance results. Preparation of plots/figures. Submission of results.

Final Product Description:
The end product will be a student educated in the basics of HPC, optimized kernel routines and/or optimized methods. These results can be easily illustrated in appropriate figures, as is routinely done by PRACE and HPC vendors. Such plots could be used by PRACE.

Adapting the Project: Increasing the Difficulty:
A) Different kernels require different levels of understanding of the hardware and of optimization strategies. For example, it may or may not be required to optimize memory access patterns to improve cache utilization. A particularly able student may work on such a kernel.
B) Methods differ greatly in terms of complexity. A particularly able student may choose to work on more advanced algorithms.

Adapting the Project: Decreasing the Difficulty
A) As explained above, a student who finds the task of optimizing a complex kernel too challenging could restrict themselves to kernels with simple memory access patterns.
B) If the student finds a particular method too complex for the time available, a less involved algorithm can be selected.

Resources:
The student will have their own desk in an open-plan office (12 desks in total) or in a separate office (2-3 desks in total), will get access (and computation time) on the required HPC hardware for the project, and will have their own workplace with a fully equipped workstation for the duration of the programme. A range of performance and benchmarking tools are available on site and can be used within the project. No further resources are required.

Organisation:
Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH
JULICH
