The SoHPC 2022 lives on…

Every phase has to come to an end, and this moment is no exception. Reflecting on what I have gained during this program: until now, my exposure in Europe had been strictly academic, but henceforth I believe I can contribute substantial ideas if invited to speak on professionalism. Working as an intern at Capgemini Engineering during this program exposed me to life outside the walls of the lecture room. I got to learn not only about the project but also the etiquette of working in a multinational company, and it granted me the opportunity to learn, wine and dine with a highly professional group of people. Most of all, working with my mentor Yassine El Assami showed me the character and care that go into nurturing a mentee. So, even though the Summer of HPC 2022 has finally come to an end, these experiences are priceless and will forever be part of me.

Coming now to the overview of my final report: my previous posts have highlighted the aim and objectives of this study, as well as the anticipated difficulties in reaching our goal.
Now I'm going to walk you through a summary of the project, so stay tuned…

To refresh our minds: neural networks have been proposed in place of conventional approaches for solving problems related to physical models, owing to their capability to learn any continuous function. The project focused on building neural networks to predict the maximum resistance, or reserve factor, of mechanical components, and on improving their precision down to the lowest possible error thresholds.

Here, accuracy is defined as the ratio of the number of errors under a given threshold to the total number of samples, while the precision of a model corresponds to the smallest threshold at which accuracy is perfect.
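As a minimal sketch of these two definitions (the function names are mine, not the project's actual code), they can be expressed as:

```python
def accuracy_under_threshold(errors, threshold):
    """Fraction of samples whose prediction error falls below the threshold."""
    return sum(1 for e in errors if e < threshold) / len(errors)

def model_precision(errors, thresholds):
    """Smallest candidate threshold giving perfect accuracy (all errors within it)."""
    for t in sorted(thresholds):
        if accuracy_under_threshold(errors, t) == 1.0:
            return t
    return None  # no candidate threshold contains every error

# Toy example: four prediction errors, three of which are below 1e-3.
errors = [2e-4, 5e-3, 8e-4, 3e-5]
print(accuracy_under_threshold(errors, 1e-3))       # 0.75
print(model_precision(errors, [1e-4, 1e-3, 1e-2]))  # 0.01
```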

The project considered two mechanical cases, one linear and one nonlinear. Synthetic datasets were used for each case, meaning that both their size and quality were controlled. The datasets contain geometric parameters, material properties and applied forces as features, while the target is the maximum resistance.

The models were built using the Keras modules of TensorFlow. A mixed precision policy was used to train the densely layered models. The KerasTuner module was quite useful for searching for the most suitable hyperparameters, while the callbacks module helped save the best model configurations during training.
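A hedged sketch of how such a setup might look in Keras: the layer sizes, feature count and checkpoint file name below are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np
from tensorflow import keras

# Mixed precision: float16 compute with float32 variables.
keras.mixed_precision.set_global_policy("mixed_float16")

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),           # e.g. geometry/material/force features
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    # Final layer kept in float32 so the regression output is not degraded.
    keras.layers.Dense(1, dtype="float32"),
])
model.compile(optimizer="adam", loss="mse")

# Callback that keeps only the best configuration seen during training.
ckpt = keras.callbacks.ModelCheckpoint(
    "best_model.keras", save_best_only=True, monitor="val_loss")

# Toy synthetic data, stand-in for the project's datasets.
X = np.random.rand(64, 4).astype("float32")
y = X.sum(axis=1, keepdims=True)
model.fit(X, y, epochs=2, validation_split=0.25, callbacks=[ckpt], verbose=0)
```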

Discussing the results

The linear case is a set of parallel beams whose widths are the only variables; all other parameters are constant.
Numerical precision has a significant influence on model training in this case.
The results show that the single-perceptron and multi-layer perceptron models performed similarly, yielding perfect accuracy down to a threshold of 10⁻⁶ in single numerical precision. Double precision improved performance further, while half precision performed very poorly compared to the other two (see fig. 1).
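As a rough illustration (not the project's code) of why half precision cannot resolve thresholds as small as 10⁻⁶, one can round a double through lower-precision formats with Python's struct module:

```python
import struct

def round_to(x, fmt):
    # Round a Python double to the nearest value representable in the
    # given IEEE format: 'e' = half (float16), 'f' = single (float32).
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

x = 1.0 + 1e-6
half = round_to(x, 'e')     # half spacing near 1.0 is ~1e-3: rounds back to 1.0
single = round_to(x, 'f')   # single spacing near 1.0 is ~1.2e-7: 1e-6 survives
print(abs(half - x), abs(single - x))
```

Since half precision collapses a 10⁻⁶ perturbation entirely, a model trained at that precision cannot distinguish targets at that scale, consistent with its poor performance in fig. 1.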

Figure 1: Comparison of different numerical precisions.

The nonlinear case is a beam under tension with more variable parameters, including geometric and material properties. Here, the precision achieved is good but bounded around a 1% error threshold for both the single-layer and multi-layer models. Numerical precision seems less significant than training and capacity errors (see fig. 2 and fig. 3).

Figure 2: Comparison of the best-performing models.
Figure 3: Comparison of the performance of models of fixed widths and different depths.

The conclusion

This project supports the use of neural networks as universal approximators for solving mechanical models. In general, other studies whose components share similar parameters can leverage the same approach. It tends to be most useful when the components under study require various levels of complexity, as no significant modifications are needed. Although the precision of a prediction appears to be bounded, it is still good enough for many applications. Finally, high performance computing supplied the computational demands of this study: parallelization using MPI was implemented to better manage processing resources and enable statistical approaches.

Working on this project has enhanced my understanding of practical machine learning applications and, most importantly, of working on high performance computing clusters. Hence, I shall be on the lookout for more opportunities within this domain to further develop my understanding. The next opportunity I'm looking forward to is the High Performance Parallel IO and post-processing @ MdIS.

Finally, my profound gratitude goes to the organizers of PRACE SoHPC for the awesome opportunity to be part of this project. I would like to thank Mr. Karim Hasnaoui of CNRS, IDRIS laboratory, and I extend my gratitude to Capgemini Engineering. Lastly, my special appreciation goes to my mentors for their relentless guidance towards the success of this project. I thank you all.

One comment on “The SoHPC 2022 lives on…”
  1. Adesokan oyindamola says:

    Congratulations on the astounding execution of the project. Much obliged to you and your team at large. All the best are dependable on you.
