SoHPC2019.Finalize() – An Overview of the Story

Hello ladies and gentlemen, as we start our descent, please make sure your seatbelt is securely fastened.

It has been a great month for me, full of awesome stories. SoHPC2019 introduced me to new topics, great friends, and beautiful places.

Have you ever noticed how the waiting times for machines in our daily lives keep decreasing? Of course, nobody wants to wait for slow automated machines or put up with a sluggish smartphone, but do you remember how long you used to wait for older devices just to start up?

How to express the real-world problems to the machines?

We can describe real-world problems with the help of linear algebra and solve them by expressing them as systems of linear algebraic equations. A machine understands things as numbers in a matrix, and any manipulation means matrix operations to it. The speed of these calculations can be increased; however, speed can be the enemy of crucial calculations that must also be precise.
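As a toy illustration (not part of my project code), here is what "expressing a problem as a linear system" looks like to a machine, using NumPy:

```python
import numpy as np

# A tiny linear system A x = b: the "machine view" of a real-world problem.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# A direct solve gives a precise answer; for very large systems, iterative
# or stochastic methods trade some precision for speed, as discussed above.
x = np.linalg.solve(A, b)
print(x)
print(np.allclose(A @ x, b))  # check that the solution satisfies the system
```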

Let’s get back to my project

In my project, the aim is to improve the rough, not-yet-converged output of “Markov Chain Monte Carlo matrix inversion” (MCMCMI) using a stochastic gradient descent method.

What is Stochastic Gradient Descent (SGD)?

Gradient Descent is an iterative method used to minimize an objective function. If a randomly selected subset of the samples is used in every iteration of gradient descent, it is called stochastic gradient descent.
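A minimal sketch of the idea (a generic least-squares example, not my project's exact setup): each SGD step uses the gradient contribution of one randomly chosen row instead of the whole matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize f(x) = 0.5 * ||A x - b||^2 one random row at a time.
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                # consistent system, so SGD can converge exactly

x = np.zeros(5)
lr = 0.01                     # step size (assumed small enough for stability)
for _ in range(20000):
    i = rng.integers(len(A))          # pick one random sample (row)
    residual = A[i] @ x - b[i]        # scalar residual for that row
    x -= lr * residual * A[i]         # gradient step for that single row

print(np.linalg.norm(x - x_true))     # should be close to zero
```

Because each step touches only one row, the per-iteration cost is tiny, which is what makes the method attractive for large matrices.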

With the help of the mSGD method proposed in the paper, the error of the inverse obtained from the MCMCMI method is decreased. After successful results in Python, the algorithm was implemented in C++.
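To show the refinement idea in its simplest form (plain gradient descent on the inversion residual, not the paper's exact mSGD), one can start from a crude approximate inverse, such as what MCMC matrix inversion might return, and drive down the Frobenius error:

```python
import numpy as np

rng = np.random.default_rng(1)

# A well-conditioned test matrix and a crude initial guess for its inverse.
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
X = np.eye(n)                 # stand-in for a rough MCMCMI output

lr = 0.5
for _ in range(500):
    G = A.T @ (A @ X - np.eye(n))   # gradient of 0.5 * ||A X - I||_F^2
    X -= lr * G

print(np.linalg.norm(A @ X - np.eye(n)))  # residual shrinks toward zero
```

The stochastic variant replaces the full gradient with contributions from randomly sampled rows, which is where the row-selection tricks below come in.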

“The Cherry on the Cake”

“Life is not fair. Why should random selection be fair?”

In mSGD, the rows are selected with uniform probability. The results are slightly better if the probability of selecting a row is proportional to the norm of that row.

The results of the implementation.
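A small sketch of that non-uniform selection (my own illustration, not the project code): build a probability vector from the row norms and sample row indices from it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample rows with probability proportional to their norm instead of
# uniformly, so "heavier" rows are visited more often.
A = rng.standard_normal((6, 3))
row_norms = np.linalg.norm(A, axis=1)
probs = row_norms / row_norms.sum()

samples = rng.choice(len(A), size=10000, p=probs)
counts = np.bincount(samples, minlength=len(A)) / len(samples)
print(np.round(probs, 3))
print(np.round(counts, 3))  # empirical frequencies track the target probabilities
```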

“Last touch: Adding Batches to Parallelization”

Stochastic Gradient Descent and batching are as like as two peas in a pod. Instead of using the whole matrix A for the method, the rows are divided into subsets, one per process, in a hybrid MPI/OpenMP parallelization. When the rows cannot be divided equally, it is a good trick to give fewer rows to the master process, since it has more work to do than the others.

Batching by dividing into processes.
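The partitioning trick can be sketched like this (a hypothetical helper, `row_batches`, of my own; the real project does this on the C++/MPI side):

```python
import numpy as np

def row_batches(n_rows, nprocs):
    """Split n_rows of a matrix among nprocs ranks as (start, end) ranges,
    handing the leftover rows to the highest ranks so that the master
    (rank 0) gets the lighter share, as described above."""
    base, extra = divmod(n_rows, nprocs)
    sizes = [base] * nprocs
    for r in range(nprocs - extra, nprocs):  # remainder goes to the last ranks
        sizes[r] += 1
    starts = np.concatenate(([0], np.cumsum(sizes)[:-1]))
    return [(int(s), int(s + c)) for s, c in zip(starts, sizes)]

print(row_batches(10, 3))  # → [(0, 3), (3, 6), (6, 10)]: rank 0 gets 3 rows
```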

That’s all, folks!

Thank you for joining my adventure!

From playing games in his childhood to developing his own game, the journey of designing and programming became a passion for him. While studying Electronics and Communication Engineering, exploring new areas of computer science through successful projects encouraged him to be part of cutting-edge technologies by combining academic knowledge with industry. He is also interested in olive and mastic trees, green technologies, repairing, and recycling. In his free time, he loves to learn and practice tango, the lute, and the clarinet, and to talk about philosophy.
