This lecture covers the basics of MPI: the distributed-memory programming paradigm, point-to-point communication, collective communication, and synchronization. It explains how to run multiple instances of a program, the different types of communication MPI offers, and why every send must be matched with a corresponding receive to avoid race conditions and deadlocks.

The lecture then delves into blocking and non-blocking point-to-point communication, MPI barriers, wildcard receives, and collectives such as MPI_Bcast, MPI_Scatter, MPI_Gather, and MPI_Reduce. Using an image-assembly example, it shows how to receive image parts in order, out of order, and with a single collective call. Additionally, it discusses why collective calls must never be placed inside rank-dependent conditional clauses, and how communicators and datatypes can be customized. The short C sketches below illustrate these ideas in turn.
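As a starting point, here is a minimal SPMD sketch of the "multiple instances of a program" idea: every process runs the same executable and uses its rank to tell itself apart. This is a generic illustration rather than code from the lecture; it would typically be launched with something like `mpirun -np 4 ./a.out`.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the runtime */
    return 0;
}
```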
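The matching rule can be made concrete with a blocking point-to-point sketch (again illustrative, not the lecture's own code): rank 0's MPI_Send is only satisfied by a compatible MPI_Recv on rank 1, so an unmatched send or receive leaves a rank blocked forever.

```c
#include <mpi.h>
#include <stdio.h>

/* run with at least 2 ranks, e.g. mpirun -np 2 */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 0;
    if (rank == 0) {
        msg = 42;
        /* matched by the receive below: same communicator,
           compatible destination/source and tag */
        MPI_Send(&msg, 1, MPI_INT, 1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```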
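A sketch of the blocking vs. non-blocking distinction, assuming exactly two ranks: MPI_Isend and MPI_Irecv return immediately, and the exchange is completed later with MPI_Waitall. This also removes the deadlock risk of two ranks first sending to each other with blocking calls.

```c
#include <mpi.h>
#include <stdio.h>

/* run with exactly 2 ranks for this sketch */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int other = (rank == 0) ? 1 : 0;
    int sendval = rank, recvval = -1;
    MPI_Request reqs[2];

    /* both calls return immediately; the communication proceeds in
       the background, so neither rank can block the other */
    MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... useful computation could overlap the communication here ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); /* buffers now safe to reuse */
    printf("rank %d got %d\n", rank, recvval);

    MPI_Finalize();
    return 0;
}
```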
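The image example with out-of-order receives might look like the following sketch (the one-part-per-worker layout and the part size are assumptions made for illustration): the root accepts parts from MPI_ANY_SOURCE and uses the receive status to place each part correctly, and an MPI_Barrier separates this phase from the next.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define PART 1024  /* illustrative size of one image part, in bytes */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    unsigned char part[PART];

    if (rank == 0) {
        unsigned char *image = malloc((size_t)(size - 1) * PART);
        MPI_Status st;
        /* accept parts from whichever worker finishes first */
        for (int i = 1; i < size; i++) {
            MPI_Recv(part, PART, MPI_UNSIGNED_CHAR, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            /* the status reveals who actually sent this part,
               so it can be copied to the right offset */
            memcpy(image + (st.MPI_SOURCE - 1) * PART, part, PART);
        }
        free(image);
    } else {
        memset(part, rank, PART);  /* stand-in for real rendering work */
        MPI_Send(part, PART, MPI_UNSIGNED_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    /* everyone reaches this point before the next phase begins */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```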
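The same gathering pattern expressed with collectives: MPI_Bcast distributes a parameter from the root, MPI_Reduce combines partial results, and MPI_Gather collects one contribution per rank in rank order, playing the role of the in-order receive loop (MPI_Scatter is the mirror image of MPI_Gather, distributing distinct pieces from the root). The sum computed here is just a placeholder workload.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = 0;
    if (rank == 0) n = 100;  /* parameter known only at the root... */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* ...now everyone has it */

    /* every rank computes a partial result ... */
    int partial = rank * n;

    /* ... and MPI_Reduce combines them at the root */
    int total = 0;
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* MPI_Gather collects one value per rank, in rank order --
       the collective analogue of receiving parts in order */
    int all[64];  /* assumes at most 64 ranks for this sketch */
    MPI_Gather(&partial, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum = %d\n", total);
    MPI_Finalize();
    return 0;
}
```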
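The pitfall of collective calls inside conditional clauses, in a minimal sketch: a collective is only correct if every rank of the communicator calls it, so a rank-dependent branch may set up the data, but the call itself must stay unconditional.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* WRONG: only rank 0 enters the collective; the other ranks never
       call MPI_Bcast, so every rank blocks forever.

    if (rank == 0) {
        value = 42;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }
    */

    /* RIGHT: the conditional only prepares the data; the collective
       is executed unconditionally by all ranks */
    if (rank == 0) value = 42;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```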
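Finally, a generic sketch of customized communicators and datatypes (the parity split and the 4x4 column type are illustrative choices, not necessarily the lecture's examples): MPI_Comm_split carves sub-communicators out of MPI_COMM_WORLD, and MPI_Type_vector describes strided data such as a matrix column so it can be sent as one unit.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* split MPI_COMM_WORLD into two sub-communicators by rank parity;
       a collective on `half` then involves only half of the ranks */
    MPI_Comm half;
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half);

    /* a derived datatype describing one column of a row-major 4x4
       matrix of doubles: 4 blocks of 1 element, stride 4 elements */
    MPI_Datatype column;
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    /* a column could now travel as a single unit, e.g.
       MPI_Send(&matrix[0][1], 1, column, dest, tag, comm); */

    MPI_Type_free(&column);
    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}
```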