Lecture

MPI Basics: Point-to-Point and Collective Communications

Description

This lecture covers the basics of MPI: the distributed-memory programming paradigm, point-to-point communication, collective communication, and synchronization. It explains how to run multiple instances of a program, the different types of communication MPI offers, and why every send must be matched with a receive to avoid race conditions and deadlocks. The lecture then examines blocking and non-blocking point-to-point communication, MPI barriers, wildcard receives, and collectives such as MPI_Bcast, MPI_Scatter, MPI_Gather, and MPI_Reduce. It also discusses receiving the parts of an image in order, out of order, and via a collective; the importance of not placing collective calls inside conditional clauses; and how to customize communicators and datatypes. A sketch illustrating the point-to-point and collective approaches follows below.
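The following is a minimal sketch, not the lecture's own code, of the two gathering styles the description mentions: each rank produces one chunk of data (an "image row" in this hypothetical example, with ROW_LEN chosen arbitrarily), rank 0 first collects the chunks with matched MPI_Send/MPI_Recv pairs, and then the same exchange is repeated with a single MPI_Gather executed by all ranks.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ROW_LEN 8   /* hypothetical size of the chunk each rank produces */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int row[ROW_LEN];
    for (int i = 0; i < ROW_LEN; i++)
        row[i] = rank * ROW_LEN + i;      /* each rank fills its own part */

    /* Point-to-point version: every send must be matched by a receive,
     * otherwise the program deadlocks. Receiving from ranks 1..size-1 in
     * a fixed order gives the "in order" variant; using MPI_ANY_SOURCE
     * instead of src would allow out-of-order arrival. */
    if (rank == 0) {
        int *image = malloc((size_t)size * ROW_LEN * sizeof(int));
        for (int i = 0; i < ROW_LEN; i++)
            image[i] = row[i];            /* rank 0 keeps its own row */
        for (int src = 1; src < size; src++)
            MPI_Recv(&image[src * ROW_LEN], ROW_LEN, MPI_INT,
                     src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("point-to-point: collected %d rows\n", size);
        free(image);
    } else {
        MPI_Send(row, ROW_LEN, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    /* Collective version: one call, executed by ALL ranks (never inside a
     * rank-dependent branch), gathers the same data on rank 0. */
    int *image = NULL;
    if (rank == 0)
        image = malloc((size_t)size * ROW_LEN * sizeof(int));
    MPI_Gather(row, ROW_LEN, MPI_INT,
               image, ROW_LEN, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("collective: gathered %d rows\n", size);
        free(image);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with an MPI wrapper compiler (e.g. mpicc) and launched with a launcher such as mpirun -np 4, each of the four instances runs the same program; note that MPI_Gather is called unconditionally by every rank, only the allocation and printing are guarded by the rank check.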
