Lecture

Parallel Programming I

Description

This lecture introduces the fundamentals of parallel programming. It covers concurrency and parallelism; forms of parallelism (throughput, functional, pipeline, data); task granularity, division of work, and asynchrony; creating parallel programs and shared memory programming; automatic parallel program creation; forms of communication (shared memory, message passing); programming models (sequential, shared memory, message passing, data parallel, dataflow); software layering; synchronization; and examples of parallel programming using PThreads and OpenMP.

Related lectures (75)
Parallel Computing: Principles and OpenMP
Covers the principles of parallel computing and introduces OpenMP for creating concurrent code from serial code.
Principles of Parallel Computing: OpenMP
Explores the principles of parallel computing, focusing on OpenMP as a tool for creating concurrent code from serial code.
Parallelism: Programming and Performance
Explores parallelism in programming, emphasizing trade-offs between programmability and performance, and introduces shared memory parallel programming using OpenMP.
Concurrency and Mutual Exclusion
Covers the importance of concurrency, atomic operations, and mutual exclusion using locks.
GPUs: Introduction to CUDA
Introduces the basics of GPUs, CUDA programming, and thread synchronization for parallel computing applications.
