This lecture introduces parallelism in programming, focusing on the trade-off between programmability and performance. It covers the reasons for the shift towards parallel computing, notably the limits of frequency scaling and the end of the 'free lunch'. The lecture then turns to parallelizing code with programming abstractions such as OpenMP and pthreads, emphasizing the importance of expressing parallelism explicitly in code. It also examines the execution models of simple and multicore CPUs as well as GPUs, and the implications of Amdahl's Law for attainable speedup. The lecture concludes with an overview of shared-memory parallel programming with OpenMP, synchronization techniques, load balancing, and communication models.
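For reference, Amdahl's Law bounds the speedup achievable when only a fraction p of a program's work can be parallelized across N processors; the serial fraction (1 - p) ultimately limits scaling no matter how many processors are added:

```latex
% Amdahl's Law: p = parallelizable fraction, N = number of processors
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

For example, if 90% of the work is parallelizable (p = 0.9), the speedup can never exceed 10x, regardless of core count.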
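The following is a minimal sketch of the kind of shared-memory OpenMP code the lecture refers to, not material from the lecture itself: a parallel loop whose per-thread partial sums are combined with a reduction clause, avoiding an explicit lock. The array, its size N, and the schedule(static) clause are illustrative assumptions.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Sequential initialization of the input data. */
    for (int i = 0; i < N; i++)
        a[i] = 1.0 / (i + 1);

    /* Loop iterations are split statically among threads; the
       reduction(+:sum) clause gives each thread a private partial
       sum and synchronizes them into a single result at the end. */
    #pragma omp parallel for reduction(+:sum) schedule(static)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

Compiled with an OpenMP-enabled compiler (e.g. gcc -fopenmp), the same source runs sequentially or in parallel depending on the runtime's thread count, which is the programmability/performance trade-off the lecture highlights.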