Covers the principles of parallel computing and introduces OpenMP for creating concurrent code from serial code.
Explores parallelism in programming, emphasizing trade-offs between programmability and performance, and introduces shared memory parallel programming using OpenMP.
Covers the basics of parallel programming, including concurrency, forms of parallelism, synchronization, and programming models such as Pthreads and OpenMP.
Covers the principles of synchronization in parallel computing, focusing on shared memory synchronization and different methods like locks and barriers.
Covers the evolution and challenges of multiprocessors, emphasizing energy efficiency, parallel programming, cache coherence, and the role of GPUs.
Explores the principles of parallel computing, focusing on OpenMP as a tool for creating concurrent code from serial code.
Explores synchronization principles using locks and barriers, emphasizing efficient hardware-supported implementations and higher-level coordination constructs such as those provided by OpenMP.
Explores hardware synchronization methods, including locks, barriers, and critical sections in parallel computing.
Explores data-parallel programming with vector processors and SIMD, and introduces MapReduce, Pregel, and TensorFlow.
Explores GPU architecture, multithreading, SIMD processors, and CUDA programming for parallel computing.
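A hedged sketch of the CUDA programming model mentioned above: one thread per array element, grouped into blocks that map onto the GPU's SIMD lanes. The kernel name and sizes are illustrative, and running it requires an NVIDIA GPU and the `nvcc` toolchain.

```cuda
#include <cstdio>

// Each thread computes one element: the SIMT model in miniature.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified (host+device) memory
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();  // wait for the kernel to finish

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The `<<<grid, block>>>` launch configuration is where the mapping from data parallelism to hardware threads is expressed; the kernel body itself is written from a single thread's point of view.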