This lecture covers data-parallel programming, from vector processing and SIMD within a single node to MapReduce, Pregel, and TensorFlow across multiple nodes. It reviews the taxonomy of computer architectures, the basics of SIMD, the limitations of scalar pipelines, the benefits of vector processors, and the design of vector functional units. It then turns to MapReduce, Pregel for graph processing, and TensorFlow for deep learning, emphasizing their programming models and optimizations.
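To make the MapReduce programming model mentioned above concrete, here is a minimal sketch (not taken from the lecture slides) of a word count expressed as user-defined map and reduce functions, with the shuffle/grouping step simulated in plain Python; the function names `map_fn` and `reduce_fn` are illustrative choices, not part of any specific framework.

```python
# Minimal MapReduce-style word count: the map phase emits (key, value) pairs,
# a simulated shuffle groups values by key, and the reduce phase aggregates them.
from collections import defaultdict


def map_fn(document):
    """Emit a (word, 1) pair for every word in one input record."""
    for word in document.split():
        yield word.lower(), 1


def reduce_fn(word, counts):
    """Combine all counts emitted for the same word."""
    return word, sum(counts)


def map_reduce(documents):
    # Map phase: apply map_fn to every input record.
    intermediate = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            intermediate[key].append(value)  # shuffle: group values by key
    # Reduce phase: apply reduce_fn once per distinct key.
    return dict(reduce_fn(k, v) for k, v in intermediate.items())


if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(map_reduce(docs))
    # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

In a real MapReduce system the map and reduce phases run in parallel on many workers and the shuffle moves data across the network; the sketch only illustrates the shape of the programming model.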