Lecture

Optimizing Software: Locality & Scheduling

Description

This lecture covers software optimizations focused on locality, memory access, and scheduling strategies for parallel execution. It examines the cache hierarchy, latency considerations, cache miss models, coherence misses, and techniques for reducing true and false sharing. Examples such as histogram computation, parallel work division, and matrix multiplication illustrate these optimization strategies. The lecture emphasizes the locality principle, blocking for cache efficiency, and load balancing through dynamic work distribution, and also discusses loop optimizations, task queues for parallel processing, and functional parallelism for independent tasks.
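To make the blocking idea concrete, the following is a minimal sketch (not taken from the lecture) of a cache-blocked matrix multiplication in C. The matrix size N and tile size B are illustrative assumptions; in practice B would be tuned so that the tiles being reused fit in the cache level being targeted.

```c
#include <stddef.h>

#define N 512   /* matrix dimension (assumed for illustration) */
#define B 64    /* tile size; N is assumed divisible by B */

/* Accumulates a*b into c; c is assumed to be zero-initialized by the caller.
 * A naive i-j-k loop streams through b with large strides, so its cache lines
 * are often evicted before they are reused. Blocking restructures the loops
 * so that B x B tiles of a, b, and c stay cache-resident while being reused. */
void matmul_blocked(const double a[N][N], const double b[N][N], double c[N][N])
{
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t kk = 0; kk < N; kk += B)
            for (size_t jj = 0; jj < N; jj += B)
                /* multiply the (ii,kk) tile of a by the (kk,jj) tile of b */
                for (size_t i = ii; i < ii + B; i++)
                    for (size_t k = kk; k < kk + B; k++) {
                        double aik = a[i][k];
                        for (size_t j = jj; j < jj + B; j++)
                            c[i][j] += aik * b[k][j];
                    }
}
```

The computation performed is identical to the naive triple loop; only the order of the iteration space changes, which is the essence of blocking for cache locality.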
