In computing, a parallel programming model is an abstraction of parallel computer architecture with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality (how well a range of different problems can be expressed for a variety of different architectures) and its performance (how efficiently the compiled programs can execute). The implementation of a parallel programming model can take the form of a library invoked from a sequential language, an extension to an existing language, or an entirely new language.

Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating portability of software. In this sense, programming models are referred to as bridging between hardware and software.

Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition. Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).

Shared memory (interprocess communication)
Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid them; a sketch of lock-based synchronization appears after this section. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.

Message passing
In a message-passing model, parallel processes exchange data by passing messages to one another; a corresponding sketch also appears below.
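
To make the shared-memory model concrete, here is a minimal sketch in Scala (chosen to match the Scala-based lectures listed below; the object and method names are illustrative, not taken from any source above). Two JVM threads increment a counter in a shared address space, and a ReentrantLock serializes each read-modify-write so the race condition described above cannot occur:

```scala
import java.util.concurrent.locks.ReentrantLock

object SharedCounter {
  private var counter = 0                  // state shared by both threads
  private val lock = new ReentrantLock()   // guards every access to `counter`

  private def increment(times: Int): Unit =
    for (_ <- 1 to times) {
      lock.lock()
      try counter += 1                     // critical section: read, add, write back
      finally lock.unlock()
    }

  def main(args: Array[String]): Unit = {
    val t1 = new Thread(() => increment(100000))
    val t2 = new Thread(() => increment(100000))
    t1.start(); t2.start()
    t1.join(); t2.join()
    println(s"counter = $counter")         // 200000 on every run, thanks to the lock
  }
}
```

Deleting the lock typically yields a total below 200000, because the two threads' unsynchronized read-modify-write sequences interleave and updates are lost; that lost-update behaviour is exactly the race condition that locks, semaphores and monitors exist to prevent.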
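
By contrast, here is a minimal sketch of the message-passing model in the same style (again with illustrative names, using the JDK's LinkedBlockingQueue as a stand-in channel rather than a dedicated message-passing library). The producer and consumer share no mutable state; all data flows through explicit send (put) and receive (take) operations:

```scala
import java.util.concurrent.LinkedBlockingQueue

object MessagePassing {
  def main(args: Array[String]): Unit = {
    val mailbox = new LinkedBlockingQueue[Int]()   // the communication channel

    val producer = new Thread(() => {
      for (i <- 1 to 5) mailbox.put(i)             // send a message
      mailbox.put(-1)                              // sentinel: no more messages
    })

    val consumer = new Thread(() => {
      var msg = mailbox.take()                     // receive; blocks until a message arrives
      while (msg != -1) {
        println(s"received $msg")
        msg = mailbox.take()
      }
    })

    producer.start(); consumer.start()
    producer.join(); consumer.join()
  }
}
```

The same structure carries over to distributed memory, where the channel becomes a network connection and a library such as MPI provides the send and receive primitives.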

Related courses (9)
CS-302: Parallelism and concurrency in software
From sensors, to smart phones, to the world's largest datacenters and supercomputers, parallelism & concurrency is ubiquitous in modern computing. There are also many forms of parallel & concurrent execution ...
CS-453: Concurrent computing
With the advent of modern architectures, it becomes crucial to master the underlying algorithmics of concurrency. The objective of this course is to study the foundations of concurrent algorithms and ...
CS-307: Introduction to multiprocessor architecture
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce ...
Related lectures (48)
Conc-Trees: Parallel Programming in Scala
Covers the implementation of Conc-Trees in Scala for parallel programming.
Fold (Reduce) Operations
Explores fold (reduce) operations in parallel programming using Scala, covering associative operations, expression trees, parallel reduction, and network reduction.
Parallel Programming in Scala: Computing the p-norm
Explores computing the p-norm using parallel programming in Scala and its impact on performance.
Related publications (190)

DBFS: Dynamic Bitwidth-Frequency Scaling for Efficient Software-defined SIMD

Giovanni Ansaloni, Alexandre Sébastien Julien Levisse, Pengbo Yu, Flavio Ponzina

Machine learning algorithms such as Convolutional Neural Networks (CNNs) are characterized by high robustness towards quantization, supporting small-bitwidth fixed-point arithmetic at inference time with little to no degradation in accuracy. In turn, small ...
2024

Swift : a modern highly parallel gravity and smoothed particle hydrodynamics solver for astrophysical and cosmological applications

Yves Revaz, Loïc Hausammann, Matthieu Schaller, Mladen Ivkovic, Zhen Xiang

Numerical simulations have become one of the key tools used by theorists in all the fields of astrophysics and cosmology. The development of modern tools that target the largest existing computing systems and exploit state-of-the-art numerical methods and ...
2024

High-Throughput and Flexible Belief Propagation List Decoder for Polar Codes

Andreas Peter Burg, Alexios Konstantinos Balatsoukas Stimming, Andreas Toftegaard Kristensen, Yifei Shen, Yuqing Ren, Chuan Zhang

Due to its high parallelism, belief propagation (BP) decoding is amenable to high-throughput applications and thus represents a promising solution for the ultra-high peak data rate required by future communication systems. To bridge the performance gap compared ...
IEEE, 2024
Related concepts (15)
Multi-core processor
A multi-core processor is a microprocessor with several physical cores operating simultaneously (for example, the quad-core AMD Opteron or the dual-core Intel Core 2 Duo E6300). It differs from older architectures (such as the 360/91), in which a single processor controlled several simultaneous computation circuits. A core is a set of circuits capable of executing programs autonomously.
Chapel (langage)
Chapel, the Cascade High Productivity Language, is a parallel programming language that was developed by Cray, and later by Hewlett Packard Enterprise which acquired Cray. It was being developed as part of the Cray Cascade project, a participant in DARPA's High Productivity Computing Systems (HPCS) program, which had the goal of increasing supercomputer productivity by 2010. It is being developed as an open source project, under version 2 of the Apache license. The Chapel compiler is written in C and C++ (C++14).
Task parallelism
Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.
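
A minimal Scala sketch of that distinction (illustrative code, using only the standard library's futures): two different tasks, a minimum and a maximum, run concurrently over the same data, whereas a data-parallel version would instead split the data and apply one task to each part.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.util.Random

object TaskParallelism {
  def main(args: Array[String]): Unit = {
    val data = Vector.fill(1000000)(Random.nextInt())

    val minTask = Future(data.min)   // task 1: one kind of work
    val maxTask = Future(data.max)   // task 2: different work, same data, same time

    val (lo, hi) = Await.result(minTask.zip(maxTask), Duration.Inf)
    println(s"min = $lo, max = $hi")
  }
}
```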
