Summary
In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory. Computational tasks can operate only on local data; if remote data are required, the task must communicate with one or more remote processors. In contrast, a shared memory multiprocessor offers a single memory space used by all processors: processors do not have to be aware of where data resides, except that there may be performance penalties and that race conditions must be avoided.

A distributed memory system typically comprises, at each node, a processor, a memory, and some form of interconnection that allows programs on the processors to interact with one another. The interconnect can be organised with point-to-point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multiprocessor machine scales. The links between nodes can be implemented using a standard network protocol (for example, Ethernet), using bespoke network links (as in the transputer, for example), or using dual-ported memories.

The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem being solved, the data can be distributed statically, or it can be moved through the nodes: either on demand, or pushed to the new nodes in advance.

As an example, if a problem can be described as a pipeline where data x is processed successively through functions f, g, h, and so on (the result being h(g(f(x)))), then it can be expressed as a distributed memory problem in which the data is transmitted first to the node that computes f, which passes the result on to a second node that computes g, and finally to a third node that computes h; this is also known as systolic computation. Alternatively, data can be kept static in the nodes if most computations happen locally and only changes at the edges have to be reported to other nodes; an example is a simulation in which data is modelled on a grid and each node simulates a small part of the larger grid. The code sketches below illustrate explicit message passing and both of these distribution patterns.
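To make the first point concrete, here is a minimal sketch of explicit message passing using MPI (a widely used API for distributed memory programming, and one named in the related course below). The two-rank setup and the value being sent are illustrative assumptions, not part of the summary above.

```c
/* Minimal sketch: rank 1 cannot read rank 0's private memory, so
   the value must travel over the interconnect as a message.
   Hypothetical example; run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double value;
    if (rank == 0) {
        value = 42.0;  /* local data, private to rank 0 (assumed value) */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* remote data: must be received explicitly */
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}
```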
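The pipeline h(g(f(x))) can be sketched in the same style, one stage per node. The concrete functions f, g, and h below are placeholder assumptions chosen only to make the example runnable.

```c
/* Sketch of the pipeline h(g(f(x))) mapped onto three MPI ranks:
   rank 0 computes f, rank 1 computes g, rank 2 computes h.
   Run with: mpirun -np 3 ./pipeline */
#include <mpi.h>
#include <stdio.h>

static double f(double x) { return x + 1.0; }  /* placeholder stage */
static double g(double x) { return x * 2.0; }  /* placeholder stage */
static double h(double x) { return x - 3.0; }  /* placeholder stage */

int main(int argc, char **argv) {
    int rank, size;
    double x = 10.0;  /* input data fed into the pipeline (assumed) */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 3) { MPI_Finalize(); return 1; }  /* need three stages */

    if (rank == 0) {
        double y = f(x);
        MPI_Send(&y, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);  /* pass f(x) on */
    } else if (rank == 1) {
        double y;
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        y = g(y);
        MPI_Send(&y, 1, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);  /* pass g(f(x)) on */
    } else if (rank == 2) {
        double y;
        MPI_Recv(&y, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("h(g(f(x))) = %f\n", h(y));
    }

    MPI_Finalize();
    return 0;
}
```

In a real systolic computation the three sends would sit inside a loop, so that all stages work on different data items at once.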
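The grid simulation pattern can likewise be sketched as a halo exchange: each node keeps its slice of the grid static and exchanges only the edge cells each step. The 1D grid, the one-ghost-cell-per-side layout, and the update rule are illustrative assumptions.

```c
/* Sketch of "data stays local, only edges are communicated".
   Each rank owns N cells of a larger 1D grid; u[0] and u[N+1]
   are ghost cells holding copies of the neighbours' edge values.
   MPI_PROC_NULL makes the exchange a no-op at the grid boundaries. */
#include <mpi.h>
#include <stdio.h>

#define N 8          /* local cells per rank (assumed size) */
#define STEPS 100    /* number of simulation steps (assumed) */

int main(int argc, char **argv) {
    int rank, size;
    double u[N + 2] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    if (rank == 0) u[1] = 1.0;  /* arbitrary initial condition */

    for (int step = 0; step < STEPS; step++) {
        /* halo exchange: send my edge cells, receive the neighbours' */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  0,
                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* local update (a simple smoothing stencil): all computation
           reads only local memory, including the refreshed ghosts */
        double v[N + 2];
        for (int i = 1; i <= N; i++)
            v[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
        for (int i = 1; i <= N; i++)
            u[i] = v[i];
    }

    printf("rank %d finished %d steps\n", rank, STEPS);
    MPI_Finalize();
    return 0;
}
```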
Related publications (78)

Imaging sensor device using an array of single-photon avalanche diode photodetectors

Edoardo Charbon, Andrei Ardelean

The invention relates to an imaging sensor device in a stacked arrangement comprising: - a pixel array tier comprising a plurality of pixel segments, each having a plurality of pixels for photon detection, each providing a digital pixel output; - a processin ...
2024

Reliable Microsecond-Scale Distributed Computing

Athanasios Xygkis

The landscape of computing is changing, thanks to the advent of modern networking equipment that allows machines to exchange information in as little as one microsecond. Such advancement has enabled microsecond-scale distributed computing, where entire dis ...
EPFL, 2023

Overflow-free compute memories for edge AI acceleration

David Atienza Alonso, Giovanni Ansaloni, Alexandre Sébastien Julien Levisse, Marco Antonio Rios, Flavio Ponzina

Compute memories are memory arrays augmented with dedicated logic to support arithmetic. They support the efficient execution of data-centric computing patterns, such as those characterizing Artificial Intelligence (AI) algorithms. These architectures can ...
2023
Related concepts (2)
Computer cluster
A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g. ...
Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
Related courses (1)
PHYS-743: Parallel programming
Learn the concepts, tools and APIs that are needed to debug, test, optimize and parallelize a scientific application on a cluster from an existing code or from scratch. Both OpenMP (shared memory) and MPI (distributed memory) are covered.
Related lectures (5)
Quasi-Stationary Distribution: Molecular Dynamics Modeling
Explores the quasi-stationary distribution approach in molecular dynamics modeling, covering Langevin dynamics, metastability, and kinetic Monte Carlo models.
Cache Coherence
Covers cache coherence in multiprocessor systems and the challenges of maintaining coherence and consistency in modern processors.
Cache Coherence: Concepts and Protocols
Explores cache coherence in multiprocessor systems, including protocols and challenges.