In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.
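For a concrete illustration of these definitions, the sketch below (in Java; the class LockFreeCounter and its method are purely illustrative, not taken from any source) implements an increment with a compare-and-set retry loop. Some thread's CAS always succeeds, so the system as a whole makes progress (lock-free), but any single thread can in principle lose the race indefinitely, so the operation is not wait-free.

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative lock-free counter: increment() retries compareAndSet (CAS)
    // until it succeeds.
    class LockFreeCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        int increment() {
            while (true) {
                int current = value.get();
                if (value.compareAndSet(current, current + 1)) {
                    return current + 1;   // this thread's CAS won
                }
                // CAS failed: another thread updated the value first; retry
            }
        }
    }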
The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls" (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection" (see nonblocking minimal spanning switch).
Disadvantages of locks
The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
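The blocking behaviour described above can be seen in a minimal sketch like the following (Java; the Account class and its fields are illustrative assumptions): a second thread calling withdraw() while the lock is held simply waits inside lock() until the holder releases it.

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative lock-based critical section: lock() blocks the calling
    // thread whenever another thread currently holds the lock.
    class Account {
        private final ReentrantLock lock = new ReentrantLock();
        private long balance = 100;

        void withdraw(long amount) {
            lock.lock();          // blocks until the lock is free
            try {
                if (balance >= amount) {
                    balance -= amount;
                }
            } finally {
                lock.unlock();    // always release, even on exception
            }
        }
    }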
Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress.
Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead and is more prone to bugs.
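As a hedged example of one such error condition, the sketch below (Java; DeadlockDemo is an invented name) acquires two locks in opposite orders from two threads. If each thread grabs its first lock before the other finishes, both block forever on the lock the other holds, i.e. they deadlock.

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative deadlock: t1 takes lockA then lockB, t2 takes lockB then lockA.
    public class DeadlockDemo {
        static final ReentrantLock lockA = new ReentrantLock();
        static final ReentrantLock lockB = new ReentrantLock();

        public static void main(String[] args) {
            Thread t1 = new Thread(() -> acquireBoth(lockA, lockB));
            Thread t2 = new Thread(() -> acquireBoth(lockB, lockA));
            t1.start();
            t2.start();
        }

        static void acquireBoth(ReentrantLock first, ReentrantLock second) {
            first.lock();
            try {
                sleepQuietly(100);   // widen the window for the other thread to take its first lock
                second.lock();       // may wait forever on the lock the other thread holds
                second.unlock();
            } finally {
                first.unlock();
            }
        }

        static void sleepQuietly(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }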
With the advent of modern architectures, it becomes crucial to master the underlying algorithmics of concurrency. The objective of this course is to study the foundations of concurrent algorithms and ...
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce ...
Computing is nowadays distributed over several machines, in a local IP-like network, a cloud or a P2P network. Failures are common and computations need to proceed despite partial failures of machines ...
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events that may be extended by adding response events such that: (1) the extended list can be re-expressed as a sequential history (i.e. it is serializable), and (2) that sequential history is a subset of the original unextended list. Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.
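To make the definition concrete, the following small Java example (illustrative only) uses AtomicInteger.getAndIncrement(), a linearizable operation: however the two concurrent calls overlap in real time, each appears to take effect atomically at some instant between its invocation and its response, so every valid history is equivalent to one of the two sequential orders and the final value is always 2.

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative linearizable operation: getAndIncrement() behaves as if it
    // happened at a single point (its linearization point) inside the call.
    public class LinearizabilityExample {
        public static void main(String[] args) throws InterruptedException {
            AtomicInteger counter = new AtomicInteger(0);
            Runnable inc = () -> counter.getAndIncrement();  // invocation ... response

            Thread t1 = new Thread(inc);
            Thread t2 = new Thread(inc);
            t1.start(); t2.start();
            t1.join();  t2.join();

            // Any concurrent history of the two calls is equivalent to some
            // sequential history 0 -> 1 -> 2, so this always prints 2.
            System.out.println(counter.get());
        }
    }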
A lock in computing ensures that only one user, or only one process, accesses a resource at any given time. This is often used for file access on multi-user operating systems, because if two programs modify the same file at the same moment, the risks are: causing errors in one of the two programs, or even in both; leaving the file in a completely inconsistent state at the end of processing; and damaging the file being manipulated.
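On the JVM, this kind of OS-level file lock can be taken through java.nio; the sketch below is a hypothetical illustration (the file name shared.dat is an assumption, and on some operating systems such locks are only advisory). tryLock() returns null when another process already holds the lock, so the second program backs off instead of corrupting the file.

    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Illustrative file locking: only one process at a time gets exclusive
    // access to shared.dat.
    public class FileLockExample {
        public static void main(String[] args) throws Exception {
            try (FileChannel channel = FileChannel.open(Path.of("shared.dat"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                FileLock lock = channel.tryLock();
                if (lock != null) {
                    try {
                        // exclusive access: safe to modify the file here
                    } finally {
                        lock.release();
                    }
                } else {
                    // another process holds the lock; retry later
                }
            }
        }
    }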
Priority inversion is a phenomenon that can occur in concurrent programming. It is a situation in which a high-priority process cannot get access to the processor because it is being used by a lower-priority process. For the examples, we will use tasks A, B, and C, where A has the highest priority, C the lowest, and B an intermediate priority, together with a mutex X that guards a resource shared between task A and task C.
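The following Java sketch only names the ingredients of this scenario (thread priorities on the JVM are scheduler hints, so it cannot reliably reproduce the inversion; class and method names are invented for illustration): task C holds mutex X, task A blocks on X, and task B, which never touches X, can keep the CPU busy and thereby delay A indirectly.

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative priority-inversion setup: A (high) waits on a mutex held by
    // C (low), while B (medium) competes with C for the CPU.
    public class PriorityInversionSketch {
        static final ReentrantLock mutexX = new ReentrantLock();

        public static void main(String[] args) {
            Thread taskC = new Thread(() -> {
                mutexX.lock();               // C takes the shared resource first
                try {
                    busyWork();              // while C holds X, B can preempt it
                } finally {
                    mutexX.unlock();
                }
            });
            Thread taskB = new Thread(PriorityInversionSketch::busyWork); // never uses X
            Thread taskA = new Thread(() -> {
                mutexX.lock();               // A must wait for C, and therefore for B
                try { /* high-priority work */ } finally { mutexX.unlock(); }
            });

            taskC.setPriority(Thread.MIN_PRIORITY);
            taskB.setPriority(Thread.NORM_PRIORITY);
            taskA.setPriority(Thread.MAX_PRIORITY);

            taskC.start();
            taskB.start();
            taskA.start();
        }

        static void busyWork() {
            for (long i = 0; i < 500_000_000L; i++) {
                // simulate CPU-bound work
            }
        }
    }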
Covers wait-free implementations of atomic objects, focusing on counters and snapshots, and discusses key ideas for strengthening atomicity and wait-freedom.
Explores thread safety, synchronization, and concurrent collections in Scala, with an emphasis on a lock-free concurrent queue implementation.
Cycles are one of the fundamental subgraph patterns and being able to enumerate them in graphs enables important applications in a wide variety of fields, including finance, biology, chemistry, and network science. However, to enable cycle enumeration in r ...
New York, 2023
Finding cycles in directed graphs enables important applications in various domains such as finance, biology, chemistry, and network science. However, as the size of graph datasets continues to grow, it becomes increasingly difficult to discover cycles wit ...
Verification and testing of hardware heavily relies on cycle-accurate simulation of RTL. As single-processor performance is growing only slowly, conventional, single-threaded RTL simulation is becoming impractical for increasingly complex chip designs and s ...