In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. Transactional memory systems provide a high-level abstraction as an alternative to low-level thread synchronization; this abstraction allows concurrent reads and writes of shared data in parallel systems to be coordinated.

In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level synchronization constructs such as locks are pessimistic: they prohibit threads outside a critical section from running the code that the critical section protects. Acquiring and releasing locks often adds overhead in workloads with little conflict among threads. Transactional memory instead provides optimistic concurrency control, allowing threads to run in parallel with minimal interference.

The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency and isolation. A transaction is a collection of operations that can execute and commit its changes as long as no conflict is present. When a conflict is detected, the transaction reverts to its initial state (prior to any changes) and reruns until all conflicts are resolved. Before a successful commit, the outcome of any operation inside a transaction is purely speculative. In contrast to lock-based synchronization, where operations are serialized to prevent data corruption, transactions allow additional parallelism as long as few operations attempt to modify a shared resource. Since the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that use transactional memory cannot produce a deadlock.
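As a concrete illustration (not taken from this page), the sketch below uses the third-party ScalaSTM library, assumed here purely for demonstration, to move money between two accounts inside a single transaction; the account names are illustrative, and there are no explicit locks and no lock-acquisition order for the programmer to get wrong.

import scala.concurrent.stm._

object StmTransfer {
  // Shared balances are held in transactional references (Ref).
  val accountA: Ref[Int] = Ref(100)
  val accountB: Ref[Int] = Ref(0)

  def transfer(from: Ref[Int], to: Ref[Int], amount: Int): Unit =
    atomic { implicit txn =>
      // All reads and writes in this block commit together or not at all;
      // on a conflict with another transaction the block is rolled back and retried.
      from() = from() - amount
      to() = to() + amount
    }

  def main(args: Array[String]): Unit = {
    transfer(accountA, accountB, 30)
    // .single reads a Ref outside of any transaction.
    println((accountA.single(), accountB.single())) // prints (70,30)
  }
}

If another thread updates either account between this transaction's reads and its commit, the STM detects the conflict and re-executes the block, which is exactly the optimistic revert-and-rerun behaviour described above.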

Related courses (7)
CS-453: Concurrent computing
With the advent of modern architectures, it becomes crucial to master the underlying algorithmics of concurrency. The objective of this course is to study the foundations of concurrent algorithms and
CS-307: Introduction to multiprocessor architecture
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce
CS-300: Data-intensive systems
This course covers the data management system design concepts using a hands-on approach.
Related lectures (32)
Transactions
Explores the elegance and challenges of transactions in structuring stateful systems, emphasizing the ACID properties and the trade-offs of transactional memory.
Transactional Memory: Hardware Concurrency Control
Explores transactional memory for hardware concurrency control, discussing locking mechanisms, performance trade-offs and hardware modifications.
Transactional Memory: Hardware Simplification
Explores transactional memory and hardware simplification for concurrency control in software, emphasizing the benefits of hardware speculation and declarative concurrency.
Afficher plus
Related publications (111)

When is it safe to run a transactional workload under Read Committed?
Christoph Koch, Bas Ketsman
The popular isolation level multiversion Read Committed (RC) exchanges some of the strong guarantees of serializability for increased transaction throughput. Nevertheless, transaction workloads can sometimes be executed under RC while still guaranteeing se ...
Association for Computing Machinery, 2023

Robustness Against Read Committed: A Free Transactional Lunch
Christoph Koch, Bas Ketsman
Transaction processing is a central part of most database applications. While serializability remains the gold standard for desirable transactional semantics, many database systems offer improved transaction throughput at the expense of introducing potenti ...
Association for Computing Machinery, 2022

Robustness against Read Committed for Transaction Templates
Christoph Koch, Bas Ketsman
The isolation level Multiversion Read Committed (RC), offered by many database systems, is known to trade consistency for increased transaction throughput. Sometimes, transaction workloads can be safely executed under RC obtaining the perfect isolation of ...
Association for Computing Machinery, 2021
Related concepts (6)
Serializability
In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time. Transactions are normally executed concurrently (they overlap), since this is the most efficient way.
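As a minimal worked illustration (not taken from the cited material), write ri(x) and wi(x) for a read and a write of item x by transaction Ti, and assume each write depends on the value previously read:

S1 = r1(x) r2(y) w1(x) w2(y)   is serializable: T1 and T2 touch disjoint items, so S1 has the same outcome as running T1 then T2 serially.
S2 = r1(x) r2(x) w2(x) w1(x)   is not serializable: both transactions read x before either write, so T1's write overwrites T2's update based on the stale value (a lost update), and no serial order of T1 and T2 produces that outcome.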
Linearizability
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events, that may be extended by adding response events such that: The extended list can be re-expressed as a sequential history (is serializable). That sequential history is a subset of the original unextended list. Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.
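An informal illustration (not taken from the cited material): consider a shared register initialised to 0, with thread A invoking write(1) and thread B invoking read(). If B's read overlaps A's write in real time, returning either 0 or 1 is linearizable, since the read's linearization point may be placed before or after the write's. If B's read is invoked only after write(1) has already returned, the only linearizable result is 1: linearizability, unlike plain serializability, must preserve the real-time order of non-overlapping operations.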
Lock (computer science)
A lock ensures that only one person, or only one process, accesses a resource at a given instant. It is often used for file access on multi-user operating systems, because if two programs modify the same file at the same moment, the risks are: causing errors in one of the two programs, or even in both; leaving the file in a completely inconsistent state at the end of processing; damaging the file being manipulated.
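For contrast with the transactional sketch near the top of this page, here is a hypothetical lock-based version of the same two-account transfer in Scala, using the JDK's ReentrantLock (class and method names are illustrative). Note how the programmer must impose a global lock-acquisition order by hand to rule out deadlock:

import java.util.concurrent.locks.ReentrantLock

// Hypothetical lock-based counterpart to the STM sketch above.
class Account(var balance: Int) {
  val lock = new ReentrantLock()
}

object LockTransfer {
  def transfer(from: Account, to: Account, amount: Int): Unit = {
    // Acquire both locks in a fixed global order (here: identity hash code,
    // hash ties ignored for brevity) so that two concurrent, opposite
    // transfers cannot deadlock.
    val (first, second) =
      if (System.identityHashCode(from) <= System.identityHashCode(to)) (from, to)
      else (to, from)
    first.lock.lock()
    try {
      second.lock.lock()
      try {
        from.balance -= amount
        to.balance += amount
      } finally second.lock.unlock()
    } finally first.lock.unlock()
  }
}

The fixed acquisition order and the try/finally releases are precisely the manual bookkeeping that the transactional version does not need.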
Related MOOCs (15)
Parallelism and Concurrency
(merge of parprog1, scala-reactive, scala-spark-big-data)
Functional Programming
In this course you will discover the elements of the functional programming style and learn how to apply them usefully in your daily programming tasks. You will also develop a solid foundation for rea
Functional Programming Principles in Scala [retired]
This advanced undergraduate programming course covers the principles of functional programming using Scala, including the use of functions as values, recursion, immutability, pattern matching, higher-
