In computer science, read-copy-update (RCU) is a synchronization mechanism that avoids the use of lock primitives while multiple threads concurrently read and update elements that are linked through pointers and that belong to shared data structures (e.g., linked lists, trees, hash tables).
Whenever a thread is inserting or deleting elements of data structures in shared memory, all readers are guaranteed to see and traverse either the older or the new structure, therefore avoiding inconsistencies (e.g., dereferencing null pointers).
It is used when performance of reads is crucial and is an example of a space–time tradeoff, enabling fast operations at the cost of more space: all readers proceed as if there were no synchronization involved, so they are fast, but updates become more difficult.
The name comes from the way that RCU is used to update a linked structure in place. A thread wishing to do this uses the following steps (a code sketch follows the list):
create a new structure,
copy the data from the old structure into the new one, and save a pointer to the old structure,
modify the new, copied, structure,
update the global pointer to refer to the new structure,
sleep until the operating system kernel determines that there are no readers left using the old structure, for example, in the Linux kernel, by using synchronize_rcu(),
once awakened by the kernel, deallocate the old structure.
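To make these steps concrete, the following is a minimal sketch of the update path using the Linux kernel's RCU API; the structure foo, the global pointer gbl_foo, the lock foo_mutex, and the function foo_update_a are hypothetical names chosen for illustration.

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/lockdep.h>
#include <linux/rcupdate.h>

struct foo {
    int a;
};

static struct foo __rcu *gbl_foo;     /* global pointer traversed by readers */
static DEFINE_SPINLOCK(foo_mutex);    /* serializes updaters only            */

void foo_update_a(int new_a)
{
    struct foo *new_fp, *old_fp;

    new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);   /* 1. create a new structure */
    if (!new_fp)
        return;
    spin_lock(&foo_mutex);
    old_fp = rcu_dereference_protected(gbl_foo,
                                       lockdep_is_held(&foo_mutex));
    *new_fp = *old_fp;                    /* 2. copy old into new, keep old pointer */
    new_fp->a = new_a;                    /* 3. modify the copy                     */
    rcu_assign_pointer(gbl_foo, new_fp);  /* 4. publish the new version             */
    spin_unlock(&foo_mutex);
    synchronize_rcu();                    /* 5. sleep until pre-existing readers finish */
    kfree(old_fp);                        /* 6. deallocate the old structure        */
}

Note that rcu_assign_pointer() includes the memory ordering needed to make the initialized copy visible to readers before the pointer to it is published.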
Thus the structure is read concurrently with a thread copying it in order to do an update, hence the name "read-copy update". The abbreviation "RCU" was one of many contributions by the Linux community. Other names for similar techniques include passive serialization and MP defer by VM/XA programmers and generations by K42 and Tornado programmers.
A key property of RCU is that readers can access a data structure even when it is in the process of being updated: RCU updaters cannot block readers or force them to retry their accesses. This overview starts by showing how data can be safely inserted into and deleted from linked structures despite concurrent readers.
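Under the same hypothetical names as the sketch above, the reader side looks like this; rcu_read_lock() and rcu_read_unlock() merely delimit the read-side critical section that synchronize_rcu() waits for, so readers never block and never retry.

int foo_get_a(void)
{
    int retval;

    rcu_read_lock();                       /* begin read-side critical section       */
    retval = rcu_dereference(gbl_foo)->a;  /* sees the old or the new version, never */
    rcu_read_unlock();                     /* an inconsistent mixture of the two     */
    return retval;
}

In non-preemptible kernel builds rcu_read_lock() compiles to essentially no code, which is what makes the read path fast.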
Multiprocessors are a core component in all types of computing infrastructure, from phones to datacenters. This course will build on the prerequisites of processor design and concurrency to introduce ...
With the advent of modern architectures, it becomes crucial to master the underlying algorithmics of concurrency. The objective of this course is to study the foundations of concurrent algorithms and ...
Introduces the readers-writer lock for handling concurrency in scenarios with frequent reads and occasional updates (a code sketch of this lock follows the list of snippets below).
Explores the power of registers in linearizability and introduces the SCAN algorithm.
Explores synchronization principles using locks and barriers, emphasizing efficient hardware-supported implementations and coordination mechanisms like OpenMP.
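As a point of comparison with RCU, the following is a minimal user-space sketch of the readers-writer lock mentioned in the first snippet above, using the POSIX threads API; shared_value and the function names are illustrative.

#include <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;

int read_value(void)
{
    int v;

    pthread_rwlock_rdlock(&rwlock);   /* shared mode: many readers at once */
    v = shared_value;
    pthread_rwlock_unlock(&rwlock);
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&rwlock);   /* exclusive mode: writer excludes all others */
    shared_value = v;
    pthread_rwlock_unlock(&rwlock);
}

Unlike RCU readers, these readers still perform atomic operations on the shared lock word on every access, which is precisely the read-side overhead RCU is designed to avoid.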
In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible. Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or to meet certain consistency rules.
In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive: a mechanism that enforces limits on access to a resource when there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy, and with a variety of possible methods there exist multiple unique implementations for different applications. Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data.
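As a minimal illustration of advisory locking with a mutex, consider the following POSIX threads sketch; balance and deposit are hypothetical names.

#include <pthread.h>

static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;
static long balance;

void deposit(long amount)
{
    pthread_mutex_lock(&balance_lock);    /* cooperate: acquire before access */
    balance += amount;                    /* critical section                 */
    pthread_mutex_unlock(&balance_lock);  /* release after access             */
}

The lock is advisory: nothing prevents a thread from writing balance without acquiring balance_lock, so correctness depends on every thread following the convention.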
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization.
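A minimal sketch of the classic lock-ordering deadlock, with hypothetical names: if thread_a holds lock1 while thread_b holds lock2, each then blocks forever waiting for the lock the other holds; acquiring locks in one global order avoids the cycle.

#include <pthread.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

void *thread_a(void *arg)
{
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);   /* blocks forever if thread_b already holds lock2 */
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return arg;
}

void *thread_b(void *arg)
{
    pthread_mutex_lock(&lock2);   /* opposite acquisition order creates the cycle */
    pthread_mutex_lock(&lock1);
    /* ... use both shared resources ... */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return arg;
}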
Key-Value Stores (KVS) are foundational infrastructure components for online services. Due to their latency-critical nature, today’s best-performing KVS contain a plethora of full-stack optimizations commonly targeting read-mostly, popularity-skewed workloads ...
ACM, 2023
Consider a stream of status updates generated by a source, where each update is of one of two types: high priority or ordinary (low priority). These updates are to be transmitted through a network to a monitor. However, the transmission policy of each packet ...
Geo-replicated data platforms are the backbone of several large-scale online services. Transactional Causal Consistency (TCC) is an attractive consistency level for building such platforms. TCC avoids many anomalies of eventual consistency, eschews the syn ...