In computer science, region-based memory management is a type of memory management in which each allocated object is assigned to a region. A region, also called a zone, arena, area, or memory context, is a collection of allocated objects that can be efficiently reallocated or deallocated all at once. Like stack allocation, regions facilitate allocation and deallocation of memory with low overhead; but they are more flexible, allowing objects to live longer than the stack frame in which they were allocated. In typical implementations, all objects in a region are allocated in a single contiguous range of memory addresses, similarly to how stack frames are typically allocated.
As a simple example, consider the following C code which allocates and then deallocates a linked list data structure:
Region *r = createRegion();
ListNode *head = NULL;
for (int i = 1; i <= 1000; i++) {
    ListNode *newNode = allocateFromRegion(r, sizeof(ListNode));
    newNode->next = head;
    head = newNode;
}
// ...
// (use list here)
// ...
destroyRegion(r);
Although it required many operations to construct the linked list, it can be quickly deallocated in a single operation by destroying the region in which the nodes were allocated. There is no need to traverse the list.
Simple explicit regions are straightforward to implement; the following description is based on the work of Hanson. Each region is implemented as a linked list of large blocks of memory; each block should be large enough to serve many allocations. The current block maintains a pointer to the next free position in the block, and if the block is filled, a new one is allocated and added to the list. When the region is deallocated, the next-free-position pointer is reset to the beginning of the first block, and the list of blocks can be reused for the next region to be created. Alternatively, when a region is deallocated, its list of blocks can be appended to a global freelist from which other regions may later allocate new blocks. Either way, with this simple scheme it is not possible to deallocate individual objects in regions.
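A minimal sketch of this scheme in C follows. It is illustrative only: the names Region, createRegion, allocateFromRegion, and destroyRegion simply match the hypothetical API used in the example above, the fixed block size is an arbitrary choice, and details such as alignment, oversized allocations, and the optional global freelist are simplified or omitted.

#include <stdlib.h>

#define BLOCK_SIZE 8192   /* arbitrary; each block serves many allocations */

typedef struct Block {
    struct Block *next;               /* next block in this region's list */
    size_t used;                      /* bytes already handed out from data[] */
    unsigned char data[BLOCK_SIZE];
} Block;

typedef struct Region {
    Block *blocks;                    /* head of the list; also the current block */
} Region;

Region *createRegion(void) {
    Region *r = malloc(sizeof(Region));
    r->blocks = calloc(1, sizeof(Block));
    return r;
}

/* Bump-allocate from the current block; start a new block if it is full. */
void *allocateFromRegion(Region *r, size_t size) {
    Block *b = r->blocks;
    if (size > BLOCK_SIZE)
        return NULL;                  /* oversized requests are not handled here */
    if (b->used + size > BLOCK_SIZE) {
        Block *fresh = calloc(1, sizeof(Block));
        fresh->next = r->blocks;
        r->blocks = fresh;            /* the new block becomes the current one */
        b = fresh;
    }
    void *p = b->data + b->used;
    b->used += size;
    return p;
}

/* Deallocate every object in the region at once by freeing its blocks.
   (An implementation that reuses blocks would instead reset the used
   counters or move the blocks to a global freelist.) */
void destroyRegion(Region *r) {
    Block *b = r->blocks;
    while (b != NULL) {
        Block *next = b->next;
        free(b);
        b = next;
    }
    free(r);
}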
Rust is a multi-paradigm, general-purpose programming language that emphasizes performance, type safety, and concurrency. It enforces memory safety—ensuring that all references point to valid memory—without requiring the use of a garbage collector or reference counting present in other memory-safe languages. To simultaneously enforce memory safety and prevent data races, its "borrow checker" tracks the object lifetime of all references in a program during compilation.
In computer programming, tracing garbage collection is a form of automatic memory management that consists of determining which objects should be deallocated ("garbage collected") by tracing which objects are reachable by a chain of references from certain "root" objects, and considering the rest as "garbage" and collecting them. Tracing garbage collection is the most common type of garbage collection – so much so that "garbage collection" often refers to tracing garbage collection, rather than other methods such as reference counting – and there are a large number of algorithms used in implementation.
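As a rough illustration of the mark-and-sweep variant of this idea, the following C sketch keeps every allocated object on an intrusive list, marks everything reachable from a set of root objects, and then frees whatever was not marked. The names (Obj, gc_alloc, gc_collect) and the fixed two-reference object layout are invented for this example and do not correspond to any particular collector.

#include <stdlib.h>

typedef struct Obj {
    struct Obj *ref[2];   /* outgoing references to other objects */
    int marked;           /* set during the mark phase */
    struct Obj *next;     /* intrusive list of all allocated objects */
} Obj;

static Obj *all_objects = NULL;   /* every object the collector knows about */

static Obj *gc_alloc(void) {
    Obj *o = calloc(1, sizeof(Obj));
    o->next = all_objects;
    all_objects = o;
    return o;
}

/* Mark phase: mark everything reachable from o by following references. */
static void mark(Obj *o) {
    if (o == NULL || o->marked)
        return;
    o->marked = 1;
    mark(o->ref[0]);
    mark(o->ref[1]);
}

/* Sweep phase: free every unmarked object and clear the marks of survivors. */
static void gc_collect(Obj **roots, int nroots) {
    for (int i = 0; i < nroots; i++)
        mark(roots[i]);
    Obj **link = &all_objects;
    while (*link != NULL) {
        Obj *o = *link;
        if (!o->marked) {             /* unreachable: collect it */
            *link = o->next;
            free(o);
        } else {                      /* reachable: keep it, reset its mark */
            o->marked = 0;
            link = &o->next;
        }
    }
}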
In computer storage, fragmentation is a phenomenon in which storage space, main storage or secondary storage, is used inefficiently, reducing capacity or performance and often both. The exact consequences of fragmentation depend on the specific system of storage allocation in use and the particular form of fragmentation. In many cases, fragmentation leads to storage space being "wasted", and in that case the term also refers to the wasted space itself.