A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or if on a single-processor system its execution can be interrupted and a new execution of it can be safely started (it can be "re-entered"). The interruption could be caused by an internal action such as a jump or call, or by an external action such as an interrupt or signal, unlike recursion, where new invocations can only be caused by an internal call.
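As a minimal sketch of the contrast in C (the function names here are illustrative, not from any particular library): a function that keeps its result in a static buffer is not reentrant, while a version that uses only caller-supplied storage is.

```c
#include <stdio.h>

/* Not reentrant: the result lives in a static buffer shared by all
   invocations, so an interrupting second call clobbers the first. */
char *int_to_str_unsafe(int n) {
    static char buf[12];
    snprintf(buf, sizeof buf, "%d", n);
    return buf;
}

/* Reentrant: all state is local or supplied by the caller, so an
   interrupted execution can safely be re-entered. */
void int_to_str(int n, char *buf, size_t len) {
    snprintf(buf, len, "%d", n);
}
```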
This definition originates from multiprogramming environments, where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine. Any subroutine used by the handler that could potentially have been executing when the interrupt was triggered should be reentrant. Similarly, code shared by two processors accessing shared data should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory.
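A hedged POSIX sketch of this restriction: a signal handler should touch only async-signal-safe facilities, such as write() and a volatile sig_atomic_t flag. A call like printf() could find the C library's internal state locked or half-updated at the moment of interruption.

```c
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

void handler(int sig) {
    (void)sig;
    got_signal = 1;  /* writing a volatile sig_atomic_t is async-signal-safe */
    /* write() is async-signal-safe; printf() is not, because the
       interrupted code may already hold stdio's internal lock. */
    write(STDERR_FILENO, "caught signal\n", 14);
}

int main(void) {
    signal(SIGINT, handler);
    while (!got_signal)
        pause();  /* sleep until a signal arrives */
    return 0;
}
```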
Reentrancy is neither necessary nor sufficient for thread-safety in multi-threaded environments. In other words, a reentrant subroutine can be thread-safe, but is not guaranteed to be. Conversely, thread-safe code need not be reentrant (see below for examples).
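One direction of the distinction can be sketched with POSIX threads (an illustrative example, not taken from any particular codebase): a mutex makes a function thread-safe, yet re-entering it from a signal handler on the same thread deadlocks, so it is not reentrant.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

/* Thread-safe: the mutex serializes callers on different threads.
   Not reentrant: if a signal handler interrupts a thread that already
   holds `lock` and calls increment() again, the nested
   pthread_mutex_lock() waits on the thread itself and deadlocks. */
void increment(void) {
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
}
```

For the converse, a routine that saves a global, uses it, and restores it before returning can be safely re-entered after an interrupt on a single processor, yet still race with another thread touching the same global.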
Other terms used for reentrant programs include "sharable code" and "pure procedures". Reentrant subroutines are sometimes marked in reference material as being "signal safe".
Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by any function at any time; if it can also be changed by any function, there is no guarantee that a datum observed at one moment is the same as at any time before.
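To make the distinction concrete, here is a hypothetical open_log() helper (the name and log path are invented for illustration). It is idempotent, because repeated calls have the same effect as one call, yet it is still not reentrant.

```c
#include <fcntl.h>

static int log_fd = -1;

/* Idempotent: calling open_log() twice has the same observable effect
   as calling it once -- the log is opened only on the first call.
   Still not reentrant: two interleaved first calls can both observe
   log_fd == -1 and open the file twice. */
int open_log(void) {
    if (log_fd == -1)
        log_fd = open("/tmp/app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    return log_fd;
}
```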
In computer science, a call stack is a stack data structure that stores information about the active subroutines of a computer program. This kind of stack is also known as an execution stack, program stack, control stack, run-time stack, or machine stack, and is often shortened to just "the stack". Although maintenance of the call stack is important for the proper functioning of most software, the details are normally hidden and automatic in high-level programming languages.
In computer programming, a function or subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Functions may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a function may be called a routine, subprogram, subroutine, or procedure; in object-oriented programming (OOP), it may be called a method.
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement.
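A standard illustration, which also shows the call stack at work since each pending call occupies its own stack frame:

```c
#include <stdio.h>

/* Each recursive call pushes a fresh frame onto the call stack;
   the chain of smaller instances bottoms out at the base case. */
unsigned long factorial(unsigned int n) {
    if (n <= 1)
        return 1;                 /* base case: the smallest instance */
    return n * factorial(n - 1);  /* recurse on a smaller instance */
}

int main(void) {
    printf("%lu\n", factorial(10)); /* prints 3628800 */
    return 0;
}
```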