In analytic philosophy and computer science, referential transparency and referential opacity are properties of linguistic constructions, and by extension of languages. A linguistic construction is called referentially transparent when, for any expression built from it, replacing a subexpression with another one that denotes the same value does not change the value of the expression. Otherwise, it is called referentially opaque. Each expression built from a referentially opaque linguistic construction states something about a subexpression, whereas each expression built from a referentially transparent linguistic construction states something not about a subexpression, meaning that the subexpressions are ‘transparent’ to the expression, acting merely as ‘references’ to something else. For example, the linguistic construction ‘_ was wise’ is referentially transparent (e.g., ‘Socrates was wise’ is equivalent to ‘The founder of Western philosophy was wise’), but ‘_ said _’ is referentially opaque (e.g., Xenophon said ‘Socrates was wise’ is not equivalent to Xenophon said ‘The founder of Western philosophy was wise’).
Referential transparency depends on the values associated with expressions, that is, on the semantics of the language. So both declarative languages and imperative languages can be referentially transparent or referentially opaque, according to the semantics they are given.
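As a minimal Python sketch (the function names are illustrative, not drawn from any particular library), an expression built only from pure functions can have a subexpression replaced by another that denotes the same value, while an expression that reads or updates hidden mutable state cannot:

```python
# Referentially transparent: square(3) always denotes 9, so it can be
# replaced by the literal 9 (or by square(1 + 2)) without changing the
# value of any enclosing expression.
def square(x):
    return x * x

assert square(3) + square(3) == 9 + 9

# Referentially opaque: next_id() reads and updates hidden mutable state,
# so two occurrences of the "same" expression denote different values and
# cannot be substituted for one another.
_counter = 0

def next_id():
    global _counter
    _counter += 1
    return _counter

assert next_id() != next_id()   # 1 != 2: substitution would change the meaning
```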
The importance of referential transparency is that it allows the programmer and the compiler to reason about program behavior as a rewrite system. This can help in proving correctness, simplifying an algorithm, assisting in modifying code without breaking it, or optimizing code by means of memoization, common subexpression elimination, lazy evaluation, or parallelization.
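For instance (a sketch assuming a pure function `fib`, chosen here only for illustration), results can be cached and repeated subexpressions computed once precisely because every call with the same argument denotes the same value:

```python
from functools import lru_cache

# Because fib is referentially transparent, memoizing it cannot change the
# meaning of any program that calls it; it only avoids recomputation.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Common subexpression elimination, done by hand: both occurrences of
# fib(30) denote the same value, so it is safe to compute it once.
a = fib(30) + fib(30)
common = fib(30)
b = common + common
assert a == b
```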
The concept originated in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913):
A proposition as the vehicle of truth or falsehood is a particular occurrence, while a proposition considered factually is a class of similar occurrences.
In computer science, declarative programming is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow. Many languages that apply this style attempt to minimize or eliminate side effects by describing what the program must accomplish in terms of the problem domain, rather than describing how to accomplish it as a sequence of the programming language primitives (the how being left up to the language's implementation).
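As an illustrative Python sketch (not tied to any particular framework), the same computation can be written declaratively, stating what the result is, or imperatively, spelling out how to build it step by step:

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Declarative: state *what* the result is -- the squares of the even numbers.
squares_of_evens = [n * n for n in numbers if n % 2 == 0]

# Imperative: describe *how* to build it, one control-flow step at a time.
result = []
for n in numbers:
    if n % 2 == 0:
        result.append(n * n)

assert squares_of_evens == result == [16, 4, 36]
```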
In computer programming, a variable is an abstract storage location paired with an associated symbolic name, which contains some known or unknown quantity of data or object referred to as a value; or, in simpler terms, a variable is a named container for a particular set of bits or type of data (such as an integer, a float, or a string). A variable can eventually be associated with or identified by a memory address. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context.
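In Python terms (a minimal sketch, with arbitrary names), a variable is a name bound to a value; the name is the usual way to reach the stored value, and the same name can later be bound to data of a different type:

```python
count = 42            # the name `count` refers to an integer value
message = "hello"     # the name `message` refers to a string value

print(count + 1)      # the name is used to reach the stored value: 43

count = 3.14          # the same name may later be bound to a float
print(type(count))    # <class 'float'>
```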
In computer programming, the scope of a name binding (an association of a name to an entity, such as a variable) is the part of a program where the name binding is valid; that is, where the name can be used to refer to the entity. In other parts of the program, the name may refer to a different entity (it may have a different binding), or to nothing at all (it may be unbound). Scope helps prevent name collisions by allowing the same name to refer to different objects – as long as the names have separate scopes.
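A small Python sketch of scope (the names are arbitrary): the same name can refer to different entities in different scopes without collision, and a name with no binding in any enclosing scope refers to nothing at all:

```python
x = "module-level"          # binding of `x` in the module (global) scope

def report():
    x = "function-local"    # a different binding of the same name,
    return x                # valid only inside this function's scope

assert report() == "function-local"
assert x == "module-level"  # the outer binding is unaffected

def unbound_example():
    return y                # `y` is unbound in every enclosing scope

# Calling unbound_example() would raise NameError: name 'y' is not defined.
```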