Operational semantics: Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics).
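As a concrete illustration (an addition, not part of the source text), the structural, small-step flavour of operational semantics can be sketched for a hypothetical toy arithmetic language: meaning is given by a transition relation that rewrites an expression one reduction at a time.

```haskell
-- A minimal sketch of small-step (structural) operational semantics for a
-- hypothetical toy language with integer literals and addition only.
data Expr = Lit Int | Add Expr Expr
  deriving Show

-- One step of the transition relation; Nothing means the expression is a value.
step :: Expr -> Maybe Expr
step (Lit _)               = Nothing                    -- values do not step
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))         -- rule: n1 + n2 -> n
step (Add (Lit a) e2)      = Add (Lit a) <$> step e2    -- reduce the right operand
step (Add e1 e2)           = (`Add` e2) <$> step e1     -- reduce the left operand

-- Run the step relation to completion (its reflexive-transitive closure).
eval :: Expr -> Expr
eval e = maybe e eval (step e)

main :: IO ()
main = print (eval (Add (Lit 1) (Add (Lit 2) (Lit 3))))  -- Lit 6
```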
Lambda calculus: Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. Lambda calculus consists of constructing lambda terms and performing reduction operations on them.
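A small sketch of lambda terms and beta reduction may make this concrete; the encoding below is a hypothetical higher-order abstract syntax representation (binding is borrowed from the host language), with Church numerals as the worked example.

```haskell
-- A minimal sketch of untyped lambda terms and beta reduction. Lam reuses
-- Haskell functions for abstraction; Con carries a host integer only so that
-- Church numerals can be read back for display.
data Term = Lam (Term -> Term) | App Term Term | Con Int

-- Reduce to weak head normal form; the App/Lam case is the beta rule
-- (λx. b) a  →  b[x := a].
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam body -> whnf (body a)
  f'       -> App f' a
whnf t = t

-- Church numeral n: λf. λx. f (f (... (f x)))  with n applications of f.
church :: Int -> Term
church n = Lam (\f -> Lam (\x -> iterate (App f) x !! n))

-- Addition on Church numerals: λm. λn. λf. λx. m f (n f x)
add :: Term
add = Lam (\m -> Lam (\n -> Lam (\f -> Lam (\x ->
        App (App m f) (App (App n f) x)))))

-- Read a Church numeral back by applying it to a successor and zero.
toInt :: Term -> Int
toInt n = case whnf (App (App n suc) (Con 0)) of
            Con k -> k
            _     -> error "not a Church numeral"
  where suc = Lam (\t -> case whnf t of
                           Con k -> Con (k + 1)
                           _     -> error "not a number")

main :: IO ()
main = print (toInt (App (App add (church 2)) (church 3)))  -- 5
```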
Hoare logic: Hoare logic (also known as Floyd–Hoare logic or Hoare rules) is a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician Tony Hoare, and subsequently refined by Hoare and other researchers. The original ideas were seeded by the work of Robert W. Floyd, who had published a similar system for flowcharts. The central feature of Hoare logic is the Hoare triple.
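To make the central notion concrete (this worked instance is an addition, not from the source text): a Hoare triple {P} C {Q} asserts that if the precondition P holds before the command C runs and C terminates, then the postcondition Q holds afterwards. A small instance via the assignment axiom:

```latex
% Hoare triple for a single assignment, obtained from the assignment axiom
% \{Q[E/x]\}\; x := E \;\{Q\} by taking Q to be x = n + 1 and E to be x + 1,
% so that Q[E/x] simplifies to x = n.
\{\, x = n \,\}\;\; x := x + 1 \;\;\{\, x = n + 1 \,\}
```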
Imperative programming: In computer science, imperative programming is a programming paradigm that uses statements to change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates step by step, rather than on high-level descriptions of its expected results.
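A tiny illustrative program (added here, not from the source) shows the paradigm's essence, a sequence of commands that read and overwrite mutable state, written in Haskell's imperative IO style with an IORef cell:

```haskell
import Data.IORef

-- A minimal sketch of the imperative style: commands executed in order,
-- each one reading or changing the contents of a mutable cell.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)   -- state: a mutable cell initialised to 0
  writeIORef counter 10            -- command: assign 10
  modifyIORef counter (+ 5)        -- command: add 5 to the current value
  x <- readIORef counter           -- command: read the current value
  print x                          -- prints 15
```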
Complete partial order: In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science, in denotational semantics and domain theory. A complete partial order, abbreviated cpo, can refer to any of several related concepts depending on context; for example, a partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum.
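As a worked formalisation (added here for illustration), the dcpo condition and a standard example can be written as:

```latex
% A poset (P, \sqsubseteq) is a directed-complete partial order (dcpo) if
% every directed subset has a least upper bound in P:
\forall D \subseteq P \text{ directed:}\quad \bigsqcup D \ \text{exists in } P.
% Example: the powerset (\mathcal{P}(X), \subseteq) is a dcpo, with
% \bigsqcup D = \bigcup D for any directed family D of subsets of X.
```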
Abstract interpretation: In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations.
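A classic textbook instance is sign analysis; the sketch below is a hypothetical toy version (not a full abstract-interpretation framework) in which integers are abstracted to their sign and addition is over-approximated soundly:

```haskell
-- A minimal sketch of abstract interpretation over the sign lattice:
-- concrete integers are abstracted to their sign, and concrete addition is
-- approximated by an abstract addition that never loses sound information.
data Sign = Neg | Zero | Pos | Top   -- Top = "sign unknown"
  deriving (Eq, Show)

-- Abstraction function: map a concrete integer to its sign.
alpha :: Int -> Sign
alpha n | n < 0     = Neg
        | n == 0    = Zero
        | otherwise = Pos

-- Abstract addition: a sound over-approximation of concrete (+).
addSign :: Sign -> Sign -> Sign
addSign Zero s    = s
addSign s    Zero = s
addSign Neg  Neg  = Neg
addSign Pos  Pos  = Pos
addSign _    _    = Top   -- e.g. Neg + Pos could have any sign

main :: IO ()
main = do
  print (addSign (alpha 3) (alpha 7))      -- Pos: 3 + 7 is certainly positive
  print (addSign (alpha (-2)) (alpha 5))   -- Top: the sign cannot be determined
```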
Scott continuity: In mathematics, given two partially ordered sets P and Q, a function f: P → Q between them is Scott-continuous (named after the mathematician Dana Scott) if it preserves all directed suprema. That is, for every directed subset D of P with a supremum in P, its image f(D) has a supremum in Q, and that supremum is the image of the supremum of D, i.e. ⊔f(D) = f(⊔D), where ⊔ is the directed join. When Q is the poset of truth values, i.e. Sierpiński space, then Scott-continuous functions are characteristic functions of open sets, and thus Sierpiński space is the classifying space for open sets.
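Spelled out (this restatement and example are additions, not from the source), the preservation condition and a simple Scott-continuous map are:

```latex
% Scott continuity: f preserves suprema of directed sets.
f\Bigl(\bigsqcup D\Bigr) \;=\; \bigsqcup_{d \in D} f(d)
\qquad \text{for every directed } D \subseteq P \text{ whose supremum exists in } P.
% Example: on (\mathcal{P}(\mathbb{N}), \subseteq), the map f(S) = S \cap E for a
% fixed set E preserves directed unions, and is therefore Scott-continuous.
```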
Concurrency (computer science): In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution on multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units of computation.
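As a small illustration (added here, not from the source), two order-independent units of work can be run on separate threads; because neither depends on the other's intermediate state, the scheduling order does not affect the combined outcome:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- A minimal sketch of concurrency: two independent units of computation run
-- on their own threads, and the main thread combines their results.
main :: IO ()
main = do
  a <- newEmptyMVar
  b <- newEmptyMVar
  _ <- forkIO (putMVar a (sum [1 .. 1000 :: Int]))       -- unit 1
  _ <- forkIO (putMVar b (product [1 .. 10 :: Integer])) -- unit 2
  x <- takeMVar a   -- blocks until unit 1 has delivered its result
  y <- takeMVar b   -- blocks until unit 2 has delivered its result
  print (x, y)      -- (500500, 3628800) regardless of scheduling order
```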
Algebraic semantics (computer science): In computer science, algebraic semantics is a form of axiomatic semantics based on algebraic laws for describing and reasoning about program specifications in a formal manner. The syntax of an algebraic specification is formulated in two steps: (1) defining a formal signature of data types and operation symbols, and (2) interpreting the signature through sets and functions. The signature of an algebraic specification defines its formal syntax. The word "signature" is used here much as "key signature" is used in musical notation.
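The two-step structure can be illustrated with the usual stack example (a hypothetical rendering: a Haskell type class stands in for the signature, its instance for one interpretation, and the algebraic equations appear as comments):

```haskell
-- Step 1 (signature): sorts and operation symbols for a stack specification,
-- rendered as a type class; the laws below are the specification's equations.
class StackSpec stack where
  empty :: stack a
  push  :: a -> stack a -> stack a
  pop   :: stack a -> stack a
  top   :: stack a -> Maybe a
  -- Equations any interpretation must satisfy:
  --   pop (push x s) = s
  --   top (push x s) = Just x
  --   top empty      = Nothing

-- Step 2 (interpretation): the signature interpreted through concrete sets
-- and functions, here stacks as Haskell lists.
instance StackSpec [] where
  empty       = []
  push x s    = x : s
  pop []      = []
  pop (_ : s) = s
  top []      = Nothing
  top (x : _) = Just x
```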
Fixed-point theorem: In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
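A minimal numerical sketch of that iteration procedure (illustrative only; convergence here relies on the iterated function being a contraction near its fixed point, as the Banach theorem requires):

```haskell
-- Iterate f from a starting point until successive values agree to within a
-- small tolerance; for a contraction this converges to the fixed point.
fixedPoint :: (Double -> Double) -> Double -> Double
fixedPoint f = go
  where
    go x
      | abs (fx - x) < 1e-12 = fx   -- successive iterates agree: stop
      | otherwise            = go fx
      where fx = f x

main :: IO ()
main = print (fixedPoint cos 1.0)   -- ~0.7390851332, the fixed point of cos
```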