Combinatory logic
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic.
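
As an illustrative sketch (the names s, k, and i are ad hoc, not part of the original formulation), the basic S, K, and I combinators can be written as ordinary Haskell functions; defining the identity combinator as S K K shows how a function can be built with no variables mentioned in its definition.

    -- The S and K combinators as ordinary Haskell functions.
    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    k :: a -> b -> a
    k x _ = x

    -- The identity combinator is definable from S and K alone,
    -- with no variables appearing on the right-hand side.
    i :: a -> a
    i = s k k

    main :: IO ()
    main = print (i 42)   -- prints 42
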
Fixed-point combinator
In mathematics and computer science in general, a fixed point of a function is a value that is mapped to itself by the function. In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator) is a higher-order function that returns some fixed point of its argument function, if one exists. Formally, if fix is a fixed-point combinator and the function f has one or more fixed points, then fix f is one of these fixed points, i.e. fix f = f (fix f), and hence, by repeated application, fix f = f (f (… f (fix f) …)). In the classical untyped lambda calculus, every function has a fixed point.
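
A minimal Haskell sketch of this definition (written out here for clarity, though it mirrors fix from Data.Function): fix f satisfies fix f = f (fix f), which lets a recursive function such as factorial be expressed as the fixed point of a non-recursive step function.

    -- fix f is a fixed point of f: fix f = f (fix f).
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Factorial written without explicit recursion, as the fixed point
    -- of a non-recursive "step" function.
    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

    main :: IO ()
    main = print (factorial 5)   -- prints 120
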
Type system
In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type (for example, integer, floating point, string) to every "term" (a word, phrase, or other set of symbols). Usually the terms are various constructs of a computer program, such as variables, expressions, functions, or modules. A type system dictates the operations that can be performed on a term. For variables, the type system determines the allowed values of that term.
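
As a small illustration in Haskell (the names are arbitrary), each term below is assigned a type, and that type determines which operations on the term the compiler accepts.

    count :: Int            -- the term 'count' is assigned the type Int
    count = 3

    name :: String          -- the term 'name' is assigned the type String
    name = "three"

    total :: Int
    total = count + 1       -- (+) is permitted on Int

    -- 'name + 1' would be rejected: the type system does not allow
    -- numeric addition on a String.
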
Nominal type system
In computer science, a type system is nominal, nominative, or name-based if compatibility and equivalence of data types are determined by explicit declarations and/or the names of the types. Nominal systems are used to determine if types are equivalent, as well as if a type is a subtype of another. Nominal type systems contrast with structural systems, where comparisons are based on the structure of the types in question and do not require explicit declarations.
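
Haskell's newtype declarations give a small illustration of name-based compatibility (the Meters and Feet names are invented for this sketch): the two types below have identical structure but are treated as distinct because their declared names differ.

    -- Two types with identical structure: a single Double field.
    newtype Meters = Meters Double
    newtype Feet   = Feet   Double

    addMeters :: Meters -> Meters -> Meters
    addMeters (Meters a) (Meters b) = Meters (a + b)

    ok :: Meters
    ok = addMeters (Meters 1.0) (Meters 2.0)

    -- addMeters (Feet 1.0) (Meters 2.0) is rejected: a nominal check
    -- compares declared names, not structure, so Feet and Meters are
    -- incompatible even though both wrap a Double.
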
Substructural type system
Substructural type systems are a family of type systems analogous to substructural logics, where one or more of the structural rules are absent or only allowed under controlled circumstances. Such systems are useful for constraining access to system resources such as files, locks, and memory by keeping track of changes of state that occur and preventing invalid states. Several type systems have emerged by discarding some of the structural rules of exchange, weakening, and contraction; for example, ordered type systems (which discard exchange, weakening, and contraction) require that every variable is used exactly once, in the order it was introduced.
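
As one concrete example of a linear (rather than ordered) discipline, GHC's LinearTypes extension, available from GHC 9.0, adds a multiplicity-1 function arrow; the sketch below is illustrative and assumes that extension is enabled.

    {-# LANGUAGE LinearTypes #-}

    -- The argument of a (%1 ->) function must be consumed exactly once.
    consumeOnce :: a %1 -> a
    consumeOnce x = x

    -- A definition such as
    --   dup :: a %1 -> (a, a)
    --   dup x = (x, x)
    -- is rejected because x is used twice (contraction is disallowed), and
    --   discard :: a %1 -> ()
    --   discard _ = ()
    -- is rejected because the argument is never used (weakening is disallowed).
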
Pure type system
In the branches of mathematical logic known as proof theory and type theory, a pure type system (PTS), previously known as a generalized type system (GTS), is a form of typed lambda calculus that allows an arbitrary number of sorts and dependencies between any of these. The framework can be seen as a generalisation of Barendregt's lambda cube, in the sense that all corners of the cube can be represented as instances of a PTS with just two sorts. In fact, Barendregt (1991) framed his cube in this setting.
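
A pure type system is conventionally specified by a set of sorts S, axioms A ⊆ S × S, and rules R ⊆ S × S × S; the Haskell sketch below (with made-up names) records such a specification and instantiates one corner of the lambda cube, the simply typed lambda calculus, whose only rule is (*, *, *).

    -- A PTS specification: sorts, axioms (s1 : s2), and rules (s1, s2, s3).
    data Spec s = Spec
      { sorts  :: [s]
      , axioms :: [(s, s)]
      , rules  :: [(s, s, s)]
      }

    data Sort = Star | Box deriving (Eq, Show)

    -- The lambda-cube corner for the simply typed lambda calculus:
    -- two sorts, the axiom Star : Box, and the single rule (Star, Star, Star).
    simplyTyped :: Spec Sort
    simplyTyped = Spec [Star, Box] [(Star, Box)] [(Star, Star, Star)]
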
Type theory
In mathematics, logic, and computer science, a type theory is the formal presentation of a specific type system, and in general, type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that were proposed as foundations are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory. Most computerized proof-writing systems use a type theory for their foundation; a common one is Thierry Coquand's Calculus of Inductive Constructions.
Dataflow
In computing, dataflow is a broad concept which has various meanings depending on the application and context. In the context of software architecture, data flow relates to stream processing or reactive programming. Dataflow computing is a software paradigm based on the idea of representing computations as a directed graph, where nodes are computations and data flows along the edges. Dataflow in this sense is also called stream processing or reactive programming. There have been multiple data-flow/stream processing languages of various forms (see Stream processing).
Dataflow programming
In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture. Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm.
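
A minimal sketch of the idea in Haskell (the graph, node names, and eval function are invented for illustration): the program is a directed graph whose nodes are computations, and evaluating a node pulls values along its incoming edges.

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- A node is either a constant source or an operation applied to the
    -- values flowing in from named predecessor nodes.
    data Node = Source Double
              | Op ([Double] -> Double) [String]

    -- Evaluate a node by evaluating the nodes its inputs flow from.
    eval :: Map String Node -> String -> Double
    eval graph name = case graph Map.! name of
      Source v    -> v
      Op f inputs -> f (map (eval graph) inputs)

    -- a, b --(+)--> total --(double)--> result
    example :: Map String Node
    example = Map.fromList
      [ ("a",      Source 3)
      , ("b",      Source 4)
      , ("total",  Op sum ["a", "b"])
      , ("result", Op ((* 2) . head) ["total"])
      ]

    main :: IO ()
    main = print (eval example "result")   -- prints 14.0
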
Stream processing
In computer science, stream processing (also known as event stream processing, data stream processing, or distributed stream processing) is a programming paradigm which views streams, or sequences of events in time, as the central input and output objects of computation. Stream processing encompasses dataflow programming, reactive programming, and distributed data processing. Stream processing systems aim to expose parallel processing for data streams and rely on streaming algorithms for efficient implementation.
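
A small Haskell sketch (the data and function names are invented for illustration): the stream of events is modelled as an infinite lazy list, and the pipeline filters and windows it without ever holding the whole stream in memory.

    -- An endless stream of sensor readings, modelled as an infinite list.
    readings :: [Double]
    readings = cycle [18.5, 19.0, 21.2, 22.8, 20.1]

    -- Average of each consecutive, non-overlapping window of n events.
    windowAverages :: Int -> [Double] -> [Double]
    windowAverages n xs = map average (windows xs)
      where
        windows ys = take n ys : windows (drop n ys)
        average w  = sum w / fromIntegral n

    -- Filter the stream, window it, and print the first three window averages.
    main :: IO ()
    main = mapM_ print (take 3 (windowAverages 5 (filter (> 19) readings)))
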