Type class
In computer science, a type class is a type system construct that supports ad hoc polymorphism. This is achieved by adding constraints to type variables in parametrically polymorphic types. Such a constraint typically involves a type class T and a type variable a, and means that a can only be instantiated to a type whose members support the overloaded operations associated with T.
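The constraint mechanism can be illustrated with a minimal Haskell sketch; the class MyEq and the function elem' below are illustrative stand-ins (mirroring the standard Eq class and elem), not part of the article text.

    -- The class declares the overloaded operation eq.
    class MyEq a where
      eq :: a -> a -> Bool

    -- An instance makes Bool a member of the class.
    instance MyEq Bool where
      eq True  True  = True
      eq False False = True
      eq _     _     = False

    -- The constraint "MyEq a =>" restricts the type variable a to types
    -- whose members support the overloaded operation eq.
    elem' :: MyEq a => a -> [a] -> Bool
    elem' x = any (eq x)

    main :: IO ()
    main = print (elem' True [False, True])   -- True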
Dynamic program analysis
Dynamic program analysis is analysis of computer software that involves executing the program in question (as opposed to static program analysis, which does not). Dynamic program analysis includes familiar techniques from software engineering such as unit testing, debugging, and measuring code coverage, but also includes lesser-known techniques like program slicing and invariant inference. Dynamic program analysis is widely applied in security in the form of runtime memory error detection, fuzzing, dynamic symbolic execution, and taint tracking.
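Unit testing, the most familiar of these techniques, already shows the defining trait: conclusions are drawn by actually executing the code on chosen inputs rather than inspecting its source. A minimal Haskell sketch of that idea follows; the names median3 and oracle are made up for the example.

    import Data.List (sort)

    -- Candidate implementation under test.
    median3 :: Int -> Int -> Int -> Int
    median3 a b c = max (min a b) (min (max a b) c)

    -- Reference oracle: sort the three values and take the middle one.
    oracle :: Int -> Int -> Int -> Int
    oracle a b c = sort [a, b, c] !! 1

    -- Dynamic check: run both on sample inputs and compare observed results.
    main :: IO ()
    main = mapM_ check [(1,2,3), (3,1,2), (2,3,1), (5,5,5)]
      where
        check (a, b, c)
          | median3 a b c == oracle a b c = putStrLn ("ok   " ++ show (a, b, c))
          | otherwise                     = putStrLn ("FAIL " ++ show (a, b, c))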
Side effect (computer science)
In computer science, an operation, function or expression is said to have a side effect if it modifies some state variable value(s) outside its local environment, which is to say if it has any observable effect other than its primary effect of returning a value to the invoker of the operation. Example side effects include modifying a non-local variable, modifying a static local variable, modifying a mutable argument passed by reference, performing I/O, or calling other functions with side effects.
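A short Haskell sketch of the distinction (the names double and doubleAndCount are made up for the example):

    import Data.IORef

    -- No side effects: the only observable effect is the returned value.
    double :: Int -> Int
    double x = 2 * x

    -- Side effects: mutates state outside its local environment (the IORef)
    -- and performs I/O, in addition to returning a value.
    doubleAndCount :: IORef Int -> Int -> IO Int
    doubleAndCount counter x = do
      modifyIORef' counter (+ 1)          -- modifies non-local mutable state
      putStrLn ("doubling " ++ show x)    -- observable I/O effect
      return (2 * x)

    main :: IO ()
    main = do
      calls <- newIORef (0 :: Int)
      y <- doubleAndCount calls 21
      n <- readIORef calls
      print (y, n)   -- (42, 1)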
Pure function
In computer programming, a pure function is a function that has the following properties: the function return values are identical for identical arguments (no variation with local static variables, non-local variables, mutable reference arguments or input streams), and the function has no side effects (no mutation of local static variables, non-local variables, mutable reference arguments or input/output streams). Some authors, particularly from the imperative language community, use the term "pure" for all functions that just have the second of these properties.
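A minimal Haskell sketch of the first property, assuming hypothetical names hypotenuse and nextLabel:

    import Data.IORef

    -- Pure: identical arguments always yield identical results, and the
    -- call has no observable effect besides its return value.
    hypotenuse :: Double -> Double -> Double
    hypotenuse a b = sqrt (a * a + b * b)

    -- Not pure: the result depends on hidden mutable state, so identical
    -- arguments can yield different results across calls.
    nextLabel :: IORef Int -> String -> IO String
    nextLabel counter prefix = do
      n <- readIORef counter
      writeIORef counter (n + 1)
      return (prefix ++ show n)

    main :: IO ()
    main = do
      print (hypotenuse 3 4 == hypotenuse 3 4)  -- True, always
      c <- newIORef 0
      a <- nextLabel c "item"
      b <- nextLabel c "item"
      print (a, b)   -- ("item0","item1"): same argument, different results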
Composition operator
In mathematics, the composition operator C_φ with symbol φ is a linear operator defined by the rule C_φ(f) = f ∘ φ, where ∘ denotes function composition. The study of composition operators is covered by AMS category 47B33. In physics, and especially the area of dynamical systems, the composition operator is usually referred to as the Koopman operator (and its wild surge in popularity is sometimes jokingly called "Koopmania"), named after Bernard Koopman. It is the left adjoint of the Frobenius–Perron transfer operator.
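A small Haskell sketch of the defining rule, modelling functions on the reals as Double -> Double; compOp is an illustrative name, not standard library vocabulary.

    -- The composition operator with symbol phi sends a function f to f . phi.
    compOp :: (Double -> Double) -> (Double -> Double) -> (Double -> Double)
    compOp phi f = f . phi

    main :: IO ()
    main = do
      let phi = (* 2)               -- the symbol: x maps to 2x
          f   = \x -> x + 1
          g   = compOp phi f        -- (C_phi f)(x) = f(2x) = 2x + 1
      print (g 3)                   -- 7.0
      -- Linearity in f: C_phi (3*f1 + f2) = 3*(C_phi f1) + C_phi f2, pointwise.
      let f1  = \x -> x * x
          f2  = sin
          lhs = compOp phi (\x -> 3 * f1 x + f2 x)
          rhs = \x -> 3 * compOp phi f1 x + compOp phi f2 x
      print (lhs 1.5 == rhs 1.5)    -- True (up to floating point)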
Operator (mathematics)
In mathematics, an operator is generally a mapping or function that acts on elements of a space to produce elements of another space (possibly, and sometimes required to be, the same space). There is no general definition of an operator, but the term is often used in place of function when the domain is a set of functions or other structured objects. Also, the domain of an operator is often difficult to characterize explicitly (for example in the case of an integral operator), and may be extended so as to act on related objects (an operator that acts on functions may also act on differential equations whose solutions are functions that satisfy the equation).
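As an illustration of a map whose domain is itself a set of functions, here is a minimal Haskell sketch of a forward-difference differentiation operator; the name diffOp and the step size are choices made for this example.

    -- The operator sends a function Double -> Double to another such function.
    diffOp :: Double -> (Double -> Double) -> (Double -> Double)
    diffOp h f = \x -> (f (x + h) - f x) / h

    main :: IO ()
    main = do
      let f  = \x -> x * x
          f' = diffOp 1e-6 f     -- approximates the derivative 2x
      print (f' 3)               -- roughly 6.000001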
Composition of relations
In the mathematics of binary relations, the composition of relations is the forming of a new binary relation R; S from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication, and its result is called a relative product. Function composition is the special case of composition of relations where all relations involved are functions. The word uncle indicates a compound relation: for a person to be an uncle, he must be the brother of a parent.
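The uncle example can be sketched in Haskell with relations modelled as lists of ordered pairs; the helper names and the toy family data are made up for the illustration.

    import Data.List (nub)

    type Rel a = [(a, a)]

    -- (x, z) is in compose r s exactly when some y has (x, y) in r and (y, z) in s.
    compose :: Eq a => Rel a -> Rel a -> Rel a
    compose r s = nub [ (x, z) | (x, y) <- r, (y', z) <- s, y == y' ]

    brotherOf, parentOf :: Rel String
    brotherOf = [("Bob", "Alice")]                       -- Bob is Alice's brother
    parentOf  = [("Alice", "Carol"), ("Alice", "Dave")]  -- Alice is a parent of Carol and Dave

    -- "x is an uncle of z" = "x is the brother of a parent of z"
    uncleOf :: Rel String
    uncleOf = compose brotherOf parentOf

    main :: IO ()
    main = print uncleOf   -- [("Bob","Carol"),("Bob","Dave")]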
Induction of regular languages
In computational learning theory, induction of regular languages refers to the task of learning a formal description (e.g. grammar) of a regular language from a given set of example strings. Although E. Mark Gold has shown that not every regular language can be learned this way (see language identification in the limit), approaches have been investigated for a variety of subclasses. They are sketched in this article. For learning of more general grammars, see Grammar induction.
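A common first step in state-merging approaches such as RPNI is to build a prefix tree acceptor: a trie-shaped automaton that accepts exactly the given positive example strings, which is then generalized by merging states. A minimal Haskell sketch of that first step follows (the names PTA, buildPTA and accepts are illustrative, and the containers package is assumed).

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- A prefix tree acceptor: a trie whose accepting nodes are exactly
    -- the given example strings.
    data PTA = PTA { accepting :: Bool, edges :: Map Char PTA } deriving Show

    emptyPTA :: PTA
    emptyPTA = PTA False Map.empty

    insertWord :: String -> PTA -> PTA
    insertWord []       (PTA _ es) = PTA True es
    insertWord (c : cs) (PTA a es) =
      PTA a (Map.insert c (insertWord cs (Map.findWithDefault emptyPTA c es)) es)

    buildPTA :: [String] -> PTA
    buildPTA = foldr insertWord emptyPTA

    accepts :: PTA -> String -> Bool
    accepts (PTA a _)  []       = a
    accepts (PTA _ es) (c : cs) = maybe False (`accepts` cs) (Map.lookup c es)

    main :: IO ()
    main = do
      let pta = buildPTA ["ab", "abab", "b"]
      print (accepts pta "abab")  -- True
      print (accepts pta "aba")   -- False: only the examples are accepted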
Regular grammar
In theoretical computer science and formal language theory, a regular grammar is a grammar that is right-regular or left-regular. While their exact definition varies from textbook to textbook, they all require that all production rules have at most one non-terminal symbol; that symbol is either always at the end or always at the start of the rule's right-hand side. Every regular grammar describes a regular language.
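As a small illustration, here is a Haskell sketch of a right-regular grammar with rules S -> aS | b, which generates the regular language a*b; the representation and the helper derives are choices made for this example.

    data Symbol = T Char | N String deriving (Eq, Show)

    type Rule = (String, [Symbol])   -- left-hand non-terminal, right-hand side

    -- Right-regular: at most one non-terminal per rule, always at the end.
    grammar :: [Rule]
    grammar =
      [ ("S", [T 'a', N "S"])   -- S -> a S
      , ("S", [T 'b'])          -- S -> b
      ]

    -- Does the non-terminal nt derive the string w under the grammar?
    derives :: String -> String -> Bool
    derives nt w = any step [ rhs | (lhs, rhs) <- grammar, lhs == nt ]
      where
        step [T c]        = w == [c]
        step [T c, N nt'] =
          case w of
            (c' : rest) -> c == c' && derives nt' rest
            []          -> False
        step _            = False

    main :: IO ()
    main = print (map (derives "S") ["b", "aab", "aba"])  -- [True,True,False]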
Partial application
In computer science, partial application (or partial function application) refers to the process of fixing a number of arguments to a function, producing another function of smaller arity. Given a function f : (X × Y × Z) → N, we might fix (or 'bind') the first argument, producing a function of type partial(f) : (Y × Z) → N. Evaluation of this function might be represented as f_partial(2, 3). Note that the result of partial function application in this case is a function that takes two arguments. Partial application is sometimes incorrectly called currying, which is a related, but distinct concept.
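A minimal Haskell sketch (the names volume and scaleBy2 are made up for the example); because Haskell functions are curried, fixing the first argument is just applying the function to fewer arguments than it ultimately consumes.

    volume :: Double -> Double -> Double -> Double
    volume x y z = x * y * z

    -- Fix the first argument: the result still expects the remaining two.
    scaleBy2 :: Double -> Double -> Double
    scaleBy2 = volume 2

    main :: IO ()
    main = do
      print (scaleBy2 3 4)                 -- 24.0, i.e. volume 2 3 4
      print (map (volume 2 3) [1, 2, 3])   -- inline partial application: [6.0,12.0,18.0]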