Dynamic program analysis
Dynamic program analysis is analysis of computer software that involves executing the program in question (as opposed to static program analysis, which does not). Dynamic program analysis includes familiar techniques from software engineering such as unit testing, debugging, and measuring code coverage, but also includes lesser-known techniques like program slicing and invariant inference. Dynamic program analysis is widely applied in security in the form of runtime memory error detection, fuzzing, dynamic symbolic execution, and taint tracking.
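As a small illustration of the dynamic approach, the following sketch (hypothetical example code, not taken from any particular tool) records which lines of a function actually execute by observing it at run time through Python's sys.settrace hook, a bare-bones form of code-coverage measurement:

    import sys

    def trace_coverage(func, *args):
        """Run func(*args) and return the set of line numbers it executed."""
        executed = set()

        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is func.__code__:
                executed.add(frame.f_lineno)
            return tracer

        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return executed

    def absolute(x):
        if x < 0:
            return -x
        return x

    print(trace_coverage(absolute, 5))   # only the lines on the x >= 0 path

Because the answer depends on the particular input (here 5), the negative branch is reported as unexecuted, which is exactly the kind of information a static analysis cannot obtain without reasoning about all possible inputs.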
Data-flow analysis
Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control-flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions.
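A minimal sketch of the classic iterative approach to reaching definitions, using a hypothetical three-node CFG invented for illustration: each node kills earlier definitions of the variables it assigns and generates its own, and the analysis iterates until the OUT sets stop changing.

    # Hypothetical CFG: node -> successors / predecessors (B3 loops back to B2)
    succ = {"B1": ["B2"], "B2": ["B3"], "B3": ["B2"]}
    pred = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}

    # Definitions generated in each node: (variable, definition id)
    gen = {"B1": {("x", "d1")}, "B2": {("y", "d2")}, "B3": {("x", "d3")}}

    def kill(node, all_defs):
        """Definitions of the same variables that this node overwrites."""
        vars_defined = {v for v, _ in gen[node]}
        return {(v, d) for v, d in all_defs if v in vars_defined} - gen[node]

    all_defs = set().union(*gen.values())
    out = {n: set() for n in succ}

    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for n in succ:
            in_n = set().union(*(out[p] for p in pred[n])) if pred[n] else set()
            new_out = gen[n] | (in_n - kill(n, all_defs))
            if new_out != out[n]:
                out[n], changed = new_out, True

    print(out)   # which definitions reach the exit of each node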
Combinatory logic
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic.
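As a rough sketch in Python rather than in a formal calculus, the two standard combinators S and K can be written as curried functions, and the identity combinator I recovered as S K K, illustrating how terms are built without ever naming a bound variable:

    # The K combinator: K x y = x
    K = lambda x: lambda y: x

    # The S combinator: S x y z = (x z) (y z)
    S = lambda x: lambda y: lambda z: x(z)(y(z))

    # Identity is definable without mentioning any variable: I = S K K
    I = S(K)(K)

    print(I(42))          # 42
    print(K("a")("b"))    # 'a'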
Covariance and contravariance (computer science)
Many programming language type systems support subtyping. For instance, if the type Cat is a subtype of Animal, then an expression of type Cat should be substitutable wherever an expression of type Animal is used. Variance is how subtyping between more complex types relates to subtyping between their components. For example, how should a list of Cats relate to a list of Animals? Or how should a function that returns Cat relate to a function that returns Animal? Depending on the variance of the type constructor, the subtyping relation of the simple types may be either preserved, reversed, or ignored for the respective complex types.
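A brief sketch of the three outcomes using Python's type annotations and a hypothetical Animal/Cat hierarchy; the variance is only visible to a static checker such as mypy, since Python itself does not enforce annotations at run time:

    from typing import Callable, List, Sequence

    class Animal: ...
    class Cat(Animal): ...

    def feed_all(animals: Sequence[Animal]) -> None: ...
    def adopt() -> Cat: ...

    cats: List[Cat] = [Cat()]

    # Sequence is covariant in its element type, so a List[Cat] is accepted
    # where a Sequence[Animal] is expected: the subtyping is preserved.
    feed_all(cats)

    # Callable is covariant in its return type: a function returning Cat
    # may be used where a function returning Animal is required.
    get_animal: Callable[[], Animal] = adopt

    # List is invariant: a static checker rejects the next line, because
    # code holding a List[Animal] could insert some other Animal into it.
    # animals: List[Animal] = cats   # rejected by a type checker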
Strong and weak typing
In computer programming, one of the many ways that programming languages are colloquially classified is whether the language's type system makes it strongly typed or weakly typed (loosely typed). However, there is no precise technical definition of what these terms mean, and different authors disagree about their implied meaning and about the relative rankings of the "strength" of the type systems of mainstream programming languages.
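One behaviour often cited in these informal rankings is whether operands of mismatched types are implicitly coerced. A small sketch in Python, which is usually described as strongly typed on this criterion:

    # Python refuses to mix types implicitly:
    try:
        result = "2" + 2          # TypeError: str and int cannot be concatenated
    except TypeError as e:
        print("rejected:", e)

    # A language usually described as weakly typed (e.g. JavaScript) would
    # silently coerce the operands and yield the string "22"; in Python the
    # conversion must be written out explicitly.
    print("2" + str(2))           # "22"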
Critical path method
The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish. It is commonly used in conjunction with the program evaluation and review technique (PERT). The CPM is a project-modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley Jr. of Remington Rand.
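A minimal sketch of the longest-path computation behind CPM, using a small hypothetical activity network (task names, durations, and dependencies invented for illustration):

    # Hypothetical activities: name -> (duration, predecessors)
    tasks = {
        "design":  (3, []),
        "build":   (5, ["design"]),
        "test":    (2, ["build"]),
        "docs":    (4, ["design"]),
        "release": (1, ["test", "docs"]),
    }

    finish = {}   # earliest finish time of each task
    via = {}      # predecessor chosen on the longest (critical) path

    def earliest_finish(task):
        if task in finish:
            return finish[task]
        duration, preds = tasks[task]
        start = 0
        for p in preds:
            if earliest_finish(p) > start:
                start, via[task] = earliest_finish(p), p
        finish[task] = start + duration
        return finish[task]

    end = max(tasks, key=earliest_finish)
    path = [end]
    while path[-1] in via:
        path.append(via[path[-1]])
    print("critical path:", " -> ".join(reversed(path)), "length", finish[end])

On this data the critical path is design -> build -> test -> release with length 11; shortening any activity off that path (here, docs) does not shorten the project.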
Quantifier (logic)
In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes the widest scope is called a quantified formula.
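Over a finite domain the two quantifiers can be read directly as Python's all and any; the sketch below illustrates their meaning over a small hypothetical domain rather than showing any particular logic library:

    domain = range(1, 6)          # a small finite domain of discourse
    P = lambda x: x > 0           # the property P

    # Universal quantifier: "for all x, P(x)"
    print(all(P(x) for x in domain))          # True: every element satisfies P

    # Existential quantifier: "there exists x such that P(x)"
    print(any(x % 4 == 0 for x in domain))    # True: 4 is in the domain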
Second-order logic
In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory. First-order logic quantifies only variables that range over individuals (elements of the domain of discourse); second-order logic, in addition, also quantifies over relations. For example, the second-order sentence ∀P ∀x (Px ∨ ¬Px) says that for every formula P, and every individual x, either Px is true or not(Px) is true (this is the law of excluded middle).
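On a finite domain, quantification over unary relations can be simulated by enumerating every subset of the domain as a possible predicate P, which makes the excluded-middle sentence above directly checkable. This is a brute-force sketch over a hypothetical three-element domain, not a general decision procedure:

    from itertools import combinations

    domain = {0, 1, 2}

    def all_unary_predicates(dom):
        """Every subset of the domain, viewed as a predicate P."""
        elems = list(dom)
        for r in range(len(elems) + 1):
            for subset in combinations(elems, r):
                yield frozenset(subset)

    # Check the sentence: for every P and every x, P(x) or not P(x)
    holds = all((x in P) or (x not in P)
                for P in all_unary_predicates(domain)
                for x in domain)
    print(holds)   # True: the law of excluded middle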
Foreach loop
In computer programming, a foreach loop (or for-each loop) is a control flow statement for traversing items in a collection. Foreach is usually used in place of a standard for loop statement. Unlike other for loop constructs, however, foreach loops usually maintain no explicit counter: they essentially say "do this to everything in this set", rather than "do this x times". This avoids potential off-by-one errors and makes code simpler to read. In object-oriented languages, an iterator, even if implicit, is often used as the means of traversal.
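Python's for statement is a foreach loop; the short sketch below contrasts it with the index-based style it replaces:

    colors = ["red", "green", "blue"]

    # Index-based loop: the counter i is pure bookkeeping, and an easy
    # place for off-by-one mistakes.
    for i in range(len(colors)):
        print(colors[i])

    # Foreach loop: no explicit counter, the iterator drives the traversal.
    for color in colors:
        print(color)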
Branching quantifier
In logic, a branching quantifier, also called a Henkin quantifier, finite partially ordered quantifier or even nonlinear quantifier, is a partial ordering of quantifiers for Q ∈ {∀, ∃}. It is a special case of a generalized quantifier. In classical logic, quantifier prefixes are linearly ordered, such that the value of a variable y_m bound by a quantifier Q_m depends on the values of the variables y_1, ..., y_{m−1} bound by the quantifiers Q_{y_1}, ..., Q_{y_{m−1}} preceding Q_m. In a logic with (finite) partially ordered quantification this is not in general the case.
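On a finite domain the branching-quantifier semantics can be made concrete: the Henkin prefix (∀x ∃y)(∀u ∃v) φ holds when there are Skolem functions f and g such that y = f(x) depends only on x and v = g(u) depends only on u, with φ(x, f(x), u, g(u)) true for all x and u. The brute-force sketch below, with a hypothetical matrix formula φ chosen purely for illustration, simply tries every pair of such functions:

    from itertools import product

    domain = [0, 1, 2]

    def phi(x, y, u, v):
        # Hypothetical matrix formula: x + y and u + v must agree modulo 3.
        return (x + y) % 3 == (u + v) % 3

    def henkin_holds(dom, phi):
        """(forall x exists y)(forall u exists v) phi, with y = f(x), v = g(u)."""
        functions = [dict(zip(dom, values))
                     for values in product(dom, repeat=len(dom))]
        return any(all(phi(x, f[x], u, g[u]) for x in dom for u in dom)
                   for f in functions for g in functions)

    print(henkin_holds(domain, phi))   # True, e.g. with f(x) = g(x) = (3 - x) % 3

Under the linearly ordered prefix ∀x ∃y ∀u ∃v, by contrast, v would be allowed to depend on x and y as well; the branching prefix is exactly the restriction of those dependencies.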