Low (complexity)
In computational complexity theory, a language B (or a complexity class B) is said to be low for a complexity class A (with some reasonable relativized version of A) if A^B = A; that is, A with an oracle for B is equal to A. Such a statement implies that an abstract machine which solves problems in A achieves no additional power if it is given the ability to solve problems in B at unit cost. In particular, this means that if B is low for A then B is contained in A.
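In symbols, with one standard example of a lowness result (stated here from general knowledge rather than taken from the entry):

    $B$ is low for $A$ $\iff$ $A^{B} = A$; for example,
    $\mathrm{NP}^{\,\mathrm{NP} \cap \mathrm{coNP}} = \mathrm{NP}$, so
    $\mathrm{NP} \cap \mathrm{coNP}$ is low for $\mathrm{NP}$.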
Worst-case complexity
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires in the worst case over all inputs of a given size (commonly denoted n in asymptotic notation). It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time of the algorithm over all inputs of size n, and thus guarantees that the algorithm will finish within the indicated period of time.
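As a small worked illustration (a sketch; the linear_search helper and its comparison counter are invented for this example): linear search over a list of length n makes at most n comparisons, and the bound is attained when the target is absent or sits in the last position, so its worst-case time complexity is Θ(n).

    # Worst-case analysis of linear search: at most n comparisons on a list of
    # length n, and the bound is attained when the target is missing.
    def linear_search(items, target):
        """Return (index, comparisons); index is -1 if target is absent."""
        comparisons = 0
        for i, x in enumerate(items):
            comparisons += 1
            if x == target:
                return i, comparisons
        return -1, comparisons

    n = 1000
    _, best = linear_search(list(range(n)), 0)    # best case: 1 comparison
    _, worst = linear_search(list(range(n)), -1)  # worst case: n comparisons
    print(best, worst)                            # 1 1000 -> worst case is Theta(n)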
Property testing
In computer science, a property testing algorithm for a decision problem is an algorithm whose query complexity (the number of queries it makes to its input) is much smaller than the instance size of the problem. Typically, property testing algorithms are used to distinguish whether some combinatorial structure S (such as a graph or a Boolean function) satisfies some property P, or is "far" from having this property (meaning that an ε-fraction of the representation of S must be modified in order to make S satisfy P), using only a small number of "local" queries to the object.
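For a concrete example, consider the property "every entry of an array equals its first entry". If more than an ε-fraction of the entries would have to change to make the array constant, then a uniformly random position disagrees with the first entry with probability greater than ε, so O(1/ε) queries suffice. The tester below is a minimal sketch (the function name and the choice of roughly 2/ε trials are made up for this illustration).

    import random

    def test_constant(a, eps):
        """One-sided property tester: always accepts a constant array; rejects
        with probability >= 2/3 any array that is eps-far from constant (i.e.
        more than eps*len(a) entries must change to make it constant).
        Query complexity is O(1/eps), independent of len(a)."""
        if not a:
            return True
        trials = int(2 / eps) + 1          # (1 - eps)^(2/eps) < e^-2 < 1/3
        reference = a[0]                   # one query
        for _ in range(trials):
            i = random.randrange(len(a))   # one "local" query per trial
            if a[i] != reference:
                return False               # witness found: definitely not constant
        return True

    print(test_constant([7] * 10**6, eps=0.1))                 # always True
    print(test_constant([7] * 500000 + [3] * 500000, eps=0.1)) # False w.p. >= 2/3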
Amortized analysis
In computer science, amortized analysis is a method for analyzing a given algorithm's complexity, that is, how much of a resource, especially time or memory, it takes to execute. The motivation for amortized analysis is that looking only at the worst-case run time per operation can be too pessimistic. Instead, amortized analysis averages the running times of the operations in a sequence over that sequence, and in this way complements other techniques such as worst-case and average-case analysis.
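The textbook example is a dynamic array that doubles its capacity whenever it fills: a single append can trigger Θ(n) element copies, yet any sequence of n appends performs fewer than 2n copies in total, so the amortized cost per append is O(1). The class below is a minimal sketch written for this illustration (the copy counter exists only to expose the accounting).

    class DynamicArray:
        """Append-only array that doubles capacity when full; counts element copies."""

        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity
            self.copies = 0                    # total elements moved during resizes

        def append(self, value):
            if self.size == self.capacity:     # occasionally expensive...
                self.capacity *= 2
                new_data = [None] * self.capacity
                for i in range(self.size):
                    new_data[i] = self.data[i]
                    self.copies += 1
                self.data = new_data
            self.data[self.size] = value       # ...but usually O(1)
            self.size += 1

    arr = DynamicArray()
    n = 10**5
    for i in range(n):
        arr.append(i)
    # Total copy work stays below 2n, so the amortized cost per append is O(1).
    print(arr.copies, arr.copies / n)          # 131071  ~1.3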
Exponential time hypothesis
In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption formulated by Impagliazzo and Paturi (1999). It states that satisfiability of 3-CNF Boolean formulas cannot be solved in subexponential time, i.e., in time 2^o(n), where n is the number of variables in the formula. The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement.
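Stated more formally, in its standard form:

    \textbf{Exponential Time Hypothesis.} There is a constant $s_3 > 0$ such that
    3-SAT on $n$ variables cannot be solved in time $O(2^{s_3 n})$; equivalently,
    3-SAT admits no algorithm running in time $2^{o(n)}$.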
Randomized rounding
Within computer science and operations research, many combinatorial optimization problems are computationally intractable to solve exactly (to optimality). Many such problems do admit fast (polynomial-time) approximation algorithms, that is, algorithms that are guaranteed to return an approximately optimal solution given any input. Randomized rounding is a widely used approach for designing and analyzing such approximation algorithms.
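As a minimal sketch of the technique for MAX-SAT (illustrative only: the fractional values are assumed to come from some LP relaxation, which is not solved here, and the function names are invented for the example), interpret each fractional value x_i in [0,1] as the probability of setting variable i to true and round the variables independently; repeating the rounding and keeping the best outcome is a common practical refinement.

    import random

    def randomized_round(fractional_x, clauses, trials=100):
        """Round a fractional assignment x in [0,1]^n to a 0/1 assignment by
        setting each variable True independently with probability x_i, keeping
        the best of several independent roundings.  Clauses are lists of
        literals: +i means variable i, -i its negation (variables start at 1)."""

        def satisfied(assign, clause):
            return any(assign[abs(l)] if l > 0 else not assign[abs(l)] for l in clause)

        best_assign, best_count = None, -1
        for _ in range(trials):
            assign = {i: random.random() < xi
                      for i, xi in enumerate(fractional_x, start=1)}
            count = sum(satisfied(assign, c) for c in clauses)
            if count > best_count:
                best_assign, best_count = assign, count
        return best_assign, best_count

    # Hypothetical fractional solution (e.g. from an LP relaxation of MAX-SAT).
    x_frac = [0.9, 0.5, 0.1]
    clauses = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]
    assignment, satisfied_count = randomized_round(x_frac, clauses)
    print(satisfied_count, "of", len(clauses), "clauses satisfied")

For the standard LP relaxation of MAX-SAT, a classical analysis shows that this independent rounding satisfies each clause with probability at least (1 - 1/e) times the clause's fractional value, so the expected number of satisfied clauses is within a factor 1 - 1/e of the LP optimum.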
DPLL algorithm
In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem. It was introduced in 1961 by Martin Davis, George Logemann and Donald W. Loveland and is a refinement of the earlier Davis–Putnam algorithm, which is a resolution-based procedure developed by Davis and Hilary Putnam in 1960.
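The sketch below shows the classic DPLL loop, unit propagation and pure-literal elimination followed by branching on a literal (the clause representation and helper names are choices made for this illustration; real solvers add clause learning, decision heuristics, and much more).

    def dpll(clauses, assignment=None):
        """Decide satisfiability of a CNF formula given as a list of clauses, each
        clause a list of nonzero ints (+v = variable v, -v = its negation).
        Returns a satisfying assignment as a dict, or None if unsatisfiable."""
        if assignment is None:
            assignment = {}
        if any(not c for c in clauses):
            return None                        # an empty clause is unsatisfiable

        def condition(cls, lit):
            """Simplify the clause set under the assumption that lit is true."""
            new = []
            for c in cls:
                if lit in c:
                    continue                   # clause satisfied, drop it
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    return None                # clause falsified: conflict
                new.append(reduced)
            return new

        # Unit propagation and pure-literal elimination until a fixed point.
        changed = True
        while changed:
            changed = False
            units = [c[0] for c in clauses if len(c) == 1]
            literals = {l for c in clauses for l in c}
            pures = [l for l in literals if -l not in literals]
            for lit in units + pures:
                if -lit in units:
                    return None                # contradictory unit clauses
                clauses = condition(clauses, lit)
                if clauses is None:
                    return None
                assignment[abs(lit)] = lit > 0
                changed = True
                break                          # recompute units/pures afresh

        if not clauses:
            return assignment                  # every clause satisfied
        lit = clauses[0][0]                    # branch on a remaining literal
        for choice in (lit, -lit):
            reduced = condition(clauses, choice)
            if reduced is not None:
                result = dpll(reduced, {**assignment, abs(choice): choice > 0})
                if result is not None:
                    return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # {1: True, 3: True, 2: False}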
♯P-complete
The #P-complete problems (pronounced "sharp P complete" or "number P complete") form a complexity class in computational complexity theory. The problems in this class are defined by two properties: the problem is in #P, the class of problems that can be defined as counting the number of accepting paths of a polynomial-time non-deterministic Turing machine; and the problem is #P-hard, meaning that every other problem in #P has a Turing reduction or polynomial-time counting reduction to it.
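The canonical #P-complete problem is #SAT, counting the satisfying assignments of a CNF formula; each satisfying assignment corresponds to an accepting path of the natural nondeterministic machine. The brute-force counter below is written purely to make the counted objects concrete (it runs in exponential time; no efficient algorithm is implied).

    from itertools import product

    def count_sat(clauses, num_vars):
        """Count satisfying assignments of a CNF formula (#SAT).  Clauses are
        lists of nonzero ints: +v means variable v, -v its negation.  Brute
        force over all 2^n assignments, for illustration only."""
        count = 0
        for bits in product([False, True], repeat=num_vars):
            assignment = dict(enumerate(bits, start=1))
            if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
                count += 1
        return count

    # (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy it.
    print(count_sat([[1, 2], [-1, 3]], num_vars=3))   # 4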
Gap theoremSee also Gap theorem (disambiguation) for other gap theorems in mathematics. In computational complexity theory, the Gap Theorem, also known as the Borodin–Trakhtenbrot Gap Theorem, is a major theorem about the complexity of computable functions. It essentially states that there are arbitrarily large computable gaps in the hierarchy of complexity classes. For any computable function that represents an increase in computational resources, one can find a resource bound such that the set of functions computable within the expanded resource bound is the same as the set computable within the original bound.
Turing Award
The ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery (ACM) for contributions of lasting and major technical importance to computer science. It is generally recognized as the highest distinction in computer science and is often referred to as the "Nobel Prize of Computing". The award is named after Alan Turing, a British mathematician who was a reader in mathematics at the University of Manchester.