Computational problem
In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring, "Given a positive integer n, find a nontrivial prime factor of n", is a computational problem. A computational problem can be viewed as a set of instances or cases together with a (possibly empty) set of solutions for every instance/case. For example, in the factoring problem, the instances are the integers n, and the solutions are prime numbers p that are nontrivial prime factors of n.
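As an illustration of the factoring instance/solution view, here is a minimal Python sketch (my own, not taken from the article) that searches for a nontrivial prime factor by trial division; the function name is assumed for illustration.

```python
def nontrivial_prime_factor(n: int):
    """Return a nontrivial prime factor of n, or None if no such factor exists.

    Illustrative trial-division sketch: the smallest divisor greater than 1
    is necessarily prime, so it is a valid solution for the instance n.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n < 4 or n is prime: the solution set for this instance is empty


print(nontrivial_prime_factor(91))  # 7
print(nontrivial_prime_factor(13))  # None: 13 is prime
```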
Circuit complexity
In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits (see below). Proving lower bounds on the size of Boolean circuits computing explicit Boolean functions is a popular approach to separating complexity classes.
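To make size and depth concrete, a circuit can be represented as a list of gates over the basis {AND, OR, NOT}. The Python sketch below is an illustrative assumption (not from the article): it builds a small circuit computing XOR and measures its size (gate count) and depth (longest input-to-output path).

```python
# Each gate is (op, args); args refer to earlier gates by index or to inputs "x0", "x1".
CIRCUIT_XOR = [
    ("NOT", ["x0"]),     # g0 = NOT x0
    ("NOT", ["x1"]),     # g1 = NOT x1
    ("AND", ["x0", 1]),  # g2 = x0 AND (NOT x1)
    ("AND", [0, "x1"]),  # g3 = (NOT x0) AND x1
    ("OR",  [2, 3]),     # g4 = output: x0 XOR x1
]

def evaluate(circuit, inputs):
    values = []
    for op, args in circuit:
        vals = [inputs[a] if isinstance(a, str) else values[a] for a in args]
        values.append(not vals[0] if op == "NOT" else all(vals) if op == "AND" else any(vals))
    return values[-1]

def size(circuit):
    return len(circuit)  # size = number of gates

def depth(circuit):
    depths = []
    for _, args in circuit:
        depths.append(1 + max(0 if isinstance(a, str) else depths[a] for a in args))
    return max(depths)   # depth = longest path from an input to the output

print(evaluate(CIRCUIT_XOR, {"x0": True, "x1": False}))  # True
print(size(CIRCUIT_XOR), depth(CIRCUIT_XOR))             # 5 gates, depth 3
```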
Regular language
In theoretical computer science and formal language theory, a regular language (also called a rational language) is a formal language that can be defined by a regular expression, in the strict sense in theoretical computer science (as opposed to many modern regular expression engines, which are augmented with features that allow the recognition of non-regular languages). Alternatively, a regular language can be defined as a language recognized by a finite automaton.
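For instance, the set of binary strings containing an even number of 1s is regular. A minimal Python sketch of a deterministic finite automaton recognizing it (state names and identifiers are illustrative assumptions):

```python
# DFA for the regular language {w in {0,1}* : w has an even number of 1s}.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
START, ACCEPTING = "even", {"even"}

def accepts(word: str) -> bool:
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("1001"))  # True: two 1s
print(accepts("1011"))  # False: three 1s
```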
Complete (complexity)
In computational complexity theory, a computational problem is complete for a complexity class if it is, in a technical sense, among the "hardest" (or "most expressive") problems in the complexity class. More formally, a problem p is called hard for a complexity class C under a given type of reduction if there exists a reduction (of the given type) from every problem in C to p. If a problem is both hard for the class and a member of the class, it is complete for that class (for that type of reduction).
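Stated symbolically (a sketch of the standard definitions; the notation, with the reduction relation written as \le, is an assumption rather than the article's own):

```latex
\begin{align*}
  p \text{ is } C\text{-hard}     &\iff \forall L \in C:\ L \le p,\\
  p \text{ is } C\text{-complete} &\iff p \in C \ \text{ and } \ p \text{ is } C\text{-hard}.
\end{align*}
```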
Alternating Turing machine
In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981. The definition of NP uses the existential mode of computation: if any choice leads to an accepting state, then the whole computation accepts.
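A toy way to see the accepting rule is to evaluate a computation tree whose internal nodes are labelled existential or universal; the Python sketch below is an illustrative assumption, not the formal machine model.

```python
# Leaves are True/False (accepting/rejecting configurations); internal nodes are
# ("exists", children) or ("forall", children).
def accepts(node) -> bool:
    if isinstance(node, bool):
        return node
    mode, children = node
    if mode == "exists":   # NP-style rule: accept if ANY choice accepts
        return any(accepts(c) for c in children)
    if mode == "forall":   # co-NP-style rule: accept if ALL choices accept
        return all(accepts(c) for c in children)
    raise ValueError(f"unknown mode: {mode}")

# Example: an existential root over two universal branches.
tree = ("exists", [("forall", [True, False]), ("forall", [True, True])])
print(accepts(tree))  # True: the second branch satisfies the universal rule
```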
Log-space reduction
In computational complexity theory, a log-space reduction is a reduction computable by a deterministic Turing machine using logarithmic space. Conceptually, this means it can keep a constant number of pointers into the input, along with a logarithmic number of fixed-size integers. It is possible that such a machine may not have space to write down its own output, so the only requirement is that any given bit of the output be computable in log-space. Formally, this reduction is executed via a log-space transducer.
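The "compute each output bit on demand" idea can be sketched as follows; the example reduction (transposing an adjacency matrix given as a flat bit string) and all identifiers are hypothetical illustrations, not taken from the article.

```python
def output_bit(adjacency: str, n: int, k: int) -> str:
    """Return bit k of the reduced instance without storing the whole output.

    Only O(log n) bits of working state (the indices i, j, k) are needed,
    mimicking how a log-space transducer recomputes each output bit on demand.
    """
    i, j = divmod(k, n)           # position (i, j) in the output matrix
    return adjacency[j * n + i]   # output[i][j] = input[j][i]

def reduced_instance(adjacency: str, n: int) -> str:
    # A downstream machine would query bits one at a time; joining them here
    # is only for demonstration.
    return "".join(output_bit(adjacency, n, k) for k in range(n * n))

print(reduced_instance("0100", 2))  # "0010": the transposed 2x2 matrix
```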
Complement (complexity)
In computational complexity theory, the complement of a decision problem is the decision problem resulting from reversing the yes and no answers. Equivalently, if a decision problem is defined as a set of finite strings, its complement problem is the complement of that set relative to some fixed domain. For example, one important problem is whether a number is a prime number. Its complement is to determine whether a number is a composite number (a number which is not prime). Here the domain of the complement is the set of all integers exceeding one.
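A minimal Python sketch of the primality problem and its complement over the domain of integers greater than one (function names are assumptions for illustration):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        raise ValueError("domain is the integers exceeding one")
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_composite(n: int) -> bool:
    # The complement problem: the yes/no answer is reversed over the same domain.
    return not is_prime(n)

print(is_prime(7), is_composite(7))    # True False
print(is_prime(9), is_composite(9))    # False True
```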
Quantum complexity theory
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical (i.e., non-quantum) complexity classes. Two important quantum complexity classes are BQP and QMA.
Cobham's thesis
Cobham's thesis, also known as the Cobham–Edmonds thesis (named after Alan Cobham and Jack Edmonds), asserts that computational problems can be feasibly computed on some computational device only if they can be computed in polynomial time; that is, if they lie in the complexity class P. In modern terms, it identifies tractable problems with the complexity class P.
Las Vegas algorithm
In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it either produces the correct result or reports failure. However, the runtime of a Las Vegas algorithm differs depending on the input. The usual definition of a Las Vegas algorithm includes the restriction that the expected runtime be finite, where the expectation is carried out over the space of random information, or entropy, used in the algorithm.
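A standard toy example is random probing for a marked entry: the answer returned is always correct, and only the number of probes is random. The Python sketch below assumes the input contains at least one 1, so the expected runtime is finite; the identifiers are illustrative.

```python
import random

def find_one(values: list) -> int:
    """Las Vegas search: return an index i with values[i] == 1.

    The result is always correct; the runtime varies with the random probes.
    Assumes at least one 1 is present, which keeps the expected number of
    probes finite (roughly len(values) / number_of_ones).
    """
    n = len(values)
    while True:
        i = random.randrange(n)
        if values[i] == 1:
            return i

print(find_one([0, 0, 1, 0, 1, 0]))  # 2 or 4, chosen after a random number of probes
```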