NP (complexity)

In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or, equivalently, the set of decision problems solvable in polynomial time by a nondeterministic Turing machine.
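The verifier-based definition can be made concrete with a minimal Python sketch, using the NP-complete SUBSET-SUM problem; the function name and certificate encoding here are illustrative, not standard.

    # Finding a subset of nums that sums to target is NP-complete, but
    # checking a proposed certificate (a list of indices) takes time
    # polynomial in the input size.
    def verify_subset_sum(nums, target, certificate):
        if len(set(certificate)) != len(certificate):
            return False  # indices must be distinct
        if any(i < 0 or i >= len(nums) for i in certificate):
            return False  # indices must be in range
        return sum(nums[i] for i in certificate) == target

    # A "yes" instance together with its short, quickly checkable proof:
    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9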
Computational complexity theory

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and on relating these classes to each other. A computational problem is a task solved by a computer, solvable by the mechanical application of mathematical steps such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used.
Fractional coloring

Fractional coloring is a topic in a young branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color, and adjacent vertices (those connected by edges) must be assigned different colors. In a fractional coloring, however, a set of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds: if two vertices are joined by an edge, they must have no colors in common.
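This can be made quantitative via the fractional chromatic number; the following LaTeX sketch states the usual definitions (notation follows standard fractional graph theory texts), with the 5-cycle as the classic example.

    % An a:b-coloring assigns each vertex a b-element subset of {1, ..., a},
    % with adjacent vertices receiving disjoint subsets; chi_b is the least
    % such a, and the fractional chromatic number is the limiting ratio:
    \chi_b(G) = \min\{\, a : G \text{ has an } a\!:\!b\text{-coloring} \,\}
    \qquad
    \chi_f(G) = \inf_{b} \frac{\chi_b(G)}{b}
    % Classic example: \chi_f(C_5) = 5/2 (a 5:2-coloring exists), while
    % the ordinary chromatic number is \chi(C_5) = 3.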
Parameterized complexity

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input.
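A standard concrete instance is Vertex Cover parameterized by the solution size k: branching on either endpoint of an uncovered edge yields a search tree with at most 2^k leaves, so the running time is roughly O(2^k · m) rather than exponential in the input size. A minimal Python sketch (function name illustrative):

    def has_vertex_cover(edges, k):
        # edges: list of vertex pairs; asks for a cover of size <= k.
        if not edges:
            return True          # nothing left to cover
        if k == 0:
            return False         # edges remain but no budget
        u, v = edges[0]
        # Any cover must contain u or v: branch on both choices.
        without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
        without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
        return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

    # A triangle needs two vertices to cover its three edges:
    print(has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1))  # False
    print(has_vertex_cover([(1, 2), (2, 3), (1, 3)], 2))  # True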
Kneser graph

In graph theory, the Kneser graph K(n, k) (alternatively KGn,k) is the graph whose vertices correspond to the k-element subsets of a set of n elements, and where two vertices are adjacent if and only if the two corresponding sets are disjoint. Kneser graphs are named after Martin Kneser, who first investigated them in 1956. The Kneser graph K(n, 1) is the complete graph on n vertices. The Kneser graph K(n, 2) is the complement of the line graph of the complete graph on n vertices.
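The definition translates directly into code; a short Python sketch (helper name illustrative) that builds K(n, k) as an adjacency dictionary:

    from itertools import combinations

    def kneser_graph(n, k):
        # Vertices are the k-element subsets of {0, ..., n-1};
        # two vertices are adjacent iff the subsets are disjoint.
        vertices = [frozenset(c) for c in combinations(range(n), k)]
        return {v: [w for w in vertices if v.isdisjoint(w)] for v in vertices}

    # K(5, 2) is the Petersen graph: 10 vertices, each of degree 3.
    g = kneser_graph(5, 2)
    print(len(g), {len(nbrs) for nbrs in g.values()})  # 10 {3}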
Tensor product of graphs

In graph theory, the tensor product G × H of graphs G and H is a graph such that the vertex set of G × H is the Cartesian product V(G) × V(H), and vertices (g, h) and (g′, h′) are adjacent in G × H if and only if g is adjacent to g′ in G and h is adjacent to h′ in H. The tensor product is also called the direct product, Kronecker product, categorical product, cardinal product, relational product, weak direct product, or conjunction.
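A minimal Python sketch of this adjacency rule, for graphs given as edge lists (function name illustrative):

    def tensor_product(edges_g, edges_h):
        # Symmetrize the input edge lists so adjacency runs both ways.
        adj_g = set(edges_g) | {(b, a) for (a, b) in edges_g}
        adj_h = set(edges_h) | {(b, a) for (a, b) in edges_h}
        edges = set()
        for g, g2 in adj_g:
            for h, h2 in adj_h:
                e = frozenset({(g, h), (g2, h2)})
                if len(e) == 2:   # store each undirected edge once
                    edges.add(e)
        return edges

    # K2 x K2 is a perfect matching on four vertices (two disjoint edges):
    print(len(tensor_product([(0, 1)], [("a", "b")])))  # 2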
Space complexity

The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, called auxiliary space. Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as O(n), O(n log n), O(2^n), etc.
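The input/auxiliary distinction is easy to see in code; a small Python illustration (function names illustrative), where both functions consume the same input space but differ in auxiliary space:

    def running_total(nums):
        # O(1) auxiliary space: a single accumulator, regardless of len(nums).
        acc = 0
        for x in nums:
            acc += x
        return acc

    def sorted_copy(nums):
        # O(n) auxiliary space: sorted() allocates a whole new list.
        return sorted(nums)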
Cayley graph

In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley), and uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs makes them particularly good candidates for constructing families of expander graphs.
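As a small concrete case, the following Python sketch builds the Cayley graph of the cyclic group Z_n for a chosen generating set, closing the set under inverses so the graph is undirected (names illustrative):

    def cayley_graph_zn(n, generators):
        # Vertices are the group elements 0..n-1; g is joined to g +/- s
        # for each generator s (addition mod n).
        return {g: sorted({(g + s) % n for s in generators} |
                          {(g - s) % n for s in generators})
                for g in range(n)}

    # Z_6 with generating set {1} yields the 6-cycle:
    print(cayley_graph_zn(6, {1}))
    # {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [0, 4]}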
Kolmogorov complexity

In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy.
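Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper-bound proxy, which is enough for a rough Python illustration: a highly regular string admits a much shorter description than typical random bytes of the same length. This demonstrates the idea only; compressed length is not K.

    import os, zlib

    regular = b"ab" * 500        # 1000 bytes with an obvious short description
    random_ = os.urandom(1000)   # 1000 bytes, almost surely incompressible
    print(len(zlib.compress(regular)))  # small: on the order of tens of bytes
    print(len(zlib.compress(random_)))  # close to the original 1000 bytes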
Average-case complexity

In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity, which considers the maximal complexity of the algorithm over all possible inputs. There are three primary motivations for studying average-case complexity.
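The contrast is easy to see on linear search, a textbook case: the worst case inspects all n elements, while the average over a uniformly random target position is about (n + 1)/2. A quick empirical Python check (names illustrative):

    import random

    def comparisons(items, target):
        # Count how many elements are inspected before target is found.
        for i, x in enumerate(items, start=1):
            if x == target:
                return i
        return len(items)

    n = 1000
    items = list(range(n))
    trials = [comparisons(items, random.randrange(n)) for _ in range(10_000)]
    print(max(trials), sum(trials) / len(trials))  # near n, and near n / 2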