In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
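As an illustration of calling such a subroutine several times, the following Python sketch (not part of the original text; the names find_assignment, sat_oracle and brute_force_sat are hypothetical) recovers a satisfying assignment for a CNF formula using at most n + 1 calls to a hypothetical SAT decision oracle, the classic search-to-decision reduction:

from itertools import product
from typing import Callable, List, Optional

Cnf = List[List[int]]  # clauses of nonzero ints; -i means "not x_i"

def find_assignment(formula: Cnf, n_vars: int,
                    sat_oracle: Callable[[Cnf], bool]) -> Optional[List[bool]]:
    """Recover a satisfying assignment using at most n_vars + 1 oracle calls."""
    if not sat_oracle(formula):
        return None
    assignment: List[bool] = []
    for var in range(1, n_vars + 1):
        if sat_oracle(formula + [[var]]):   # can this variable be set to True?
            formula = formula + [[var]]
            assignment.append(True)
        else:                               # no: it must be False
            formula = formula + [[-var]]
            assignment.append(False)
    return assignment

def brute_force_sat(formula: Cnf) -> bool:
    """Stand-in oracle for demonstration only (exponential time)."""
    n = max((abs(lit) for clause in formula for lit in clause), default=0)
    return any(
        all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
            for clause in formula)
        for bits in product([False, True], repeat=n)
    )

print(find_assignment([[1, -2], [2, 3]], 3, brute_force_sat))  # [True, True, True]

The oracle is passed in as a parameter because no efficient implementation is known; the brute-force stand-in exists only so the sketch runs end to end.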
A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well. By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either. Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems for those classes.
The three most common types of polynomial-time reduction, from the most to the least restrictive, are polynomial-time many-one reductions, truth-table reductions, and Turing reductions. The most frequently used of these are the many-one reductions, and in some cases the phrase "polynomial-time reduction" may be used to mean a polynomial-time many-one reduction. The most general are the Turing reductions, the most restrictive are the many-one reductions, and the truth-table reductions occupy the space in between.
A polynomial-time many-one reduction from a problem A to a problem B (both of which are usually required to be decision problems) is a polynomial-time algorithm for transforming inputs to problem A into inputs to problem B, such that the transformed problem has the same output as the original problem. An instance x of problem A can be solved by applying this transformation to produce an instance y of problem B, giving y as the input to an algorithm for problem B, and returning its output.
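To make the transformation concrete, here is a sketch in Python of one well-known polynomial-time many-one reduction, from INDEPENDENT-SET to CLIQUE: a graph has an independent set of size k exactly when its complement graph has a clique of size k. The function names and the brute-force decider for CLIQUE are illustrative stand-ins, not part of the source:

from itertools import combinations
from typing import FrozenSet, Set, Tuple

Graph = Tuple[int, Set[FrozenSet[int]]]  # (vertex count, set of edges)

def independent_set_to_clique(graph: Graph, k: int) -> Tuple[Graph, int]:
    """The transformation: G has an independent set of size k
    iff the complement of G has a clique of size k."""
    n, edges = graph
    complement = {frozenset(pair) for pair in combinations(range(n), 2)
                  if frozenset(pair) not in edges}
    return (n, complement), k

def has_clique(graph: Graph, k: int) -> bool:
    """Brute-force stand-in for an algorithm for problem B (CLIQUE)."""
    n, edges = graph
    return any(all(frozenset(pair) in edges for pair in combinations(sub, 2))
               for sub in combinations(range(n), k))

def has_independent_set(graph: Graph, k: int) -> bool:
    """Decide problem A by transforming and calling B's algorithm once."""
    h, k2 = independent_set_to_clique(graph, k)
    return has_clique(h, k2)

# Path 0-1-2: {0, 2} is an independent set of size 2.
g: Graph = (3, {frozenset((0, 1)), frozenset((1, 2))})
print(has_independent_set(g, 2))  # True

Complementing the edge set takes quadratic time in the number of vertices, so the transformation itself is polynomial; only the stand-in decider is exponential.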
In computational complexity theory, a problem is NP-complete when: (1) it is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no"; (2) when the answer is "yes", this can be demonstrated through the existence of a short (polynomial length) solution; (3) the correctness of each solution can be verified quickly (namely, in polynomial time), and a brute-force search algorithm can find a solution by trying all possible solutions; and (4) every other problem whose solutions can be verified in this way can be reduced to it in polynomial time.
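Condition (3) is the easiest to demonstrate in code. The Python sketch below (an illustration assuming a simple list-of-clauses encoding; the names are hypothetical) verifies a proposed satisfying assignment for a CNF formula, the "short solution" for SAT, in time linear in the formula size:

from typing import Dict, List

Cnf = List[List[int]]  # clauses of nonzero ints; -i means "not x_i"

def verify_sat_certificate(formula: Cnf, certificate: Dict[int, bool]) -> bool:
    """Polynomial-time verifier: check that the proposed assignment
    (the short certificate) satisfies every clause."""
    return all(
        any(certificate.get(abs(lit), False) == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 OR NOT x2) AND (x2 OR x3), with the certificate x1 = x2 = x3 = True.
print(verify_sat_certificate([[1, -2], [2, 3]], {1: True, 2: True, 3: True}))  # True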
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B.
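A standard small example of this intuition, sketched here in Python with a hypothetical square subroutine passed as a parameter, is that multiplication reduces to squaring via the identity xy = ((x + y)^2 - x^2 - y^2) / 2:

from typing import Callable

def multiply_via_squaring(x: int, y: int,
                          square: Callable[[int], int]) -> int:
    """Reduce multiplication to squaring: three subroutine calls plus
    cheap additions, subtractions and one exact halving."""
    return (square(x + y) - square(x) - square(y)) // 2

print(multiply_via_squaring(6, 7, lambda n: n * n))  # 42

Any method for squaring thus yields a method for multiplying, at the cost of three subroutine calls and some linear-time arithmetic.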
In computability theory and computational complexity theory, a many-one reduction (also called mapping reduction) is a reduction which converts instances of one decision problem (whether an instance is in a language A) to another decision problem (whether an instance is in a language B) using an effective function. The reduced instance is in the language B if and only if the initial instance is in the language A. Thus if we can decide whether instances are in the language B, we can decide whether instances are in the language A by applying the reduction and solving for B.
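In code, this composition is just one application of the effective function followed by one call to the decider for B. The following generic Python sketch (illustrative names, not from the source) captures the "if and only if" property; the toy usage reduces evenness of x to zero-testing of x mod 2:

from typing import Callable, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")

def decide_via_reduction(f: Callable[[X], Y],
                         decide_B: Callable[[Y], bool]) -> Callable[[X], bool]:
    """If f many-one reduces A to B (x is in A iff f(x) is in B),
    then composing f with a decider for B yields a decider for A."""
    return lambda x: decide_B(f(x))

is_even = decide_via_reduction(lambda x: x % 2, lambda y: y == 0)
print(is_even(10), is_even(7))  # True False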
Tris-(2-carboxyethyl)phosphine (TCEP) linked to agarose beads is widely used for reducing disulfide bridges in proteins and peptides. The immobilization of TCEP on beads allows efficient removal after reduction to prevent its reaction with alkylating reage ...
In this thesis, we give new approximation algorithms for some NP-hard problems arising in resource allocation and network design. As a resource allocation problem, we study the Santa Claus problem (also known as the MaxMin Fair Allocation problem) in which ...
The students learn the theory and practice of basic concepts and techniques in algorithms. The course covers mathematical induction, techniques for analyzing algorithms, elementary data structures, ma ...
This course is an introduction to linear and discrete optimization. Warning: This is a mathematics course! While much of the course will be algorithmic in nature, you will still need to be able to p ...
This course reviews some failure cases in public-key cryptography. It introduces some cryptanalysis techniques. It also presents fundamentals in cryptography such as interactive proofs. Finally, it pr ...
Given two elliptic curves and the degree of an isogeny between them, finding the isogeny is believed to be a difficult problem, upon which rests the security of nearly any isogeny-based scheme. If, however, to the data above we add information about the beh ...