In computability theory and computational complexity theory, a many-one reduction (also called a mapping reduction) is a reduction that converts instances of one decision problem (whether an instance is in $L_1$) to instances of another decision problem (whether an instance is in $L_2$) using a computable function. The reduced instance is in the language $L_2$ if and only if the initial instance is in the language $L_1$. Thus if we can decide whether instances are in the language $L_2$, we can decide whether instances are in the language $L_1$ by applying the reduction and solving for $L_2$. Reductions can therefore be used to measure the relative computational difficulty of two problems. It is said that $A$ reduces to $B$ if, in layman's terms, $B$ is at least as hard to solve as $A$. That is to say, any algorithm that solves $B$ can also be used as part of an (otherwise relatively simple) program that solves $A$.
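As a minimal sketch of this pattern (all names below are illustrative, not from any standard library), consider the toy reduction from deciding evenness to deciding oddness: a number $n$ is even exactly when $n + 1$ is odd, so a decider for "odd" together with the mapping $f(n) = n + 1$ decides "even".

```python
# Minimal sketch (illustrative names): deciding membership in A via
# a many-one reduction to B, where A = even numbers, B = odd numbers,
# and the reduction is f(n) = n + 1.

def decide_B(n: int) -> bool:
    """Decider for B = the odd numbers."""
    return n % 2 == 1

def f(n: int) -> int:
    """Reduction: n is in A (even) iff f(n) is in B (odd)."""
    return n + 1

def decide_A(n: int) -> bool:
    """Decider for A, built from the reduction and the decider for B."""
    return decide_B(f(n))

assert decide_A(4) and not decide_A(7)
```

Note that the decider for $A$ does nothing beyond transforming its input and delegating to the decider for $B$.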
Many-one reductions are a special case and stronger form of Turing reductions. With many-one reductions, the oracle (that is, our solution for $B$) can be invoked only once at the end, and its answer cannot be modified. This means that if we want to show that problem $A$ can be reduced to problem $B$, we can use our solution for $B$ only once in our solution for $A$, unlike in Turing reduction, where we can use our solution for $B$ as many times as needed while solving $A$.
This means that many-one reductions map instances of one problem to instances of another, while Turing reductions compute the solution to one problem assuming the other problem is easy to solve. Because it is more restrictive, the many-one reduction is more effective at separating problems into distinct complexity classes. However, the increased restrictions on many-one reductions also make them more difficult to find.
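To make the contrast concrete, here is a hedged sketch (hypothetical names throughout): a many-one reduction may call the decider for $B$ exactly once, on the transformed input, and must return that answer verbatim, whereas a Turing reduction may query it repeatedly and combine the answers.

```python
from typing import Callable

Oracle = Callable[[int], bool]  # hypothetical decider for problem B

def f(x: int) -> int:
    """Illustrative reduction function."""
    return x + 1

def many_one_decide_A(x: int, decide_B: Oracle) -> bool:
    # Many-one style: exactly one query, on the transformed input
    # f(x), and the oracle's answer is returned unmodified.
    return decide_B(f(x))

def turing_decide_A(x: int, decide_B: Oracle) -> bool:
    # Turing style: the oracle may be queried several times, and
    # the answers may be combined or negated before returning.
    return decide_B(x) and not decide_B(x + 1)
```

The second function is a legitimate Turing reduction but not a many-one reduction: it queries the oracle twice and post-processes the answers.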
Many-one reductions were first used by Emil Post in a paper published in 1944. Later, Norman Shapiro used the same concept in 1956 under the name strong reducibility.
Suppose $A$ and $B$ are formal languages over the alphabets $\Sigma$ and $\Gamma$, respectively. A many-one reduction from $A$ to $B$ is a total computable function $f : \Sigma^* \to \Gamma^*$ with the property that each word $w$ is in $A$ if and only if $f(w)$ is in $B$.
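A concrete, purely illustrative instance of this definition: take $\Sigma = \{0, 1\}$ and $\Gamma = \{a\}$, let $A$ be the binary strings containing an even number of 1s and $B$ the strings over $\Gamma$ of even length, and let $f(w)$ output one $a$ per 1 in $w$. Then $w \in A$ if and only if $f(w) \in B$, as the sketch below spot-checks.

```python
# Purely illustrative instance of the definition:
# A = binary strings with an even number of 1s (over Sigma = {0, 1}),
# B = strings of even length over Gamma = {a}.

def f(w: str) -> str:
    """Total computable reduction f : Sigma* -> Gamma*."""
    return "a" * w.count("1")

def in_A(w: str) -> bool:
    return w.count("1") % 2 == 0

def in_B(v: str) -> bool:
    return len(v) % 2 == 0

# Spot-check the defining property: w in A  iff  f(w) in B.
for w in ["", "0", "11", "101", "1101"]:
    assert in_A(w) == in_B(f(w))
```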
In computability theory, a Turing reduction from a decision problem $A$ to a decision problem $B$ is an oracle machine that decides problem $A$ given an oracle for $B$ (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve $A$ if it had available to it a subroutine for solving $B$. The concept can be analogously applied to function problems. If a Turing reduction from $A$ to $B$ exists, then every algorithm for $B$ can be used to produce an algorithm for $A$, by inserting the algorithm for $B$ at each place where the oracle machine computing $A$ queries the oracle for $B$.
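The sketch below (hypothetical names and toy languages) mirrors that last sentence: an oracle machine for $A$ is written against an abstract oracle, and substituting a concrete algorithm for $B$ at every query point yields a standalone algorithm for $A$.

```python
from typing import Callable

def oracle_machine_for_A(oracle_for_B: Callable[[int], bool]) -> Callable[[int], bool]:
    """An oracle machine deciding A, written against an abstract oracle for B."""
    def decide_A(x: int) -> bool:
        # The machine may consult the oracle as often as it likes.
        return oracle_for_B(x) or oracle_for_B(2 * x)
    return decide_A

def algorithm_for_B(x: int) -> bool:
    """A concrete (illustrative) algorithm for B: the multiples of 4."""
    return x % 4 == 0

# Substituting the algorithm for B at every oracle query yields a
# genuine, oracle-free algorithm for A.
decide_A = oracle_machine_for_A(algorithm_for_B)
assert decide_A(2)       # 2 * 2 = 4 is a multiple of 4
assert not decide_A(3)   # neither 3 nor 6 is a multiple of 4
```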
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming, or reducing, it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
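A standard textbook example, sketched below with illustrative helper names, is the polynomial-time reduction from Independent Set to Clique: a graph has an independent set of size $k$ exactly when its complement graph has a clique of size $k$, and complementing a graph takes polynomial time.

```python
from itertools import combinations

def complement(n: int, edges: set[frozenset[int]]) -> set[frozenset[int]]:
    """Polynomial-time transformation: complement of a graph on vertices 0..n-1."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - edges

def has_clique(n: int, edges: set[frozenset[int]], k: int) -> bool:
    """Brute-force stand-in for a hypothetical Clique subroutine."""
    return any(
        all(frozenset(p) in edges for p in combinations(group, 2))
        for group in combinations(range(n), k)
    )

def has_independent_set(n: int, edges: set[frozenset[int]], k: int) -> bool:
    # Reduce: G has an independent set of size k iff the complement
    # of G has a clique of size k; a single subroutine call suffices.
    return has_clique(n, complement(n, edges), k)

# Path 0-1-2: {0, 2} is an independent set of size 2.
edges = {frozenset({0, 1}), frozenset({1, 2})}
assert has_independent_set(3, edges, 2)
```

Here the Clique subroutine is a brute-force stand-in; the point is only that the transformation step runs in polynomial time and the subroutine is treated as a black box.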
In computer science and mathematical logic, the Turing degree (named after Alan Turing) or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The Turing degree of a set is a measure of how difficult it is to solve the decision problem associated with the set, that is, to determine whether an arbitrary number is in the given set.