In computability theory and computational complexity theory, a many-one reduction (also called a mapping reduction) is a reduction that converts instances of one decision problem (whether an instance is in L₁) into instances of another decision problem (whether an instance is in L₂) using an effective function. The reduced instance is in the language L₂ if and only if the initial instance is in the language L₁. Thus, if we can decide whether instances are in the language L₂, we can decide whether instances are in the language L₁ by applying the reduction and solving the resulting instance. Reductions can therefore be used to measure the relative computational difficulty of two problems. It is said that A reduces to B if, in layman's terms, B is at least as hard to solve as A. That is to say, any algorithm that solves B can also be used as part of an (otherwise relatively simple) program that solves A.
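As a concrete sketch (my own illustration, not from the source), take A = even numbers and B = multiples of 4 with f(x) = 2x; composing a decider for B with f yields a decider for A, with the oracle invoked exactly once at the end:

```python
# A toy many-one reduction (illustrative sketch, not from the source).
# A = {x : x is even}, B = {y : y is divisible by 4}, f(x) = 2*x.
# Then x is in A  <=>  f(x) is in B, so a decider for B yields one for A.

def decide_B(y: int) -> bool:
    """Decider for B: multiples of 4."""
    return y % 4 == 0

def f(x: int) -> int:
    """The reduction: a total computable function."""
    return 2 * x

def decide_A(x: int) -> bool:
    """Decider for A obtained by composing the reduction with decide_B.
    The oracle (decide_B) is called exactly once, at the end, and its
    answer is returned unmodified."""
    return decide_B(f(x))

assert decide_A(6) is True    # 6 is even; f(6) = 12 is divisible by 4
assert decide_A(7) is False   # 7 is odd;  f(7) = 14 is not divisible by 4
```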
Many-one reductions are a special case and stronger form of Turing reductions. With a many-one reduction, the oracle (that is, our solution for B) can be invoked only once, at the end, and its answer cannot be modified. This means that if we want to show that problem A can be reduced to problem B, we may use our solution for B only once in our solution for A, unlike in a Turing reduction, where the solution for B may be used as many times as needed while solving A.
This means that many-one reductions map instances of one problem to instances of another, while Turing reductions compute the solution to one problem assuming the other problem is easy to solve. Because it is more restrictive, the many-one reduction is more effective at separating problems into distinct complexity classes; however, the added restrictions also make many-one reductions harder to find.
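By contrast, here is a minimal sketch (again my own illustration) of a Turing reduction that is not many-one: deciding the complement of B needs only a single oracle query, but the answer must be negated, a post-processing step that many-one reductions do not allow.

```python
# Sketch: a Turing reduction that is not many-one (illustrative).
# Deciding co-B = {y : y not in B} from an oracle for B needs only one
# query, but the oracle's answer must be negated -- a modification that
# many-one reductions forbid.

def oracle_B(y: int) -> bool:
    """Stand-in oracle for B (here: multiples of 4, a hypothetical set)."""
    return y % 4 == 0

def decide_co_B(y: int) -> bool:
    """Turing reduction from co-B to B: the oracle's answer is modified."""
    return not oracle_B(y)

assert decide_co_B(8) is False   # 8 is in B, so it is not in co-B
assert decide_co_B(9) is True    # 9 is not in B, so it is in co-B
```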
Many-one reductions were first used by Emil Post in a paper published in 1944. Later Norman Shapiro used the same concept in 1956 under the name strong reducibility.
Suppose A and B are formal languages over the alphabets Σ and Γ, respectively. A many-one reduction from A to B is a total computable function f : Σ* → Γ* with the property that each word w is in A if and only if f(w) is in B.
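As a small illustration of this definition (my own sketch, with illustrative languages), take Σ = {a, b}, Γ = {0, 1}, let A be the strings over Σ containing at least one 'a', and let B be the strings over Γ containing at least one '1'; mapping 'a' to '1' and 'b' to '0' character by character is a total computable reduction:

```python
# Sketch of the formal definition (illustrative example, not from the source).
# Sigma = {'a', 'b'}, Gamma = {'0', '1'}
# A = strings over Sigma containing at least one 'a'
# B = strings over Gamma containing at least one '1'

def f(w: str) -> str:
    """Total computable function f : Sigma* -> Gamma*."""
    return "".join("1" if c == "a" else "0" for c in w)

def in_A(w: str) -> bool:
    return "a" in w

def in_B(v: str) -> bool:
    return "1" in v

# The defining property: w in A  <=>  f(w) in B.
for w in ["", "bb", "ab", "bba"]:
    assert in_A(w) == in_B(f(w))
```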
In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine which decides problem A given an oracle for B (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve A if it had available to it a subroutine for solving B. The concept can be analogously applied to function problems. If a Turing reduction from A to B exists, then every algorithm for B can be used to produce an algorithm for A, by inserting the algorithm for B at each place where the oracle machine computing A queries the oracle for B.
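Here is a sketch (my own illustration, with a hypothetical window size and example set) of how a Turing reduction may query the oracle several times and combine the answers, deciding whether a window of consecutive integers meets B:

```python
# Sketch: a Turing reduction making several oracle queries (illustrative).
# Decides A = {x : some y in {x, x+1, ..., x+K} is in B} given an oracle
# for B; the oracle is consulted K+1 times and the answers are combined.

K = 3  # hypothetical window size

def oracle_B(y: int) -> bool:
    """Stand-in oracle for B (here: multiples of 5, purely illustrative)."""
    return y % 5 == 0

def decide_A(x: int) -> bool:
    """Oracle machine for A: multiple queries, answers post-processed."""
    return any(oracle_B(x + i) for i in range(K + 1))

assert decide_A(4) is True    # 5 lies in {4, 5, 6, 7}
assert decide_A(6) is False   # none of 6, 7, 8, 9 is a multiple of 5
```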
A polynomial-time reduction is a tool of theoretical computer science, more specifically of complexity theory. It is a particular class of reductions that is especially important, notably for the P = NP problem. In the setting of formal languages for decision problems, a language A is said to be polynomial-time reducible to a language B (written A ≤p B) if there exists a polynomial-time computable function f such that for every word w, w ∈ A if and only if f(w) ∈ B.
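The standard textbook reduction from Independent Set to Clique illustrates the polynomial-time case: complementing a graph's edge set is computable in polynomial time, and G has an independent set of size k if and only if the complement of G has a clique of size k. A sketch (my own, with illustrative helper names):

```python
# Sketch: the standard polynomial-time reduction Independent Set <=p Clique.
# f(G, k) = (complement of G, k); G has an independent set of size k
# iff complement(G) has a clique of size k. Helper names are illustrative.
from itertools import combinations

def complement(n: int, edges: set) -> set:
    """Edge set of the complement of an n-vertex graph (polynomial time)."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - {frozenset(e) for e in edges}

def reduce_is_to_clique(n: int, edges: set, k: int):
    """The reduction f: an Independent Set instance -> a Clique instance."""
    return n, complement(n, edges), k

# Example: the path 0-1-2 has an independent set {0, 2} of size 2;
# its complement contains the edge {0, 2}, i.e. a clique of size 2.
n, cedges, k = reduce_is_to_clique(3, {(0, 1), (1, 2)}, 2)
assert frozenset({0, 2}) in cedges
```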
In computer science and mathematical logic, the Turing degree (named after Alan Turing) or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set. The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The Turing degree of a set indicates how difficult it is to solve the decision problem associated with that set, namely, to determine whether an arbitrary number is in the given set.
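In the usual notation (my own summary of the standard definition, not from the source), two sets are Turing equivalent when each Turing-reduces to the other, and a Turing degree is an equivalence class of this relation:

```latex
X \equiv_T Y \iff (X \le_T Y \text{ and } Y \le_T X),
\qquad
\deg(X) = \{\, Y \subseteq \mathbb{N} : Y \equiv_T X \,\}
```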