The P ≟ NP problem

Summary

Figure: visual representation of the two possible configurations (P = NP and P ≠ NP).
The P ≟ NP problem is a conjecture in mathematics, and more precisely in theoretical computer science, regarded by many researchers as one of the most important conjectures in the field, and even in mathematics as a whole. The Clay Mathematics Institute included this problem in its list of the seven Millennium Prize Problems, and to that end offers one million dollars to anyone able to prove P = NP or P ≠ NP, or to prove that the question is undecidable. It is also the third of Smale's problems.
Very roughly, the question is whether being able to verify a solution to a problem quickly implies being able to find one quickly; or again, whether what we can find quickly when we are lucky can be found just as fast by a clever computation.
More precisely, the question is whether the complexity class P, the problems solvable in polynomial time by a deterministic machine, equals the class NP, the problems whose solutions can be verified in polynomial time.
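The verify-versus-find asymmetry can be illustrated with a minimal Python sketch using subset sum, a standard NP-complete problem; the instance, function names, and brute-force strategy below are illustrative choices, not taken from any particular source:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Checking a proposed solution takes polynomial (here linear) time."""
    return sum(certificate) == target and all(x in numbers for x in certificate)

def find(numbers, target):
    """Finding a solution by exhaustive search may examine up to 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = find(nums, 9)              # exponential-time search for a certificate
print(sol, verify(nums, 9, sol))  # the certificate itself is checked in linear time
```

Whether some cleverer algorithm can always avoid the exponential search is exactly the open question.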




Related concepts (70)

Computational complexity theory (theoretical computer science)

Some of the complexity classes studied in computational complexity theory. For example, P is the class of problems decided in polynomial time by a deterministic Turing machine.

NP-complete problem

In complexity theory, an NP-complete problem (that is, a problem complete for the class NP) is a decision problem satisfying the following properties:

- it is possible to verify a solution in polynomial time;
- every problem in the class NP reduces to it via a polynomial-time reduction.

The SAT problem

An instance of Sudoku can be transformed into a propositional-logic formula to be satisfied; an assignment of the propositional variables then yields a completed grid.
In theoretical computer science, SAT (the Boolean satisfiability problem) asks whether a given propositional formula admits a satisfying assignment.
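The same checking-versus-finding contrast can be sketched for SAT in Python; the small CNF formula and the literal encoding below are made-up illustrations:

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable, polarity).
formula = [[("x", True), ("y", True)],     # (x OR y)
           [("x", False), ("z", True)],    # (NOT x OR z)
           [("y", False), ("z", False)]]   # (NOT y OR NOT z)

def satisfies(assignment, cnf):
    """Polynomial-time check: every clause contains at least one true literal."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in cnf)

def solve(cnf, variables):
    """Brute force over all 2^n truth assignments."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(assignment, cnf):
            return assignment
    return None

model = solve(formula, ["x", "y", "z"])
print(model, satisfies(model, formula))
```

Since SAT is NP-complete, a polynomial-time `solve` would imply P = NP.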


Related courses (34)

MATH-467: Probabilistic methods in combinatorics

We develop a sophisticated framework for solving problems in discrete mathematics through the use of randomness (i.e., coin flipping). This includes constructing mathematical structures with unexpected (and sometimes paradoxical) properties for which no other methods of construction are known.
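The "coin flipping" idea of the probabilistic method can be illustrated with a classic fact (this example is not taken from the course material): a uniformly random 2-colouring of a graph's vertices cuts each edge with probability 1/2, so in expectation it cuts half the edges, and therefore some colouring cutting at least half the edges must exist. Sampling a few colourings quickly exhibits one:

```python
import random

def random_cut(edges, nodes, trials=200, seed=0):
    """Sample random 2-colourings and return the largest cut found.
    Each edge is cut with probability 1/2, so E[cut] = len(edges) / 2."""
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        side = {v: rng.random() < 0.5 for v in nodes}
        cut = sum(side[u] != side[v] for u, v in edges)
        best = max(best, cut)
    return best

# Complete graph K5: 10 edges, so some colouring cuts at least 5 of them.
nodes = range(5)
edges = [(u, v) for u in nodes for v in nodes if u < v]
print(random_cut(edges, nodes))
```

The expectation argument guarantees existence; the sampling merely makes the guaranteed object visible.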

COM-406: Foundations of Data Science

We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques that come from probability, information theory, and signal processing.

COM-501: Advanced cryptography

This course reviews some failure cases in public-key cryptography. It introduces some cryptanalysis techniques. It also presents fundamentals in cryptography such as interactive proofs. Finally, it presents some techniques to validate the security of cryptographic primitives.

Related lectures (44)

Over the past few decades we have been experiencing an explosion of information generated by large networks of sensors and other data sources. Much of this data is intrinsically structured, such as traffic evolution in a transportation network, temperature values in different geographical locations, information diffusion in social networks, functional activities in the brain, or 3D meshes in computer graphics. The representation, analysis, and compression of such data is a challenging task and requires the development of new tools that can identify and properly exploit the data structure. In this thesis, we formulate the processing and analysis of structured data using the emerging framework of graph signal processing. Graphs are generic data representation forms, suitable for modeling the geometric structure of signals that live on topologically complicated domains. The vertices of the graph represent the discrete data domain, and the edge weights capture the pairwise relationships between the vertices. A graph signal is then defined as a function that assigns a real value to each vertex. Graph signal processing is a useful framework for handling efficiently such data as it takes into consideration both the signal and the graph structure. In this work, we develop new methods and study several important problems related to the representation and structure-aware processing of graph signals in both centralized and distributed settings. We focus in particular in the theory of sparse graph signal representation and its applications and we bring some insights towards better understanding the interplay between graphs and signals on graphs. First, we study a novel yet natural application of the graph signal processing framework for the representation of 3D point cloud sequences. We exploit graph-based transform signal representations for addressing the challenging problem of compression of data that is characterized by dynamic 3D positions and color attributes. 
Next, we depart from graph-based transform signal representations to design new overcomplete representations, or dictionaries, which are adapted to specific classes of graph signals. In particular, we address the problem of sparse representation of graph signals residing on weighted graphs by learning graph structured dictionaries that incorporate the intrinsic geometric structure of the irregular data domain and are adapted to the characteristics of the signals. Then, we move to the efficient processing of graph signals in distributed scenarios, such as sensor or camera networks, which brings important constraints in terms of communication and computation in realistic settings. In particular, we study the effect of quantization in the distributed processing of graph signals that are represented by graph spectral dictionaries and we show that the impact of the quantization depends on the graph geometry and on the structure of the spectral dictionaries. Finally, we focus on a widely used graph process, the problem of distributed average consensus in a sensor network where sensors exchange quantized information with their neighbors. We propose a novel quantization scheme that depends on the graph topology and exploits the increasing correlation between the values exchanged by the sensors throughout the iterations of the consensus algorithm.
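The distributed average consensus process mentioned at the end can be sketched without quantization, on a hypothetical 4-node ring network (topology, weights, and readings below are illustrative assumptions):

```python
# Distributed average consensus: each sensor repeatedly replaces its value
# with a weighted average of its own value and its neighbours' values.
# With symmetric weights the sum is preserved, so all nodes converge to the mean.

neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
x = {0: 10.0, 1: 2.0, 2: 6.0, 3: 4.0}                      # initial readings
target = sum(x.values()) / len(x)                           # network-wide mean

for _ in range(100):
    # Synchronous update: every node moves halfway toward its neighbourhood mean.
    x = {i: x[i] + 0.5 * sum(x[j] - x[i] for j in neighbours[i]) / len(neighbours[i])
         for i in x}

print(x, target)  # every node's value approaches the global average
```

The quantized variants studied in the thesis replace the exchanged values `x[j]` with quantized versions, which is where the graph topology starts to matter.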

Slash-and-burn agriculture is considered an important driver of deforestation in the tropics. In Madagascar, a biodiversity hotspot, this type of agriculture particularly threatens the tropical forest cover. On the south-western coast of the island, in Central Menabe, the dry tropical forest of Kirindy suffers from an annual deforestation rate of 2.6%. At such an alarming rate, the forest will have entirely disappeared by 2050. Past research programs and conservation actions focused mainly on studying the forest or the general interactions between local populations and the forest, with little attention to the slash-and-burn agricultural system itself. Hence, understanding the cultivation practices and assessing alternatives to them is crucial to improving current forest conservation initiatives. The present thesis answers the need for an in-depth study of the agricultural practices in Central Menabe. The project aims to (1) analyze the practices of slash-and-burn cultivation in Kirindy forest, (2) explore alternative slash-and-burn practices which may be more sustainable for the forest, and (3) determine the implementation perspectives of these alternatives. In a first stage, we characterized soil productivity along repeated slash-and-burn cultivation cycles and the farmers' perception of their agricultural system. Then, we experimented with a selective slash-and-burn agriculture (where trees are intentionally left within the cultivated fields), coupled with compost amendment. Finally, we assessed the implementation potential of that technique through interviews and participatory workshops with local farmers. Our findings highlighted that soil depletion due to plant uptake and to erosion or leaching by heavy rains, as well as weed invasion, were important problems of slash-and-burn cultivation in Central Menabe. Farmers perceive weeding as a work overload.
The decrease in grain yield due to slash-and-burn agriculture reaches about 40% after three years of cultivation on the same field (from 4 to 2.5 t ha⁻¹), and up to 75% after a single cyclonic rainy season (from 4 to 1 t ha⁻¹). Further, we demonstrated that combining compost and wood ashes (from the burnings) seems a promising solution to sustain cultivation on the fields. Compost combined with wood ashes multiplied corn yield by 5 compared to traditional slash-and-burn agriculture. Growth rates were accelerated and the phenological stages of the maize plants advanced. Furthermore, this combination also increased soil security by improving the remaining stock of nutrients after a rain simulation. On the other hand, a partial remaining tree cover (selective slash-and-burn agriculture) improved corn yields only when the soil was amended with ashes, not with compost, nor with the combination of both. Implementing composting in the forest-dwelling communities appears to be a challenging task. Several barriers were highlighted; the most important among them was the water constraint during the substantial period of time required for compost maintenance. Alternative slash-and-burn practices have not been much studied so far because the use of fire was always considered destructive. Thus, our results will contribute to the development of new forest management strategies that balance habitat conservation with the agricultural needs of rural communities, while minimizing the impact on primary forest.

Monocular depth estimation is the task of obtaining a measure of distance for each pixel using a single image. It is an important problem in computer vision and is usually solved using neural networks. Though recent works in this area have shown significant improvement in accuracy, the state-of-the-art methods tend to require massive amounts of memory and time to process an image. The main purpose of this work is to improve the performance of the latest solutions with no decrease in accuracy. To this end, we introduce the Double Refinement Network architecture. The proposed method achieves state-of-the-art results on the standard benchmark RGB-D dataset NYU Depth v2, while its frames per second rate is significantly higher (up to 18 times speedup per image at batch size 1) and the RAM usage is lower.