# Some combinatorial optimization problems in graphs with applications in telecommunications and tomography

Abstract

The common thread between the chapters of the present work is graph theory. We investigate some well-known graph theory problems, as well as some arising from more specific applications. In the first chapter, we deal with the maximum stable set problem and provide some new graph classes in which it can be solved in polynomial time. These classes are hereditary, i.e. characterized by a list of forbidden induced subgraphs. The algorithms proposed are purely combinatorial. The second chapter is devoted to a problem linked to security in mobile telecommunication networks. Its particularity is that there is no central authority guaranteeing security; it is instead managed by the users themselves. The network is modelled by an oriented graph whose vertices represent the users and whose arcs represent public-key certificates. The problem is to associate with each vertex a subgraph, with requirements on the size of the subgraphs, the number of times a vertex is taken in a subgraph, and the connectivity between any two users when they put their subgraphs together. Constructive heuristics are proposed, and bounds on the optimal solution and a tabu search are described and tested. The third chapter concerns the problem of reconstructing an image from its projections, given as the number of occurrences of each color in each row and each column. The case of two colors is known to be polynomially solvable, the problem with four or more colors is NP-complete, and the complexity status with three colors is open. An intermediate case between two and three colors is shown to be solvable in polynomial time. The last two chapters are about graph (vertex-)coloring. In the fourth, we prove a result which yields a large collection of NP-hard subcases, characterized by forbidden induced subgraphs. In the fifth chapter, we approach the problem with linear programming. Links between different formulations are pointed out, and some families of facets are characterized. In the last section, we study a branch-and-bound algorithm whose lower bounds are given by the optimal value of the linear relaxation of one of the exposed formulations. A preprocessing procedure is proposed and tested.
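
The two-color case mentioned above is the classical binary discrete tomography problem: reconstruct a 0/1 matrix from prescribed row and column sums. A minimal sketch of the classical greedy (Gale–Ryser style) construction, assuming color 1 is the counted color and 0 is background (the function name is illustrative, not from the thesis):

```python
def reconstruct_binary(row_sums, col_sums):
    """Build a 0/1 matrix with the given row and column sums, or return None."""
    n, m = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None  # total number of 1s must agree
    remaining = list(col_sums)          # residual demand of each column
    M = [[0] * m for _ in range(n)]
    # treat rows in decreasing order of row sum
    for i in sorted(range(n), key=lambda r: -row_sums[r]):
        # place this row's 1s in the columns with largest residual demand
        picked = sorted(range(m), key=lambda j: -remaining[j])[:row_sums[i]]
        if row_sums[i] > 0 and (len(picked) < row_sums[i]
                                or remaining[picked[-1]] == 0):
            return None  # not enough columns can still accept a 1
        for j in picked:
            M[i][j] = 1
            remaining[j] -= 1
    return M if not any(remaining) else None
```

For example, `reconstruct_binary([2, 1, 1], [2, 1, 1])` returns a matrix whose projections match, while infeasible data (e.g. a row sum exceeding what the columns can absorb) yields `None`.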


Related publications (118)


Optimization is a fundamental tool in modern science. Numerous important tasks in biology, economics, physics, and computer science can be cast as optimization problems. Consider the example of machine learning: recent advances have shown that even the most sophisticated tasks involving decision making can be reduced to solving certain optimization problems. These advances, however, bring several new challenges to the field of algorithm design. The first is related to the ever-growing size of the instances for which these optimization problems need to be solved. In practice, this forces the algorithms for these problems to run in time linear or nearly linear in their input size. The second challenge is related to the emergence of new, ever harder problems which need to be dealt with. These problems are in most cases considered computationally intractable because of complexity barriers such as NP-completeness, or because of non-convexity. Therefore, efficiently computable relaxations of these problems are typically desired.
The material of this thesis is divided into two parts. In the first part we attempt to address the first challenge. The recent tremendous progress in developing fast algorithms for such fundamental problems as maximum flow and linear programming demonstrates the power of continuous techniques and tools such as electrical flows, fast Laplacian solvers, and interior point methods. In this thesis we study new algorithms of this type, based on continuous dynamical systems inspired by the study of the slime mold Physarum polycephalum. We perform a rigorous mathematical analysis of these dynamical systems and extract from them new, fast algorithms for problems such as minimum cost flow, linear programming, and basis pursuit.
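
As a toy illustration of such Physarum-inspired dynamics (a hypothetical minimal example, not the algorithms analysed in the thesis), consider two parallel s–t edges of lengths l1 and l2: a unit current splits proportionally to the conductances x_e / l_e, and each conductivity is nudged toward the flow it carries. The mass then concentrates on the shorter edge, i.e. the dynamics solves this tiny shortest-path instance:

```python
def physarum_two_edges(l1, l2, steps=4000, h=0.05):
    """Toy Physarum dynamics on two parallel s-t edges of lengths l1, l2."""
    x1 = x2 = 1.0  # edge conductivities ("tube thicknesses"), start equal
    for _ in range(steps):
        # a unit s-t current splits proportionally to conductance x_e / l_e
        c1, c2 = x1 / l1, x2 / l2
        q1 = c1 / (c1 + c2)
        q2 = 1.0 - q1
        # Physarum update: thickness adapts toward the flow through the edge
        x1 += h * (q1 - x1)
        x2 += h * (q2 - x2)
    return x1, x2
```

With `l1 < l2`, the conductivity of the shorter edge tends to 1 and that of the longer edge to 0. On general graphs the flow step requires solving a Laplacian linear system, which is where fast Laplacian solvers enter the picture.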
In the second part of the thesis we develop new tools to approach the second challenge. Towards this, we study a very general form of discrete optimization problems and its extension to sampling and counting, capturing a host of important problems such as counting matchings in graphs, computing permanents of matrices or sampling from constrained determinantal point processes. We present a very general framework, based on polynomials, for dealing with these problems computationally. It is based, roughly, on encoding the problem structure in a multivariate polynomial and then recovering the solution by means of certain continuous relaxations. This leads to several questions on how to reason about such relaxations and how to compute them. We resolve them by relating certain analytic properties of the arising polynomials, such as the location of their roots or convexity, to the combinatorial structure of the underlying problem.
We believe that the ideas and mathematical techniques developed in this thesis are only a beginning and they will inspire more work on the use of dynamical systems and polynomials in the design of fast algorithms.

Approximation algorithms are a commonly used tool for designing efficient algorithmic solutions to intractable problems, at the expense of the quality of the output solution. A prominent technique for designing such algorithms is the use of Linear Programming (LP) relaxations. An optimal solution to such a relaxation provides a bound on the objective value of the optimal integral solution, against which we compare the integral solution we return. In this context, when studying a specific problem, two natural questions often arise: what is a strong LP relaxation for this problem, and how can we exploit it? Over the past few decades, the research community has expended significant effort to answer these questions for a variety of interesting intractable problems. Although there are multiple problems for which we have designed LP relaxations that achieve best-possible guarantees, there remain numerous problems for which we either have no strong LP relaxations, or do not know how to use them. The main focus of this thesis is extending our understanding of such strong relaxations. We focus on designing good approximation algorithms for certain allocation problems by employing a class of strong LP relaxations called configuration-LPs. For many such allocation problems, the best-known results are derived using simple and natural LP relaxations, whereas configuration-LPs have been used successfully on several occasions to break pre-existing barriers set by weaker relaxations. However, our understanding of configuration-LPs is far from complete for many problems. Understanding and using these relaxations to the fullest extent possible is therefore an intriguing question, whose answer could yield improved approximation algorithms for a wide variety of allocation problems. The first problem we address in this thesis is the restricted max-min fair allocation problem.
Prior to our work, the best known result was an $\Omega(1)$-approximation running in polynomial time. It was also known how to estimate the value of an optimal solution to the problem within a factor of $1/(4+c)$, for any $c>0$, by solving the corresponding configuration-LP. Our first contribution in this thesis is the design of a $1/13$-approximation algorithm for the problem using the configuration-LP. Although our algorithm is fully combinatorial, it consists of a local-search procedure that is guaranteed to succeed only when the configuration-LP is feasible, so the configuration-LP is crucial in establishing the correctness and running time of the algorithm. The second problem we study is the scheduling of jobs on unrelated machines in order to minimize the sum of weighted completion times. For this problem, the best known approximation algorithm achieves a ratio of $3/2-r$, for some small $r>0$. Our second contribution is the improvement of this ratio to $(1+\sqrt{2})/2+c$, for any $c>0$, in the special case of the problem where the jobs have uniform Smith ratios. To achieve this ratio, we design a randomized rounding algorithm that rounds solutions to the corresponding configuration-LP. Through a careful examination of the distributions this randomized algorithm outputs, we identify the worst-case distribution and upper bound the approximation ratio it exhibits by $(1+\sqrt{2})/2+c$.
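
For concreteness, the configuration-LP for max-min fair allocation can be written, in a standard formulation (notation ours, not quoted from the thesis), as the following feasibility system for a target value $T$, where $\mathcal{C}_i(T)$ is the family of resource sets acceptable to player $i$ with total value at least $T$:

```latex
\sum_{C \in \mathcal{C}_i(T)} y_{i,C} \;\ge\; 1
  \quad \text{for every player } i, \qquad
\sum_{i} \sum_{\substack{C \in \mathcal{C}_i(T) \\ j \in C}} y_{i,C} \;\le\; 1
  \quad \text{for every resource } j, \qquad
y_{i,C} \;\ge\; 0.
```

The LP has exponentially many variables, but feasibility for a given $T$ can be checked approximately via the dual, and the largest feasible $T$ is then found by binary search.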

Many of the currently best-known approximation algorithms for NP-hard optimization problems are based on Linear Programming (LP) and Semi-definite Programming (SDP) relaxations. Given its power, this class of algorithms seems to contain the most favourable candidates for outperforming the current state-of-the-art approximation guarantees for NP-hard problems, for which there still exists a gap between the inapproximability results and the approximation guarantees that we know how to achieve in polynomial time. In this thesis, we address both the power and the limitations of these relaxations, as well as the connection between the shortcomings of these relaxations and the inapproximability of the underlying problem. In the first part, we study the limitations of LP relaxations of well-known graph problems such as the Vertex Cover problem and the Independent Set problem. We prove that any small LP relaxation for the aforementioned problems cannot have an integrality gap strictly better than $2$ and $\omega(1)$, respectively. Furthermore, our lower bound for the Independent Set problem also holds for any SDP relaxation. Prior to our work, it was only known that such LP relaxations cannot have an integrality gap better than $1.5$ for the Vertex Cover problem, and better than $2$ for the Independent Set problem. In the second part, we study the so-called knapsack cover inequalities that are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield LP relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. We address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as those given by the knapsack cover inequalities.
In the last part, we show a close connection between structural hardness results for k-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. This connection is inspired by a family of integrality gap instances of a certain LP relaxation. Assuming the hardness of an optimization problem on k-partite graphs, we obtain a hardness of $2-\varepsilon$ for the problem of minimizing the makespan for scheduling with preemption on identical parallel machines, and a super-constant inapproximability for the problem of scheduling on related parallel machines. Prior to this result, it was only known that the first problem does not admit a PTAS, and that the second problem is NP-hard to approximate within a factor strictly better than 2, assuming the Unique Games Conjecture.
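
For reference, the knapsack cover inequalities mentioned above take the following standard form: given a covering constraint $\sum_j a_j x_j \ge D$ with $x \in \{0,1\}^n$ and a subset $A$ of items whose residual demand $D(A) = D - \sum_{j \in A} a_j$ is positive,

```latex
\sum_{j \notin A} \min\{a_j,\, D(A)\}\, x_j \;\ge\; D(A).
```

Truncating the coefficients at the residual demand is what makes these inequalities strong, and there is one inequality per subset $A$, which is the source of the exponential size addressed above.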