
Publication

# New Results in Integer and Lattice Programming

Abstract

An integer program (IP) is a problem of the form $\min \{f(x) : \, Ax = b, \ l \leq x \leq u, \ x \in \mathbb{Z}^n\}$, where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$, $l, u \in \mathbb{Z}^n$, and $f : \mathbb{Z}^n \rightarrow \mathbb{Z}$ is a separable convex objective function. The task of finding an optimal solution of an integer program is known as integer programming. Integer programming is NP-hard in general, though several algorithms exist: Lenstra gave an algorithm that runs in polynomial time if the dimension $n$ is fixed. For variable dimension, the best known algorithm depends linearly on $n$ and exponentially on the number of equalities as well as on the largest absolute value of an entry of the matrix $A$.
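
To make the definition concrete, here is a minimal brute-force sketch in Python (purely illustrative, and not one of the algorithms discussed in this thesis) that enumerates the integer points of the box $[l, u]$ and returns a feasible minimizer; all names are ours:

```python
import itertools
import numpy as np

def solve_ip_bruteforce(A, b, l, u, f):
    """Solve min{ f(x) : Ax = b, l <= x <= u, x integer } by enumeration.

    Exponential in the dimension n; only meant to illustrate the definition.
    """
    best_x, best_val = None, None
    # Enumerate every integer point of the box [l, u].
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in zip(l, u))):
        x = np.array(x)
        if np.array_equal(A @ x, b):  # feasibility: Ax = b
            val = f(x)
            if best_val is None or val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Tiny example: min x1^2 + x2^2  s.t.  x1 + x2 = 3,  0 <= x <= 3.
A = np.array([[1, 1]])
b = np.array([3])
x, val = solve_ip_bruteforce(A, b, [0, 0], [3, 3], lambda x: int(x @ x))
print(x, val)  # -> [1 2] (or [2 1]) with value 5
```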

The first part of this thesis considers integer programming in variable dimension for sparse matrices. We measure the sparsity of a matrix by the tree-depth of the dual graph of $A$. A typical example of such integer programs are $N$-fold IPs, which are used for scheduling and social choice problems. We obtain the currently fastest fixed-parameter tractable algorithm for the parameters tree-depth and the largest absolute value of the entries of $A$; the running time we achieve is near-linear in the dimension. With a slightly worse running time, we show that $N$-fold integer programs of constant block size can be solved in strongly polynomial time. Assuming the Exponential Time Hypothesis, we complement these results with a lower bound on the parameter dependency that almost matches the parameter dependency of our running time. As a consequence, we obtain the currently strongest lower bound for $N$-fold integer programs.
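
For orientation, a (uniform) $N$-fold IP is one whose constraint matrix has the following block shape, with small blocks $A_1 \in \mathbb{Z}^{r \times t}$ and $A_2 \in \mathbb{Z}^{s \times t}$ repeated $N$ times (this is the standard convention from the literature, not notation taken from the abstract itself):

$$
A = \begin{pmatrix}
A_1 & A_1 & \cdots & A_1 \\
A_2 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_2
\end{pmatrix}.
$$

The dual graph of such a matrix has tree-depth bounded in terms of the block dimensions alone, independently of $N$, which is exactly the kind of sparsity the parameterization exploits.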

Another problem closely related to integer programming is the closest vector problem. A lattice is a discrete additive subgroup of $\mathbb{R}^n$. The closest vector problem (CVP) asks for a lattice point closest to a given target vector. An important tool for solving CVP is the Voronoi cell $\mathcal{V}(\Lambda)$ of a lattice $\Lambda \subseteq \mathbb{R}^n$, the set of all points for which $0$ is a closest lattice point. It is a polytope whose facets are induced by a set of lattice vectors, the Voronoi relevant vectors. A generic lattice has exponentially many Voronoi relevant vectors, leading to exponential space requirements for certain CVP algorithms.
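
Spelled out, with $\|\cdot\|$ the Euclidean norm, the Voronoi cell is the intersection of finitely many halfspaces, one for each Voronoi relevant vector:

$$
\mathcal{V}(\Lambda) = \{ x \in \mathbb{R}^n : \|x\| \leq \|x - v\| \ \text{for all } v \in \Lambda \}
= \{ x \in \mathbb{R}^n : 2\langle x, v \rangle \leq \langle v, v \rangle \ \text{for all Voronoi relevant } v \in \Lambda \}.
$$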

In the second part of this thesis, we introduce the notion of a $c$-compact lattice basis $B \in \mathbb{R}^{n \times n}$, which makes it possible to represent the Voronoi relevant vectors with coefficients bounded by $c$ in absolute value. Such a basis allows us to reduce the space requirement of Micciancio and Voulgaris' algorithm for the closest vector problem from exponential to polynomial, while the running time becomes exponential in $c$. We show that every lattice has an $n^2$-compact basis, but that there are lattices for which one cannot choose $c \in o(n)$. If the Voronoi cell is a zonotope, one can choose $c = 1$, which yields a single-exponential time and polynomial space algorithm for CVP, assuming a $1$-compact basis is known.
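
In symbols (paraphrasing the definition in the associated publication below), a basis $B$ of $\Lambda$ is $c$-compact if every Voronoi relevant vector $v$ can be written as

$$
v = Bz \qquad \text{for some } z \in \mathbb{Z}^n \text{ with } \|z\|_\infty \leq c.
$$

For example, for $\Lambda = \mathbb{Z}^n$ the standard basis is $1$-compact: the Voronoi cell is the cube $[-\tfrac{1}{2}, \tfrac{1}{2}]^n$, and the Voronoi relevant vectors are $\pm e_1, \ldots, \pm e_n$.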

Deciding whether a given lattice has a structure that helps to solve the closest vector problem more efficiently is a recurring and non-trivial problem. The third part of this thesis is concerned with one specific such structure: having an orthonormal basis. We show that deciding whether a lattice has an orthonormal basis belongs to NP $\cap$ co-NP, and that it can be reduced to solving a single closest vector problem. We also show that if a separation oracle for the Voronoi cell is provided, CVP is solvable in polynomial time.


Related MOOCs (11)

Digital Signal Processing [retired]

The course provides a comprehensive overview of digital signal processing theory, covering discrete time, Fourier analysis, filter design, sampling, interpolation and quantization; it also includes a

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.

Related concepts (11)

Algorithm

An algorithm is a finite and unambiguous sequence of instructions and operations that solves a class of problems. The field that studies algorithms is called algorithmics. Algorithms appear today in numerous applications, such as the operation of computers, cryptography, the routing of information, planning and the optimal use of resources, text processing, and bioinformatics. An algorithm can be given graphical form as a flowchart (program organization chart).

Voronoi diagram

In mathematics, a Voronoi diagram is a tiling (partition) of the plane into cells (adjacent regions) determined by a discrete set of points called "seeds" ("germes"). Each cell encloses exactly one seed and consists of the points of the plane that are closer to that seed than to any other. The cell represents, in a sense, the seed's "zone of influence". The diagram is named after the Russian mathematician Georgy Voronoy (1868-1908). The partition is also called a Voronoi decomposition, a Voronoi partition, or a Dirichlet tessellation.
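
As a minimal, self-contained illustration of this definition (unrelated to the publication above), the following Python sketch labels grid points of the unit square by their nearest seed, which is exactly the Voronoi partition; for the cell geometry itself one would typically use scipy.spatial.Voronoi:

```python
import numpy as np

# Seeds ("germs") in the plane.
seeds = np.array([[0.2, 0.3], [0.7, 0.8], [0.9, 0.1]])

# Sample the unit square on a grid and label every point by the
# index of its nearest seed -- these labels are the Voronoi cells.
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
points = np.stack([xs.ravel(), ys.ravel()], axis=1)
dists = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2)
labels = dists.argmin(axis=1).reshape(xs.shape)

print(np.bincount(labels.ravel()))  # number of grid points per cell
```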

Dual graph

In graph theory, the dual graph of a graph embedded in a surface is defined via the connected components of its complement, which are linked to one another by the edges of the original graph. This notion generalizes duality for polyhedra. Note that the same abstract graph can have non-isomorphic dual graphs depending on the chosen embedding, even for embeddings in the plane. An (embedded) graph that is isomorphic to its dual is called self-dual.

Related publications (3)

Optimization is a fundamental tool in modern science. Numerous important tasks in biology, economics, physics and computer science can be cast as optimization problems. Consider the example of machine learning: recent advances have shown that even the most sophisticated tasks involving decision making can be reduced to solving certain optimization problems. These advances, however, bring several new challenges to the field of algorithm design. The first of them is the ever-growing size of the instances on which these optimization problems need to be solved. In practice, this forces the algorithms for these problems to run in time linear or nearly linear in their input size. The second challenge is the emergence of new, ever harder problems which need to be dealt with. These problems are in most cases considered computationally intractable because of complexity barriers such as NP-completeness, or because of non-convexity. Therefore, efficiently computable relaxations of these problems are typically desired.
The material of this thesis is divided into two parts. In the first part we attempt to address the first challenge. The recent tremendous progress in developing fast algorithms for such fundamental problems as maximum flow or linear programming demonstrates the power of continuous techniques and tools such as electrical flows, fast Laplacian solvers and interior point methods. In this thesis we study new algorithms of this type based on continuous dynamical systems inspired by the study of the slime mold Physarum polycephalum. We perform a rigorous mathematical analysis of these dynamical systems and extract from them new, fast algorithms for problems such as minimum cost flow, linear programming and basis pursuit.
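
As a flavor of this approach, the following is a minimal Euler discretization of the classic Physarum dynamics for the shortest $s$-$t$ path (the textbook formulation; the thesis analyzes such dynamics in much greater generality, and this sketch is ours): maintain a conductance per edge, route a unit electrical flow, and let each conductance relax toward the flow it carries.

```python
import numpy as np

# Small graph: the shortest 0 -> 3 path is 0-2-3 (length 3, vs. 4 via node 1).
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
lengths = np.array([2.0, 2.0, 1.0, 2.0])
n, s, t = 4, 0, 3

x = np.ones(len(edges))  # Physarum "tube diameters"
h = 0.1                  # step size of the Euler discretization
for _ in range(2000):
    cond = x / lengths   # edge conductances x_e / l_e
    # Graph Laplacian for the current conductances.
    L = np.zeros((n, n))
    for (u, v), c in zip(edges, cond):
        L[u, u] += c; L[v, v] += c
        L[u, v] -= c; L[v, u] -= c
    rhs = np.zeros(n); rhs[s], rhs[t] = 1.0, -1.0
    p = np.linalg.pinv(L) @ rhs  # node potentials of the unit electrical flow
    q = np.array([c * (p[u] - p[v]) for (u, v), c in zip(edges, cond)])
    x = (1 - h) * x + h * np.abs(q)  # discretized  dx/dt = |q| - x

print(np.round(x, 2))  # mass concentrates on the shortest path: ~[0 0 1 1]
```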
In the second part of the thesis we develop new tools to approach the second challenge. Towards this, we study a very general form of discrete optimization problems and its extension to sampling and counting, capturing a host of important problems such as counting matchings in graphs, computing permanents of matrices, or sampling from constrained determinantal point processes. We present a very general, polynomial-based framework for dealing with these problems computationally. Roughly, it encodes the problem structure in a multivariate polynomial and then recovers the solution by means of certain continuous relaxations. This raises several questions about how to reason about such relaxations and how to compute them. We resolve them by relating certain analytic properties of the arising polynomials, such as the location of their roots or convexity, to the combinatorial structure of the underlying problem.
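
To illustrate the encoding step in its simplest instance (a toy example of ours, not the relaxation machinery of the thesis): the permanent of a matrix $A$ is the coefficient of $z_1 \cdots z_n$ in the product of linear forms $\prod_i \sum_j a_{ij} z_j$, which sympy can read off directly.

```python
import functools
import operator
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
n = A.shape[0]
z = sp.symbols(f'z0:{n}')

# Encode the matrix as a product of linear forms, one per row:
#   p(z) = prod_i ( sum_j A[i, j] * z_j )
p = functools.reduce(
    operator.mul,
    (sum(A[i, j] * z[j] for j in range(n)) for i in range(n)),
)

# per(A) is the coefficient of the multilinear monomial z0 * z1 * ... * z_{n-1}.
monomial = functools.reduce(operator.mul, z)
print(sp.Poly(sp.expand(p), *z).coeff_monomial(monomial))  # 10 = 1*4 + 2*3
```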
We believe that the ideas and mathematical techniques developed in this thesis are only a beginning, and that they will inspire more work on the use of dynamical systems and polynomials in the design of fast algorithms.

Matthias Schymura, Christoph Hunkenschröder

In a seminal work, Micciancio & Voulgaris (2010) described a deterministic single-exponential time algorithm for the Closest Vector Problem (CVP) on lattices. It is based on the computation of the Voronoi cell of the given lattice and thus may need exponential space as well. We address the major open question of whether there exists such an algorithm that requires only polynomial space. To this end, we define a lattice basis to be c-compact if every facet normal of the Voronoi cell is a linear combination of the basis vectors using coefficients that are bounded by c in absolute value. Given such a basis, we obtain a polynomial space algorithm for CVP whose running time naturally depends on c. Thus, our main focus is the behavior of the smallest possible value of c, with the following results: there always exist c-compact bases with c bounded by n^2 for an n-dimensional lattice; there are lattices that do not admit a c-compact basis for any c sublinear in the dimension; and every lattice with a zonotopal Voronoi cell has a 1-compact basis.

Many of the currently best-known approximation algorithms for NP-hard optimization problems are based on Linear Programming (LP) and Semidefinite Programming (SDP) relaxations. Given its power, this class of algorithms seems to contain the most favourable candidates for outperforming the current state-of-the-art approximation guarantees for NP-hard problems for which there still exists a gap between the inapproximability results and the approximation guarantees that we know how to achieve in polynomial time. In this thesis, we address both the power and the limitations of these relaxations, as well as the connection between the shortcomings of these relaxations and the inapproximability of the underlying problem.

In the first part, we study the limitations of LP relaxations of well-known graph problems such as the Vertex Cover problem and the Independent Set problem. We prove that any small LP relaxation for these problems cannot have an integrality gap strictly better than $2$ and $\omega(1)$, respectively. Furthermore, our lower bound for the Independent Set problem also holds for any SDP relaxation. Prior to our work, it was only known that such LP relaxations cannot have an integrality gap better than $1.5$ for the Vertex Cover problem, and better than $2$ for the Independent Set problem.

In the second part, we study the so-called knapsack cover inequalities that are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield LP relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. We address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as those given by the knapsack cover inequalities.

In the last part, we show a close connection between structural hardness for k-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. This connection is inspired by a family of integrality gap instances of a certain LP relaxation. Assuming the hardness of an optimization problem on k-partite graphs, we obtain a hardness of $2-\varepsilon$ for the problem of minimizing the makespan for scheduling with preemption on identical parallel machines, and super-constant inapproximability for the problem of scheduling on related parallel machines. Prior to this result, it was only known that the first problem does not admit a PTAS, and that the second problem is NP-hard to approximate within a factor strictly better than $2$, assuming the Unique Games Conjecture.
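
For context, the knapsack cover inequalities mentioned above take the following standard form (the usual textbook statement, included here for reference): for a covering constraint $\sum_{i=1}^{n} a_i x_i \geq D$ with $x \in \{0,1\}^n$, every subset $S \subseteq \{1, \dots, n\}$ with residual demand $D(S) = D - \sum_{i \in S} a_i > 0$ yields the valid inequality

$$
\sum_{i \notin S} \min\{a_i,\, D(S)\}\, x_i \;\geq\; D(S).
$$

Since there is one inequality per subset $S$, the resulting relaxation has exponential size, which is precisely the issue the quasi-polynomial-size relaxations above address.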