
Publication

New Results in the Theory of Linear and Integer Programming

Abstract

The polynomial Hirsch conjecture states that the vertex-edge diameter of a d-dimensional polyhedron with n facets is bounded by a polynomial in d and n. For the special case where the polyhedron is defined as the set of points satisfying a system Ax ≤ b of linear inequalities, where A is an integer matrix with subdeterminants bounded by Δ, we prove an upper bound of O(Δ²d⁴ log dΔ) on the diameter. This special case includes the case of totally unimodular matrices, which are common in fundamental problems of combinatorial optimization such as network flows. Our bound can be improved slightly when the polyhedron is known to be bounded. We obtain this result by assigning a volume to every vertex of the polyhedron via its normal fan and proving a volume expansion property using isoperimetric inequalities.

We show a (1 + ε)-approximation algorithm for the closest lattice vector problem in the ℓ∞-norm with a running time bounded by (2 log(1/ε))^O(d) · b^O(1), where d is the dimension and b the binary encoding length of the problem instance. This improves upon the (2 + 1/ε)^O(d) · b^O(1) running time of the previously fastest known algorithm. Our approach is based on a geometric covering problem: find a family of parallelepipeds that cover the cube [−1 + ε, 1 − ε]ᵈ, while at the same time every parallelepiped remains within the cube [−1, 1]ᵈ even when scaled around its centroid by a factor of 2. We then reduce the original problem to the problem of solving an approximate integer program for each of the parallelepipeds, which can be done using existing approximation algorithms for the closest vector problem. We also discuss some variants of the geometric covering problem.

We investigate two questions related to parameterized families of integer programs. First, we relate the additive integrality gap of a family of integer programs min{cᵀx : Ax = b, x ≥ 0, x ∈ Zⁿ}, where b ∈ Zᵈ is a parameter, to a generalization of the Hilbert basis of a convex cone. We show that under certain conditions, such as when d is fixed and c = 1 is the all-ones vector, we can decide in polynomial time whether the additive integrality gap is bounded by a constant γ for all choices of the parameter. This is achieved by partitioning the space of parameters using a hyperplane arrangement and reducing the problem to one integer program for every cell of the arrangement. This decision procedure also yields an alternative method for deciding whether a set of generating vectors of a cone is a Hilbert basis. On the other hand, the general problem of deciding whether the additive integrality gap of such a family of integer programs is bounded is NP-hard even when d is fixed. We show this by a reduction from the coin problem, also known as the Frobenius problem.

Second, consider a family of integer programs Ax ≤ b, x ∈ Zᵈ, where b ∈ B is a parameter. Parametric integer programming is the problem of deciding whether all integer programs in the family are feasible. We investigate the related problem of finding a parameter b ∈ B that minimizes the number of integer solutions x ∈ Zᵈ of Ax ≤ b. We show that, unlike parametric integer programming, this problem is already NP-hard when d = 2 and B ≅ R. In fact, the following special case is NP-hard: given a convex polygon P and a direction v ∈ Z², find t ∈ R that minimizes the number of integer points in P + tv. We show this by a reduction from simultaneous Diophantine approximation. On the positive side, we give a polynomial-time approximation scheme for this particular problem.
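The last problem above, minimizing the number of integer points in the translated polygon P + tv, can be made concrete with a small brute-force counter. This is only an illustrative sketch (the function names are invented here, and it is not the approximation scheme from the thesis); it uses `fractions.Fraction` so that rational shifts t are handled exactly.

```python
import math
from fractions import Fraction

def contains(poly, p):
    """True if point p lies in the convex polygon poly (vertices in CCW order)."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # Cross product of edge (a -> b) with (a -> p); negative means p is
        # strictly to the right of the edge, i.e. outside the polygon.
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0:
            return False
    return True

def lattice_points_in_translate(poly, v, t):
    """Count integer points in P + t*v by scanning the bounding box."""
    shifted = [(x + t * v[0], y + t * v[1]) for x, y in poly]
    xs = [p[0] for p in shifted]
    ys = [p[1] for p in shifted]
    return sum(
        1
        for x in range(math.ceil(min(xs)), math.floor(max(xs)) + 1)
        for y in range(math.ceil(min(ys)), math.floor(max(ys)) + 1)
        if contains(shifted, (x, y))
    )
```

For the triangle with vertices (0, 0), (3, 0), (0, 3) and v = (1, 0), the count is 10 at t = 0 but drops to 6 at t = 1/2, showing how sensitive the count is to the shift; the hardness result above says that finding the minimizing t is NP-hard in general.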


This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related publications (1)

Related MOOCs (1)

Related concepts (6)

Linear and Discrete Optimization

This advanced undergraduate course covers basic principles of linear programming, such as the simplex algorithm, its complexity, and duality. Furthermore, it gives an introduction to discrete optimization.

Polyhedron

In geometry, a polyhedron (plural: polyhedra or polyhedrons) is a three-dimensional shape with flat polygonal faces, straight edges and sharp corners or vertices. A convex polyhedron is a polyhedron that bounds a convex set. Every convex polyhedron can be constructed as the convex hull of its vertices, and for every finite set of points, not all on the same plane, the convex hull is a convex polyhedron. Cubes and pyramids are examples of convex polyhedra. A polyhedron is a 3-dimensional example of a polytope, a more general concept in any number of dimensions.
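In two dimensions (the polygon analogue of the convex-hull statement above), the hull of a finite point set can be computed with Andrew's monotone chain algorithm. A minimal sketch, assuming integer or float coordinates:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a strict left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop each chain's last point; it repeats the other chain's first point.
    return lower[:-1] + upper[:-1]
```

For example, the hull of a square's corners plus an interior point discards the interior point and returns the four corners.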

Algorithm

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation.

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints.
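Since a finite LP optimum is attained at a vertex of the feasible polyhedron, a tiny two-variable instance can be solved by enumerating the intersections of pairs of constraint boundaries and keeping the best feasible one. This is a minimal sketch for illustration only (real solvers such as the simplex method are far more efficient); the function name is invented:

```python
from itertools import combinations

def solve_2d_lp(c, A, b):
    """Maximize c.x subject to A x <= b over x in R^2, by enumerating
    candidate vertices (intersections of pairs of constraint boundaries)."""
    best, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue  # parallel boundaries: no unique intersection point
        # Cramer's rule for the 2x2 system a1.x = b1, a2.x = b2.
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        # Keep the candidate only if it satisfies every constraint.
        if all(ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x
```

For instance, maximizing 3x + 2y subject to x + y ≤ 4, x ≤ 2, x ≥ 0, y ≥ 0 (the sign constraints written as −x ≤ 0 and −y ≤ 0) yields the optimum 10 at the vertex (2, 2).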

Adam Teodor Polak, Lars Rohwedder

Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently there has been a growing interest in understanding the best possible pseudopolynomial running times for