Several fundamental problems that arise in optimization and computer science can be cast as follows: given vectors $v_1, \ldots, v_m \in \mathbb{R}^d$ and a constraint family $\mathcal{B} \subseteq 2^{[m]}$, find a set $S \in \mathcal{B}$ that maximizes the squared volume of the simplex spanned by the vectors in $S$. A motivating example is the ubiquitous data-summarization problem in machine learning and information retrieval, where one is given a collection of feature vectors that represent data such as documents or images. The volume of a collection of vectors is used as a measure of their diversity, and partition or matroid constraints over $[m]$ are imposed in order to ensure resource or fairness constraints. Even with a simple cardinality constraint ($\mathcal{B} = \binom{[m]}{r}$), the problem becomes NP-hard and has received much attention, starting with a result by Khachiyan [1], who gave an $r^{O(r)}$-approximation algorithm. Recently, Nikolov and Singh [2] presented a convex program and showed how it can be used to estimate the value of the most diverse set when there are multiple cardinality constraints (i.e., when $\mathcal{B}$ corresponds to a partition matroid). Their proof of the integrality gap of the convex program relied on an inequality by Gurvits [3] and was recently extended to regular matroids [4], [5]. The question of whether these estimation algorithms can be converted into the more useful approximation algorithms, which also output a set, remained open. The main contribution of this paper is to give the first approximation algorithms for both partition and regular matroids. We present novel formulations of the subdeterminant maximization problem for these matroids; this reduces them to the problem of finding a point that maximizes the absolute value of a nonconvex function over a Cartesian product of probability simplices. The technical core of our results is a new anti-concentration inequality for the dependent random variables that arise from these functions, which allows us to relate the optimal value of these nonconvex functions to their value at a random point. Unlike prior work on the constrained subdeterminant maximization problem, our proofs do not rely on real stability or convexity and could be of independent interest both in algorithms and complexity, where anti-concentration phenomena have recently been deployed.
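To make the objective concrete, the following is a minimal Python sketch, not the paper's algorithm: it evaluates $\det(V_S^\top V_S)$, which is proportional to the squared volume of the simplex spanned by the vectors indexed by $S$, and, purely for illustration, solves a tiny cardinality-constrained instance by exhaustive search. The function names and the random instance are illustrative assumptions; the brute-force step is only there to show what an optimal set looks like on a small example, since the problem is NP-hard in general.

```python
import itertools
import numpy as np

def squared_volume(V, S):
    """det(V_S^T V_S): determinant of the Gram matrix of the columns of V indexed by S,
    proportional to the squared volume of the simplex spanned by those vectors."""
    Vs = V[:, list(S)]
    return np.linalg.det(Vs.T @ Vs)

def brute_force_opt(V, r):
    """Exhaustive search over all r-subsets (the cardinality constraint B = binom([m], r)).
    Only feasible for tiny m; shown to make the objective concrete, not as a practical method."""
    m = V.shape[1]
    return max(itertools.combinations(range(m), r),
               key=lambda S: squared_volume(V, S))

# Illustrative instance (assumed, not from the paper): d = 4 dimensions, m = 8 vectors, r = 3.
rng = np.random.default_rng(0)
V = rng.standard_normal((4, 8))
best = brute_force_opt(V, 3)
print(best, squared_volume(V, best))
```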