# Combinatorial optimization

Summary

Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem (TSP), the minimum spanning tree problem (MST), and the knapsack problem. In many such problems, including those just mentioned, exhaustive search is intractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
Combinatorial optimization is related to operations research, algorithm theory, and computational complexity theory. It has important applications in several fields, including artificial intelligence, machine learning, auction theory, software engineering, VLSI, applied mathematics and theoretical computer science.
Some research literature considers discrete optimization to consist of integer programming together with combinatorial optimization.
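The gap between exhaustive search and a specialized algorithm is easy to see on the knapsack problem: trying every subset takes O(2^n) time, while a dynamic program over capacities runs in pseudo-polynomial O(n · capacity) time. A minimal sketch (the item values and weights are illustrative):

```python
from itertools import combinations

def knapsack_bruteforce(values, weights, capacity):
    """Exhaustive search: try every subset of items -- O(2^n)."""
    n = len(values)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

def knapsack_dp(values, weights, capacity):
    """Dynamic programming over capacities -- O(n * capacity),
    a specialized algorithm that never enumerates subsets."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reversed: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(knapsack_bruteforce(values, weights, capacity))  # 220
print(knapsack_dp(values, weights, capacity))          # 220
```

Both solvers agree on small instances, but only the dynamic program remains usable as the number of items grows.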



Related concepts (55)

Travelling salesman problem

The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?"
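The factorial growth of the tour space is visible in a brute-force solver, which fixes the starting city and tries all (n-1)! orderings of the rest. A sketch (the distance matrix is a made-up symmetric example):

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """Try every tour starting and ending at city 0 -- O((n-1)!) tours."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# symmetric distance matrix for 4 cities (illustrative numbers)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_bruteforce(dist))  # (80, (0, 1, 3, 2, 0))
```

Already at 15 cities this enumerates over 87 billion tours, which is why practical TSP solvers rely on branch-and-bound, cutting planes, or heuristics instead.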

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.

Related courses (31)

ME-454: Modelling and optimization of energy systems

The goal of the lecture is to present and apply techniques for the modelling and the thermo-economic optimisation of industrial processes and energy systems. The lecture covers the problem statement and the solving methods for simulation and for single- and multi-objective optimisation problems.

MGT-418: Convex optimization

This course introduces the theory and application of modern convex optimization from an engineering perspective.

MATH-265: Introduction to optimization and operations research

Introduction to major operations research models and optimization algorithms

Related publications (100)

Submodular functions are a widely studied topic in theoretical computer science. They have found several applications, both theoretical and practical, in the fields of economics, combinatorial optimization and machine learning. More recently, there have also been numerous works that study combinatorial problems with submodular objective functions, motivated by their natural diminishing-returns property, which is useful in real-world applications. The thesis at hand is concerned with the study of streaming and matching problems with submodular functions.

Firstly, motivated by the goal of developing robust algorithms, we propose a new adversarial-injections model, in which the input is ordered randomly but an adversary may inject misleading elements at arbitrary positions. We study the maximum matching problem and cardinality-constrained monotone submodular maximization. We show that even under this seemingly powerful adversary, it is possible to break the barrier of 1/2 for both of these problems in the streaming setting. Our main result is a novel streaming algorithm that computes a 0.55-approximation for cardinality-constrained monotone submodular maximization.

In the second part of the thesis, we study the problem of matroid intersection in the semi-streaming setting. Our main result is a (2 + ε)-approximate semi-streaming algorithm for weighted matroid intersection, improving upon the previous best guarantee of 4 + ε. While our algorithm is based on the local-ratio technique, its analysis differs from that of the related problem of weighted maximum matching and uses the concept of matroid kernels. We are also able to generalize our results to submodular functions by adapting ideas from a recent result by Levin and Wajc (SODA'21) on submodular maximization subject to matching constraints.

Finally, we study the submodular Santa Claus problem in the restricted assignment case. The submodular Santa Claus problem was introduced in a seminal work by Goemans, Harvey, Iwata, and Mirrokni (SODA'09) as an application of their structural result. In this problem, n unsplittable resources have to be assigned to m players, each with a monotone submodular utility function f_i. The goal is to maximize min_i f_i(S_i), where S_1, …, S_m is a partition of the resources. The result by Goemans et al. implies a polynomial-time O(n^(1/2+ε))-approximation algorithm. In the restricted assignment case, each player is given a set of desired resources G_i, and the individual valuation functions are defined as f_i(S) = f(S ∩ G_i). Our main result is an O(log log n)-approximation algorithm for the problem. Our proof is inspired by the approach of Bansal and Sviridenko (STOC'06) to the Santa Claus problem. Compared to the more basic linear setting, the introduction of submodularity requires a much more involved analysis and several new ideas.
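The streaming guarantees above are measured against the classic offline baseline for cardinality-constrained monotone submodular maximization: the greedy algorithm of Nemhauser, Wolsey and Fisher (1978), which repeatedly picks the element of largest marginal gain and achieves a (1 − 1/e)-approximation. A minimal sketch for a coverage objective f(S) = |union of chosen sets| (the example sets are illustrative):

```python
def greedy_max_coverage(sets, k):
    """Greedy for monotone submodular maximization under a cardinality
    constraint, here with a coverage objective f(S) = |union of S|.
    Each step adds the set with the largest marginal gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
print(chosen, len(covered))  # [2, 0] 7
```

The diminishing-returns property of coverage is exactly submodularity: adding a set to a larger collection can only cover fewer new elements, which is what makes the greedy analysis work.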

We extend the traditional spectral invariants (spectrum and angles) by a stronger polynomial time computable graph invariant based on the angles between projections of standard basis vectors into the eigenspaces (in addition to the usual angles between standard basis vectors and eigenspaces). The exact power of the new invariant is still an open problem. We also define combinatorial invariants based on standard graph isomorphism heuristics and compare their strengths with the spectral invariants. In particular, we show that a simple edge coloring invariant is at least as powerful as all these spectral invariants. (C) 2009 Elsevier Inc. All rights reserved.
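Spectral invariants of the kind discussed here are computable exactly in polynomial time. As a baseline illustration (not the paper's extended invariant), the characteristic polynomial of the adjacency matrix, which determines the eigenvalue multiset, can be obtained with the Faddeev–LeVerrier recurrence; the two example graphs below are illustrative, and since cospectral non-isomorphic graphs exist, differing spectra certify non-isomorphism while equal spectra prove nothing:

```python
from fractions import Fraction

def charpoly(adj):
    """Coefficients [1, c1, ..., cn] of det(lambda*I - A) for the
    adjacency matrix A, via the Faddeev-LeVerrier recurrence:
      M <- A*M;  c_k = -trace(M)/k;  M <- M + c_k*I.
    Exact integer arithmetic via Fraction."""
    n = len(adj)
    A = [[Fraction(x) for x in row] for row in adj]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
        c = -sum(M[i][i] for i in range(n)) / k
        for i in range(n):
            M[i][i] += c
        coeffs.append(c)
    return [int(c) for c in coeffs]  # integer for integer matrices

# Path P4 vs. star K_{1,3}: lambda^4 - 3*lambda^2 + 1 vs. lambda^4 - 3*lambda^2,
# so the spectrum alone already distinguishes these two graphs.
p4   = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
print(charpoly(p4))    # [1, 0, -3, 0, 1]
print(charpoly(star))  # [1, 0, -3, 0, 0]
```

Stronger invariants like the one in the paper refine this idea by also recording how standard basis vectors project into the eigenspaces.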

2010

Related lectures (59)