The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
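To make the problem concrete, here is a minimal brute-force sketch in Python; the 4-city distance matrix is hypothetical and chosen only for illustration. It enumerates every tour starting from a fixed origin, which takes (n-1)! steps and is therefore only feasible for very small instances:

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for 4 cities (0..3);
# dist[i][j] is the travel cost between city i and city j.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(tour):
    """Total length of a closed tour that returns to its starting city."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Fix city 0 as the origin and try every ordering of the remaining cities.
best = min(((0,) + p for p in permutations(range(1, len(dist)))),
           key=tour_length)
print(best, tour_length(best))  # e.g. (0, 1, 3, 2) with length 23
```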
The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP.
In the theory of computational complexity, the decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities.
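The asymmetry between finding and checking a tour is easy to see in code. The sketch below (an illustrative verifier, reusing the hypothetical dist matrix above) checks a proposed tour against a bound L in polynomial time, which is exactly what membership in NP requires:

```python
def is_yes_certificate(dist, tour, L):
    """Verify in polynomial time that `tour` visits every city exactly
    once and that its total closed length is at most L."""
    n = len(dist)
    if sorted(tour) != list(range(n)):  # each city exactly once
        return False
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= L
```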
The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.
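One of the simplest such heuristics is nearest neighbour: from each city, move to the closest city not yet visited. A minimal sketch (again reusing the hypothetical dist matrix); it runs in O(n^2) time but carries no general guarantee on tour quality:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy heuristic: repeatedly move to the closest unvisited city.
    Fast, but the resulting tour can be far from the optimum."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[here][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

In practice such a construction heuristic is typically followed by a local-improvement step (for example 2-opt), which is how solvers reach the near-optimal tours mentioned above.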
The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a "city" represents, for example, customers, soldering points, or DNA fragments, and the concept of "distance" represents travelling times or costs, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed.
Learn the concepts, tools and APIs that are needed to debug, test, optimize and parallelize a scientific application on a cluster, from an existing code or from scratch. Both OpenMP (shared memory) and MPI (distributed memory) ...
This course introduces the theory and applications of optimization. We develop tools and concepts of optimization and decision analysis that enable managers in manufacturing, service operations, marketing ...
We quantitatively address the management of logistics operations, focusing notably on their environmental impact. Considering practical situations, attention is paid to the optimization of logistics systems ...
Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including the ones just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
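The knapsack problem illustrates how a specialized algorithm can avoid exhaustive search over all 2^n subsets: a standard dynamic program solves it in O(nW) time for integer capacity W. A minimal sketch with made-up item values and weights:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming: best[w] is the maximum
    value achievable with total weight at most w."""
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate weights downwards so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```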
In computational complexity theory, a problem is NP-complete when: (1) it is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no"; (2) when the answer is "yes", this can be demonstrated through the existence of a short (polynomial-length) certificate; (3) the correctness of each certificate can be verified quickly (namely, in polynomial time), and a brute-force search algorithm can find a solution by trying all possible candidates; and (4) every other problem whose solutions can be verified this quickly can be reduced to it in polynomial time, which is what makes the problem "complete" for NP.
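The contrast between quick verification and brute-force search can be made concrete with subset sum, another classic NP-complete problem; this is an illustrative sketch, not tied to the TSP instance above. The verifier runs in polynomial time, while the search may try all 2^n subsets:

```python
from itertools import combinations

def verify(numbers, subset, target):
    """Polynomial-time certificate check: is `subset` drawn from
    `numbers` (respecting multiplicity) and does it sum to `target`?"""
    remaining = list(numbers)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

def brute_force(numbers, target):
    """Exponential-time search: try every subset until one verifies."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

print(brute_force([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```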
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
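Counting elementary operations directly makes this concrete. The sketch below (the counter and names are purely illustrative) counts the comparisons performed by a linear search; the count is at most n for an input of size n, so under the fixed-cost assumption the running time is O(n):

```python
def linear_search(items, target):
    """Return (index, comparison_count); index is -1 if absent.
    At most len(items) comparisons are made, hence O(n) time."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1  # one elementary comparison per loop step
        if x == target:
            return i, comparisons
    return -1, comparisons

print(linear_search(list(range(1000)), 999))  # (999, 1000)
```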
In a first part, we will study very concretely how to solve a problem by means of an algorithm, which will then bring us to one of the great questions of ...
Gruber et al. (2022) offered a framework for explaining "physical time within human time", solving the 'two times problem'. Here, I ask whether such a problem exists at all. To question the question, I will appeal to neurobiological, evolutionary, and ...
In light of the challenges posed by climate change and the goals of the Paris Agreement, electricity generation is shifting to a more renewable and decentralized pattern, while the operation of systems like buildings is increasingly electrified. This calls ...
In this paper, we propose a reduced-order modeling strategy for two-way Dirichlet-Neumann parametric coupled problems solved with domain-decomposition (DD) sub-structuring methods. We split the original coupled differential problem into two sub-problems with ...