In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over each particle's position and velocity. Each particle's movement is influenced by its own best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
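In the common textbook formulation (standard notation, not quoted from this page), particle i carries a position x_i, a velocity v_i, and a personal best p_i, and the swarm tracks a global best g; each step then applies

    v_i \leftarrow \omega\, v_i + \varphi_p r_p (p_i - x_i) + \varphi_g r_g (g - x_i), \qquad x_i \leftarrow x_i + v_i

where \omega is the inertia weight, \varphi_p and \varphi_g are acceleration coefficients, and r_p, r_g are drawn uniformly from [0, 1] at every update.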
PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was later simplified, and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and of swarm intelligence. An extensive survey of PSO applications was made by Poli. More recently, a comprehensive review of theoretical and experimental work on PSO was published by Bonyadi and Michalewicz.
PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found.
A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae.
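A minimal Python sketch of such a basic variant, using the update rule shown above (illustrative only: the function name pso, the parameter defaults, and the sphere-function example are our assumptions, not taken from this page):

    import random

    def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, phi_p=1.5, phi_g=1.5):
        """Minimize f over the box [lo, hi]^dim with a basic particle swarm."""
        lo, hi = bounds
        # Start particles at random positions with zero velocity.
        xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]            # each particle's best known position
        pbest_val = [f(x) for x in xs]
        g_val = min(pbest_val)                # swarm's best known value and position
        g = pbest[pbest_val.index(g_val)][:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    rp, rg = random.random(), random.random()
                    # Inertia plus pulls toward the personal and global bests.
                    vs[i][d] = (w * vs[i][d]
                                + phi_p * rp * (pbest[i][d] - xs[i][d])
                                + phi_g * rg * (g[d] - xs[i][d]))
                    xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
                val = f(xs[i])
                if val < pbest_val[i]:        # improve the particle's own best
                    pbest[i], pbest_val[i] = xs[i][:], val
                    if val < g_val:           # and possibly the swarm's best
                        g, g_val = xs[i][:], val
        return g, g_val

    # Example: minimize the 5-dimensional sphere function.
    best, best_val = pso(lambda x: sum(t * t for t in x), dim=5, bounds=(-5.0, 5.0))

Keeping each particle's personal best alongside a single shared global best is what lets purely local moves propagate information about good regions through the whole swarm.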
Distributed intelligence, also called swarm intelligence, refers to the appearance of coherent phenomena at the scale of a population whose individuals act according to simple rules. The interaction or synergy between simple individual actions can, in various ways, allow complex or coherent collective forms, organizations, or behaviours to emerge, while the individuals themselves behave, at their own scale, independently of any global rule.
A metaheuristic is an optimization algorithm aimed at solving hard optimization problems (often arising from operations research, engineering, or artificial intelligence) for which no more efficient classical method is known. Metaheuristics are generally iterative stochastic algorithms that progress toward a global optimum (that is, the global extremum of a function) by sampling an objective function.
Ant colony optimization (ACO) algorithms are algorithms inspired by the behaviour of ants, or of other species forming a superorganism, and constitute a family of optimization metaheuristics. Initially proposed by Marco Dorigo in the 1990s for finding optimal paths in a graph, the first algorithm was inspired by the behaviour of ants searching for a path between their colony and a food source.
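To make that mechanism concrete, here is a hedged sketch of the two core ACO steps, probabilistic edge choice and pheromone update (the names choose_next and update_pheromone are ours; alpha, beta, and rho follow common Ant System conventions, and tau/eta are assumed to be dictionaries of pheromone and heuristic values keyed by edge):

    import random

    def choose_next(node, neighbors, tau, eta, alpha=1.0, beta=2.0):
        """Pick the next node with probability proportional to tau^alpha * eta^beta."""
        weights = [tau[(node, n)] ** alpha * eta[(node, n)] ** beta for n in neighbors]
        r, acc = random.random() * sum(weights), 0.0
        for n, w in zip(neighbors, weights):
            acc += w
            if r <= acc:
                return n
        return neighbors[-1]

    def update_pheromone(tau, tours, rho=0.5, q=1.0):
        """Evaporate every trail, then reinforce the edges of each ant's tour."""
        for edge in tau:
            tau[edge] *= 1.0 - rho            # evaporation forgets old trails
        for tour, length in tours:            # tours: list of (node list, tour length)
            for edge in zip(tour, tour[1:]):  # consecutive nodes form edges
                tau[edge] += q / length       # shorter tours deposit more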
The goal of the lecture is to present and apply techniques for the modelling and the thermo-economic optimisation of industrial process and energy systems. The lecture covers the problem statement ...
This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees ...
Covers constrained optimization, focusing on the Karush-Kuhn-Tucker (KKT) conditions.
Explores the detailed modelling of ion channels and neuronal morphologies in in-silico neuroscience, covering neuron classification, ion-channel kinetics, and experimental observations.
Covers the KKT conditions for constrained optimization, essential for solving optimization problems efficiently.
Electronic devices play an irreplaceable role in our lives. With the tightening time to market, exploding demand for computing power, and continuous desire for smaller, faster, less energy-consuming, and lower-cost chips, computer-aided design for electron ...
This doctoral thesis navigates the complex landscape of motion coordination and formation control within teams of rotary-wing Micro Aerial Vehicles (MAVs). Prompted by the intricate demands of real-world applications such as search and rescue or surveillan ...
We study the problem of performance optimization of closed-loop control systems with unmodeled dynamics. Bayesian optimization (BO) has been demonstrated to be effective for improving closed-loop performance by automatically tuning controller gains or re ...