We consider the distributed solution of multi-objective optimization problems by a network of cooperating and learning agents. The problem is equivalent to optimizing a global cost that is the sum of individual components. Since the minimizers of the individual components do not necessarily coincide, the network needs to seek Pareto-optimal solutions. We develop a distributed solution that relies on a general class of adaptive diffusion strategies. We show how the diffusion process can be represented as the cascade composition of three operators: two combination operators and a gradient-descent operator. Using the Banach fixed-point theorem, we establish the existence of a unique fixed point for the composite cascade. We then study how closely each agent converges to this fixed point, and also examine how close the fixed point is to the Pareto-optimal solution. We perform a detailed mean-square-error (MSE) analysis and establish that all agents converge to the same Pareto-optimal solution within a sufficiently small MSE bound, even with constant step-sizes. We illustrate one application of the theory to collaborative decision-making in finance by a network of agents.
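To make the cascade structure concrete, below is a minimal numerical sketch in Python of a diffusion iteration of this form: a first combination step, a gradient-descent step on each agent's individual cost, and a second combination step. The quadratic costs, the ring topology, and the combination weights are illustrative assumptions, not the paper's setting; with equal quadratic costs the minimizer of the sum cost is simply the mean of the individual minimizers, so the agents' iterates should cluster near it.

```python
import numpy as np

# Minimal sketch of a general diffusion strategy: two combination operators
# (A1, A2) sandwiching a gradient-descent (adaptation) step. The quadratic
# costs J_k(w) = ||w - w_k*||^2 with distinct minimizers w_k* are an
# illustrative assumption, not the paper's finance application.

rng = np.random.default_rng(0)
N, M = 10, 2                        # number of agents, dimension of w
mu = 0.05                           # constant step-size
targets = rng.normal(size=(N, M))   # distinct minimizers w_k* of each J_k

def grad(k, w):
    """Gradient of the assumed quadratic cost J_k(w) = ||w - w_k*||^2."""
    return 2.0 * (w - targets[k])

# Doubly-stochastic combination weights over a ring topology (assumption):
# each agent averages itself and its two neighbors.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1.0 / 3.0
A1, A2 = A, A                       # both combination operators reuse A here

w = rng.normal(size=(N, M))         # each agent's initial iterate (one row each)
for _ in range(500):
    phi = A1 @ w                                                      # combine
    psi = np.array([phi[k] - mu * grad(k, phi[k]) for k in range(N)]) # adapt
    w = A2 @ psi                                                      # combine

# With equal quadratic costs, the sum-cost minimizer is the mean of the
# individual minimizers; all agents should settle near it.
print("network iterates:\n", w.round(3))
print("sum-cost minimizer:", targets.mean(axis=0).round(3))
```

Setting A1 to the identity recovers the adapt-then-combine (ATC) form of diffusion, while setting A2 to the identity gives combine-then-adapt (CTA); the general two-combination cascade above contains both as special cases.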