In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a fully distributed adaptive diffusion algorithm based on penalty methods that allows the network to cooperatively optimize the global cost function, which is defined as the sum of the individual costs over the network, subject to all constraints. We show that when small constant step-sizes are employed, the expected distance between the optimal solution vector and that obtained at each node in the network can be made arbitrarily small. Two distinguishing features of the proposed solution relative to other approaches are that the developed strategy does not require the use of projections and is able to track drifts in the location of the minimizer due to changes in the constraints or in the aggregate cost itself. The proposed strategy is able to cope with changing network topology, is robust to network disruptions, and does not require global information or rely on central processors.
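To illustrate the general idea, the sketch below shows a penalty-based Adapt-then-Combine diffusion loop in Python. It is not the authors' exact algorithm: the quadratic local costs, the shared affine equality and convex inequality constraints, the penalty weight `eta`, the step-size `mu`, and the ring-topology combination matrix `A` are all illustrative assumptions. Each agent takes a gradient step on its local cost augmented by smooth penalties for constraint violation, then averages with its neighbors.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not the paper's exact method) of
# penalty-based Adapt-then-Combine diffusion for N agents minimizing
#   sum_k J_k(w)   s.t.  B w = b  (affine equality),  c^T w <= d  (convex inequality).

rng = np.random.default_rng(0)
N, M = 5, 2                          # number of agents, dimension of w

# Local quadratic costs J_k(w) = 0.5 * (w - t_k)^T R_k (w - t_k)   (assumed)
R = [np.eye(M) * (1.0 + 0.1 * k) for k in range(N)]
t = [rng.normal(size=M) for _ in range(N)]

# Shared constraints (assumed): equality B w = b, inequality c^T w <= d
B, b = np.array([[1.0, 1.0]]), np.array([1.0])
c, d = np.array([1.0, -1.0]), 0.5

mu, eta = 0.02, 20.0                 # small constant step-size, penalty weight

# Doubly-stochastic combination matrix for a ring topology
A = np.zeros((N, N))
for k in range(N):
    A[k, k], A[k, (k - 1) % N], A[k, (k + 1) % N] = 0.5, 0.25, 0.25

def penalized_grad(k, w):
    """Gradient of J_k plus smooth penalties for constraint violations."""
    grad = R[k] @ (w - t[k])
    grad += eta * B.T @ (B @ w - b)            # quadratic penalty on B w = b
    grad += eta * max(c @ w - d, 0.0) * c      # squared-hinge penalty on c^T w <= d
    return grad

w = [np.zeros(M) for _ in range(N)]
for _ in range(2000):
    # Adapt: local gradient step on the penalized cost
    psi = [w[k] - mu * penalized_grad(k, w[k]) for k in range(N)]
    # Combine: convex combination of neighbors' intermediate estimates
    w = [sum(A[l, k] * psi[l] for l in range(N)) for k in range(N)]

print("agent estimates:\n", np.round(np.array(w), 3))
print("equality residual:", np.round(B @ w[0] - b, 3),
      "inequality slack:", np.round(c @ w[0] - d, 3))
```

Because the step-size and penalty weight stay constant rather than decaying, each agent keeps adapting, which is what allows this kind of scheme to track drifts in the minimizer when the costs or constraints change over time.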