This paper considers optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in a low-dimensional subspace. This constrained formulation includes consensus optimization as a special case and allows for more general task relatedness models such as smoothness. While such formulations can be solved via projected gradient descent, the resulting algorithm is not distributed. Motivated by the centralized solution, we propose an iterative and distributed implementation of the projection step, which runs in parallel with the gradient descent update. We establish that, for sufficiently small step-sizes μ, the proposed distributed adaptive strategy leads to small estimation errors on the order of μ.
Roula Nassif, Stefan Vlaski, Ali H. Sayed
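
A minimal numerical sketch of the strategy described above, under assumptions not taken from the paper: quadratic local costs J_k(w) = 0.5‖w − d_k‖², a consensus-type subspace (the special case mentioned in the abstract), and a combination matrix A = P_U + ε(I − P_U) whose powers converge to the subspace projector P_U. All names (U, P_U, A, mu, eps) are illustrative, and the dense combination matrix ignores the graph-topology constraints a genuinely distributed implementation would have to satisfy. Each iteration runs one local stochastic gradient step in parallel with one combination step, so the projection is realized iteratively rather than exactly.

```python
import numpy as np

# Hypothetical setup (illustrative, not from the paper): N agents, each with
# a local quadratic cost J_k(w) = 0.5*||w - d_k||^2, subject to the stacked
# network vector lying in the range space of a semi-unitary matrix U.
rng = np.random.default_rng(0)
N, M = 10, 2                     # number of agents, per-agent dimension

# Consensus-type subspace (the special case noted in the abstract): all
# agents share one parameter vector, so range(U) has dimension M << N*M.
U = np.kron(np.ones((N, 1)) / np.sqrt(N), np.eye(M))   # (N*M, M), U^T U = I
P_U = U @ U.T                                          # projector onto range(U)

# Combination matrix whose powers converge to P_U: A = P_U + eps*(I - P_U)
# with |eps| < 1. (A dense A is used here for simplicity; a distributed
# implementation would additionally require A to conform to the graph.)
eps = 0.5
A = P_U + eps * (np.eye(N * M) - P_U)

d = rng.standard_normal((N, M))   # local data defining each agent's cost
w = np.zeros(N * M)               # stacked network estimate
mu = 0.05                         # small step-size

for _ in range(500):
    # local stochastic gradient step at each agent: grad J_k(w_k) = w_k - d_k,
    # perturbed by gradient noise to mimic the adaptive (online) setting
    grad = (w.reshape(N, M) - d) + 0.1 * rng.standard_normal((N, M))
    psi = w - mu * grad.ravel()
    # one combination step per iteration: the projection onto range(U) is
    # implemented iteratively, in parallel with the gradient updates
    w = A @ psi

# centralized benchmark: the constrained minimizer is the projection of d
w_star = P_U @ d.ravel()
print("steady-state error:", np.linalg.norm(w - w_star))
```

Running the sketch should show a steady-state error that shrinks roughly in proportion to mu, consistent with the O(μ) behavior claimed in the abstract; halving mu (and iterating longer) roughly halves the residual error.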