This article considers solving an overdetermined system of linear equations in peer-to-peer multiagent networks. The network is assumed to be synchronous and strongly connected. Each agent has a set of local data points, and their goal is to compute a linear model that fits the collective data points. In principle, the agents can apply the decentralized gradient-descent (DGD) method. However, when the data matrix is ill-conditioned, DGD requires many iterations to converge and is unstable against system noise. We propose a decentralized preconditioning technique to mitigate the deleterious effects of the data points' conditioning on the convergence rate of DGD. The proposed algorithm converges linearly, with a convergence rate better than that of DGD. Considering the practical scenario where the computations performed by the agents are corrupted, we study the robustness guarantees of the proposed algorithm. In addition, we apply the proposed algorithm to solving decentralized state estimation problems. The empirical results show that our proposed state predictor has a favorable convergence rate and robustness against system noise compared to prominent decentralized algorithms.
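For context, the following is a minimal sketch of the baseline DGD update that the abstract refers to, applied to a decentralized least-squares problem. It does not implement the proposed preconditioning technique; the ring topology, data dimensions, step size `alpha`, and mixing matrix `W` are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of decentralized gradient descent (DGD) for least squares.
# Each agent i holds local data (A_i, b_i) and repeatedly (1) averages its
# estimate with its neighbours via a doubly stochastic mixing matrix W and
# (2) takes a gradient step on its local cost 0.5 * ||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)

n_agents, d = 4, 3
# Local data points for each agent; collectively overdetermined (assumed sizes).
A = [rng.standard_normal((5, d)) for _ in range(n_agents)]
x_true = rng.standard_normal(d)
b = [A_i @ x_true + 0.01 * rng.standard_normal(5) for A_i in A]

# Doubly stochastic mixing matrix for an assumed ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

alpha = 0.01                   # step size (problem-dependent; assumed here)
x = np.zeros((n_agents, d))    # row i is agent i's current estimate

for _ in range(2000):
    consensus = W @ x          # mix estimates with neighbours
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    x = consensus - alpha * grads

print("max deviation from x_true:", np.max(np.abs(x - x_true)))
```

With a constant step size, DGD of this form converges only to a neighbourhood of the least-squares solution, and its speed degrades as the data matrix becomes ill-conditioned, which is the issue the proposed preconditioning is meant to address.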