A multi-agent system consists of a collection of decision-making or learning agents subjected to streaming observations from some real-world phenomenon. The goal of the system is to solve a global learning or optimization problem in a distributed or decentralized manner, where agents collect data locally and interact either with their neighbors or with some central processor. In designing multi-agent systems, one normally formulates a global risk function, consisting of the aggregate of local risks, and then seeks to approximate its optimizer through localized interactions among neighboring agents. During this process, the agents will be required to share processed data and/or iterates with their neighbors. The issue of privacy then becomes critical in enabling the safe communication of information over the edges linking the elements of the multi-agent network. Several works in the literature enforce privacy by adding random noise sources on top of the shared information. Most of these works establish that the resulting architectures are differentially private, but they assume that the gradients of the risk functions are bounded. Unfortunately, this condition is rarely valid in practice, a fact that is often overlooked. For example, even quadratic risk functions have unbounded gradients because these gradients are affine functions of the unknown parameter. Moreover, most studies fail to recognize that their differentially private solutions, through the added noise sources, end up degrading the mean-square-error (MSE) performance of the learning algorithms away from the nominal O(μ) level that is otherwise attainable, where μ is the small learning parameter. These are serious limitations that remain unaddressed in existing approaches to differential privacy in multi-agent systems.

In this dissertation, we resolve these two issues. First, we do not assume bounded gradients for the risk functions, and yet we are still able to establish that the multi-agent systems remain differentially private, albeit with high probability. We arrive at this conclusion by showing that the noise sources should not be added in an ad-hoc manner, as is common in existing approaches, but rather should be constructed in a manner that is cognizant of the graph topology. Otherwise, the noise sources end up generating a magnifying effect that degrades performance. For this reason, we introduce a locally homomorphic noise construction and show that, under this construction, the MSE performance of the multi-agent system remains at O(μ) while the system is simultaneously differentially private. This is a reassuring conclusion. We illustrate these results for the special case of federated learning architectures, but also extend them to more general distributed learning and optimization strategies over decentralized architectures.
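To make the topology-cognizant noise idea concrete, the following is a minimal NumPy sketch rather than the dissertation's actual construction: it assumes a doubly-stochastic combination matrix A, Laplace-distributed per-edge perturbations, and a hypothetical helper local_zero_sum_noise, and it only illustrates how noise generated per outgoing edge can be forced to cancel in the combination-weighted network average, so that individual links remain perturbed while the averaged iterate is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_zero_sum_noise(neighbor_weights, dim, scale, rng):
    """Hypothetical helper: one Laplace noise vector per outgoing edge,
    recentered so the combination-weighted sum of the noises is zero."""
    g = rng.laplace(0.0, scale, size=(len(neighbor_weights), dim))
    w = np.asarray(neighbor_weights, dtype=float)[:, None]
    g -= (w * g).sum(axis=0) / w.sum()   # enforce sum_k a_{k,l} * g_k = 0
    return g

# Toy setting: 3 agents with a doubly-stochastic combination matrix (assumption).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
K, dim = A.shape[0], 4
iterates = rng.standard_normal((K, dim))   # current local models

received = np.zeros((K, dim))              # combination step at each agent
total_injected = np.zeros(dim)             # network-wide weighted noise
for ell in range(K):
    nbrs = np.flatnonzero(A[:, ell])       # agents that receive from agent ell
    g = local_zero_sum_noise(A[nbrs, ell], dim, scale=0.1, rng=rng)
    for j, k in enumerate(nbrs):
        received[k] += A[k, ell] * (iterates[ell] + g[j])
        total_injected += A[k, ell] * g[j]

# Each link carried a perturbed iterate, yet the injected noise cancels
# network-wide, so the average of the combined iterates is unchanged.
print(np.allclose(total_injected, 0.0))                                  # True
print(np.allclose(received.mean(axis=0), (A @ iterates).mean(axis=0)))   # True
```

The sketch only captures the cancellation property; how the noise scale relates to the privacy budget, and how the construction behaves beyond this simplified doubly-stochastic setting, is what the dissertation itself develops.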