The geometric median, an instrumental component of the secure machine learning toolbox, is known to be effective for robustly aggregating models (or gradients) gathered from potentially malicious (or strategic) users. What is less well understood is the extent to which the geometric median incentivizes dishonest behavior. This paper addresses this fundamental question by quantifying its strategyproofness. While we observe that the geometric median is not even approximately strategyproof, we prove that it is asymptotically α-strategyproof: when the number of users is large enough, a user who misbehaves can gain at most a multiplicative factor α, which we compute as a function of the distribution followed by the users. We then generalize our results to the case where users care more about specific dimensions, determining how this impacts α. We also show how skewed geometric medians can be used to improve strategyproofness.
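As context for the robust-aggregation setting described above, the geometric median of a set of points minimizes the sum of Euclidean distances to them, and a standard way to compute it is Weiszfeld's iteratively re-weighted averaging scheme. The sketch below is illustrative only and is not taken from the paper; the function name, tolerances, and example data are our own assumptions.

```python
import numpy as np

def geometric_median(points, tol=1e-7, max_iter=1000):
    """Approximate the geometric median via Weiszfeld's algorithm.

    Each iteration replaces the current estimate with a weighted mean
    of the points, weighted by inverse distance to the estimate.
    """
    y = points.mean(axis=0)  # initialize at the coordinate-wise mean
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, 1e-12)  # guard against division by zero
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Illustration of robustness: four honest points at the unit-square
# corners plus one far-away outlier. The mean is dragged toward the
# outlier, while the geometric median stays near the honest points.
honest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
with_outlier = np.vstack([honest, [[100.0, 100.0]]])
gm = geometric_median(with_outlier)
```

In this toy example the coordinate-wise mean of `with_outlier` lies near (20.4, 20.4), whereas the geometric median remains close to the square, illustrating the bounded influence a single misbehaving user can exert.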