Predictive models based on machine learning (ML) offer a compelling promise: bringing clarity and structure to complex natural and social environments. However, ML models pose substantial risks related to the privacy of their training data as well as to the security and reliability of their operation. This thesis explores the relationships between the privacy, security, and reliability risks of ML. Our research aims to re-evaluate the standard practices and approaches for mitigating and measuring these risks in order to understand their connections and scrutinize their effectiveness.

The first area we study is data privacy, in particular the standard privacy-preserving learning technique of differentially private (DP) training. DP training introduces controlled randomization to limit information leakage. This randomization has side effects such as performance loss and the widening of performance disparities across population groups. In the thesis, we investigate additional side effects. On the positive side, we highlight the "What You See Is What You Get" property that DP training achieves: models trained with standard methods often exhibit significant differences in behavior between the training and testing phases, whereas privacy-preserving training guarantees similar behavior in both. Leveraging this property, we introduce competitive algorithms for group-distributionally robust optimization, for navigating privacy-performance trade-offs, and for mitigating robust overfitting. On the negative side, we show that the decisions of DP-trained models can be arbitrary: due to the randomness in training, equally private models can yield drastically different predictions for the same input. We examine the costs of standard DP training algorithms in terms of this arbitrariness, raising concerns about the justifiability of their decisions in high-stakes scenarios.

Next, we study the standard measure of privacy leakage: the vulnerability of models to membership inference attacks. We analyze how vulnerability to these attacks, and thus privacy risk, is unequally distributed across population groups. We emphasize the need to account for privacy leakage across diverse subpopulations, and provide methods for doing so, in order to avoid disproportionate harm and address inequities.

Finally, we analyze security risks in tabular domains, which are common in high-stakes ML settings. We challenge the assumptions behind existing security evaluation methods, which primarily consider threat models based on input geometry. We highlight that real-world adversaries in these settings face practical constraints, prompting the need for cost- and utility-aware threat models. We propose a framework that tailors adversarial models to tabular domains, enabling the consideration of cost and utility constraints in high-stakes decision-making situations.

Overall, the thesis sheds light on the subtle effects of DP training, emphasizes the importance of diverse subpopulations in risk measurements, and highlights the need for realistic threat models and security measures. By challenging assumptions and re-evaluating risk mitigation and measurement approaches, the thesis paves the way for more robust and ethically grounded studies of ML risks.
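To make the "controlled randomization" of DP training and the resulting arbitrariness concrete, below is a minimal sketch of a DP-SGD-style update (per-example gradient clipping followed by Gaussian noise) on a toy logistic-regression model. The synthetic data, hyperparameters, and function names are illustrative assumptions, not the algorithms studied in the thesis; two runs that differ only in the random seed are equally private, yet may predict different classes for the same test point.

```python
# Illustrative sketch only: DP-SGD-style training of a toy logistic-regression
# model with per-example gradient clipping and Gaussian noise. Data and
# hyperparameters are synthetic assumptions.
import numpy as np

def dp_sgd_train(X, y, *, clip=1.0, noise_mult=1.0, lr=0.5, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grads = (p - y)[:, None] * X               # per-example gradients, shape (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip
        # Average the clipped gradients and add calibrated Gaussian noise:
        # this is the controlled randomization that limits information leakage.
        noisy_grad = grads.mean(axis=0) + rng.normal(
            scale=noise_mult * clip / len(X), size=w.shape)
        w -= lr * noisy_grad
    return w

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
x_test = rng.normal(size=5)

# Equally private runs that differ only in the seed can disagree on x_test.
for seed in (1, 2):
    w = dp_sgd_train(X, y, seed=seed)
    print(f"seed={seed}: predicted class = {int(x_test @ w > 0)}")
```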
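Membership inference, the standard yardstick for privacy leakage mentioned above, can be illustrated with a simple loss-threshold attack: an example is guessed to be a training-set member if the model's loss on it is low. The sketch below evaluates such an attack separately per subpopulation group on synthetic loss values; the groups, loss distributions, and threshold are assumptions made purely for illustration of unequal leakage.

```python
# Illustrative sketch only: loss-threshold membership inference evaluated
# per subpopulation group, on synthetic per-example losses.
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the model's loss on an example is below threshold."""
    tpr = np.mean(member_losses < threshold)     # members correctly flagged
    fpr = np.mean(nonmember_losses < threshold)  # non-members wrongly flagged
    return tpr, fpr, tpr - fpr                   # advantage: a common leakage measure

rng = np.random.default_rng(0)
# Synthetic losses: suppose the model overfits group "B" more, so its members
# have systematically lower training losses and hence leak more.
groups = {
    "A": (rng.exponential(0.8, 1000), rng.exponential(1.0, 1000)),
    "B": (rng.exponential(0.3, 1000), rng.exponential(1.0, 1000)),
}
for name, (member, nonmember) in groups.items():
    tpr, fpr, adv = loss_threshold_attack(member, nonmember, threshold=0.5)
    print(f"group {name}: TPR={tpr:.2f} FPR={fpr:.2f} advantage={adv:.2f}")
```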
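Finally, a cost- and utility-aware threat model for tabular data can be sketched as a search over feasible, individually priced feature edits rather than over an Lp-ball around the input. The toy classifier, features, edits, costs, and budget below are hypothetical placeholders, not the framework proposed in the thesis.

```python
# Illustrative sketch only: a cost-aware adversary searching for the cheapest
# combination of feasible feature edits that flips a toy tabular classifier.
from itertools import combinations

def classify(x):
    # Toy linear scoring rule standing in for the target model.
    return 1 if 2 * x["income"] - 3 * x["num_defaults"] + x["years_employed"] > 5 else 0

candidate_edits = [
    # (feature, new value, cost to the adversary)
    ("income", 4, 3.0),
    ("num_defaults", 0, 5.0),
    ("years_employed", 6, 1.5),
]

def cheapest_flip(x, edits, budget=10.0):
    """Brute-force the lowest-cost subset of edits (within budget) that changes the decision."""
    original = classify(x)
    best = None
    for k in range(1, len(edits) + 1):
        for subset in combinations(edits, k):
            cost = sum(c for _, _, c in subset)
            if cost > budget:
                continue
            x_adv = dict(x, **{f: v for f, v, _ in subset})
            if classify(x_adv) != original and (best is None or cost < best[0]):
                best = (cost, x_adv)
    return best

x = {"income": 2, "num_defaults": 1, "years_employed": 2}
print(cheapest_flip(x, candidate_edits))
```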
Carmela González Troncoso, Bogdan Kulynych