Poisoning attacks compromise the training data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, or implanting backdoors. This article surveys these attacks and discusses strategies to mitigate them, both through fundamental security principles and through defensive mechanisms tailored to ML.
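To make the threat concrete, here is a minimal sketch of one simple poisoning attack, label flipping, in which an attacker corrupts a fraction of the training labels to degrade the model's performance. The dataset, model, and flip rate are illustrative assumptions, not taken from the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical toy dataset for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: a model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Label-flipping poisoning: the attacker flips 30% of training labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# The same model architecture trained on the poisoned data.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")
```

More sophisticated attacks of the kind the article covers (targeted poisoning, backdoors) perturb the inputs rather than just the labels, but the core idea is the same: control over training data translates into control over the trained model.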
Katrin Beyer, Radhakrishna Achanta, Bryan German Pantoja Rosero