Biased decision making by machine learning systems is increasingly recognized as an important issue. Recently, techniques have been proposed to learn non-discriminatory classifiers by enforcing constraints in the training phase. Such constraints are either non-convex in nature (posing computational difficulties) or lack a clear probabilistic interpretation. Moreover, these techniques offer little insight into the more subjective notion of fairness. In this paper, we introduce a novel technique that achieves non-discrimination without sacrificing convexity or probabilistic interpretation. Our experimental analysis demonstrates the success of the method on popular real-world datasets, including ProPublica's COMPAS dataset. We also propose a new notion of fairness for machine learning and show that our technique satisfies this subjective fairness criterion.
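The abstract does not specify the paper's formulation, but the general idea of a convex in-training fairness constraint can be illustrated with a standard construction from the literature: penalizing the covariance between a sensitive attribute and the classifier's decision-boundary distance, which is quadratic (hence convex) in the weights. The sketch below is an assumption for illustration only, not the paper's actual method; all function names and data are hypothetical.

```python
import numpy as np

def fit(X, y, s, lam=10.0, lr=0.5, steps=2000):
    """Logistic regression with a convex fairness penalty (illustrative sketch).

    Objective: mean logistic loss + lam * cov(s, Xw)^2, where the
    covariance term is quadratic in w, keeping the problem convex.
    X: (n, d) features, y: (n,) labels in {0, 1}, s: (n,) sensitive attribute.
    """
    w = np.zeros(X.shape[1])
    sc = s - s.mean()  # centered sensitive attribute
    for _ in range(steps):
        z = X @ w
        m = 2 * y - 1                       # labels mapped to {-1, +1}
        sig = 1.0 / (1.0 + np.exp(m * z))   # sigmoid(-m * z)
        grad_loss = -(X * (m * sig)[:, None]).mean(axis=0)
        cov = np.mean(sc * z)               # covariance proxy for disparate impact
        grad_fair = 2 * lam * cov * (X * sc[:, None]).mean(axis=0)
        w -= lr * (grad_loss + grad_fair)
    return w

# Synthetic data: the feature x is correlated with the sensitive attribute s.
rng = np.random.default_rng(0)
n = 500
s = rng.integers(0, 2, n).astype(float)
x = s + rng.normal(0, 1, n)
X = np.c_[x, np.ones(n)]
y = (x + rng.normal(0, 0.5, n) > 0.5).astype(float)

w_fair = fit(X, y, s, lam=10.0)
w_plain = fit(X, y, s, lam=0.0)
```

With `lam > 0` the penalty drives the covariance between `s` and the scores `X @ w` toward zero, trading some accuracy for reduced disparate impact; with `lam = 0` the sketch reduces to plain logistic regression.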