Publication

Countering Bias in Personalized Rankings: From Data Engineering to Algorithm Development

Mirko Marras
2021
Conference paper
Abstract

This tutorial presents recent advances in the assessment and mitigation of data and algorithmic bias in personalized rankings. We first introduce fundamental concepts and definitions associated with bias issues, covering the state of the art and describing real-world examples of how bias can impact ranking algorithms from several perspectives (e.g., ethics and the system's objectives). Then, we continue with a systematic presentation of techniques to uncover, assess, and mitigate biases along the personalized ranking design process, with a focus on the role of data engineering in each step of the pipeline. Hands-on parts provide attendees with concrete implementations of bias mitigation algorithms, in addition to processes and guidelines on how data is organized and manipulated by these algorithms. The tutorial leverages open-source tools and public datasets, engaging attendees in designing bias countermeasures and in articulating impacts on stakeholders. We finally showcase open issues and future directions in this vibrant and rapidly evolving research area (Website: https://biasinrecsys.github.io/icde2021/).
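The hands-on parts revolve around measuring and then reducing bias in ranked lists. As a rough, stand-alone illustration of the kind of measurement such a pipeline starts from (not the tutorial's own code; the item groupings and the logarithmic position discount are assumptions made for this sketch), the following snippet computes how unevenly a top-k ranking distributes exposure across item groups:

```python
import numpy as np

def group_exposure(ranked_items, item_groups, k=10):
    """Position-discounted exposure received by each item group in a top-k ranking.

    ranked_items: item ids ordered best-first.
    item_groups:  mapping from item id to a group label (e.g. 'popular' vs 'niche').
    Each rank position i contributes the standard discount 1 / log2(i + 2).
    """
    exposure = {}
    for i, item in enumerate(ranked_items[:k]):
        group = item_groups[item]
        exposure[group] = exposure.get(group, 0.0) + 1.0 / np.log2(i + 2)
    return exposure

def exposure_disparity(ranked_items, item_groups, k=10):
    """Ratio of the most- to the least-exposed group; 1.0 means perfectly balanced."""
    values = np.array(list(group_exposure(ranked_items, item_groups, k).values()))
    return values.max() / values.min()

# Toy ranking that places all popular items above all niche items.
ranking = ["i1", "i2", "i3", "i4"]
groups = {"i1": "popular", "i2": "popular", "i3": "niche", "i4": "niche"}
print(exposure_disparity(ranking, groups, k=4))  # > 1.0: popular items are over-exposed
```

A mitigation step in such a pipeline would then re-rank items until a disparity measure of this kind falls below a chosen threshold.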

Related concepts (19)
Bias
Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average. The word appears to derive from Old Provençal into Old French biais, "sideways, askance, against the grain".
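The notion of an estimation process that is wrong "on average" can be made concrete with a short simulation; the classic textbook example (used here purely for illustration) is the sample variance, which systematically underestimates the population variance when divided by n instead of n - 1:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n, trials = 5, 200_000          # small samples drawn from a population with variance 1.0

biased, unbiased = [], []
for _ in range(trials):
    x = rng.standard_normal(n)
    biased.append(np.var(x, ddof=0))    # divides by n     -> systematically too small
    unbiased.append(np.var(x, ddof=1))  # divides by n - 1 -> correct on average

print(round(np.mean(biased), 3))    # ~0.8, matching the expected (n-1)/n = 0.8: a statistical bias
print(round(np.mean(unbiased), 3))  # ~1.0, an unbiased estimate of the true variance
```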
Algorithmic bias
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms.
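One common way to make such "privileging" visible is to compare an algorithm's positive-outcome rates across groups (a demographic-parity-style check). The snippet below is a generic, hypothetical illustration; the group labels and decisions are made up:

```python
import numpy as np

def selection_rates(decisions, groups):
    """Fraction of positive decisions the algorithm hands out to each group."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

# Hypothetical binary decisions (1 = selected) from some ranking or screening system.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                                      # {'A': 0.8, 'B': 0.2}
print(max(rates.values()) - min(rates.values()))  # 0.6 gap: a candidate disparate impact
```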
Media bias
Media bias is the bias of journalists and news producers within the mass media in the selection of the events and stories that are reported and how they are covered. The term "media bias" implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article. The direction and degree of media bias in various countries is widely disputed.
Related publications (33)

It’s All Relative: Learning Interpretable Models for Scoring Subjective Bias in Documents from Pairwise Comparisons

Matthias Grossglauser, Aswin Suresh, Chi Hsuan Wu

We propose an interpretable model to score the subjective bias present in documents, based only on their textual content. Our model is trained on pairs of revisions of the same Wikipedia article, where one version is more biased than the other. Although pr ...
2024
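A pairwise setup like the one described above is typically trained with a Bradley-Terry-style objective: the scorer should assign the more biased revision a higher score than the less biased one. The sketch below is a simplified, generic stand-in (a plain linear scorer over made-up features), not the authors' actual model:

```python
import torch
import torch.nn as nn

class LinearBiasScorer(nn.Module):
    """Scores a document's bias as a weighted sum of its features.

    A linear model keeps the score interpretable: each learned weight says how much
    its feature pushes the bias score up or down. (Feature extraction is omitted.)
    """
    def __init__(self, n_features):
        super().__init__()
        self.weights = nn.Linear(n_features, 1, bias=False)

    def forward(self, features):
        return self.weights(features).squeeze(-1)

def pairwise_loss(score_more_biased, score_less_biased):
    """Bradley-Terry objective: the more biased revision should receive the higher score."""
    return -torch.nn.functional.logsigmoid(score_more_biased - score_less_biased).mean()

# Toy training step on random stand-in features for (more biased, less biased) pairs.
model = LinearBiasScorer(n_features=16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x_more, x_less = torch.randn(32, 16), torch.randn(32, 16)
loss = pairwise_loss(model(x_more), model(x_less))
loss.backward()
optimizer.step()
print(float(loss))  # starts near log(2) ≈ 0.69 for an untrained scorer
```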

Biases in Information Selection and Processing: Survey Evidence from the Pandemic

Andreas Fuster

We conduct two survey experiments to study which information people choose to consume and how it affects their beliefs. In the first experiment, respondents choose between optimistic and pessimistic article headlines related to the COVID-19 pandemic and ar ...
2024

Exploiting the Signal-Leak Bias in Diffusion Models

Sabine Süsstrunk, Radhakrishna Achanta, Mahmut Sami Arpa, Martin Nicolas Everaert, Athanasios Fitsios

There is a bias in the inference pipeline of most diffusion models. This bias arises from a signal leak whose distribution deviates from the noise distribution, creating a discrepancy between training and inference processes. We demonstrate that this signa ...
2024
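To see where such a training/inference discrepancy can come from, consider a standard DDPM-style noise schedule (assumed here purely for illustration; this is not the authors' analysis or code): the noisiest training input still retains a small fraction of the clean image, whereas inference starts from pure zero-mean Gaussian noise.

```python
import torch

# Standard DDPM-style linear schedule (an assumption for this sketch).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar_T = torch.cumprod(1.0 - betas, dim=0)[-1]

# Training forms x_T = sqrt(alpha_bar_T) * x0 + sqrt(1 - alpha_bar_T) * eps,
# so a fraction sqrt(alpha_bar_T) of the clean image x0 "leaks" into x_T.
leak_fraction = alpha_bar_T.sqrt()
print(float(leak_fraction))                       # small but non-zero

# For images with a non-zero mean (say 0.5), the mean of the noisiest training input
# is shifted, while the pure Gaussian noise used at inference has mean 0: a mismatch.
expected_train_mean = float(leak_fraction * 0.5)  # images assumed to have mean 0.5
expected_infer_mean = 0.0                         # pure Gaussian noise at inference
print(expected_train_mean, expected_infer_mean)   # non-zero vs zero: the leak-induced gap
```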
