International Workshop on Algorithmic Bias in Search and Recommendation (Bias 2020)
Stilo, Giovanni
2020-01-01
Abstract
Both search and recommendation algorithms provide results based on their relevance to the current user. This relevance is usually computed by models trained on historical data, which is biased in most cases. Hence, the results produced by these algorithms naturally propagate, and frequently reinforce, biases hidden in the data, consequently strengthening inequalities. Being able to measure, characterize, and mitigate these biases while maintaining high effectiveness is a topic of central interest for the information retrieval community. In this workshop, we aim to collect novel contributions in this emerging field and to provide a common ground for interested researchers and practitioners.
File | Type | License | Size | Format
---|---|---|---|---
CAMERAREADY__ECIR_2021.pdf | Pre-print document | Creative Commons | 148.29 kB | Adobe PDF