Fairness in Recommendation Ranking through Pairwise Comparisons

Alex Beutel
Tulsee Doshi
Hai Qian
Li Wei
Yi Wu
Lukasz Heldt
Zhe Zhao
Lichan Hong
Cristos Goodrow
KDD (2019)

Abstract

Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such, it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular, we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer that encourages improving it during model training and thus improves fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
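To make the abstract's two ideas concrete, below is a minimal sketch of (a) a pairwise fairness check, comparing the rate at which clicked items from a given group are scored above unclicked items, and (b) a correlation-style pairwise regularizer that penalizes dependence between pairwise score margins and group membership. This is an illustration under assumed inputs (per-item `scores`, binary `clicks`, and a binary `groups` label); the function names are hypothetical and the details here are a simplification, not the paper's exact formulation.

```python
import numpy as np


def group_pairwise_accuracy(scores, clicks, groups, group_value):
    """Fraction of (clicked, unclicked) pairs in which the clicked item,
    belonging to `group_value`, is scored above the unclicked item.
    A pairwise fairness check compares this quantity across groups:
    large gaps suggest the ranker systematically under-ranks one group's
    relevant items."""
    correct, total = 0, 0
    for i in range(len(scores)):
        if clicks[i] != 1 or groups[i] != group_value:
            continue
        for j in range(len(scores)):
            if clicks[j] == 0:
                total += 1
                correct += int(scores[i] > scores[j])
    return correct / total if total else float("nan")


def pairwise_fairness_regularizer(score_margins, group_indicator):
    """Squared correlation between pairwise score margins
    (clicked score minus unclicked score) and the clicked item's
    group membership. Added to the training loss, driving this term
    toward zero discourages pairwise ranking quality from depending
    on group membership."""
    d = score_margins - score_margins.mean()
    g = group_indicator - group_indicator.mean()
    denom = np.sqrt((d ** 2).sum() * (g ** 2).sum()) + 1e-12
    return float(((d * g).sum() / denom) ** 2)


# Example: compare pairwise accuracy for the two groups.
scores = np.array([0.9, 0.4, 0.7, 0.2])
clicks = np.array([1, 0, 1, 0])
groups = np.array([0, 0, 1, 1])
print(group_pairwise_accuracy(scores, clicks, groups, 0))
print(group_pairwise_accuracy(scores, clicks, groups, 1))
```

The regularizer would be weighted and added to the usual ranking loss; because it is differentiable in the score margins, it can be minimized jointly with the model's primary objective.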