Controlling Fairness and Bias in Dynamic Learning-to-Rank (Extended Abstract)

Marco Morik, Ashudeep Singh, Jessica Hong, Thorsten Joachims

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), Sister Conferences Best Papers. Pages 4804-4808. https://doi.org/10.24963/ijcai.2021/655

Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility for the users, as done by virtually all learning-to-rank algorithms, can be unfair to the item providers. We therefore present a learning-to-rank approach that explicitly enforces merit-based fairness guarantees for groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
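To make the controller idea concrete, the sketch below shows one plausible shape of such a proportional controller for amortized, merit-based exposure: each item is scored by its estimated relevance plus a fairness error term that grows while the item's group remains under-exposed relative to its merit. This is a minimal illustration modeled on the full paper's FairCo controller, not code from the paper; the specific disparity definition, the gain lam, and the function names (controller_scores, exposure_disparity) are assumptions introduced here for exposition.

```python
import numpy as np

def exposure_disparity(exposure, merit, g_a, g_b):
    # Assumed disparity measure: how much more exposure per unit of
    # merit group g_a has accumulated so far compared to group g_b.
    return exposure[g_a] / merit[g_a] - exposure[g_b] / merit[g_b]

def controller_scores(rel_hat, item_group, exposure, merit, lam, t):
    """Score items by estimated relevance plus a P-controller error
    term that boosts items from under-exposed groups (lam is the
    controller gain, t the number of rankings served so far)."""
    groups = list(merit.keys())
    scores = np.array(rel_hat, dtype=float)
    for d, g_d in enumerate(item_group):
        # Error term: worst-case disparity against item d's group,
        # clipped at zero so only under-exposure triggers a boost.
        err = max(
            max(exposure_disparity(exposure, merit, g, g_d) for g in groups),
            0.0,
        )
        scores[d] += lam * t * err
    return scores

# Toy usage: group "B" is under-exposed relative to its merit.
rel_hat = [0.9, 0.85, 0.4]          # unbiased relevance estimates
item_group = ["A", "A", "B"]
exposure = {"A": 10.0, "B": 2.0}    # exposure accumulated so far
merit = {"A": 1.7, "B": 0.5}        # merit estimates per group
scores = controller_scores(rel_hat, item_group, exposure, merit, lam=0.05, t=20)
print(np.argsort(-scores))          # [2 0 1]: the under-exposed item moves up
```

Because the accumulated disparity keeps growing whenever a group is under-exposed, the error term eventually dominates the relevance score and forces the controller to correct the imbalance, which is the mechanism behind the amortized fairness guarantee described above.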
Keywords:
Machine Learning: Learning Preferences or Rankings
AI Ethics, Trust, Fairness: Fairness
Data Mining: Information Retrieval
Machine Learning: Online Learning