Fairness-Aware Neural Rényi Minimization for Continuous Features

Vincent Grari, Sylvain Lamprier, Marcin Detyniecki

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2262-2268. https://doi.org/10.24963/ijcai.2020/313

The past few years have seen a dramatic rise in academic and societal interest in fair machine learning. While plenty of fair algorithms have recently been proposed to tackle this challenge for discrete variables, only a few ideas exist for continuous ones. The objective of this paper is to ensure some level of independence between the outputs of regression models and any given continuous sensitive variable. For this purpose, we use the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient as a fairness metric. We propose to minimize the HGR coefficient directly with an adversarial neural network architecture. The idea is to predict the output Y while minimizing the ability of an adversarial neural network to find the estimated transformations that are required to predict the HGR coefficient. We empirically assess and compare our approach, and demonstrate significant improvements over previously presented work in the field.
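The HGR maximal correlation underlying the method is defined as sup over transformations f, g of corr(f(U), g(V)); the paper estimates it with neural networks. As a hedged illustration only (not the authors' architecture), restricting f and g to polynomials of a fixed degree reduces this supremum to the top canonical correlation between the two polynomial feature maps, which can capture a nonlinear dependence that the Pearson correlation misses entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def hgr_poly(u, v, degree=3):
    """Crude HGR estimate: top canonical correlation between
    polynomial feature maps of u and v (a simple stand-in for the
    neural estimators used in the paper; `degree` is an assumption
    of this sketch, not a parameter from the paper)."""
    # Polynomial features without the constant term.
    Fu = np.column_stack([u ** d for d in range(1, degree + 1)])
    Fv = np.column_stack([v ** d for d in range(1, degree + 1)])
    # Center each feature block.
    Fu -= Fu.mean(axis=0)
    Fv -= Fv.mean(axis=0)
    # Orthonormalize each block; the largest singular value of the
    # cross-product is then the first canonical correlation.
    qu, _ = np.linalg.qr(Fu)
    qv, _ = np.linalg.qr(Fv)
    s = np.linalg.svd(qu.T @ qv, compute_uv=False)
    return float(np.clip(s[0], 0.0, 1.0))

# A dependence that linear correlation cannot see: V = U^2 with U symmetric.
u = rng.uniform(-1.0, 1.0, size=5000)
v = u ** 2

pearson = float(np.corrcoef(u, v)[0, 1])  # near 0 by symmetry
hgr = hgr_poly(u, v)                      # near 1: V is a function of U
```

A fairness-aware regressor in the paper's spirit would penalize such an HGR estimate between its predictions and the sensitive variable, rather than a linear correlation, so that nonlinear leakage like the one above is also suppressed.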
Keywords:
Machine Learning: Adversarial Machine Learning
Machine Learning: Deep Learning
Trust, Fairness, Bias: General