FNNC: Achieving Fairness through Neural Networks

Manisha Padala, Sujit Gujar

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2277-2283. https://doi.org/10.24963/ijcai.2020/315

In classification models, fairness can be ensured by solving a constrained optimization problem. We focus on fairness constraints like Disparate Impact, Demographic Parity, and Equalized Odds, which are non-decomposable and non-convex. Researchers typically define convex surrogates of these constraints and then apply convex optimization frameworks to obtain fair classifiers. However, such surrogates only serve as upper bounds on the actual constraints, and convexifying fairness constraints is challenging. We propose a neural network-based framework, FNNC, to achieve fairness while maintaining high accuracy in classification. The above fairness constraints are included in the loss using Lagrangian multipliers. We prove bounds on the generalization error for the constrained losses, which asymptotically go to zero. The network is optimized using two-step mini-batch stochastic gradient descent. Our experiments show that FNNC performs as well as the state of the art, if not better. The experimental evidence supplements our theoretical guarantees. In summary, we have an automated solution for achieving fairness in classification, which is easily extendable to many fairness constraints.
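To make the abstract's two-step procedure concrete, here is a minimal sketch of folding a fairness constraint into the loss via a Lagrangian multiplier and alternating gradient updates. It assumes a PyTorch-style binary classifier and a demographic-parity constraint; the names (Classifier, demographic_parity_gap, training_step), the tolerance eps, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Small binary classifier producing P(y=1 | x); architecture is illustrative."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

def demographic_parity_gap(probs, group):
    # |E[h(x) | group=0] - E[h(x) | group=1]|, estimated on the mini-batch.
    # Assumes both groups appear in every mini-batch.
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

model = Classifier(in_dim=10)
lam = torch.tensor(0.0, requires_grad=True)        # Lagrange multiplier (dual variable)
opt_theta = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_lam = torch.optim.SGD([lam], lr=1e-2)
bce = nn.BCELoss()
eps = 0.05                                         # assumed fairness-violation tolerance

def training_step(x, y, group):
    """One two-step update: y is a float tensor of 0/1 labels, group a 0/1 tensor."""
    probs = model(x)
    constraint = demographic_parity_gap(probs, group) - eps

    # Step 1: gradient descent on the network weights, multiplier held fixed.
    loss_theta = bce(probs, y) + lam.detach() * constraint
    opt_theta.zero_grad()
    loss_theta.backward()
    opt_theta.step()

    # Step 2: gradient ascent on the multiplier, network held fixed
    # (minimizing the negated term is equivalent to ascent).
    loss_lam = -lam * constraint.detach()
    opt_lam.zero_grad()
    loss_lam.backward()
    opt_lam.step()
    with torch.no_grad():
        lam.clamp_(min=0.0)                        # dual variable stays non-negative
```

In this saddle-point scheme, the model step reduces classification error plus the current fairness penalty, while the multiplier step raises the penalty whenever the constraint is violated, so the constraint is progressively enforced to within eps.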
Keywords:
Machine Learning: Classification
Machine Learning: Deep Learning
Trust, Fairness, Bias: General