FAHT: An Adaptive Fairness-aware Decision Tree Classifier

Wenbin Zhang, Eirini Ntoutsi

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 1480-1486. https://doi.org/10.24963/ijcai.2019/205

Automated data-driven decision-making systems are ubiquitous across a wide range of online and offline services. These systems rely on sophisticated learning algorithms and available data to optimize the service function for decision support assistance. However, there is growing concern about the accountability and fairness of the employed models, because the available historical data is often intrinsically discriminatory, i.e., the proportion of members sharing one or more sensitive attributes among those receiving the positive classification is higher than their proportion in the population as a whole, which leads to a lack of fairness in the decision support system. A number of fairness-aware learning methods have been proposed to address this concern. However, these methods treat fairness as a static problem and do not take the evolution of the underlying stream population into consideration. In this paper, we introduce a learning mechanism for designing a fair classifier for online, stream-based decision-making. Our learning model, FAHT (Fairness-Aware Hoeffding Tree), extends the well-known Hoeffding Tree algorithm for decision tree induction over streams so that it also accounts for fairness. Our experiments show that the algorithm is able to deal with discrimination in streaming environments while maintaining moderate predictive performance over the stream.
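
The abstract describes the approach only at a high level. As a rough illustration (not the paper's exact criterion), the sketch below shows one way a Hoeffding-tree-style split score could combine information gain with a statistical-parity discrimination measure, so that candidate splits are rewarded for reducing discrimination as well as label impurity. The function names, the multiplicative combination rule, and the toy data are all illustrative assumptions rather than FAHT's definitive formulation.

    # Hedged sketch: combining information gain with a statistical-parity
    # discrimination measure when scoring a candidate split. Names and the
    # combination rule are assumptions, not the paper's exact method.
    import numpy as np

    def entropy(y):
        """Shannon entropy of a binary label vector."""
        if len(y) == 0:
            return 0.0
        p = float(np.mean(y))
        if p in (0.0, 1.0):
            return 0.0
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def discrimination(y, s):
        """Statistical-parity gap: P(y=1 | s=0) - P(y=1 | s=1),
        where s=1 marks the protected group."""
        unprot, prot = y[s == 0], y[s == 1]
        if len(unprot) == 0 or len(prot) == 0:
            return 0.0
        return float(np.mean(unprot) - np.mean(prot))

    def fair_split_score(y, s, partitions):
        """Score a candidate split given index partitions of a node's data.

        Combines information gain with the reduction in absolute
        discrimination across the resulting child nodes (an assumed
        multiplicative combination; FAHT's rule may differ)."""
        n = len(y)
        info_gain = entropy(y) - sum(
            len(idx) / n * entropy(y[idx]) for idx in partitions)
        disc_after = sum(
            len(idx) / n * abs(discrimination(y[idx], s[idx]))
            for idx in partitions)
        fairness_gain = abs(discrimination(y, s)) - disc_after
        # Reward splits that improve both purity and fairness.
        return info_gain * max(fairness_gain, 0.0)

    # Toy usage: binary labels y, sensitive attribute s, two child partitions.
    y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    s = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    left, right = np.arange(4), np.arange(4, 8)
    print(fair_split_score(y, s, [left, right]))

In a Hoeffding Tree, a score of this kind would be evaluated incrementally from sufficient statistics and compared across candidate attributes using the Hoeffding bound before a split is committed; the sketch above only shows the per-split scoring step.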
Keywords:
Humans and AI: Ethical Issues in AI
Machine Learning: Time-series; Data Streams