Group-Fairness in Influence Maximization

Alan Tsang, Bryan Wilder, Eric Rice, Milind Tambe, Yair Zick

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
AI for Improving Human Well-being. Pages 5997-6005. https://doi.org/10.24963/ijcai.2019/831

Influence maximization is a widely used model for information dissemination in social networks. Recent work has employed such interventions across a wide range of social problems, spanning public health, substance abuse, and international development (to name a few examples). A critical but understudied question is whether the benefits of such interventions are fairly distributed across different groups in the population; e.g., avoiding discrimination with respect to sensitive attributes such as race or gender. Drawing on legal and game-theoretic concepts, we introduce formal definitions of fairness in influence maximization. We provide an algorithmic framework to find solutions which satisfy fairness constraints, and in the process improve the state of the art for general multi-objective submodular maximization problems. Experimental results on real data from an HIV prevention intervention for homeless youth show that standard influence maximization techniques oftentimes neglect smaller groups which contribute less to overall utility, resulting in a disparity which our proposed algorithms substantially reduce. 
Keywords:
Special Track on AI for Improving Human Well-being: Human well-being
Special Track on AI for Improving Human Well-being: AI ethics
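
To make the abstract's notion of group fairness concrete, the sketch below shows one simple way such a criterion can be operationalized: greedily choosing seed nodes to raise the minimum expected coverage across demographic groups (a maximin objective) under a toy independent cascade model. This is an illustrative baseline only, not the paper's algorithm or its multi-objective submodular framework; the graph, group labels, propagation probability p, budget k, and trial count are all hypothetical inputs.

```python
import random
from collections import defaultdict

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run; returns the set of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def group_coverage(graph, groups, seeds, p, trials=200):
    """Monte Carlo estimate of the expected fraction of each group reached.
    A fixed RNG seed gives common random numbers across candidate seed sets,
    which keeps the greedy comparisons consistent."""
    rng = random.Random(0)
    sizes = defaultdict(int)
    for g in groups.values():
        sizes[g] += 1
    hits = defaultdict(float)
    for _ in range(trials):
        for v in simulate_ic(graph, seeds, p, rng):
            hits[groups[v]] += 1.0
    return {g: hits[g] / (trials * sizes[g]) for g in sizes}

def greedy_maximin_seeds(graph, groups, k, p):
    """Greedily add the node that most improves the worst-off group's
    expected coverage. Assumes k is at most the number of nodes."""
    seeds = set()
    for _ in range(k):
        best, best_val = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            cov = group_coverage(graph, groups, seeds | {v}, p)
            val = min(cov.values())
            if val > best_val:
                best, best_val = v, val
        seeds.add(best)
    return seeds

# Example usage on a tiny hypothetical network with two groups:
graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2, 4], 4: [3]}
groups = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
print(greedy_maximin_seeds(graph, groups, k=2, p=0.3))
```

A plain influence-maximizing greedy would tend to concentrate seeds in the larger or better-connected group; the maximin variant above instead trades some total spread for coverage of the smaller group, which is the disparity the paper's experiments highlight.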