Wisdom from Diversity: Bias Mitigation Through Hybrid Human-LLM Crowds

Axel Abels, Tom Lenaerts

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 321-329. https://doi.org/10.24963/ijcai.2025/37

Despite their strong performance, large language models (LLMs) can inadvertently perpetuate biases present in the data they are trained on. By analyzing LLM responses to bias-eliciting headlines, we find that these models often mirror human biases. To address this, we explore crowd-based strategies for mitigating bias through response aggregation. We first demonstrate that simply averaging responses from multiple LLMs, intended to leverage the "wisdom of the crowd", can exacerbate existing biases due to the limited diversity within LLM crowds. In contrast, we show that locally weighted aggregation methods more effectively leverage the wisdom of the LLM crowd, achieving both bias mitigation and improved accuracy. Finally, recognizing the complementary strengths of LLMs (accuracy) and humans (diversity), we demonstrate that hybrid crowds containing both significantly enhance performance and further reduce biases across ethnic and gender-related contexts.
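The contrast between plain averaging and locally weighted aggregation can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the responses, the per-member accuracy estimates, and the weighting scheme (normalizing assumed local-accuracy scores into a convex combination) are all hypothetical.

```python
import numpy as np

def simple_average(responses):
    """Unweighted 'wisdom of the crowd': the mean of all responses.
    In a homogeneous LLM crowd, shared biases survive this averaging."""
    return float(np.mean(responses))

def locally_weighted_average(responses, local_accuracy):
    """Weight each crowd member by an estimate of its accuracy on
    similar (local) inputs, so more reliable members count more.
    `local_accuracy` is a hypothetical per-member score, not a
    quantity defined in the paper's abstract."""
    w = np.asarray(local_accuracy, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return float(w @ np.asarray(responses, dtype=float))

# Hypothetical numeric judgments from a hybrid crowd
# (e.g., three LLMs followed by two humans):
responses = [0.9, 0.85, 0.88, 0.4, 0.5]
local_acc = [0.6, 0.55, 0.58, 0.7, 0.65]  # assumed accuracy estimates

print(simple_average(responses))
print(locally_weighted_average(responses, local_acc))
```

Under these made-up numbers, the weighted estimate shifts toward the members judged more locally accurate, which is the mechanism the abstract credits with both bias mitigation and accuracy gains.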
Keywords:
AI Ethics, Trust, Fairness: ETF: Bias
Humans and AI: HAI: Human-AI collaboration
Machine Learning: ML: Ensemble methods
AI Ethics, Trust, Fairness: ETF: Fairness and diversity