Artificial Intelligence, Bias, and Ethics

Aylin Caliskan

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Early Career track, pages 7007-7013. https://doi.org/10.24963/ijcai.2023/799

Although ChatGPT attempts to mitigate bias, when instructed to translate the gender-neutral Turkish sentences “O bir doktor. O bir hemşire” into English, the outcome is biased: “He is a doctor. She is a nurse.” In 2016, we demonstrated that language representations trained via unsupervised learning automatically acquire, through the statistical regularities in language corpora, the implicit biases documented in social cognition. Evaluating embedding associations in language, vision, and multi-modal language-vision models reveals that large-scale sociocultural data is a source of implicit human biases regarding gender, race or ethnicity, skin color, ability, age, sexuality, religion, social class, and intersectional associations. The study of gender bias in language, vision, language-vision, and generative AI has highlighted the sexualization of women and girls in AI, while easily accessible generative AI models such as text-to-image generators amplify bias at scale. As AI increasingly automates tasks that determine life’s outcomes and opportunities, the ethics of AI bias has significant implications for human cognition, society, justice, and the future of AI. Thus, it is necessary to advance our understanding of the depth, prevalence, and complexities of bias in AI to mitigate it both in machines and society.
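To make the notion of "evaluating embedding associations" concrete, the sketch below shows a minimal WEAT-style effect-size computation in the spirit of the Word Embedding Association Test from this line of work. It is an illustrative sketch, not the paper's exact experimental pipeline: it assumes NumPy, and the embedding lookup `emb` together with the target and attribute word lists are hypothetical placeholders to be supplied from any pretrained embedding model.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Differential association of target vector w with attribute sets A and B:
    mean cosine similarity to A minus mean cosine similarity to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Standardized difference (Cohen's d analogue) between the associations of
    target sets X and Y with attribute sets A and B, normalized by the sample
    standard deviation of associations over the pooled targets."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Hypothetical usage: `emb` maps words to vectors from a pretrained embedding model;
# X, Y could be career vs. family target words, A, B male vs. female attribute words.
# d = weat_effect_size([emb[w] for w in X], [emb[w] for w in Y],
#                      [emb[w] for w in A], [emb[w] for w in B])
```

A large positive effect size indicates that the first target set is more strongly associated with the first attribute set than the second target set is, mirroring the structure of implicit association measures in social cognition.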
Keywords: Trustworthy AI; Trustworthy Machine Learning; Fairness in Machine Learning; Algorithmic Fairness