Shaping Shared Languages: Human and Large Language Models' Inductive Biases in Emergent Communication

Tom Kouwenhoven, Max Peeperkorn, Roy de Kleijn, Tessa Verhoef

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-25), Human-Centred AI track, pages 10298-10306. https://doi.org/10.24963/ijcai.2025/1144

Languages are shaped by the inductive biases of their users. Using a classical referential game, we investigate how artificial languages evolve when optimised for the inductive biases of humans and large language models (LLMs) via Human-Human, LLM-LLM and Human-LLM experiments. We show that referentially grounded vocabularies emerge that enable reliable communication in all conditions, even when humans and LLMs collaborate. Comparisons between conditions reveal that languages optimised for LLMs differ subtly from those optimised for humans. Interestingly, interactions between humans and LLMs alleviate these differences and result in vocabularies that are more human-like than LLM-like. These findings advance our understanding of the role that the inductive biases of LLMs play in the dynamic nature of human language, and they contribute to maintaining alignment in human and machine communication. In particular, our work underscores the need for new LLM training methods that incorporate human interaction, and it shows that using communicative success as a reward signal can be a fruitful, novel direction.
Keywords: IJCAI25: Human-Centred AI