What If LLMs Can Smell: A Prototype
Xueyi Zhou, Qi Lu, Dong-Kyu Chae
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Demo Track. Pages 11141-11144.
https://doi.org/10.24963/ijcai.2025/1280
Olfaction is rarely addressed in studies of multi-modal Large Language Models (LLMs). This demo presents a prototypical framework that endows prevalent LLMs with a sense of smell through a plug-and-play olfactory signal processing service. To this end, we collect a dataset on Korean beers using self-developed electronic noses (e-noses), complemented by an open-source dataset. We also generate an olfaction-related question-answering corpus to fine-tune LLMs. A gas classification model identifies the liquor being smelled from the e-nose data, and we then adopt and fine-tune LLMs on the generated datasets. The results show that LLMs under this framework can interact with the environment through their `nose' and provide olfaction-related answers augmented by our dataset. To the best of our knowledge, this is the first work embodying LLMs with artificial olfaction. We additionally deploy the gas classification model and the trained LLM in a simple web-based system to demonstrate the feasibility of our prototype. Our demo video can be found at: https://bit.ly/4j8x6ZY.
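The pipeline in the abstract (e-nose readings → gas classifier → text handed to the LLM) can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the sensor vectors, labels, and the nearest-centroid classifier below are hypothetical stand-ins for the actual e-nose data and gas classification model.

```python
# Hypothetical sketch of the gas-classification step: an e-nose yields a
# vector of sensor responses, a classifier maps it to a liquor label, and
# the label is verbalized for the LLM prompt. All names/data are illustrative.
from math import dist
from statistics import mean

# Toy "e-nose" training readings: sensor-response vectors per beer label.
TRAIN = {
    "beer_A": [[0.9, 0.1, 0.3], [1.0, 0.2, 0.2]],
    "beer_B": [[0.2, 0.8, 0.7], [0.1, 0.9, 0.6]],
}

def centroids(train):
    """Mean sensor vector per class (a nearest-centroid classifier)."""
    return {label: [mean(col) for col in zip(*vecs)]
            for label, vecs in train.items()}

def classify(reading, cents):
    """Return the label whose centroid is closest to the reading."""
    return min(cents, key=lambda lbl: dist(reading, cents[lbl]))

def smell_prompt(reading, cents):
    """Verbalize the classifier output for injection into an LLM prompt."""
    return f"The e-nose detects a sample most similar to {classify(reading, cents)}."

cents = centroids(TRAIN)
print(smell_prompt([0.95, 0.15, 0.25], cents))
# → The e-nose detects a sample most similar to beer_A.
```

In the actual system, the classifier's prediction would similarly be rendered as text and passed to the fine-tuned LLM alongside the user's olfaction-related question.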
Keywords:
Natural Language Processing: NLP: Applications
Humans and AI: HAI: Applications
Humans and AI: HAI: Human-computer interaction
