InnateCoder: Learning Programmatic Options with Foundation Models
Rubens O. Moraes, Quazi Asif Sadmine, Hendrik Baier, Levi H. S. Lelis
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 7652-7660.
https://doi.org/10.24963/ijcai.2025/851
Outside of transfer learning settings, reinforcement learning agents start their learning process from a clean slate. As a result, such agents have to go through a slow process to learn even the most obvious skills required to solve a problem. In this paper, we present InnateCoder, a system that leverages human knowledge encoded in foundation models to provide programmatic policies that encode "innate skills" in the form of temporally extended actions, or options. In contrast to existing approaches to learning options, InnateCoder learns them in a zero-shot setting from the general human knowledge encoded in foundation models, not from the knowledge the agent gains by interacting with the environment. InnateCoder then searches for a programmatic policy by combining the programs encoding these options into larger and more complex programs. We hypothesized that InnateCoder's way of learning and using options could improve the sample efficiency of current methods for learning programmatic policies. Empirical results in MicroRTS and Karel the Robot support our hypothesis, showing that InnateCoder is more sample efficient than versions of the system that do not use options or that learn them from experience.
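To make the two-stage idea in the abstract concrete, the following is a minimal, hypothetical sketch in Python: zero-shot sampling of option programs from a foundation model, followed by a simple local search that combines options into larger policy programs. Every name here (query_foundation_model, evaluate_policy, the concatenation-based combination operator) is an illustrative assumption, not InnateCoder's actual interface or search procedure.

```python
# Hypothetical sketch of the pipeline the abstract describes:
# (1) zero-shot query a foundation model for small programs ("options"),
# (2) search over combinations of those programs for a full policy.

import random


def query_foundation_model(prompt: str, n_samples: int) -> list[str]:
    """Placeholder for a zero-shot call to a foundation model that
    returns candidate option programs as source-code strings."""
    raise NotImplementedError("plug in a model client here")


def evaluate_policy(program: str) -> float:
    """Placeholder: run the program as a policy in the environment
    (e.g., MicroRTS or Karel the Robot) and return its score."""
    raise NotImplementedError("plug in an environment here")


def synthesize_policy(task_description: str, budget: int) -> str:
    # Stage 1: obtain "innate" options zero-shot, before any
    # environment interaction.
    options = query_foundation_model(
        f"Write short programs (skills) useful for: {task_description}",
        n_samples=32,
    )
    # Stage 2: local search that grows larger, more complex programs
    # by combining the option programs.
    best = random.choice(options)
    best_score = evaluate_policy(best)
    for _ in range(budget):
        # A crude combination operator for illustration only:
        # sequence the current best policy with a random option.
        candidate = best + "\n" + random.choice(options)
        score = evaluate_policy(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

The design point the sketch is meant to surface: the options come from the foundation model before any interaction data exists, so the search over combinations starts from useful building blocks rather than from a clean slate.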
Keywords:
Multidisciplinary Topics and Applications: MTA: Computer games
Machine Learning: ML: Reinforcement learning
Search: S: Game playing
Planning and Scheduling: PS: Markov decisions processes
