Reflective Verbal Reward Design for Pluralistic Alignment

Carter Blair, Kate Larson, Edith Law

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence.
Human-Centred AI track, pages 10271-10279. https://doi.org/10.24963/ijcai.2025/1141

AI agents are commonly aligned with "human values" through reinforcement learning from human feedback (RLHF), where a single reward model is learned from aggregated human feedback and used to align an agent's behavior. However, human values are not homogeneous; different people hold distinct and sometimes conflicting values. Aggregating feedback into a single reward model risks disproportionately suppressing minority preferences. To address this, we present a novel reward modeling approach for learning individualized reward models. Our approach uses a language model to guide users through reflective dialogues where they critique agent behavior and construct their preferences. This personalized dialogue history, containing the user's reflections and critiqued examples, is then used as context for another language model that serves as an individualized reward function (what we call a "verbal reward model") for evaluating new trajectories. In studies with 30 participants, our method achieved a 9-12% improvement in accuracy over non-reflective verbal reward models while being more sample efficient than traditional supervised learning methods.
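
The abstract's core mechanism, using a reflective dialogue history as the context for an LLM that scores new trajectories, can be sketched roughly as follows. This is a minimal illustrative sketch only: the class name `VerbalRewardModel`, the prompt wording, and the generic `llm` callable are assumptions, not the authors' implementation.

```python
from typing import Callable, Sequence

# Hypothetical sketch of a "verbal reward model": a language model conditioned
# on one user's reflective dialogue history is asked to score new agent
# trajectories. Names and prompt text here are illustrative assumptions.

class VerbalRewardModel:
    def __init__(self, dialogue_history: Sequence[str], llm: Callable[[str], str]):
        # dialogue_history: the user's reflections and critiqued examples,
        # collected during the guided reflective dialogue.
        self.context = "\n".join(dialogue_history)
        self.llm = llm  # any text-in / text-out language model interface

    def score(self, trajectory: str) -> float:
        # Ask the LLM to rate the trajectory given the personalized context.
        prompt = (
            "You are acting as this user's personal reward function.\n"
            "User's reflections and critiqued examples:\n"
            f"{self.context}\n\n"
            "Rate how well the following agent trajectory matches the user's "
            "preferences on a scale from 0 (poor) to 1 (excellent). "
            "Respond with only a number.\n\n"
            f"Trajectory:\n{trajectory}"
        )
        reply = self.llm(prompt)
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0  # fall back if the model's reply is not a number
```

With any text-completion model wrapped as `llm`, `VerbalRewardModel(history, llm).score(trajectory)` would return a scalar reward for a candidate trajectory, which is the role the paper assigns to its individualized verbal reward model.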
Keywords: IJCAI25: Human-Centred AI