Multi-Objective Quantile-Based Reinforcement Learning for Modern Urban Planning

Lukasz Pelcner, Leandro Soriano Marcolino, Matheus Aparecido do Carmo Alves, Paula A. Harrison, Peter M. Atkinson

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 232-239. https://doi.org/10.24963/ijcai.2025/27

We present a novel Multi-Agent Reinforcement Learning approach to understand and improve policy development by land-shaping agents, such as governments and institutional bodies. We derive the underlying policy decisions by analyzing the land and developing an intelligent system that proposes optimal land-conversion strategies. The aim is an efficient method for allocating residential space that accounts for dynamic population influx across regions, jurisdictional constraints, and the intrinsic characteristics of the land. Sustainability is central to this goal: we preserve desirable land types such as forests and fluvial lands while optimizing land organization. We introduce an attractiveness metric that quantifies proximity to different land types, among other factors, to optimize land usage. Our approach distinguishes two types of agents: "top-down" agents, which are policymakers and shareholders, and "bottom-up" agents, which represent individuals or groups with specific housing preferences. The objective is to create a synergistic environment where the top-down policy meets the bottom-up preferences to devise a comprehensive land-use and conversion strategy. This paper thus serves as a reference point for future urban planning and policy-making processes, contributing to a sustainable and efficient landscape design model.
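As an illustration of the kind of proximity-based attractiveness score described above, the short sketch below assigns each grid cell a value that decays with distance to selected land types. This is a minimal sketch, not the paper's implementation: the land-type codes, the weights, the exponential decay, and the use of a Euclidean distance transform are all assumptions made for this example.

    # Minimal, illustrative sketch of a proximity-based attractiveness score
    # (not the authors' implementation): land-type codes, weights, and the
    # exponential distance decay are assumptions made for this example.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    FOREST, FLUVIAL = 1, 2  # hypothetical land-type codes on a grid map

    def attractiveness(land, weights=None, decay=0.2):
        """Score each cell in [0, 1]; higher means closer to the weighted land types."""
        if weights is None:
            weights = {FOREST: 0.6, FLUVIAL: 0.4}
        score = np.zeros(land.shape, dtype=float)
        for land_type, w in weights.items():
            # Euclidean distance (in cells) to the nearest cell of this land type.
            dist = distance_transform_edt(land != land_type)
            score += w * np.exp(-decay * dist)  # nearer cells contribute more
        return score

    # Toy 5x5 map: a forest patch in one corner and a river along the right edge.
    land = np.zeros((5, 5), dtype=int)
    land[0, 0] = FOREST
    land[:, 4] = FLUVIAL
    print(np.round(attractiveness(land), 2))

In a multi-objective setting, a score of this kind would be only one of several objectives traded off against others, such as development cost or jurisdictional constraints.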
Keywords:
Agent-based and Multi-agent Systems: MAS: Applications
Agent-based and Multi-agent Systems: MAS: Agent-based simulation and emergence
Agent-based and Multi-agent Systems: MAS: Multi-agent learning