Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion

Matthijs T. J. Spaan, Frans A. Oliehoek, Christopher Amato

Abstract
Planning under uncertainty for multiagent systems can be formalized as a decentralized partially observable Markov decision process. We advance the state of the art for optimally solving this model, building on the Multiagent A* heuristic search method. A key insight is that we can avoid the full expansion of a search node, which generates a number of children that is doubly exponential in the node's depth. Instead, we expand the children incrementally, generating the next child only when it might have the highest heuristic value. We target a subsequent bottleneck by introducing a more memory-efficient representation for our heuristic functions. We prove that the resulting algorithm is correct, and experiments demonstrate a significant speedup over the state of the art, allowing optimal solutions over longer horizons for many benchmark problems.
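To make the incremental-expansion idea concrete, the following is a minimal Python sketch of an A*-style loop in which a node's children are generated one at a time rather than all at once. It is an illustration under assumptions, not the paper's GMAA*-ICE implementation: the node interface (is_full_policy, next_child_bound) and the expand_next_child generator are hypothetical stand-ins for partial joint policies and their heuristic bounds, and children are assumed to be yielded in non-increasing order of heuristic value.

```python
import heapq
import itertools


def incremental_a_star(root, expand_next_child, heuristic):
    """A*-style search with incremental (lazy) node expansion.

    expand_next_child(node) is assumed to yield children one at a time,
    in non-increasing order of heuristic value. A parent is re-inserted
    keyed by a bound on its *next* child, so further children are only
    generated when that bound reaches the top of the queue.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    open_list = []               # max-heap emulated with negated values
    heapq.heappush(open_list,
                   (-heuristic(root), next(counter), root, expand_next_child(root)))

    best_value, best_node = float("-inf"), None
    while open_list:
        neg_value, _, node, children = heapq.heappop(open_list)
        value = -neg_value
        if value <= best_value:
            break  # no remaining node can beat the incumbent solution
        if node.is_full_policy():
            # Leaf node: heuristic is assumed to equal the true value here.
            best_value, best_node = value, node
            continue
        child = next(children, None)  # generate only one child
        if child is not None:
            heapq.heappush(open_list,
                           (-heuristic(child), next(counter), child,
                            expand_next_child(child)))
            # Re-insert the parent with the bound on its next, not-yet-generated
            # child; it is expanded further only if that bound can still win.
            heapq.heappush(open_list,
                           (-node.next_child_bound(), next(counter), node, children))
    return best_node, best_value
```

The point of the sketch is the re-insertion step: because a parent never materializes more than one child per pop, the doubly exponential child set is never held in memory unless the heuristic bounds actually force those children to be examined.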