Action Selection via Learning Behavior Patterns in Multi-Robot Systems
Can Erdogan, Manuela Veloso
The RoboCup robot soccer Small Size League has been running since 1997, with many teams competing successfully and playing the games very effectively. Teams of five robots, with combined autonomous centralized perception and control and distributed actuation, move at high speeds in the field space, actuating a golf ball by passing it and shooting it at the goal to score. Most teams run their own pre-defined team strategies, unknown to the other teams, with flexible game-state-dependent assignment of robot roles and positioning. However, in this fast-paced, noisy real-robot league, recognizing the opponent team's strategies and adapting one's own play accordingly has proven to be a considerable challenge. In this work, we analyze logged data of real games gathered by the CMDragons team and contribute several results in learning and responding to opponent strategies. We define episodes as segments of interest in the logged data, and introduce a representation that captures the spatial and temporal data of the multi-robot system as instances of geometric trajectory curves. We then learn a model of the team's strategies through a variant of agglomerative hierarchical clustering. Using the learned cluster model, we are able to classify a team behavior incrementally as it occurs. Finally, we define an algorithm that autonomously generates counter-tactics in a simulation based on the real logs, showing that it can recognize and respond to opponent strategies.
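To illustrate the kind of clustering the abstract refers to, the following is a minimal sketch (not the authors' implementation) of average-linkage agglomerative hierarchical clustering over 2D trajectory curves. It assumes each trajectory has already been resampled to a fixed number of points, and uses a simple mean pointwise Euclidean distance as the trajectory dissimilarity; the merge `threshold` is an illustrative parameter, not one from the paper.

```python
import numpy as np

def trajectory_distance(a, b):
    """Mean pointwise Euclidean distance between two equal-length
    trajectories, each an (n, 2) array of field coordinates."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def agglomerative_cluster(trajectories, threshold):
    """Average-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until the closest pair is farther apart than
    `threshold`. Returns clusters as lists of trajectory indices."""
    clusters = [[i] for i in range(len(trajectories))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average linkage: mean distance over all cross-cluster pairs.
                d = np.mean([trajectory_distance(trajectories[a], trajectories[b])
                             for a in clusters[i] for b in clusters[j]])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break  # remaining clusters are too dissimilar to merge
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy example: two nearly identical straight-line trajectories and one
# far away; the first two should merge, the third stays separate.
t1 = np.column_stack([np.linspace(0, 1, 10), np.zeros(10)])
t2 = np.column_stack([np.linspace(0, 1, 10), np.zeros(10) + 0.1])
t3 = np.column_stack([np.linspace(0, 1, 10), np.zeros(10) + 5.0])
print(agglomerative_cluster([t1, t2, t3], threshold=1.0))  # → [[0, 1], [2]]
```

An incremental classifier in the spirit of the abstract could then compare a partially observed trajectory against the cluster representatives using the same distance, refining its prediction as more of the episode unfolds.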