Learning Where You Are Going and from Whence You Came: h- and g-Cost Learning in Real-Time Heuristic Search

Abstract
Real-time agent-centric algorithms have been used for learning and solving problems since the introduction of the LRTA* algorithm in 1990. Since then, numerous variants have been produced; although they differ in their parameters, they generally follow the same approach of learning a heuristic that estimates the remaining cost to reach a goal state. Recently, a different approach, RIBS, was suggested, which learns costs from the start state instead of costs to the goal. RIBS solves some problems faster, but performs poorly on others. We present a new algorithm, f-cost Learning Real-Time A* (f-LRTA*), which combines both approaches, simultaneously learning distances from the start and heuristics to the goal. An empirical evaluation demonstrates that f-LRTA* outperforms both RIBS and LRTA*-style approaches in a range of scenarios.
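To fix notation for the reader (this is the standard A*-style cost decomposition, not a definition taken from this abstract): LRTA*-style methods learn the heuristic term h, RIBS learns the cost-from-start term g, and f-LRTA* maintains learned estimates of both, so that

\[ f(s) = g(s) + h(s), \]

where g(s) estimates the cost of reaching state s from the start state and h(s) estimates the remaining cost from s to the goal.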