Risk-Averse Trust Region Optimization for Reward-Volatility Reduction

Lorenzo Bisi, Luca Sabbioni, Edoardo Vittori, Matteo Papini, Marcello Restelli

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Special Track on AI in FinTech. Pages 4583-4589. https://doi.org/10.24963/ijcai.2020/632

The use of reinforcement learning in algorithmic trading is of growing interest, since it offers the opportunity to make profits by developing autonomous artificial traders that do not depend on hard-coded rules. In such a framework, keeping uncertainty under control is as important as maximizing expected returns. Risk aversion has been addressed in reinforcement learning through measures related to the distribution of returns. In trading, however, it is essential to keep the risk of portfolio positions under control at intermediate steps as well. In this paper, we define a novel measure of risk, which we call reward volatility, consisting of the variance of the rewards under the state-occupancy measure. This new risk measure is shown to bound the return variance, so that reducing the former also constrains the latter. We derive a policy gradient theorem with a new objective function that exploits the mean-volatility relationship. Furthermore, we adapt TRPO, the well-known policy gradient algorithm with monotonic improvement guarantees, in a risk-averse manner. Finally, we test the proposed approach in two financial environments using real market data.
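A minimal sketch of the quantities named in the abstract, with notation assumed here rather than quoted from the paper body: writing $d_{\mu,\pi}$ for the (discounted) state-action occupancy measure, $J_\pi$ for the expected per-step reward, and $\sigma^2_\pi$ for the return variance, the reward volatility and a mean-volatility objective can be expressed as

% Sketch only; symbols and normalization constants are assumptions.
\[
  J_\pi = \mathbb{E}_{(s,a)\sim d_{\mu,\pi}}\!\left[\, r(s,a) \,\right],
  \qquad
  \nu^2_\pi = \mathbb{E}_{(s,a)\sim d_{\mu,\pi}}\!\left[\, \big(r(s,a) - J_\pi\big)^2 \,\right]
  \quad \text{(reward volatility)},
\]
\[
  \eta_\pi = J_\pi - \lambda\, \nu^2_\pi ,
  \qquad \lambda \ge 0 ,
\]

where $\lambda$ trades off expected reward against volatility. The boundedness claim in the abstract corresponds to an inequality of the form $\sigma^2_\pi \le \nu^2_\pi /(1-\gamma)^2$ (up to normalization), so reducing the per-step reward variance also constrains the variance of the cumulative return.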
Keywords:
Foundation for AI in FinTech: Reinforcement learning for FinTech
AI for trading: AI for algorithmic trading