Model-Free Real-Time Autonomous Energy Management for a Residential Multi-Carrier Energy System: A Deep Reinforcement Learning Approach

Yujian Ye, Dawei Qiu, Jonathan Ward, Marcin Abram

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 339-346. https://doi.org/10.24963/ijcai.2020/48

The problem of real-time autonomous energy management is an application area receiving unprecedented attention from consumers, governments, academia, and industry. This paper showcases the first application of deep reinforcement learning (DRL) to real-time autonomous energy management for a multi-carrier energy system. The proposed approach is tailored to the nature of the energy management problem by posing it in multi-dimensional continuous state and action spaces, so as to coordinate power flows between different energy devices and to adequately capture the synergistic couplings between different energy carriers. This fundamental contribution is a significant step beyond earlier approaches, which only sought to control the power output of a single device and neglected the demand-supply coupling of different energy carriers. Case studies on a real-world scenario demonstrate that the proposed method significantly outperforms existing DRL methods as well as model-based control approaches, achieving the lowest energy cost and learning energy management policies that adapt to system uncertainties.
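To make the continuous-state, continuous-action formulation described above concrete, the sketch below models a toy residential multi-carrier environment. All dynamics, device names, and coefficients here are illustrative assumptions, not the paper's actual model: the state stacks time, electric and heat demand, price, and battery state of charge into one continuous vector, and the action is a continuous vector of device setpoints (a CHP unit whose output couples the electricity and heat carriers, plus a battery charge rate). A DRL actor network would map the state vector to the action vector; a fixed placeholder action stands in for it here.

```python
import numpy as np

class MultiCarrierEnv:
    """Illustrative residential multi-carrier environment (hypothetical
    dynamics and parameters, for intuition only).

    State (continuous, 5-dim): [hour/24, electric demand, heat demand,
    electricity price, battery state of charge].
    Action (continuous, 2-dim, each in [-1, 1]): [CHP setpoint, battery
    charge rate].
    """

    def __init__(self):
        self.battery = 0.5  # normalised state of charge
        self.t = 0

    def reset(self):
        self.battery = 0.5
        self.t = 0
        return self._state()

    def _state(self):
        hour = self.t % 24
        # Smooth, deterministic daily profiles (stand-ins for real data).
        e_dem = 0.4 + 0.2 * np.sin(2 * np.pi * hour / 24)
        h_dem = 0.5 + 0.3 * np.cos(2 * np.pi * hour / 24)
        price = 0.1 + 0.05 * np.sin(2 * np.pi * (hour - 6) / 24)
        return np.array([hour / 24, e_dem, h_dem, price, self.battery])

    def step(self, action):
        chp, charge = np.clip(action, -1.0, 1.0)
        chp = (chp + 1.0) / 2.0  # map CHP setpoint to [0, 1]
        _, e_dem, h_dem, price, _ = self._state()
        # Battery dynamics, clipped to the feasible state-of-charge range.
        self.battery = float(np.clip(self.battery + 0.1 * charge, 0.0, 1.0))
        # Carrier coupling: the CHP produces electricity and heat jointly,
        # so one action affects the balance of both carriers at once.
        heat_out = 1.2 * chp
        grid_import = max(e_dem + 0.1 * max(charge, 0.0) - chp, 0.0)
        unmet_heat = max(h_dem - heat_out, 0.0)
        cost = price * grid_import + 0.02 * chp + 0.5 * unmet_heat
        self.t += 1
        done = self.t >= 24
        return self._state(), -cost, done  # reward = negative energy cost

env = MultiCarrierEnv()
s = env.reset()
total_reward = 0.0
for _ in range(24):
    # Placeholder policy; a trained DRL actor would compute a = pi(s).
    a = np.array([0.2, 0.0])
    s, r, done = env.step(a)
    total_reward += r
```

Because both the state and action are real-valued vectors rather than discrete choices, value-based methods such as DQN do not apply directly, which is why continuous-control DRL algorithms are the natural fit for this formulation.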
Keywords:
Agent-based and Multi-agent Systems: Agent-Based Simulation and Emergence
Multidisciplinary Topics and Applications: Real-Time Systems
Machine Learning Applications: Applications of Reinforcement Learning