Stochastic Actor-Executor-Critic for Image-to-Image Translation

Ziwei Luo, Jing Hu, Xin Wang, Siwei Lyu, Bin Kong, Youbing Yin, Qi Song, Xi Wu

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2775-2781. https://doi.org/10.24963/ijcai.2021/382

Training a model-free deep reinforcement learning agent for image-to-image translation is difficult because the task involves high-dimensional continuous state and action spaces. In this paper, we draw inspiration from the recent success of the maximum entropy reinforcement learning framework, designed for challenging continuous control problems, to develop stochastic policies over high-dimensional continuous spaces that handle image representation, generation, and control simultaneously. Central to this method is the Stochastic Actor-Executor-Critic (SAEC), an off-policy actor-critic model with an additional executor that generates realistic images. Specifically, the actor focuses on high-level representation and control through a stochastic latent action, and explicitly directs the executor to generate the low-level actions that manipulate the state. Experiments on several image-to-image translation tasks demonstrate the effectiveness and robustness of the proposed SAEC on high-dimensional continuous-space problems.
Keywords:
Machine Learning: Deep Reinforcement Learning
Machine Learning: Learning Generative Models
Machine Learning Applications: Applications of Reinforcement Learning
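
The following is a minimal sketch, in PyTorch, of the actor-executor-critic structure described in the abstract: an actor that maps an image state to a stochastic latent action, an executor that decodes that latent action into a low-level image-space action, and a critic that scores the (state, latent action) pair. It is not the authors' released implementation; all network sizes, the latent-action dimension, and module names are illustrative assumptions.

import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps an image state to a stochastic latent action (diagonal Gaussian)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.log_std = nn.Linear(64, latent_dim)

    def forward(self, state):
        h = self.encoder(state)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        z = dist.rsample()                       # reparameterized latent action
        log_prob = dist.log_prob(z).sum(-1)      # used by an entropy-regularized objective
        return z, log_prob


class Executor(nn.Module):
    """Decodes the latent action into a low-level, image-sized action."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.decoder(z)


class Critic(nn.Module):
    """Scores a (state, latent action) pair, as in an off-policy actor-critic."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.q = nn.Sequential(
            nn.Linear(32 + latent_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, state, z):
        return self.q(torch.cat([self.encoder(state), z], dim=-1))


# Usage: one forward pass through actor -> executor -> critic on a toy state.
state = torch.randn(1, 3, 32, 32)            # toy 32x32 image state
actor, executor, critic = Actor(), Executor(), Critic()
z, log_prob = actor(state)                    # stochastic high-level latent action
low_level_action = executor(z)                # generated image that manipulates the state
q_value = critic(state, z)                    # critic's value for the latent action

In this sketch the critic evaluates the latent action rather than the generated image, reflecting the abstract's split between high-level control (actor) and low-level generation (executor); how the executor is trained (e.g., with an adversarial or reconstruction loss) is left out here.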