Soft Q-Learning with Mutual-Information Regularization

ICLR 2019, May 6-9, New Orleans, USA

Authors: Jordi Grau-Moya, Felix Leibfried and Peter Vrancx

Abstract: We propose a reinforcement learning (RL) algorithm that uses mutual-information regularization to optimize a prior action distribution for better performance and exploration. Entropy-based regularization has previously been shown to improve both exploration and robustness in challenging sequential decision-making tasks; it does so by encouraging policies to put probability mass on all actions. However, entropy regularization can be undesirable when actions differ significantly in importance. In this paper, we propose a theoretically motivated framework that dynamically weights the importance of actions using mutual information. In particular, we express the RL problem as an inference problem in which the prior probability distribution over actions is itself subject to optimization. We show that optimizing the prior introduces a mutual-information regularizer into the RL objective. This regularizer encourages the policy to stay close to a non-uniform distribution that assigns higher probability mass to more important actions. We empirically demonstrate that our method significantly improves over both entropy-regularized and unregularized methods.
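The core idea in the abstract can be sketched in tabular form: the policy is a softmax over Q-values reweighted by a prior over actions, and the prior is repeatedly moved toward the policy's state-averaged marginal, which is what turns an entropy regularizer into a mutual-information regularizer. The sketch below is a minimal illustration under assumed details (the toy MDP, step sizes, and variable names are ours), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
beta = 5.0        # inverse temperature: trades off reward vs. information cost
gamma = 0.9       # discount factor
lr = 0.1          # Q-learning step size
prior_lr = 0.05   # step size for the prior (marginal) update

Q = np.zeros((n_states, n_actions))
prior = np.full(n_actions, 1.0 / n_actions)  # prior over actions, subject to optimization

def policy(s):
    # Soft policy: pi(a|s) proportional to prior(a) * exp(beta * Q(s, a))
    logits = beta * Q[s] + np.log(prior)
    logits -= logits.max()               # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def soft_value(s):
    # V(s) = (1/beta) * log sum_a prior(a) * exp(beta * Q(s, a))
    logits = beta * Q[s] + np.log(prior)
    m = logits.max()
    return (m + np.log(np.exp(logits - m).sum())) / beta

# Assumed toy environment: uniform transitions, reward 1 only for action 0
for step in range(2000):
    s = rng.integers(n_states)
    a = rng.choice(n_actions, p=policy(s))
    r = 1.0 if a == 0 else 0.0
    s_next = rng.integers(n_states)
    # Soft Q update toward r + gamma * soft value of the next state
    Q[s, a] += lr * (r + gamma * soft_value(s_next) - Q[s, a])
    # Prior update: move toward the state-averaged policy marginal; with the
    # prior equal to that marginal, the KL penalty becomes a mutual-information
    # regularizer I(S; A)
    marginal = np.mean([policy(si) for si in range(n_states)], axis=0)
    prior = (1 - prior_lr) * prior + prior_lr * marginal
```

After training, the optimized prior concentrates on the rewarding action rather than staying uniform, illustrating how the method assigns higher prior mass to more important actions.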

Reinforcement Learning

See paper
