A Bayesian Approach to Generative Adversarial Imitation Learning


NeurIPS 2018, December 2-8, Montreal, Canada

Authors: Wonseok Jeon (KAIST), Seokin Seo (KAIST), and Kee-Eung Kim (KAIST & PROWLER.io)

Abstract: Generative adversarial training for imitation learning has shown promising results on high-dimensional and continuous control tasks. This paradigm is based on reducing the imitation learning problem to a density-matching problem, where the agent iteratively refines its policy to match the empirical state-action visitation frequency of the expert demonstrations. Although this approach has been shown to learn to imitate robustly even from scarce demonstrations, one must still address the inherent challenge that collecting trajectory samples in each iteration is costly. To address this issue, we first propose a Bayesian formulation of generative adversarial imitation learning (GAIL), in which the imitation policy and the cost function are represented as stochastic neural networks. We then show that the sample efficiency of GAIL can be significantly enhanced by leveraging the predictive density of the cost, on an extensive set of imitation learning tasks with high-dimensional states and actions.
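
For readers unfamiliar with GAIL, the density-matching reduction mentioned above corresponds to the standard saddle-point objective of generative adversarial imitation learning. The sketch below is the generic, non-Bayesian formulation that the paper builds on (not the paper's own Bayesian objective), with D the discriminator, pi the imitation policy, pi_E the expert policy, H the causal entropy, and lambda an entropy coefficient:

$$ \min_{\pi} \max_{D \in (0,1)} \; \mathbb{E}_{\pi}\big[\log D(s,a)\big] + \mathbb{E}_{\pi_E}\big[\log\big(1 - D(s,a)\big)\big] - \lambda H(\pi) $$

In the Bayesian formulation described in the abstract, the point-estimate cost (discriminator) is replaced by a stochastic network, and its predictive density is what allows the method to make better use of the trajectory samples collected in each iteration.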

Tags: Reinforcement Learning, NeurIPS, Adversarial Training, Imitation Learning, Generative Model


