Active Exploration for Learning Symbolic Representations



4-9 December 2017, Long Beach, California, USA

Advances in Neural Information Processing Systems 30 (NIPS 2017)

Authors: Garrett Andersen, George Konidaris (Brown)

Abstract: We introduce an online active exploration algorithm for data-efficiently learning an abstract symbolic model of an environment. Our algorithm is divided into two parts: the first part quickly generates an intermediate Bayesian symbolic model from the data that the agent has collected so far, which the agent can then use along with the second part to guide its future exploration towards regions of the state space that the model is uncertain about. We show that our algorithm outperforms random and greedy exploration policies on two different computer game domains. The first domain is an Asteroids-inspired game with complex dynamics, but basic logical structure. The second is the Treasure Game, with simpler dynamics, but more complex logical structure.
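The paper's Bayesian symbolic model and its exact exploration criterion are not reproduced here, but the core idea — maintaining a posterior over uncertain parts of the environment and directing samples toward the region the model is least sure about — can be sketched in miniature. In this hypothetical example, each "region" has a binary outcome tracked with a Beta posterior, and the agent always probes the region with the highest posterior variance (the region names and probabilities are illustrative, not from the paper):

```python
import random

class BetaModel:
    """Bayesian model of a binary outcome (e.g., whether acting from a
    region of state space succeeds), tracked with a Beta posterior."""
    def __init__(self):
        self.alpha = 1.0  # successes + 1 (uniform Beta(1, 1) prior)
        self.beta = 1.0   # failures + 1

    def update(self, success):
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def variance(self):
        # Variance of Beta(alpha, beta); high variance = high uncertainty.
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1.0))

def explore(models, env_step, steps):
    """Active exploration loop: repeatedly sample the region the
    model is currently most uncertain about."""
    for _ in range(steps):
        region = max(models, key=lambda r: models[r].variance())
        models[region].update(env_step(region))

# Toy environment: each region's outcome is Bernoulli with a fixed,
# unknown-to-the-agent probability (values here are made up).
probs = {"door": 0.9, "ladder": 0.5, "key": 0.1}
rng = random.Random(0)
models = {r: BetaModel() for r in probs}
explore(models, lambda r: rng.random() < probs[r], steps=200)
```

After the loop, samples concentrate on the regions whose outcomes remain hardest to predict, which is the qualitative behavior the paper exploits: exploration effort goes where the symbolic model is still uncertain, rather than being spread uniformly at random.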



Reinforcement Learning

Representation Learning

