InfoBot: Structured Exploration in Reinforcement Learning Using Information Bottleneck

Anirudh Goyal
Riashat Islam
Daniel Strouse
Matthew Botvinick
Yoshua Bengio
Sergey Levine
ICLR (2019)

Abstract

A central challenge in reinforcement learning is discovering effective policies for
tasks where rewards are sparsely distributed. We postulate that in the absence of
useful reward signals, an effective exploration strategy should seek out decision
states. These states lie at critical junctions in the state space from where the agent
can transition to new, potentially unexplored regions. We propose to learn about
decision states from prior experience. By training a goal-conditioned policy with
an information bottleneck, we can identify decision states by examining where
the model actually leverages the goal state. We find that this simple mechanism
effectively identifies decision states, even in partially observed settings. In effect,
the model learns the sensory cues that correlate with potential subgoals. In new
environments, this model can then identify novel subgoals for further exploration,
guiding the agent through a sequence of potential decision states and through new
regions of the state space.
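
To make the mechanism concrete, below is a minimal PyTorch sketch of how such a bottleneck term could be implemented. All names (`GoalConditionedEncoder`, `bottleneck_kl`, `beta`) and the unit-Gaussian goal-agnostic prior are illustrative assumptions rather than details of the paper's implementation; the sketch only shows the general shape of a KL penalty between a goal-conditioned latent and a goal-agnostic prior.

```python
import torch
import torch.nn as nn

class GoalConditionedEncoder(nn.Module):
    """Maps (state, goal) to a Gaussian latent Z that the policy consumes.

    The KL of this goal-conditioned posterior against a goal-agnostic
    prior is the information-bottleneck penalty.
    """
    def __init__(self, state_dim, goal_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, 64),
            nn.Tanh(),
            nn.Linear(64, 2 * latent_dim),
        )

    def forward(self, state, goal):
        # Output the mean and log-std of the Gaussian posterior over Z.
        mu, log_std = self.net(torch.cat([state, goal], dim=-1)).chunk(2, dim=-1)
        return mu, log_std

def bottleneck_kl(mu, log_std):
    """KL( N(mu, sigma^2) || N(0, I) ), computed per state.

    High values mark states where encoding the goal costs many bits,
    i.e. where the policy actually depends on the goal; these are the
    candidate decision states.
    """
    return 0.5 * (mu.pow(2) + (2.0 * log_std).exp() - 2.0 * log_std - 1.0).sum(-1)

# During training, the penalty would be added to the usual RL objective:
#   loss = policy_loss + beta * bottleneck_kl(mu, log_std).mean()
# where beta trades off task reward against goal information.
```

In a transfer setting, states with a high KL value could then be rewarded as an exploration bonus, steering the agent toward potential decision states as the abstract describes.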
