ICRA 2020 Keynote - Can Deep Reinforcement Learning from pixels be made as efficient as from state?
An ICRA 2020 keynote by Pieter Abbeel. Learning from visual observations is a fundamental yet challenging problem in reinforcement learning. Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, it has been widely accepted that learning from pixels is not as efficient as learning from direct access to the underlying state. In this talk I will describe our recent work that (almost entirely) bridges the gap in sample complexity between learning from pixels and from state, as empirically validated on the DeepMind Control Suite and Atari games. In fact, I will present two new approaches that establish this new state of the art: Reinforcement Learning with Augmented Data (RAD) and Contrastive Unsupervised Representations for Reinforcement Learning (CURL). At the core of both is data augmentation through random crops. Our approaches outperform prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari games, showing 1.9x and 1.6x performance gains at the 100K environment-step and 100K interaction-step benchmarks, respectively.
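The random-crop augmentation at the core of RAD and CURL can be sketched as follows. This is a minimal NumPy illustration, not code from either paper: the function name, batch layout, and the 100×100 → 84×84 sizes are assumptions chosen for the example, with each observation in a batch receiving an independently sampled crop position.

```python
import numpy as np

def random_crop(imgs, out_size=84):
    """Randomly crop a batch of image observations.

    imgs: array of shape (B, C, H, W) with H, W >= out_size.
    Each image gets an independently sampled crop position,
    so two passes over the same batch yield different views.
    """
    b, c, h, w = imgs.shape
    tops = np.random.randint(0, h - out_size + 1, size=b)
    lefts = np.random.randint(0, w - out_size + 1, size=b)
    out = np.empty((b, c, out_size, out_size), dtype=imgs.dtype)
    for i in range(b):
        t, l = tops[i], lefts[i]
        out[i] = imgs[i, :, t:t + out_size, l:l + out_size]
    return out

# Hypothetical usage: a batch of 8 frame-stacked observations
# rendered at 100x100, cropped to the 84x84 network input size.
obs = np.random.randint(0, 256, size=(8, 9, 100, 100), dtype=np.uint8)
aug = random_crop(obs)  # shape (8, 9, 84, 84)
```

In a contrastive setup like CURL, two independent crops of the same observation would serve as the anchor and positive pair; in RAD, the crop is simply applied to observations before they reach the RL agent.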