I'm a research scientist at Google Brain in Montréal, Canada. My research focuses on reinforcement learning, generative models, and lifelong learning. Prior to joining Google Brain, I was at DeepMind from 2013 to 2017, where I worked on a number of reinforcement learning projects, most notably helping to develop the Deep Q-Network (DQN) agents that learned to play Atari 2600 games. Some of my research contributions include:
- The Arcade Learning Environment, the research interface to Atari 2600 games, developed at the University of Alberta,
- The use of density models to quantify uncertainty, for example to drive an agent's exploratory behaviour,
- A distributional framework for reinforcement learning, which lets us use tools from probabilistic modelling and classification in RL,
- Extremely fast generative models designed for model-based planning from few samples,
- Algorithms for stable and efficient off-policy learning in reinforcement learning,
- Algorithms for online statistical forecasting under constraints (e.g. bounded memory),
- Algorithms for black-box curriculum learning.
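To give a flavour of the distributional framework mentioned above, here is a minimal sketch of the categorical projection step used in C51-style distributional RL. This is an illustrative toy, not the published implementation; all names (`project_distribution`, `atoms`, `probs`) are my own.

```python
import numpy as np

def project_distribution(atoms, probs, reward, gamma):
    """Apply a Bellman update to a categorical value distribution and
    project the result back onto the fixed support `atoms`.

    atoms : evenly spaced support points (e.g. np.linspace(v_min, v_max, 51))
    probs : probability mass on each atom (sums to 1)
    """
    v_min, v_max = atoms[0], atoms[-1]
    delta_z = atoms[1] - atoms[0]

    # Shift and shrink each atom by the Bellman operator, clipping to the support.
    tz = np.clip(reward + gamma * atoms, v_min, v_max)

    # Fractional index of each shifted atom on the original support.
    b = (tz - v_min) / delta_z
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)

    # Distribute each atom's mass to its two neighbouring support points.
    projected = np.zeros_like(probs)
    for i in range(len(atoms)):
        if lower[i] == upper[i]:
            # The shifted atom lands exactly on a support point.
            projected[lower[i]] += probs[i]
        else:
            projected[lower[i]] += probs[i] * (upper[i] - b[i])
            projected[upper[i]] += probs[i] * (b[i] - lower[i])
    return projected
```

Because mass is only redistributed between neighbouring atoms, the projection preserves total probability, which is what makes the distributional Bellman update usable as a training target.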