- Breaking the Sample Size Barrier in Reinforcement Learning
Reinforcement learning (RL) is frequently modeled as learning and decision making in a Markov decision process (MDP). A core objective of RL is to search for a policy — based on a collection of noisy data samples — that approximately maximizes expected cumulative rewards in an MDP, without direct access to a precise description of the underlying model. In contemporary applications, it is increasingly common to encounter environments with prohibitively large state and action spaces, which exacerbates the challenge of collecting enough samples to learn the model.
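For concreteness, the policy-search objective described above can be written as the standard discounted-return criterion; the display below is an illustrative sketch of that generic formulation (the notation γ, π, r, and V is ours and not taken from the talk):

```latex
% Illustrative sketch of the standard discounted infinite-horizon RL objective.
% Notation (gamma, pi, r, V) is generic and assumed here, not specified by the talk.
\[
  \max_{\pi}\; V^{\pi}(s)
  \;=\;
  \max_{\pi}\;
  \mathbb{E}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)
  \;\middle|\; s_0 = s,\; a_t \sim \pi(\cdot \mid s_t)\right],
  \qquad \gamma \in (0,1),
\]
```

where the expectation is over the trajectory generated by the MDP's (unknown) transition kernel under policy π; the sample-size questions in the talk concern how much data is needed to find a near-maximizer of this objective.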
In this talk, we present three recent works that show how to break the sample size barrier in reinforcement learning. The first part demonstrates that a perturbed model-based RL approach is minimax optimal under a generative model, without suffering from the sample size barrier present in all prior work. The second work shows that model-based offline reinforcement learning is minimax optimal without any burn-in cost. Finally, we develop a minimax-optimal algorithm for multi-agent Markov games that breaks the curse of multi-agents and the long-horizon barrier at the same time. These results might shed light on the efficacy of these algorithms in more complicated scenarios.
References: https://arxiv.org/abs/2005.12900, https://arxiv.org/abs/2204.05275, https://arxiv.org/abs/2208.10458
Gen Li is currently a postdoctoral researcher in the Department of Statistics and Data Science at the Wharton School, University of Pennsylvania. He received his Ph.D. in Electrical Engineering from Princeton University in 2021, and his bachelor's degrees in Electronic Engineering and Mathematics from Tsinghua University in 2016. His research interests include reinforcement learning, high-dimensional statistics, machine learning, signal processing, and mathematical optimization. He has received the Excellent Graduate Award and the Excellent Thesis Award from Tsinghua University.