Paper Title
Importance Weighted Policy Learning and Adaptation
Paper Authors
Paper Abstract
The ability to exploit prior experience to solve novel problems rapidly is a hallmark of biological learning systems and of great practical importance for artificial ones. In the meta reinforcement learning literature, much recent work has focused on the problem of optimizing the learning process itself. In this paper we study a complementary approach which is conceptually simple, general, modular, and built on top of recent improvements in off-policy learning. The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior, or default behavior, that constrains the space of solutions and serves as a bias for exploration, as well as a representation for the value function; both are easily learned from a number of training tasks in a multi-task scenario. Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
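To make the combination sketched in the abstract concrete, the probabilistic-inference view of RL that it draws on is commonly expressed as a KL-regularized objective. The following is a generic sketch of that family of objectives, not the paper's exact loss; the symbols $\pi_\theta$ (task policy), $\pi_0$ (behavior prior), $\mu$ (behavior policy that generated the off-policy data), and $\alpha$ (regularization strength) are illustrative notation rather than taken from the paper.

\[
J(\theta) \;=\; \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t} \gamma^{t}\, r(s_t, a_t)\Big]
\;-\; \alpha\, \mathbb{E}_{\pi_\theta}\!\Big[\sum_{t} \gamma^{t}\, \mathrm{KL}\big(\pi_\theta(\cdot \mid s_t) \,\|\, \pi_0(\cdot \mid s_t)\big)\Big]
\]

In this reading, the prior $\pi_0$ and a value-function representation are learned across the training tasks; on a hold-out task, $\pi_0$ constrains the solution space and biases exploration, while $\pi_\theta$ is optimized off-policy, typically with importance weights of the form $\rho_t = \pi_\theta(a_t \mid s_t) / \mu(a_t \mid s_t)$ to correct for the mismatch with the data-generating policy $\mu$, which is what the "importance weighted" in the title refers to.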