Kernel-Based Lifelong Policy Gradient Reinforcement Learning
Rami Mowakeaa, Seung-Jun Kim, Darren Emge
Policy gradient methods have been widely used in reinforcement learning (RL), thanks in particular to their ability to handle continuous state spaces, their strong convergence guarantees, and their low-complexity updates. Training these methods for individual tasks, however, can still be taxing in terms of learning speed and sample trajectory collection. Lifelong learning aims to exploit the intrinsic structure shared among a suite of RL tasks, akin to multitask learning, but in an efficient online fashion. In this work, we propose a lifelong RL algorithm that employs the kernel method to leverage nonlinear features of the data under a popular union-of-subspaces model. Experimental results on a set of simple related tasks verify the advantage of the proposed strategy over its single-task and parametric counterparts.
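
The sketch below illustrates the general idea the abstract describes, not the paper's actual algorithm: a REINFORCE-style Gaussian-policy learner whose per-task parameter vector is factorized as theta_t = L s_t (shared basis L, task-specific code s_t, as in union-of-subspaces lifelong learners such as PG-ELLA), with states lifted through an RBF kernel feature map in place of hand-crafted parametric features. All names (KernelLifelongPG, n_latent, etc.) and the specific update rule are illustrative assumptions.

import numpy as np

class KernelLifelongPG:
    def __init__(self, centers, n_latent, bandwidth=1.0, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.asarray(centers)   # (m, d) kernel centers
        self.bw = bandwidth
        # Shared latent basis L, common across tasks (hypothetical init).
        self.L = 0.1 * rng.standard_normal((len(self.centers), n_latent))
        self.codes = {}                      # task id -> latent code s_t
        self.lr = lr

    def features(self, state):
        # RBF kernel features k(state, c_i) evaluated at each center c_i.
        d2 = ((self.centers - state) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.bw ** 2))

    def action(self, task, state, noise_std=0.5, rng=None):
        rng = rng or np.random.default_rng()
        s = self.codes.setdefault(task, np.zeros(self.L.shape[1]))
        mean = self.features(state) @ (self.L @ s)   # theta_t = L s_t
        return rng.normal(mean, noise_std), mean

    def update(self, task, trajectory, noise_std=0.5):
        # REINFORCE ascent on both s_t and the shared L from one
        # trajectory of (state, action, return-to-go) triples.
        s = self.codes.setdefault(task, np.zeros(self.L.shape[1]))
        gL = np.zeros_like(self.L)
        gs = np.zeros_like(s)
        for state, a, G in trajectory:
            phi = self.features(state)
            mean = phi @ (self.L @ s)
            score = (a - mean) / noise_std ** 2      # d log pi / d mean
            gL += G * score * np.outer(phi, s)       # chain rule through L
            gs += G * score * (self.L.T @ phi)       # chain rule through s_t
        self.L += self.lr * gL / len(trajectory)
        self.codes[task] = s + self.lr * gs / len(trajectory)

Updating the shared basis L from every task's trajectories is what transfers knowledge across the task suite; a new task only needs to fit its low-dimensional code s_t, which is the efficiency argument behind the union-of-subspaces model.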