Efficient Planning in a Compact Latent Action Space

Abstract

While planning-based sequence modelling methods have shown great potential in continuous control, scaling them to high-dimensional state-action sequences remains an open challenge due to the high computational complexity and innate difficulty of planning in high-dimensional spaces. We propose the Trajectory Autoencoding Planner (TAP), a planning-based sequence modelling RL method that scales to high state-action dimensionalities. Using a state-conditional Vector-Quantized Variational Autoencoder (VQ-VAE), TAP models the conditional distribution of trajectories given the current state. When deployed as an RL agent, TAP avoids planning step by step in the high-dimensional continuous action space and instead searches for optimal latent code sequences with beam search. Whereas the planning complexity of the Trajectory Transformer (TT) grows as O(D^3) in the state-action dimensionality D, TAP's planning complexity is constant, O(C), with respect to D. Our empirical evaluation also shows the increasingly strong performance of TAP as the dimensionality grows. On Adroit robotic hand manipulation tasks with high state and action dimensionality, TAP surpasses existing model-based methods, including TT, by a large margin and also beats strong model-free actor-critic baselines.
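To make the planning procedure concrete, below is a minimal, hedged sketch of beam search over discrete latent codes, in the spirit of TAP but not the paper's actual implementation. The names `prior_logits`, `decode_return`, and the constants `K`, `HORIZON`, and `BEAM_WIDTH` are illustrative assumptions standing in for the learned autoregressive prior over codes and the VQ-VAE decoder that scores decoded trajectories.

```python
# Sketch: planning by beam search over discrete latent codes.
# All model components are random stand-ins for illustration only.
import numpy as np

K = 16          # codebook size (number of discrete latent codes), assumed
HORIZON = 4     # latent codes per plan (each code can cover several env steps)
BEAM_WIDTH = 8  # candidates kept after each expansion

rng = np.random.default_rng(0)

def prior_logits(state, code_prefix):
    """Stand-in for the learned autoregressive prior over the next latent code."""
    return rng.normal(size=K)

def decode_return(state, codes):
    """Stand-in for decoding a latent code sequence into a trajectory and
    scoring it (e.g., predicted return, possibly with a penalty for unlikely plans)."""
    return float(rng.normal()) + 0.1 * len(codes)

def plan(state):
    beams = [((), 0.0)]  # list of (code sequence, score)
    for _ in range(HORIZON):
        candidates = []
        for codes, _ in beams:
            logits = prior_logits(state, codes)
            # Expand each beam with the most probable next codes under the prior.
            for code in np.argsort(logits)[-BEAM_WIDTH:]:
                new_codes = codes + (int(code),)
                candidates.append((new_codes, decode_return(state, new_codes)))
        # Keep the highest-scoring candidates. The cost per step depends on the
        # codebook size and beam width, not on the raw state-action dimensionality.
        beams = sorted(candidates, key=lambda c: c[1])[-BEAM_WIDTH:]
    return max(beams, key=lambda c: c[1])

if __name__ == "__main__":
    best_codes, best_score = plan(state=np.zeros(39))  # e.g., an Adroit-sized observation
    print(best_codes, best_score)
```

The point of the sketch is the complexity argument: the search branches over a fixed-size codebook rather than a continuous, high-dimensional action space, so the per-step planning cost stays constant as the state-action dimensionality grows.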

Publication
In arXiv
Zhengyao Jiang
PhD Student of Machine Learning

I’m Zhengyao Jiang, a machine learning PhD student at UCL, supervised by Tim Rocktäschel and Edward Grefenstette. I’m interested in improving the data efficiency and generalization of reinforcement learning, pushing RL towards real-world applications. To reach these goals, my research focuses on incorporating priors via neuro-symbolic methods and on leveraging off-policy/offline data. I previously worked on deep learning applications in finance, with experience in live algorithmic trading on cryptocurrency markets.