An imitation learning framework for generating multi-modal trajectories from unstructured demonstrations

Jian Wei Peng, Min-Chun Hu, Wei Ta Chu

Research output: Article › peer-review

Abstract

The main challenge of the trajectory generation problem is to generate trajectories that are both long-term and diverse. Generative Adversarial Imitation Learning (GAIL) is a well-known model-free imitation learning algorithm that can be used to generate trajectory data, but vanilla GAIL fails to capture multi-modal demonstrations. Recent methods introduce latent variable models to address this issue; however, prior works may suffer from a mode-missing problem. In this work, we propose a novel method, based on GAIL and a conditional Variational Autoencoder (cVAE), that generates long-term trajectories controllable by a continuous latent variable. We further assume that subsequences of the same trajectory should be encoded to nearby locations in the latent space, and we therefore introduce a contrastive loss in the training of the encoder. For the motion synthesis task, we propose to first construct a low-dimensional motion manifold with a VAE to reduce the burden on the imitation learning model. Experimental results show that the proposed model outperforms state-of-the-art methods and can be applied to motion synthesis.
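The contrastive assumption stated in the abstract (subsequences of the same trajectory should encode to nearby latent codes) can be illustrated with a minimal sketch. This is not the authors' implementation: the InfoNCE-style form of the loss, the `temperature` parameter, and the `encoder` function are assumptions introduced here for illustration only.

```python
# Minimal sketch of a contrastive loss over latent codes of trajectory
# subsequences (hypothetical; the paper's exact loss is not given here).
import torch
import torch.nn.functional as F

def contrastive_loss(z_a: torch.Tensor,
                     z_b: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z_a[i] and z_b[i] are latent codes of two subsequences cut from the
    same trajectory i (positives); other rows in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)           # project codes onto the unit sphere
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature    # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # diagonal = positive pairs
    return F.cross_entropy(logits, targets)

# Usage: for a batch of trajectories, compute z_a = encoder(subseq_1) and
# z_b = encoder(subseq_2); minimizing the loss pulls same-trajectory codes
# together while pushing codes from different trajectories apart.
```

Minimizing this loss during encoder training encourages the latent space structure the abstract describes, so that a single continuous latent variable can control which mode of the demonstrations a generated trajectory follows.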

Original language: English
Pages (from-to): 712-723
Number of pages: 12
Journal: Neurocomputing
Volume: 500
Publication status: Published - Aug 21, 2022

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

