An imitation learning framework for generating multi-modal trajectories from unstructured demonstrations

Jian Wei Peng, Min-Chun Hu, Wei Ta Chu

Research output: Contribution to journal › Article › peer-review

Abstract

The main challenge of the trajectory generation problem is to generate trajectories that are both long-term and diverse. Generative Adversarial Imitation Learning (GAIL) is a well-known model-free imitation learning algorithm that can be utilized to generate trajectory data, but vanilla GAIL fails to capture multi-modal demonstrations. Recent methods introduce latent variable models to address this issue; however, they may still suffer from a mode-missing problem. In this work, we propose a novel method, based on GAIL and a conditional Variational Autoencoder (cVAE), to generate long-term trajectories that are controllable by a continuous latent variable. We further assume that subsequences of the same trajectory should be encoded to similar locations in the latent space, and we therefore introduce a contrastive loss in the training of the encoder. For the motion synthesis task, we propose to first construct a low-dimensional motion manifold with a VAE to reduce the burden on our imitation learning model. Our experimental results show that the proposed model outperforms state-of-the-art methods and can be applied to motion synthesis.
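
The paper itself is not reproduced here, but as a minimal illustrative sketch (not the authors' implementation), the snippet below shows one way the contrastive objective on the trajectory encoder could look in PyTorch: two subsequences of the same trajectory form a positive pair whose embeddings are pulled together, while subsequences from other trajectories in the batch act as negatives. All names (TrajectoryEncoder, contrastive_loss) and hyperparameters (embed_dim, temperature) are assumptions, not details from the paper.

    # Illustrative sketch only; names and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TrajectoryEncoder(nn.Module):
        """Encodes a trajectory subsequence (T, state_dim) to a latent vector."""
        def __init__(self, state_dim: int, hidden_dim: int = 64, embed_dim: int = 16):
            super().__init__()
            self.gru = nn.GRU(state_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, T, state_dim); use the final hidden state as the summary.
            _, h = self.gru(x)
            return F.normalize(self.head(h[-1]), dim=-1)  # unit-norm embeddings

    def contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
        """InfoNCE-style loss: z_a[i] and z_b[i] come from the same trajectory
        (positive pair); all other rows in the batch act as negatives."""
        logits = z_a @ z_b.t() / temperature      # (batch, batch) similarities
        targets = torch.arange(z_a.size(0))       # positives lie on the diagonal
        return F.cross_entropy(logits, targets)

    if __name__ == "__main__":
        enc = TrajectoryEncoder(state_dim=4)
        traj = torch.randn(8, 50, 4)               # 8 trajectories, 50 steps each
        sub_a, sub_b = traj[:, :25], traj[:, 25:]  # two subsequences per trajectory
        loss = contrastive_loss(enc(sub_a), enc(sub_b))
        loss.backward()
        print(f"contrastive loss: {loss.item():.4f}")

Normalizing the embeddings and using a temperature-scaled cross-entropy follows common InfoNCE practice; the abstract does not specify the exact form of the contrastive loss, so this is only one plausible instantiation.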

Original language: English
Pages (from-to): 712-723
Number of pages: 12
Journal: Neurocomputing
Volume: 500
DOIs
Publication status: Published - 2022 Aug 21

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

