Transformer with Task Selection for Continual Learning

Sheng Kai Huang, Chun Rong Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The goal of continual learning is to enable models to continuously learn new incoming knowledge without catastrophic forgetting. To address this issue, we propose a transformer-based framework with a task selection module. The task selection module selects corresponding task tokens to assist the learning of incoming samples from new tasks. For previously seen samples, the selected task tokens retain prior knowledge to assist the prediction of samples from already learned classes. Compared with state-of-the-art methods, our method achieves good performance on the CIFAR-100 dataset, especially when testing on the last task, which shows that our method better prevents catastrophic forgetting.
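To illustrate the general idea of task-token selection described in the abstract, the following is a minimal PyTorch sketch. It is an assumption-laden illustration, not the authors' implementation: the class name TaskTokenSelector, the key-query cosine matching, and all shapes are hypothetical, loosely following common prompt/token-selection designs for continual learning.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskTokenSelector(nn.Module):
    # Hypothetical sketch: one learnable token (and a matching key) per
    # task. The key most similar to the input's query feature selects
    # the task token that is prepended to the transformer input.
    def __init__(self, num_tasks: int, embed_dim: int):
        super().__init__()
        self.task_tokens = nn.Parameter(torch.randn(num_tasks, embed_dim) * 0.02)
        self.task_keys = nn.Parameter(torch.randn(num_tasks, embed_dim) * 0.02)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, D) summary feature of the input, e.g. a frozen
        # backbone's [CLS] embedding. Cosine similarity against the
        # task keys picks one task token per sample.
        sim = F.normalize(query, dim=-1) @ F.normalize(self.task_keys, dim=-1).T  # (B, T)
        idx = sim.argmax(dim=-1)                   # (B,) selected task index
        return self.task_tokens[idx].unsqueeze(1)  # (B, 1, D)

# Usage sketch: prepend the selected task token to the patch embeddings
# before they enter the transformer encoder.
B, N, D, T = 4, 196, 768, 10
selector = TaskTokenSelector(num_tasks=T, embed_dim=D)
patches = torch.randn(B, N, D)           # patch embeddings
query = patches.mean(dim=1)              # stand-in for a [CLS] feature
tokens = selector(query)                 # (B, 1, D)
encoder_input = torch.cat([tokens, patches], dim=1)  # (B, 1 + N, D)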

Original language: English
Title of host publication: Proceedings of MVA 2023 - 18th International Conference on Machine Vision and Applications
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9784885523434
DOIs
Publication status: Published - 2023
Event: 18th International Conference on Machine Vision and Applications, MVA 2023 - Hamamatsu, Japan
Duration: 2023 Jul 23 - 2023 Jul 25

Publication series

Name: Proceedings of MVA 2023 - 18th International Conference on Machine Vision and Applications

Conference

Conference: 18th International Conference on Machine Vision and Applications, MVA 2023
Country/Territory: Japan
City: Hamamatsu
Period: 23-07-23 to 23-07-25

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
  • Computer Science Applications
  • Hardware and Architecture
