DDNAS: Discretized Differentiable Neural Architecture Search for Text Classification

Kuan Chun Chen, Cheng Te Li, Kuo Jung Lee

Research output: Contribution to journal › Article › peer-review


Abstract

Neural Architecture Search (NAS) has shown promising capability in learning text representation. However, existing text-based NAS methods neither perform a learnable fusion of neural operations to optimize the architecture nor encode the latent hierarchical categorization behind text input. This article presents a novel NAS method, Discretized Differentiable Neural Architecture Search (DDNAS), for text representation learning and classification. With a continuous relaxation of the architecture representation, DDNAS can use gradient descent to optimize the search. We also propose a novel discretization layer via mutual information maximization, imposed on every search node, to model the latent hierarchical categorization in text representation. Extensive experiments conducted on eight diverse real datasets show that DDNAS consistently outperforms state-of-the-art NAS methods. Although DDNAS relies on only three basic operations, i.e., convolution, pooling, and none, as the candidate NAS building blocks, its performance is promising and can be further improved by adding more operations.
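
To make the search mechanism concrete, the sketch below illustrates a DARTS-style continuously relaxed "mixed operation" over the three candidate operations the abstract names (convolution, pooling, and none), where a softmax over architecture logits lets gradient descent optimize the choice of operation. This is a minimal PyTorch illustration under assumed settings (1-D convolution over token-embedding channels, illustrative layer sizes and class names), not the paper's actual implementation, which additionally imposes a discretization layer at every search node.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Zero(nn.Module):
    # The "none" candidate: outputs a zero tensor of the input's shape.
    def forward(self, x):
        return torch.zeros_like(x)

class MixedOp(nn.Module):
    # Continuous relaxation of the operation choice: the output is a
    # softmax-weighted sum of all candidates, so the architecture
    # logits (alpha) are differentiable and can be trained by gradient
    # descent alongside the operation weights.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),   # convolution
            nn.AvgPool1d(kernel_size=3, stride=1, padding=1),          # pooling
            Zero(),                                                    # none
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture logits

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage: a batch of 2 sequences with 64 channels and length 50.
x = torch.randn(2, 64, 50)
y = MixedOp(64)(x)  # same shape as x; alpha receives gradients during training

After search, the relaxed choice is typically discretized by keeping the candidate with the largest weight at each node; extending the candidate set, as the abstract suggests, amounts to appending further operations to the list above.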

Original language: English
Article number: 88
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 14
Issue number: 5
DOIs
Publication status: Published - 2023 Oct 3

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Artificial Intelligence

