Learning the Chinese Sentence Representation with LSTM Autoencoder

Mu Yen Chen, Tien Chi Huang, Yu Shu, Chia Chen Chen, Tsung Che Hsieh, Neil Y. Yen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

This study uses an Autoencoder (AE) to retain the meaning of the original text. Three types of loss functions are used to train the neural network model, with the goal that, after compressing the sentence features, the model can still reconstruct the original input sentences and classify the correct targets, such as positive or negative sentiment. In this way, the model is expected to learn the more relevant features (the compressed sentence features) for classifying the targets, rather than relying on a classification loss alone, which may classify based on meaningless features (words). The results show that adding additional features for error correction does not interfere with learning, and that not all words need to be restored without distortion after applying the AE method.
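The abstract's core idea — compressing a sentence with an LSTM encoder, then jointly training a reconstruction loss and a sentiment-classification loss on the compressed feature — can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' implementation; all names, dimensions, and the equal loss weighting are assumptions.

```python
# Hypothetical sketch (not the paper's code): an LSTM autoencoder whose
# training objective combines sentence reconstruction with sentiment
# classification on the compressed sentence feature.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.reconstruct = nn.Linear(hidden_dim, vocab_size)   # decompress tokens
        self.classify = nn.Linear(hidden_dim, num_classes)     # sentiment head

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, (h, c) = self.encoder(emb)            # h: compressed sentence feature
        dec_out, _ = self.decoder(emb, (h, c))   # teacher-forced reconstruction
        return self.reconstruct(dec_out), self.classify(h.squeeze(0))

model = LSTMAutoencoder(vocab_size=1000)
tokens = torch.randint(0, 1000, (4, 10))         # batch of 4 sentences, length 10
labels = torch.randint(0, 2, (4,))               # positive/negative targets

logits, cls = model(tokens)
# Combined objective: reconstruction loss + classification loss
# (the 1:1 weighting here is illustrative, not from the paper).
ce = nn.CrossEntropyLoss()
loss = ce(logits.view(-1, 1000), tokens.view(-1)) + ce(cls, labels)
loss.backward()
```

Training against the combined loss pushes the compressed feature `h` to carry enough information both to decompress the input sentence and to predict the sentiment target, rather than letting the classifier latch onto incidental words.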

Original language: English
Title of host publication: The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018
Publisher: Association for Computing Machinery, Inc
Pages: 403-408
Number of pages: 6
ISBN (Electronic): 9781450356404
DOIs
Publication status: Published - 2018 Apr 23
Event: 27th International World Wide Web Conference, WWW 2018 - Lyon, France
Duration: 2018 Apr 23 - 2018 Apr 27

Publication series

Name: The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018

Conference

Conference: 27th International World Wide Web Conference, WWW 2018
Country/Territory: France
City: Lyon
Period: 18-04-23 - 18-04-27

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Software
