TY - GEN
T1 - Latent Attribute Control for Story Generation
AU - Tang, Yu Siou
AU - Wu, Chung Hsien
N1 - Funding Information:
We would like to thank the anonymous reviewers for their suggestions towards improving our manuscript. In addition, the authors would like to thank the Ministry of Science and Technology, Taiwan, for financially supporting this research under contract no. 108-2221-E-006-103-MY3.
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Neural open-domain story generation aims to generate long and fluent text comparable to human writing. Recent work attempts to generate stories under fine-grained control, such as plot-like structure and ending valence. Although these outputs comply with the rules of grammar, they often exhibit logical conflicts and lack long-range cohesion because of the explicit control. In this study, we propose to capture the challenging story representation using latent variable modeling for the storytelling model, and we align the encoder output with story latent embeddings. Our approach and all baselines are built on the pre-trained BART language model. Experimental results demonstrate that our model substantially outperforms strong baselines in human evaluation. Human evaluators favored our generated stories, finding them more relevant to the prompt and more coherent than those of the baselines.
AB - Neural open-domain story generation aims to generate long and fluent text comparable to human writing. Recent work attempts to generate stories under fine-grained control, such as plot-like structure and ending valence. Although these outputs comply with the rules of grammar, they often exhibit logical conflicts and lack long-range cohesion because of the explicit control. In this study, we propose to capture the challenging story representation using latent variable modeling for the storytelling model, and we align the encoder output with story latent embeddings. Our approach and all baselines are built on the pre-trained BART language model. Experimental results demonstrate that our model substantially outperforms strong baselines in human evaluation. Human evaluators favored our generated stories, finding them more relevant to the prompt and more coherent than those of the baselines.
UR - https://www.scopus.com/pages/publications/85125187899
UR - https://www.scopus.com/pages/publications/85125187899#tab=citedBy
U2 - 10.1109/IALP54817.2021.9675173
DO - 10.1109/IALP54817.2021.9675173
M3 - Conference contribution
AN - SCOPUS:85125187899
T3 - 2021 International Conference on Asian Language Processing, IALP 2021
SP - 148
EP - 153
BT - 2021 International Conference on Asian Language Processing, IALP 2021
A2 - Xiong, Deyi
A2 - Jiang, Ridong
A2 - Lu, Yanfeng
A2 - Dong, Minghui
A2 - Li, Haizhou
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 International Conference on Asian Language Processing, IALP 2021
Y2 - 11 December 2021 through 13 December 2021
ER -