Empirical Prior for Text Autoencoders

Yongjing Yin, Wenyang Gao, Haodong Wu, Jianhao Yan, Yue Zhang


Abstract
This paper explores the application of Variational Autoencoders (VAE) in text generation, focusing on overcoming challenges like posterior collapse and the limitations of simplistic prior distributions. We investigate a transition from VAE to text autoencoders (AE), which model a compact latent space and preserve the capability of the language model itself. Our method involves layer-wise latent vectors regularized by orthogonal constraints to encourage distinct semantic spaces. In particular, we estimate an empirical prior online from the learned latent vectors to support sampling during generation, as in a VAE. Experimental results on standard benchmarks demonstrate that the autoencoders generate higher-quality and more diverse text than VAE-based Transformer baselines, offering an effective alternative for generative language modeling.
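To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code): an orthogonality penalty that pushes layer-wise latent vectors toward distinct subspaces, and an "empirical prior" fitted online to the learned latent codes as a Gaussian, from which new latents can be sampled at generation time. All names and the exact form of the penalty and prior are illustrative assumptions.

```python
# Illustrative sketch only; the paper's actual regularizer and prior estimator may differ.
import torch


def orthogonality_penalty(latents: torch.Tensor) -> torch.Tensor:
    """latents: (num_layers, batch, dim). Penalize similarity between layer-wise latents."""
    z = torch.nn.functional.normalize(latents, dim=-1)       # unit-norm each latent vector
    sim = torch.einsum("lbd,mbd->lm", z, z) / z.size(1)      # mean cosine similarity per layer pair
    off_diag = sim - torch.diag(torch.diag(sim))              # drop self-similarity terms
    return off_diag.pow(2).mean()


class EmpiricalGaussianPrior:
    """Running mean/covariance over latent codes, updated online during training."""

    def __init__(self, dim: int, momentum: float = 0.01):
        self.mean = torch.zeros(dim)
        self.cov = torch.eye(dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, z: torch.Tensor) -> None:                 # z: (batch, dim)
        batch_mean = z.mean(dim=0)
        centered = z - batch_mean
        batch_cov = centered.T @ centered / max(z.size(0) - 1, 1)
        self.mean = (1 - self.momentum) * self.mean + self.momentum * batch_mean
        self.cov = (1 - self.momentum) * self.cov + self.momentum * batch_cov

    def sample(self, n: int) -> torch.Tensor:
        # Small jitter keeps the covariance positive definite for sampling.
        cov = self.cov + 1e-4 * torch.eye(self.cov.size(0))
        dist = torch.distributions.MultivariateNormal(self.mean, covariance_matrix=cov)
        return dist.sample((n,))
```

In this sketch the penalty would be added to the AE reconstruction loss, and `EmpiricalGaussianPrior.update` would be called on each batch of encoder outputs; at generation time, `sample` replaces drawing from the fixed standard-normal prior of a VAE.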
Anthology ID:
2024.findings-emnlp.796
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13628–13640
URL:
https://aclanthology.org/2024.findings-emnlp.796/
DOI:
10.18653/v1/2024.findings-emnlp.796
Cite (ACL):
Yongjing Yin, Wenyang Gao, Haodong Wu, Jianhao Yan, and Yue Zhang. 2024. Empirical Prior for Text Autoencoders. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13628–13640, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Empirical Prior for Text Autoencoders (Yin et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.796.pdf