Hakan Inan
2020
Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data
Ankit Arun | Soumya Batra | Vikas Bhardwaj | Ashwini Challa | Pinar Donmez | Peyman Heidari | Hakan Inan | Shashank Jain | Anuj Kumar | Shawn Mei | Karthik Mohan | Michael White
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
Natural language generation (NLG) is a critical component in conversational systems, owing to its role in formulating correct and natural text responses. Traditionally, NLG components have been deployed using template-based solutions. Although neural network solutions recently developed in the research community have been shown to provide several benefits, deployment of such model-based solutions has been challenging due to high latency, correctness issues, and high data needs. In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques to attain production quality with lightweight neural network models using only a fraction of the data that would otherwise be necessary, and provide a thorough comparison among them. Our results show that domain complexity dictates the appropriate approach to achieve high data efficiency. Finally, we distill the lessons from our experimental findings into a list of best practices for production-level NLG model development, and present them in a brief runbook. Importantly, the end products of all of the techniques are small sequence-to-sequence models (~2 MB) that we can reliably deploy in production. These models achieve the same quality as large pretrained models (~1 GB), as judged by human raters.
Improving Text-to-Text Pre-trained Models for the Graph-to-Text Task
Zixiaofan Yang | Arash Einolghozati | Hakan Inan | Keith Diedrick | Angela Fan | Pinar Donmez | Sonal Gupta
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)
Converting a knowledge graph or sub-graph to natural text is useful when answering questions based on a knowledge base. High-capacity language models pre-trained on large-scale text corpora have recently been shown to be powerful when fine-tuned for the knowledge-graph-to-text (KG-to-text) task. In this paper, we propose two classes of methods to improve such pre-trained models for this task. First, we improve the structure awareness of the model by organizing the input and learning an optimal ordering via multitask learning. Second, we bridge the domain gap between text-to-text and KG-to-text tasks via a second phase of KG-to-text pre-training on similar datasets and extra lexicalization supervision that makes the input more similar to natural text. We demonstrate the efficacy of our methods on the popular WebNLG dataset. Our best model achieves a nearly 3-point BLEU improvement over a strong baseline while lowering the relative slot error rate by around 35%. We also validate our results via human evaluation.
Co-authors
- Pinar Donmez 2
- Ankit Arun 1
- Soumya Batra 1
- Vikas Bhardwaj 1
- Ashwini Challa 1