2022
Logical Reasoning for Task Oriented Dialogue Systems
Sajjad Beygi | Maryam Fazel-Zarandi | Alessandra Cervone | Prakash Krishnan | Siddhartha Jonnalagadda
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, the lack of reasoning capabilities of dialogue platforms makes it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend a considerable amount of time implementing these capabilities in external rule-based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as RoBERTa and T5 to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism which helps the model learn logical relations, such as comparison between lists of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, application of a combination of attributes over both numerical and categorical values, and spoken forms of numerical values, without the need for additional training data. We show that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information; when only partial information is available, it extracts appropriate constraints to pass to downstream components (e.g., a knowledge base). We observe that transformer-based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning (such as comparison of numerical and categorical attributes) over attributes seen at training time (e.g., 90%+ accuracy for comparison of fewer than k_max = 5 values on a held-out test set).
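To make the synthetic data generation idea concrete, the following is a minimal Python sketch of how training examples for one of the listed relations (numerical comparison) might be produced, assuming a UnifiedQA-style input that concatenates question and facts; the item names, price ranges, and templates are hypothetical placeholders and do not reproduce the paper's actual generation mechanism.

import random

# Hypothetical item pool; the paper's real attribute inventory and templates
# cover more relation types (negation, inclusion/exclusion, spoken forms, ...).
ITEMS = ["plan A", "plan B", "plan C", "plan D", "plan E"]

def make_comparison_example(k_max=5, seed=None):
    """Generate one synthetic (input, target) pair for a numerical-comparison
    question over fewer than k_max listed values."""
    rng = random.Random(seed)
    k = rng.randint(2, k_max)
    items = rng.sample(ITEMS, k)
    prices = rng.sample(range(5, 100), k)  # distinct numerical values
    facts = ". ".join(f"{item} costs {price} dollars"
                      for item, price in zip(items, prices))
    cheapest = items[prices.index(min(prices))]
    question = "Which option is the cheapest?"
    # UnifiedQA-style input: question and context joined in a single sequence.
    model_input = f"{question} \\n {facts}."
    return model_input, cheapest

if __name__ == "__main__":
    for i in range(3):
        x, y = make_comparison_example(seed=i)
        print("INPUT :", x)
        print("TARGET:", y)

Pairs generated this way can be mixed into the fine-tuning data of a seq2seq model such as UnifiedQA-T5, which is one plausible way to teach comparison over attributes seen at training time without collecting additional annotated dialogues.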
2021
Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems
Anish Acharya | Suranjit Adhikari | Sanchit Agarwal | Vincent Auvray | Nehal Belgamwar | Arijit Biswas | Shubhra Chandra | Tagyoung Chung | Maryam Fazel-Zarandi | Raefer Gabriel | Shuyang Gao | Rahul Goel | Dilek Hakkani-Tur | Jan Jezabek | Abhay Jha | Jiun-Yu Kao | Prakash Krishnan | Peter Ku | Anuj Goyal | Chien-Wei Lin | Qing Liu | Arindam Mandal | Angeliki Metallinou | Vishal Naik | Yi Pan | Shachi Paul | Vittorio Perera | Abhishek Sethi | Minmin Shen | Nikko Strom | Eddie Wang
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning, and response generation. Training each component requires annotations which are hard to obtain for every new domain, limiting the scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-end dialogue systems, on the other hand, do not require module-specific annotations but need a large amount of data for training. To overcome these problems, in this demo we present Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible, and data-efficient. The components of this system are trained in a data-driven manner, but instead of collecting annotated conversations for training, we generate them using a novel dialogue simulator based on a few seed dialogues and specifications of APIs and entities provided by the developer. Our approach provides out-of-the-box support for natural conversational phenomena such as entity sharing across turns or users changing their mind during the conversation, without requiring developers to provide any such dialogue flows. We exemplify our approach using a simple pizza-ordering task and showcase its value in reducing the developer burden for creating a robust experience. Finally, we evaluate our system using a typical movie ticket booking task integrated with live APIs and show that the dialogue simulator is an essential component of the system, leading to over 50% improvement in turn-level action signature prediction accuracy.
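As an illustration of the simulator idea, here is a minimal Python sketch that expands one seed dialogue into multiple training dialogues by sampling entity values from developer-provided catalogs; the entity catalog, the order_pizza API name, and the turn templates are hypothetical, and the actual Alexa Conversations simulator handles far richer phenomena (entity sharing across turns, user corrections, full API specifications).

import itertools
import random

# Hypothetical developer-provided entity catalogs and a single seed dialogue.
ENTITY_CATALOG = {
    "size": ["small", "medium", "large"],
    "topping": ["pepperoni", "mushrooms", "olives"],
}

SEED_DIALOGUE = [
    ("User",  "I'd like a {size} pizza with {topping}."),
    ("Agent", "Confirming a {size} pizza with {topping}. Shall I place the order?"),
    ("User",  "Yes, please."),
    ("Agent", "API_CALL: order_pizza(size={size}, topping={topping})"),
]

def simulate_dialogues(seed_dialogue, catalog, n=5, seed=0):
    """Generate training dialogues by filling the seed with sampled entity values."""
    rng = random.Random(seed)
    combos = list(itertools.product(*catalog.values()))
    rng.shuffle(combos)
    dialogues = []
    for values in combos[:n]:
        slots = dict(zip(catalog.keys(), values))
        dialogues.append([(speaker, turn.format(**slots))
                          for speaker, turn in seed_dialogue])
    return dialogues

if __name__ == "__main__":
    for dialogue in simulate_dialogues(SEED_DIALOGUE, ENTITY_CATALOG, n=2):
        for speaker, turn in dialogue:
            print(f"{speaker}: {turn}")
        print("---")

In this toy version the simulated dialogues, including the final action signature (the API call with its arguments), could serve as training data for the downstream dialogue model, which mirrors the role the simulator plays in reducing the amount of hand-annotated data a developer must provide.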