Learning from Relevant Subgoals in Successful Dialogs using Iterative Training for Task-oriented Dialog Systems

Magdalena Kaiser, Patrick Ernst, György Szarvas


Abstract
Task-oriented Dialog (ToD) systems have to solve multiple subgoals to accomplish user goals, whereas feedback is often obtained only at the end of the dialog. In this work, we propose SUIT (SUbgoal-aware ITerative Training), an iterative training approach for improving ToD systems. We sample dialogs from the model we aim to improve and use distant supervision to determine the subgoals that contribute to dialog success, yielding high-quality training samples. We show how this data improves supervised fine-tuning or, alternatively, preference learning results. Performance improves when these steps are applied over several iterations: SUIT reaches new state-of-the-art performance on a popular ToD benchmark.
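The abstract outlines an iterative loop: sample dialogs from the current model, use distant supervision to identify the subgoals that contributed to dialog success, and train on the resulting samples via supervised fine-tuning or preference learning. The sketch below is a minimal, hypothetical Python outline of such a loop under these assumptions; every function name and data field (sample_dialogs, extract_relevant_subgoals, fulfills_subgoal, update_model) is an illustrative placeholder, not the authors' implementation.

# Hypothetical sketch of a SUIT-style iterative training loop (placeholders only).

def sample_dialogs(model, user_goals, n_per_goal=4):
    """Placeholder: roll out the current ToD model against simulated user goals."""
    return [{"goal": g, "turns": [], "success": True}
            for g in user_goals for _ in range(n_per_goal)]

def extract_relevant_subgoals(dialog):
    """Placeholder: distant supervision marks turns that fulfilled a subgoal
    (e.g. a booking that matches the goal's constraints)."""
    return [turn for turn in dialog["turns"] if turn.get("fulfills_subgoal")]

def build_training_samples(dialogs):
    """Keep only successful dialogs and collect their relevant subgoal turns."""
    samples = []
    for d in dialogs:
        if d["success"]:
            samples.extend(extract_relevant_subgoals(d))
    return samples

def update_model(model, samples, mode="sft"):
    """Placeholder: supervised fine-tuning ("sft") or preference learning."""
    return model  # a real implementation would return the updated model

def suit_iterative_training(model, user_goals, iterations=3):
    for _ in range(iterations):
        dialogs = sample_dialogs(model, user_goals)
        samples = build_training_samples(dialogs)
        model = update_model(model, samples, mode="sft")  # or "preference"
    return model

if __name__ == "__main__":
    # Toy usage: the model object and goals here are stand-ins.
    suit_iterative_training(model=None, user_goals=["book a cheap hotel"], iterations=2)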
Anthology ID: 2024.findings-emnlp.362
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6236–6246
URL: https://aclanthology.org/2024.findings-emnlp.362/
DOI: 10.18653/v1/2024.findings-emnlp.362
Cite (ACL): Magdalena Kaiser, Patrick Ernst, and György Szarvas. 2024. Learning from Relevant Subgoals in Successful Dialogs using Iterative Training for Task-oriented Dialog Systems. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6236–6246, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Learning from Relevant Subgoals in Successful Dialogs using Iterative Training for Task-oriented Dialog Systems (Kaiser et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.362.pdf