Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk

Dennis Ulmer, Elman Mansimov, Kaixiang Lin, Lijia Sun, Xibin Gao, Yi Zhang


Abstract
Large language models (LLMs) are powerful dialogue agents, but specializing them towards fulfilling a specific function can be challenging. Instruction tuning, i.e. tuning models on instructions and sample responses generated by humans (Ouyang et al., 2022), has proven to be an effective method for doing so, yet it requires a number of data samples that a) might not be available or b) are costly to generate. Furthermore, this cost increases when the goal is to make the LLM follow a specific workflow within a dialogue instead of single instructions. Inspired by the self-play technique in reinforcement learning and the use of LLMs to simulate human agents, we propose a more effective method for data collection through LLMs engaging in a conversation in various roles. This approach generates training data via “self-talk” of LLMs that can be refined and utilized for supervised fine-tuning. We introduce an automated way to measure the (partial) success of a dialogue. This metric is used to filter the generated conversational data that is fed back into the LLM for training. Based on our automated and human evaluations of conversation quality, we demonstrate that such self-talk data improves results. In addition, we examine the various characteristics that showcase the quality of the generated dialogues and how they can be connected to their potential utility as training data.
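The abstract describes a pipeline of self-talk generation, automated success scoring, filtering, and supervised fine-tuning. The following minimal sketch illustrates that general idea; the `generate` stub, the prompts, the subgoal-coverage success metric, and the filtering threshold are all illustrative assumptions and not the authors' exact implementation.

```python
from typing import Callable, List

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real model or API request."""
    return "…model response…"

def self_talk(agent_prompt: str, client_prompt: str, llm: Callable[[str], str],
              max_turns: int = 5) -> List[dict]:
    """Let two LLM roles (task-oriented agent and simulated client) converse."""
    dialogue: List[dict] = []
    dialogue.append({"role": "client", "text": llm(client_prompt)})  # client opens
    for _ in range(max_turns):
        history = "\n".join(f'{t["role"]}: {t["text"]}' for t in dialogue)
        dialogue.append({"role": "agent", "text": llm(f"{agent_prompt}\n{history}\nagent:")})
        history = "\n".join(f'{t["role"]}: {t["text"]}' for t in dialogue)
        dialogue.append({"role": "client", "text": llm(f"{client_prompt}\n{history}\nclient:")})
    return dialogue

def dialogue_success(dialogue: List[dict], workflow_steps: List[str]) -> float:
    """Assumed (partial-)success metric: fraction of workflow steps the agent covered."""
    agent_text = " ".join(t["text"].lower() for t in dialogue if t["role"] == "agent")
    return sum(step.lower() in agent_text for step in workflow_steps) / len(workflow_steps)

# Generate dialogues, keep only those passing the (assumed) quality threshold,
# and use the surviving conversations as supervised fine-tuning data.
workflow_steps = ["greet", "ask booking date", "confirm reservation"]  # hypothetical workflow
dataset = []
for _ in range(100):
    d = self_talk("You are a booking agent following the workflow.",
                  "You are a customer who wants to book a table.",
                  generate)
    if dialogue_success(d, workflow_steps) >= 0.66:  # filtering threshold (assumed)
        dataset.append(d)
# `dataset` would then be converted into (context, response) pairs for fine-tuning.
```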
Anthology ID:
2024.findings-acl.566
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9500–9522
URL:
https://aclanthology.org/2024.findings-acl.566
DOI:
10.18653/v1/2024.findings-acl.566
Cite (ACL):
Dennis Ulmer, Elman Mansimov, Kaixiang Lin, Lijia Sun, Xibin Gao, and Yi Zhang. 2024. Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9500–9522, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk (Ulmer et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.566.pdf