Francisco Javier Chiyah Garcia
2020
CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues
Francisco Javier Chiyah Garcia | José Lopes | Xingkun Liu | Helen Hastie
Proceedings of the Twelfth Language Resources and Evaluation Conference
Large corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism. This will become even more prevalent as dialogue systems become increasingly ambitious, expanding into tasks with high levels of complexity that require collaboration and forward planning, such as in our domain of emergency response. In this paper, we propose CRWIZ: a framework for collecting real-time Wizard of Oz dialogues through crowdsourcing for collaborative, complex tasks. This framework uses semi-guided dialogue to avoid interactions that breach procedures and processes only known to experts, while enabling the capture of a wide variety of interactions.
2018
Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models
Francisco Javier Chiyah Garcia | David A. Robb | Xingkun Liu | Atanas Laskov | Pedro Patron | Helen Hastie
Proceedings of the 11th International Conference on Natural Language Generation
As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating explanations in natural language of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy through having an expert ‘speak aloud’ and providing various levels of detail based on this model. Through an online evaluation study with operators, we show it is best to generate explanations with multiple possible reasons but tersely worded. This work has implications for designing interfaces for autonomy as well as for explainable AI and operator training.