A Collection of Pragmatic-Similarity Judgments over Spoken Dialog Utterances

Nigel Ward, Divette Marco


Abstract
Automatic measures of similarity between sentences or utterances are invaluable for training speech synthesizers, evaluating machine translation, and assessing learner productions. While measures exist for semantic similarity and prosodic similarity, there are as yet none for pragmatic similarity. To enable the training of such measures, we developed the first collection of human judgments of pragmatic similarity between utterance pairs. Nine judges listened to 220 utterance pairs, each consisting of an utterance extracted from a recorded dialog and a re-enactment of that utterance, produced under conditions designed to yield varying degrees of similarity. Each pair was rated on a continuous scale. The average inter-judge correlation was 0.45. We make this data available at https://github.com/divettemarco/PragSim.
Anthology ID:
2024.lrec-main.14
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
154–163
URL:
https://aclanthology.org/2024.lrec-main.14
Cite (ACL):
Nigel Ward and Divette Marco. 2024. A Collection of Pragmatic-Similarity Judgments over Spoken Dialog Utterances. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 154–163, Torino, Italia. ELRA and ICCL.
Cite (Informal):
A Collection of Pragmatic-Similarity Judgments over Spoken Dialog Utterances (Ward & Marco, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.14.pdf