Can probing classifiers reveal the learning by contact center large language models?: No, it doesn’t!

Varun Nathan, Ayush Kumar, Digvijay Ingle


Abstract
Fine-tuning large language models (LLMs) with domain-specific instruction datasets has emerged as an effective method to enhance their domain-specific understanding. Yet, there is limited work that examines the core characteristics acquired during this process. In this study, we benchmark the fundamental characteristics learned by contact-center (CC) domain-specific instruction fine-tuned LLMs against out-of-the-box (OOB) LLMs via probing tasks encompassing conversational, channel, and automatic speech recognition (ASR) properties. We explore different LLM architectures (Flan-T5 and Llama) and sizes (3B, 7B, 11B, 13B). Our findings reveal the remarkable effectiveness of CC-LLMs on in-domain downstream tasks, with an improvement in response acceptability of over 48% compared to OOB-LLMs. However, we observe that the performance of the probing classifiers is relatively similar across models and does not reflect the performance on the in-domain downstream tasks. A similar observation holds on the SentEval dataset, which assesses a model's surface, syntactic, and semantic capabilities through probing tasks. Via a case study of LLMs tuned for the contact center domain, our study challenges the premise that probing classifiers can reveal the fundamental characteristics learned by large language models and that their performance is reflective of downstream task performance.
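
To make the probing setup concrete, below is a minimal, illustrative sketch of a linear probing classifier trained on frozen LLM representations. The checkpoint (a small Flan-T5 stand-in rather than the 3B–13B models studied), the mean-pooling choice, the toy speaker-role labels, and the logistic-regression probe are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of a probing classifier on frozen LLM representations.
# Assumptions: a small stand-in checkpoint, mean-pooled encoder states,
# and a toy "agent vs. customer" conversational property as the probe label.
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoTokenizer, T5EncoderModel

MODEL_NAME = "google/flan-t5-base"  # stand-in; the paper studies larger Flan-T5 / Llama variants
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5EncoderModel.from_pretrained(MODEL_NAME).eval()

def embed(texts):
    """Mean-pool the final encoder layer as a frozen sentence representation."""
    with torch.no_grad():
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state           # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)        # average over non-pad tokens
    return pooled.numpy()

# Toy probing data: the label encodes a conversational property (speaker role).
texts = [
    "Thank you for calling, how may I help you today?",
    "My internet has been down since this morning.",
    "Let me pull up your account details.",
    "I already restarted the router twice.",
]
labels = [1, 0, 1, 0]  # 1 = agent, 0 = customer (illustrative only)

X_train, X_test, y_train, y_test = train_test_split(
    embed(texts), labels, test_size=0.5, stratify=labels, random_state=0)

# A linear probe: if the property is encoded in the frozen representation,
# a simple classifier should recover it; comparing probe accuracy between a
# fine-tuned and an out-of-the-box checkpoint is the benchmarking idea.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

In this framing, the paper's negative result is that such probe accuracies stay roughly flat between CC-tuned and OOB checkpoints even when downstream task quality diverges sharply.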
Anthology ID:
2024.insights-1.12
Volume:
Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
92–100
URL:
https://aclanthology.org/2024.insights-1.12
DOI:
10.18653/v1/2024.insights-1.12
Cite (ACL):
Varun Nathan, Ayush Kumar, and Digvijay Ingle. 2024. Can probing classifiers reveal the learning by contact center large language models?: No, it doesn’t!. In Proceedings of the Fifth Workshop on Insights from Negative Results in NLP, pages 92–100, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Can probing classifiers reveal the learning by contact center large language models?: No, it doesn’t! (Nathan et al., insights-WS 2024)
PDF:
https://aclanthology.org/2024.insights-1.12.pdf