Daniel Rim
2025
LLM ContextBridge: A Hybrid Approach for Intent and Dialogue Understanding in IVSR
Changwoo Chun | Daniel Rim | Juhee Park
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
In-vehicle speech recognition (IVSR) systems are crucial components of modern automotive interfaces, enabling hands-free control and enhancing user safety. However, traditional IVSR systems often struggle with interpreting user intent accurately due to limitations in contextual understanding and ambiguity resolution, leading to user frustration. This paper introduces LLM ContextBridge, a novel hybrid architecture that integrates Pretrained Language Model-based intent classification with Large Language Models to enhance both command recognition and dialogue management. LLM ContextBridge serves as a seamless bridge between traditional natural language understanding techniques and LLMs, combining the precise intent recognition of conventional NLU with the contextual handling and ambiguity resolution capabilities of LLMs. This approach significantly improves recognition accuracy and user experience, particularly in complex, multi-turn dialogues. Experimental results show notable improvements in task success rates and user satisfaction, demonstrating that LLM ContextBridge can make IVSR systems more intuitive, responsive, and context-aware.
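The abstract describes routing between a conventional intent classifier and an LLM. A minimal sketch of that hybrid idea, with all function names, the confidence threshold, and the stand-in classifier invented here for illustration (not the paper's implementation):

```python
# Illustrative sketch of confidence-based routing between a conventional
# NLU intent classifier and an LLM fallback, in the spirit of a hybrid
# architecture. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class IntentResult:
    intent: str
    confidence: float

def classify_intent(utterance: str) -> IntentResult:
    # Stand-in for a PLM-based intent classifier; a real system would run
    # an encoder and softmax over a fixed set of in-vehicle intents.
    known = {
        "turn on the ac": ("climate.ac_on", 0.97),
        "call mom": ("phone.call", 0.95),
    }
    return IntentResult(*known.get(utterance.lower(), ("unknown", 0.30)))

def llm_fallback(utterance: str, history: list[str]) -> str:
    # Stand-in for an LLM call that resolves ambiguity using dialogue history.
    return f"llm_resolved({utterance!r}, turns={len(history)})"

def context_bridge(utterance: str, history: list[str],
                   threshold: float = 0.8) -> str:
    """Route confident commands through the fast NLU path; defer
    ambiguous or context-dependent input to the LLM."""
    result = classify_intent(utterance)
    if result.confidence >= threshold:
        return result.intent
    return llm_fallback(utterance, history)
```

A clear command resolves on the NLU path (`context_bridge("turn on the AC", [])` yields `climate.ac_on`), while an ambiguous follow-up such as "make it warmer in here" falls below the threshold and is handed to the LLM with the dialogue history.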
2024
Protecting Privacy Through Approximating Optimal Parameters for Sequence Unlearning in Language Models
Dohyun Lee | Daniel Rim | Minseok Choi | Jaegul Choo
Findings of the Association for Computational Linguistics: ACL 2024
2023
DEnsity: Open-domain Dialogue Evaluation Metric using Density Estimation
ChaeHun Park | Seungil Lee | Daniel Rim | Jaegul Choo
Findings of the Association for Computational Linguistics: ACL 2023
Despite the recent advances in open-domain dialogue systems, building a reliable evaluation metric is still a challenging problem. Recent studies proposed learnable metrics based on classification models trained to distinguish the correct response. However, neural classifiers are known to make overly confident predictions for examples from unseen distributions. We propose DEnsity, which evaluates a response by utilizing density estimation on the feature space derived from a neural classifier. Our metric measures how likely a response would appear in the distribution of human conversations. Moreover, to improve the performance of DEnsity, we utilize contrastive learning to further compress the feature space. Experiments on multiple response evaluation datasets show that DEnsity correlates better with human evaluations than the existing metrics.
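The core mechanism described here, scoring a response by how likely its feature vector is under the distribution of human-conversation features, can be sketched with a simple Gaussian density estimate. This is an illustrative stand-in, not the authors' code; the feature dimensionality, the Gaussian model, and the synthetic data are all assumptions for the sketch:

```python
# Illustrative sketch: score responses by the log-density of their feature
# vectors under a Gaussian fit to human-response features, standing in for
# the density estimator used over a classifier's feature space.

import numpy as np

def fit_gaussian(features: np.ndarray):
    """Fit a mean and (regularized) covariance to features extracted
    from human conversations."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def log_density(x: np.ndarray, mu: np.ndarray, cov: np.ndarray) -> float:
    """Log-density of a feature vector under the fitted Gaussian."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return float(-0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi)))

rng = np.random.default_rng(0)
# Synthetic stand-ins for classifier features of human responses.
human_feats = rng.normal(0.0, 1.0, size=(500, 8))
mu, cov = fit_gaussian(human_feats)

in_dist = log_density(rng.normal(0.0, 1.0, 8), mu, cov)    # human-like response
off_dist = log_density(rng.normal(5.0, 1.0, 8), mu, cov)   # implausible response
# Responses far from the human-feature distribution receive lower scores.
```

The contrastive-learning step mentioned in the abstract would shape the feature space before this fit, pulling human responses together so the density estimate separates good from bad responses more sharply.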