MACAROON: Training Vision-Language Models To Be Your Engaged Partners

Shujin Wu, Yi Fung, Sha Li, Yixin Wan, Kai-Wei Chang, Heng Ji


Abstract
Large vision-language models (LVLMs), while proficient in following instructions and responding to diverse questions, invariably generate detailed responses even when questions are ambiguous or unanswerable, leading to hallucinations and bias issues. Thus, it is essential for LVLMs to proactively engage with humans to ask for clarifications or additional information for better responses. In this study, we aim to shift LVLMs from passive answer providers to proactive engaged partners. We begin by establishing a three-tiered hierarchy of invalid, ambiguous, and personalizable questions to measure the proactive engagement capabilities of LVLMs. Utilizing this hierarchy, we create PIE (ProactIve Engagement Evaluation) through GPT-4o and human annotators, consisting of 853 questions across six distinct, fine-grained question types, verified by human annotators and accompanied by well-defined metrics. Our evaluations on PIE indicate poor performance of existing LVLMs, with the best-performing open-weights model only achieving an Aggregate Align Rate (AAR) of 0.28. In response, we introduce MACAROON (self-iMaginAtion for ContrAstive pReference OptimizatiON), which instructs LVLMs to autonomously generate contrastive response pairs for unlabeled questions given the task description and human-crafted criteria. The self-imagined data is then formatted for conditional reinforcement learning. Experimental results show MACAROON effectively improves LVLMs' capabilities to be proactively engaged (0.84 AAR) while maintaining comparable performance on general tasks.
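To make the two stages described in the abstract concrete, below is a minimal, hypothetical sketch of how self-imagined contrastive pairs and conditional reinforcement-learning examples might be constructed. The function names, prompt wording, and the <good>/<bad> condition tags are illustrative assumptions, not the authors' released implementation (see the Software link below for the official code).

```python
# Hypothetical sketch of MACAROON-style data construction: the model itself
# "imagines" a contrastive response pair for an unlabeled question, then the
# pair is formatted for conditional reinforcement learning with quality tags.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ImaginedPair:
    question: str
    preferred: str      # proactively engaged response (e.g., asks for clarification)
    dispreferred: str   # passive, over-detailed response


def self_imagine_pair(
    question: str,
    task_description: str,
    criteria: str,
    generate: Callable[[str], str],   # wraps the LVLM's text generation (assumed interface)
) -> ImaginedPair:
    """Prompt the LVLM to generate a contrastive response pair for an
    unlabeled question, guided by the task description and human-crafted criteria."""
    preferred = generate(
        f"{task_description}\nCriteria: {criteria}\nQuestion: {question}\n"
        "Write a response that satisfies the criteria, e.g., by asking for the "
        "missing information instead of guessing:"
    )
    dispreferred = generate(
        f"{task_description}\nQuestion: {question}\n"
        "Write a direct, detailed answer without asking any clarifying question:"
    )
    return ImaginedPair(question, preferred, dispreferred)


def to_conditional_rl_examples(pairs: List[ImaginedPair]) -> List[dict]:
    """Format self-imagined pairs for conditional reinforcement learning:
    each response is prefixed with a condition tag marking its quality, so the
    model can be steered toward <good>-conditioned behavior at inference time."""
    examples = []
    for p in pairs:
        examples.append({"prompt": f"<good> {p.question}", "response": p.preferred})
        examples.append({"prompt": f"<bad> {p.question}", "response": p.dispreferred})
    return examples
```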
Anthology ID:
2024.findings-emnlp.454
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7715–7731
URL:
https://aclanthology.org/2024.findings-emnlp.454
DOI:
10.18653/v1/2024.findings-emnlp.454
Cite (ACL):
Shujin Wu, Yi Fung, Sha Li, Yixin Wan, Kai-Wei Chang, and Heng Ji. 2024. MACAROON: Training Vision-Language Models To Be Your Engaged Partners. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7715–7731, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
MACAROON: Training Vision-Language Models To Be Your Engaged Partners (Wu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.454.pdf
Software:
2024.findings-emnlp.454.software.zip