ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering

Francesco Maria Molfese, Simone Conia, Riccardo Orlando, Roberto Navigli


Abstract
Current Large Language Models (LLMs) have shown strong reasoning capabilities in commonsense question answering benchmarks, but the process underlying their success remains largely opaque. As a consequence, recent approaches have equipped LLMs with mechanisms for knowledge retrieval, reasoning and introspection, not only to improve their capabilities but also to enhance the interpretability of their outputs. However, these methods require additional training, hand-crafted templates or human-written explanations. To address these issues, we introduce ZEBRA, a zero-shot question answering framework that combines retrieval, case-based reasoning and introspection and dispenses with the need for additional training of the LLM. Given an input question, ZEBRA retrieves relevant question-knowledge pairs from a knowledge base and generates new knowledge by reasoning over the relationships in these pairs. This generated knowledge is then used to answer the input question, improving the model’s performance and interpretability. We evaluate our approach across 8 well-established commonsense reasoning benchmarks, demonstrating that ZEBRA consistently outperforms strong LLMs and previous knowledge integration approaches, achieving an average accuracy improvement of up to 4.5 points.
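The three-stage pipeline described in the abstract (retrieve relevant question-knowledge pairs, generate new knowledge by reasoning over them, then answer conditioned on that knowledge) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: the toy knowledge base, the word-overlap retriever, and the placeholder `generate_knowledge` / choice-selection heuristics stand in for the dense retriever and LLM calls that ZEBRA actually uses.

```python
# Hypothetical sketch of the ZEBRA pipeline; all components below are
# simplified stand-ins for the retriever and LLM used in the paper.

def overlap_score(a: str, b: str) -> float:
    """Toy lexical similarity (Jaccard over words), a stand-in for a dense retriever."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(question: str, kb: list[dict], k: int = 2) -> list[dict]:
    """Step 1: fetch the k most similar question-knowledge pairs from the knowledge base."""
    return sorted(kb, key=lambda ex: overlap_score(question, ex["question"]), reverse=True)[:k]

def generate_knowledge(question: str, examples: list[dict]) -> str:
    """Step 2: in ZEBRA an LLM reasons over the retrieved pairs to write new knowledge;
    here we simply echo the top example's knowledge as a placeholder."""
    return examples[0]["knowledge"]

def answer(question: str, choices: list[str], kb: list[dict]) -> tuple[str, str]:
    """Step 3: answer conditioned on the generated knowledge. The choice-selection
    heuristic below replaces the LLM's zero-shot answer step."""
    knowledge = generate_knowledge(question, retrieve(question, kb))
    best = max(choices, key=lambda c: overlap_score(knowledge, c))
    return best, knowledge  # the knowledge doubles as an explanation of the answer

kb = [
    {"question": "Where do you keep milk cold?", "knowledge": "A fridge keeps food cold."},
    {"question": "What do bees make?", "knowledge": "Bees produce honey in hives."},
]
choice, why = answer("Where should you store milk to keep it cold?", ["fridge", "oven"], kb)
print(choice, "-", why)  # → fridge - A fridge keeps food cold.
```

Returning the generated knowledge alongside the predicted choice mirrors the interpretability claim in the abstract: the model's answer comes packaged with the explicit statement it was conditioned on.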
Anthology ID:
2024.emnlp-main.1251
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22429–22444
URL:
https://aclanthology.org/2024.emnlp-main.1251/
DOI:
10.18653/v1/2024.emnlp-main.1251
Cite (ACL):
Francesco Maria Molfese, Simone Conia, Riccardo Orlando, and Roberto Navigli. 2024. ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22429–22444, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
ZEBRA: Zero-Shot Example-Based Retrieval Augmentation for Commonsense Question Answering (Molfese et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1251.pdf