Explicit Inductive Inference using Large Language Models

Tianyang Liu, Tianyi Li, Liang Cheng, Mark Steedman


Abstract
Large Language Models (LLMs) are reported to exhibit an undesirable attestation bias on inference tasks: when asked to predict whether a premise P entails a hypothesis H, instead of considering H's conditional truth as entailed by P, LLMs tend to use the out-of-context truth label of H as a fragile proxy. In this paper, we propose a pipeline that exploits this bias to perform explicit inductive inference. Our pipeline uses an LLM to transform a premise into a set of attested alternatives, and then aggregates the answers to the derived entailment inquiries to support the original inference prediction. On a directional predicate entailment benchmark, we demonstrate that this simple pipeline improves the overall inference performance of LLMs and substantially alleviates the impact of their attestation bias.
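The abstract describes the pipeline only at a high level. Below is a minimal Python sketch of how such a pipeline could be wired together, assuming the LLM is exposed as a plain prompt-to-text callable; the prompt wording, the number of alternatives k, and the majority-vote aggregation are illustrative assumptions on my part, not the authors' exact settings (see the paper and the attached software for those).

from collections import Counter
from typing import Callable

# Assumed interface: a function mapping a prompt string to the model's
# text completion. Any LLM API can be wrapped to fit this signature.
LLM = Callable[[str], str]


def generate_alternatives(llm: LLM, premise: str, k: int = 5) -> list[str]:
    """Rewrite the premise into up to k attested alternatives: same
    predicate, different real-world arguments that make it likely true."""
    prompt = (
        f"Rewrite the statement below {k} times. Keep the predicate but "
        f"replace the arguments with real, typical entities so that each "
        f"rewrite is a true statement. Output one rewrite per line.\n\n"
        f"Statement: {premise}"
    )
    lines = [line.strip() for line in llm(prompt).splitlines() if line.strip()]
    return lines[:k]


def entails(llm: LLM, premise: str, hypothesis_predicate: str) -> bool:
    """One derived entailment inquiry: does the (attested) premise entail
    the hypothesis predicate applied to the same arguments?"""
    prompt = (
        f"Premise: {premise}\n"
        f"If the premise is true, is it also true that the same arguments "
        f'satisfy: "{hypothesis_predicate}"? Answer Yes or No.'
    )
    return llm(prompt).strip().lower().startswith("yes")


def explicit_inductive_inference(
    llm: LLM, premise: str, hypothesis_predicate: str, k: int = 5
) -> bool:
    """Aggregate the answers over the attested alternatives (here, by a
    simple majority vote) to support the original inference prediction."""
    votes = Counter(
        entails(llm, alt, hypothesis_predicate)
        for alt in generate_alternatives(llm, premise, k)
    )
    return votes[True] > votes[False]

For example, for the premise "Google bought YouTube" and the hypothesis predicate "own", the sketch would pose entailment queries over attested rewrites such as "Microsoft bought GitHub" and vote over the answers, so that no single out-of-context truth label of the hypothesis dominates the prediction.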
Anthology ID:
2024.findings-emnlp.926
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15779–15786
URL:
https://aclanthology.org/2024.findings-emnlp.926/
DOI:
10.18653/v1/2024.findings-emnlp.926
Cite (ACL):
Tianyang Liu, Tianyi Li, Liang Cheng, and Mark Steedman. 2024. Explicit Inductive Inference using Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15779–15786, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Explicit Inductive Inference using Large Language Models (Liu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.926.pdf
Software:
2024.findings-emnlp.926.software.zip
Data:
2024.findings-emnlp.926.data.zip