Boosting Language Models Reasoning with Chain-of-Knowledge Prompting

Jianing Wang, Qiushi Sun, Xiang Li, Ming Gao


Abstract
Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks. It designs a simple prompt like “Let’s think step by step” or provides multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. However, the generated rationales often come with hallucinations, yielding unfactual and unfaithful reasoning chains. To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, which elicits LLMs to generate explicit pieces of knowledge evidence in the form of structured triples. This is inspired by human behavior: we often draw a mind map or knowledge map in our heads as reasoning evidence before answering a complex question. Building on CoK, we further introduce an F2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For unreliable responses, the wrong evidence can be pointed out to prompt the LLM to rethink. Extensive experiments demonstrate that our method further improves performance on commonsense, factual, symbolic, and arithmetic reasoning tasks.
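The abstract describes two components: CoK prompting, which asks the model to emit explicit evidence triples before its answer, and F2-Verification, which scores those triples for factuality (against a knowledge source) and faithfulness (against the final answer), feeding wrong evidence back for a rethink. Below is a minimal illustrative sketch in Python. The exemplar wording, the injected `kb_lookup` and `entails` callables, and the equal-weight scoring rule are hypothetical stand-ins, not the paper's actual prompts or implementation.

```python
# Minimal sketch of Chain-of-Knowledge (CoK) prompting with a simple
# F2-Verification loop. All prompt wording, the knowledge-base lookup,
# and the scoring rule are illustrative assumptions, not the paper's code.

from typing import Callable

# One in-context exemplar: evidence triples first, then the answer.
COK_EXEMPLAR = """\
Q: Do hamsters provide food for any animals?
Evidence triples:
(hamster, is a, prey animal)
(prey animal, provides food for, predators)
A: Yes."""

def build_cok_prompt(question: str) -> str:
    """Compose a CoK prompt: exemplar(s) followed by the new question."""
    return f"{COK_EXEMPLAR}\n\nQ: {question}\nEvidence triples:"

def parse_triples(response: str) -> list[tuple[str, str, str]]:
    """Extract '(subject, relation, object)' lines from a model response."""
    triples = []
    for line in response.splitlines():
        line = line.strip()
        if line.startswith("(") and line.endswith(")"):
            parts = [p.strip() for p in line[1:-1].split(",")]
            if len(parts) == 3:
                triples.append((parts[0], parts[1], parts[2]))
    return triples

def f2_verify(
    triples: list[tuple[str, str, str]],
    answer: str,
    kb_lookup: Callable[[tuple[str, str, str]], bool],
    entails: Callable[[list[tuple[str, str, str]], str], float],
) -> tuple[float, list[tuple[str, str, str]]]:
    """Score factuality (per-triple knowledge check) and faithfulness
    (do the triples support the answer?); return the combined score
    and the triples that failed the factuality check."""
    wrong = [t for t in triples if not kb_lookup(t)]
    factuality = 1.0 - len(wrong) / max(len(triples), 1)
    faithfulness = entails(triples, answer)
    return 0.5 * factuality + 0.5 * faithfulness, wrong

def rethink_prompt(question: str, wrong: list[tuple[str, str, str]]) -> str:
    """If verification fails, point out the wrong evidence and re-ask."""
    flagged = "\n".join(str(t) for t in wrong)
    return (
        f"{build_cok_prompt(question)}\n"
        f"The following evidence is incorrect:\n{flagged}\n"
        "Please revise the evidence and answer again.\nEvidence triples:"
    )
```

In a full pipeline, `kb_lookup` would query an external knowledge base and `entails` would be an NLI-style entailment scorer; here both are injected as callables so the sketch stays self-contained.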
Anthology ID:
2024.acl-long.271
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4958–4981
URL:
https://aclanthology.org/2024.acl-long.271
DOI:
10.18653/v1/2024.acl-long.271
Cite (ACL):
Jianing Wang, Qiushi Sun, Xiang Li, and Ming Gao. 2024. Boosting Language Models Reasoning with Chain-of-Knowledge Prompting. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4958–4981, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Boosting Language Models Reasoning with Chain-of-Knowledge Prompting (Wang et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.271.pdf