Devil’s Advocate: Anticipatory Reflection for LLM Agents

Haoyu Wang, Tao Li, Zhiwei Deng, Dan Roth, Yang Li


Abstract
In this work, we introduce a novel approach that equips LLM agents with introspection, enhancing consistency and adaptability in solving complex tasks. Our approach prompts LLM agents to decompose a given task into manageable subtasks (i.e., to make a plan) and to continuously introspect on the suitability and results of their actions. We implement a three-fold introspective intervention: 1) anticipatory reflection on potential failures and alternative remedies before action execution, 2) post-action alignment with subtask objectives and backtracking with remedies to ensure utmost effort in plan execution, and 3) comprehensive review upon plan completion for future strategy refinement. By deploying and experimenting with this methodology—a zero-shot approach—within WebArena for practical tasks in web environments, our agent demonstrates superior performance, achieving a success rate of 23.5% and surpassing existing zero-shot methods by 3.5%. The experimental results suggest that our introspection-driven approach not only enhances the agent’s ability to navigate unanticipated challenges through a robust mechanism of plan execution, but also improves efficiency by reducing by 45% the number of trials and plan revisions needed to complete a task.
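The three-fold introspection described in the abstract can be sketched as a simple agent loop. This is an illustrative reconstruction only: the subtask decomposition, the "devil's advocate" reflection, and the environment are toy stand-ins, not the paper's actual prompts or WebArena interface.

```python
# Toy sketch of the plan-execute-introspect loop from the abstract.
# All helpers below are hypothetical stand-ins for LLM calls.

def decompose(task):
    # 1) Planning: split the task into manageable subtasks (stubbed).
    return [f"{task}: step {i}" for i in (1, 2)]

def anticipate_failure(action):
    # 2a) Anticipatory reflection: before executing, play devil's advocate
    # and attach a remedy if a failure mode is foreseen (toy heuristic).
    return action + " (with remedy)" if "risky" in action else action

def execute(action):
    # Toy environment: actions containing "fail" miss the subtask objective.
    return "fail" not in action

def run_agent(task, actions_per_subtask):
    trace = []
    for subtask, actions in zip(decompose(task), actions_per_subtask):
        for action in actions:  # candidate actions, tried in order
            action = anticipate_failure(action)
            ok = execute(action)
            trace.append((subtask, action, ok))
            if ok:     # 2b) post-action alignment with the subtask objective
                break  # aligned: proceed to the next subtask
            # not aligned: backtrack and try the next remedy action
    # 3) Comprehensive review upon plan completion (stubbed as a summary).
    review = f"{sum(ok for *_, ok in trace)}/{len(trace)} actions aligned"
    return trace, review
```

For example, `run_agent("book flight", [["click search"], ["fail click", "risky click"]])` backtracks once on the second subtask before succeeding with the remedied action.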
Anthology ID:
2024.findings-emnlp.53
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
966–978
URL:
https://aclanthology.org/2024.findings-emnlp.53/
DOI:
10.18653/v1/2024.findings-emnlp.53
Cite (ACL):
Haoyu Wang, Tao Li, Zhiwei Deng, Dan Roth, and Yang Li. 2024. Devil’s Advocate: Anticipatory Reflection for LLM Agents. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 966–978, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Devil’s Advocate: Anticipatory Reflection for LLM Agents (Wang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.53.pdf