Ruihao Shui
2023
A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction
Ruihao Shui | Yixin Cao | Xiang Wang | Tat-Seng Chua
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) have demonstrated great potential for domain-specific applications, such as the law domain. However, recent disputes over GPT-4’s law evaluation raise questions about their performance in real-world legal tasks. To systematically investigate their competency in the law, we design practical baseline solutions based on LLMs and test them on the task of legal judgment prediction. In our solutions, LLMs can work alone to answer open questions or coordinate with an information retrieval (IR) system to learn from similar cases or solve simplified multi-choice questions. We show that similar cases and multi-choice options, namely label candidates, included in prompts can help LLMs recall domain knowledge that is critical for expert legal reasoning. We additionally present an intriguing paradox wherein an IR system surpasses the performance of LLM+IR due to the limited gains acquired by weaker LLMs from powerful IR systems. In such cases, the role of the LLM becomes redundant. Our evaluation pipeline can be easily extended to other tasks to facilitate evaluations in other domains. Code is available at https://github.com/srhthu/LM-CompEval-Legal
2020
Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen
Yixin Cao | Ruihao Shui | Liangming Pan | Min-Yen Kan | Zhiyuan Liu | Tat-Seng Chua
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The curse of knowledge can impede communication between experts and laymen. We propose a new task of expertise style transfer and contribute a manually annotated dataset with the goal of alleviating such cognitive biases. Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen’s descriptions written in simple words. This is a challenging task, unaddressed in previous work, as it requires models to have expert intelligence in order to modify text with a deep understanding of domain knowledge and structures. We establish benchmark performance for five state-of-the-art models for style transfer and text simplification. The results demonstrate a significant gap between machine and human performance. We also discuss the challenges of automatic evaluation, to provide insights into future research directions. The dataset is publicly available at https://srhthu.github.io/expertise-style-transfer/.
Co-authors
- Yixin Cao 2
- Tat-Seng Chua 2
- Xiang Wang 1
- Liangming Pan 1
- Min-Yen Kan 1