Qi Yang
2024
yangqi at SemEval-2024 Task 9: Simulate Human Thinking by Large Language Model for Lateral Thinking Challenges
Qi Yang | Jingjie Zeng | Liang Yang | Hongfei Lin
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper describes our system for the two sub-tasks of SemEval-2024 Task 9, BRAINTEASER: A Novel Task Defying Common Sense. We developed SHTL, a system that simulates human thinking capabilities with a Large Language Model (LLM). Our approach comprises two main components: Common Sense Reasoning and Rationalize Defying Common Sense. To mitigate the hallucinations of the LLM, we implemented a strategy that combines Retrieval-Augmented Generation (RAG) with Self-Adaptive In-Context Learning (SAICL), thereby fully leveraging the powerful language abilities of the LLM. The effectiveness of our method is validated by its performance on the test set: averaged across the two sub-tasks, it scores 30.1 points higher than zero-shot ChatGPT and only 0.8 points lower than humans.
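The abstract gives no implementation details, but the core idea it names, retrieving supporting evidence (RAG) and, per query, selecting the most similar demonstrations (self-adaptive in-context learning) before prompting the LLM, can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the bag-of-words similarity, function names, and toy data are all assumptions, and a real system would use dense retrieval and an actual LLM call.

```python
from collections import Counter
import math


def bow(text: str) -> Counter:
    # Crude bag-of-words representation; stands in for a dense embedding.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(question: str,
                 demos: list[tuple[str, str]],
                 corpus: list[str],
                 k: int = 2) -> str:
    q = bow(question)
    # RAG step: retrieve the k passages most similar to the question.
    evidence = sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]
    # Self-adaptive ICL step: pick demonstrations by similarity to the query,
    # so each question gets its own in-context examples rather than a fixed set.
    picked = sorted(demos, key=lambda d: cosine(q, bow(d[0])), reverse=True)[:k]
    parts = ["Evidence:\n" + "\n".join(evidence)]
    parts += [f"Q: {dq}\nA: {da}" for dq, da in picked]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)  # this prompt would then be sent to the LLM


if __name__ == "__main__":
    demos = [("Why can a man not marry his widow's sister?",
              "A man who has a widow is dead."),
             ("What gets wetter as it dries?", "A towel.")]
    corpus = ["A widow is a woman whose spouse has died.",
              "Towels absorb water while drying things."]
    print(build_prompt("How can a dead man marry?", demos, corpus))
```

The point of the self-adaptive step is that the demonstration set changes with every query, grounding the LLM on the examples and evidence most relevant to the puzzle at hand.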
2023
Orca: A Few-shot Benchmark for Chinese Conversational Machine Reading Comprehension
Nuo Chen | Hongguang Li | Junqing He | Yinan Bao | Xinshi Lin | Qi Yang | Jianfeng Liu | Ruyi Gan | Jiaxing Zhang | Baoyuan Wang | Jia Li
Findings of the Association for Computational Linguistics: EMNLP 2023
The conversational machine reading comprehension (CMRC) task aims to answer questions in conversations and has been a hot research topic in recent years because of its wide applications. However, existing CMRC benchmarks, in which each conversation is assigned a single static passage, are inconsistent with real scenarios, so a model's comprehension ability in real scenarios is hard to evaluate reasonably. To this end, we propose Orca, the first Chinese CMRC benchmark, and further provide zero-shot/few-shot settings to evaluate a model's generalization ability across diverse domains. We collect 831 hot-topic-driven conversations with 4,742 turns in total. Each turn of a conversation is assigned a response-related passage, aiming to evaluate a model's comprehension ability more reasonably. The conversation topics are collected from a social media platform and cover 33 domains, keeping the benchmark consistent with real scenarios. Importantly, answers in Orca are all well-annotated natural responses rather than the specific spans or short phrases used in previous datasets. Besides, we implement three strong baselines to tackle the challenges in Orca. The results indicate that our CMRC benchmark is highly challenging.
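Orca's actual data format is not shown in the abstract; purely as an illustration of the design it describes, a response-related passage attached to every turn and free-form natural answers rather than extracted spans, a per-turn record and model input might look like the sketch below. All class and field names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Turn:
    question: str
    passage: str   # each turn carries its own response-related passage
    answer: str    # a free-form natural response, not an extracted span


@dataclass
class Conversation:
    topic: str
    domain: str    # one of the 33 domains the abstract mentions
    turns: list[Turn] = field(default_factory=list)


def build_model_input(conv: Conversation, turn_idx: int) -> str:
    # Condition on the dialogue history plus the passage attached to the
    # current turn, then ask the model to generate a natural response.
    history = "\n".join(
        f"Q: {t.question}\nA: {t.answer}" for t in conv.turns[:turn_idx]
    )
    current = conv.turns[turn_idx]
    return f"{history}\nPassage: {current.passage}\nQ: {current.question}\nA:"
```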