Jinjin Tian
2024
BlendFilter: Advancing Retrieval-Augmented Large Language Models via Query Generation Blending and Knowledge Filtering
Haoyu Wang | Ruirui Li | Haoming Jiang | Jinjin Tian | Zhengyang Wang | Chen Luo | Xianfeng Tang | Monica Cheng | Tuo Zhao | Jing Gao
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Retrieval-augmented Large Language Models (LLMs) offer substantial benefits in knowledge-intensive scenarios. However, these methods often struggle with complex inputs and suffer from noisy knowledge retrieval, which notably hinders model effectiveness. To address this issue, we introduce BlendFilter, a novel approach that elevates retrieval-augmented LLMs by integrating query generation blending with knowledge filtering. BlendFilter realizes blending through its query generation method, which combines both external and internal knowledge augmentation with the original query, ensuring comprehensive information gathering. Additionally, our distinctive knowledge filtering module capitalizes on the intrinsic capabilities of the LLM itself, effectively eliminating extraneous retrieved data. We conduct extensive experiments on three open-domain question answering benchmarks, and the findings clearly indicate that BlendFilter significantly surpasses state-of-the-art baselines.
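The abstract describes a pipeline of query blending, retrieval, and LLM-based knowledge filtering. Below is a minimal sketch of that flow; the `llm` and `retriever` interfaces, prompts, and the yes/no relevance check are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the BlendFilter pipeline described in the abstract.
# `llm.generate(prompt) -> str` and `retriever.search(query, k) -> list[str]`
# are assumed interfaces.

def blendfilter_answer(question, llm, retriever, top_k=5):
    # 1. Query generation blending: augment the original query with
    #    externally retrieved context and the LLM's own internal knowledge.
    external_ctx = retriever.search(question, k=top_k)
    q_external = llm.generate(
        f"Context: {external_ctx}\nRewrite this question using the context: {question}"
    )
    internal_ctx = llm.generate(f"Write what you know about: {question}")
    q_internal = llm.generate(
        f"Background: {internal_ctx}\nRewrite this question using the background: {question}"
    )

    # 2. Retrieve with the blended query set (original + both augmentations).
    candidates = []
    for q in (question, q_external, q_internal):
        candidates.extend(retriever.search(q, k=top_k))

    # 3. Knowledge filtering: the LLM itself discards irrelevant passages.
    kept = [
        p for p in candidates
        if "yes" in llm.generate(
            f"Question: {question}\nPassage: {p}\n"
            "Is this passage relevant to answering the question? Answer yes or no."
        ).lower()
    ]

    # 4. Answer conditioned on the filtered knowledge only.
    return llm.generate(f"Knowledge: {kept}\nQuestion: {question}\nAnswer:")
```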
2023
UseClean: learning from complex noisy labels in named entity recognition
Jinjin Tian | Kun Zhou | Meiguo Wang | Yu Zhang | Benjamin Yao | Xiaohu Liu | Chenlei Guo
Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD)
We investigate and refine denoising methods for the NER task on data that potentially contains extremely noisy labels from multiple sources. In this paper, we first summarize all possible noise types and noise generation schemes, based on which we build a thorough evaluation system. We then use this evaluation system to pinpoint the bottlenecks of current state-of-the-art denoising methods. Correspondingly, we propose several refinements: a two-stage framework to avoid error accumulation; a novel confidence score utilizing minimal clean supervision to increase predictive power; an automatic cutoff fitting procedure to avoid extensive hyper-parameter tuning; and a warm-started weighted partial CRF to better learn on noisy tokens. Additionally, we propose adaptive sampling to further boost performance in long-tailed entity settings. Across extensive experiments, our method improves F1 score by at least 5-10% on average over the current state-of-the-art.
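The two-stage idea in the abstract (score noisy token labels with minimal clean supervision, fit a cutoff automatically, then retrain while down-weighting suspect tokens) can be sketched roughly as below. Every name here (`train_model`, `token_confidence`, `fit_cutoff`) is an assumed interface for illustration, not the paper's code, and the hard 0/1 weighting is a simplification of the weighted partial CRF.

```python
# Illustrative sketch of a two-stage label-denoising loop, under assumed
# interfaces; this is not the UseClean implementation.
import numpy as np

def denoise_then_train(noisy_data, small_clean_data, train_model, fit_cutoff):
    # Stage 1: train a scorer on the small clean set and assign each token
    # in the noisy set a confidence that its observed label is correct.
    scorer = train_model(small_clean_data)
    confidences = [scorer.token_confidence(x, y) for x, y in noisy_data]

    # Automatic cutoff fitting: derive a threshold from the confidence
    # distribution itself instead of hand-tuning a hyper-parameter.
    cutoff = fit_cutoff(np.concatenate(confidences))

    # Stage 2: retrain on the noisy set with low-confidence tokens masked
    # out, so the model stops trusting (and accumulating) their errors.
    weights = [np.where(c >= cutoff, 1.0, 0.0) for c in confidences]
    return train_model(noisy_data, token_weights=weights, warm_start=scorer)
```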