KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning

Dongyang Li, Taolin Zhang, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue


Abstract
Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning. Previous works treat knowledge enhancement as two independent operations, i.e., knowledge injection and knowledge integration. In this paper, we propose to learn Knowledge-Enhanced language representations with Hierarchical Reinforcement Learning (KEHRL), which jointly addresses the problems of detecting positions for knowledge injection and integrating external knowledge into the model in order to avoid injecting inaccurate or irrelevant knowledge. Specifically, a high-level reinforcement learning (RL) agent utilizes both internal and prior knowledge to iteratively detect essential positions in texts for knowledge injection, which filters out less meaningful entities to avoid diverting the knowledge learning direction. Once the entity positions are selected, a relevant triple filtration module is triggered to perform low-level RL to dynamically refine the triples associated with polysemic entities through binary-valued actions. Experiments validate KEHRL’s effectiveness in probing factual knowledge and enhancing the model’s performance on various natural language understanding tasks.
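The two-level procedure the abstract describes can be sketched schematically. This is a hypothetical illustration, not the authors' implementation: in KEHRL both agents are learned RL policies conditioned on the encoder's representations, whereas here simple heuristics stand in for the high-level position selector and the low-level binary triple filter. All function and variable names are invented for this sketch.

```python
def high_level_select(entity_positions, k=1):
    """High-level step (stub): rank candidate entity spans and keep the
    top-k positions for knowledge injection. A learned policy would score
    spans from the model's hidden states; a span-length heuristic stands
    in for it here."""
    ranked = sorted(entity_positions, key=lambda span: span[1] - span[0],
                    reverse=True)
    return ranked[:k]

def low_level_filter(triples, mention):
    """Low-level step (stub): emit a binary keep/drop action for each
    candidate triple, approximated here by matching the triple's head
    entity against the resolved mention."""
    return [t for t in triples if t[0] == mention]

# Token spans (start, end) of entity mentions in the sentence
# "Michael Jordan played basketball in Chicago".
positions = [(0, 2), (5, 6)]

# Candidate KG triples retrieved for the polysemous mention "Jordan".
candidates = [
    ("Michael Jordan", "member_of", "Chicago Bulls"),
    ("Jordan", "shares_border_with", "Israel"),
]

selected = high_level_select(positions, k=1)           # keeps (0, 2)
kept = low_level_filter(candidates, "Michael Jordan")  # drops the country triple
```

The hierarchical structure is the point: the low-level filtration is only triggered for positions the high-level agent has already selected, so irrelevant mentions never pull in noisy triples in the first place.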
Anthology ID:
2024.lrec-main.847
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
9693–9704
URL:
https://aclanthology.org/2024.lrec-main.847
Cite (ACL):
Dongyang Li, Taolin Zhang, Longtao Huang, Chengyu Wang, Xiaofeng He, and Hui Xue. 2024. KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9693–9704, Torino, Italia. ELRA and ICCL.
Cite (Informal):
KEHRL: Learning Knowledge-Enhanced Language Representations with Hierarchical Reinforcement Learning (Li et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.847.pdf