Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents

Hao Wang, Tang Li, Chenhui Chu, Rui Wang, Pinpin Zhu


Abstract
Key-value relations are prevalent in Visually-Rich Documents (VRDs), often depicted in distinct spatial regions accompanied by specific color and font styles. These non-textual cues serve as important indicators that greatly enhance human comprehension and acquisition of such relation triplets. However, current document AI approaches often fail to consider this valuable prior information related to visual and spatial features, resulting in suboptimal performance, particularly when dealing with limited examples. To address this limitation, our research focuses on few-shot relational learning, specifically targeting the extraction of key-value relation triplets in VRDs. Given the absence of a suitable dataset for this task, we introduce two new few-shot benchmarks built upon existing supervised benchmark datasets. Furthermore, we propose a variational approach that incorporates relational 2D-spatial priors and prototypical rectification techniques. This approach aims to generate relation representations that are more aware of the spatial context and of unseen relations, in a manner similar to human perception. Experimental results demonstrate the effectiveness of our proposed method, showing that it outperforms existing methods. This study also opens up new possibilities for practical applications.
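The abstract mentions prototypical techniques for few-shot relation classification. As context, the sketch below shows the generic nearest-prototype baseline (in the style of prototypical networks) that such approaches build on: support examples, here represented as vectors that could concatenate textual and 2D layout features, are averaged into per-relation prototypes, and each query is assigned to the nearest prototype. This is only an illustrative baseline under those assumptions, not the paper's actual variational model; all names are hypothetical.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Nearest-prototype few-shot classification.

    support:        (n_support, d) array of support-set embeddings
                    (e.g., text features concatenated with 2D layout features)
    support_labels: length-n_support list of relation labels
    query:          (n_query, d) array of query embeddings
    Returns the predicted label for each query vector.
    """
    labels = np.array(support_labels)
    classes = sorted(set(support_labels))
    # One prototype per relation class: the mean of its support embeddings.
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

# Toy example: two relation classes with clearly separated embeddings.
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
support_labels = ["header-of", "value-of", "header-of", "value-of"][:2] * 2
support_labels = ["header-of", "header-of", "value-of", "value-of"]
query = np.array([[0.05, 0.05], [0.95, 0.95]])
print(prototype_classify(support, support_labels, query))
```

In a real few-shot VRD setting the embeddings would come from a document encoder, and the paper's contribution lies in rectifying these prototypes with spatial priors rather than in the plain mean-and-nearest scheme shown here.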
Anthology ID:
2024.lrec-main.1439
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
16557–16569
URL:
https://aclanthology.org/2024.lrec-main.1439
Cite (ACL):
Hao Wang, Tang Li, Chenhui Chu, Rui Wang, and Pinpin Zhu. 2024. Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16557–16569, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents (Wang et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1439.pdf