Jianmin Yao

Also published as: Jian-Min Yao, Jian-min Yao


2024

Demonstration Retrieval-Augmented Generative Event Argument Extraction
Shiming He | Yu Hong | Shuai Yang | Jianmin Yao | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

We tackle Event Argument Extraction (EAE) in the manner of template-based generation. Our exploration of generative EAE shows that it suffers from several issues, such as multiple arguments for one role, words generated out of context, and inconsistency with the prescribed format. We attribute these issues to the model's weakness in following complex input prompts. To address them, we propose demonstration retrieval-augmented generative EAE (DRAGEAE), which contains two components: an event knowledge-injected generator (EKG) and a demonstration retriever (DR). EKG employs event knowledge prompts to capture role dependencies and semantics. DR retrieves informative demonstrations from the training data to facilitate the conditional generation of EKG. To train DR, we use probability-based rankings from large language models (LLMs) as supervision signals. Experimental results on ACE-2005, RAMS and WIKIEVENTS demonstrate that our method outperforms all strong baselines and generalizes to various datasets. Further analysis discusses the impact of diverse LLMs and shows that our model alleviates the above issues.
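A minimal sketch of the retrieve-then-generate idea, not the released DRAGEAE code: TF-IDF similarity stands in for the learned DR (which the paper trains with LLM probability-based rankings), and the sentences, templates and field names below are invented for illustration.

```python
# Illustrative sketch only: TF-IDF retrieval stands in for the learned demonstration
# retriever; retrieved examples are prepended to a template-based generation prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_demonstrations(query, train_examples, k=2):
    """Return the k training examples most similar to the query sentence."""
    corpus = [ex["sentence"] for ex in train_examples] + [query]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    n = len(train_examples)
    scores = cosine_similarity(tfidf[n], tfidf[:n]).ravel()
    order = sorted(range(n), key=lambda i: -scores[i])
    return [train_examples[i] for i in order[:k]]

def build_prompt(query, template, demonstrations):
    """Prepend retrieved demonstrations to the query sentence and its role template."""
    demo_block = "\n".join(
        f"Sentence: {d['sentence']}\nArguments: {d['arguments']}" for d in demonstrations
    )
    return f"{demo_block}\nSentence: {query}\nTemplate: {template}\nArguments:"

train_examples = [
    {"sentence": "The company hired a new CEO in March.",
     "arguments": "Person: CEO; Entity: the company; Time: March"},
    {"sentence": "Rebels attacked the convoy near the border.",
     "arguments": "Attacker: rebels; Target: the convoy; Place: near the border"},
]
query = "Troops fired on protesters in the capital."
demos = retrieve_demonstrations(query, train_examples, k=1)
print(build_prompt(query, "<Attacker> attacked <Target> at <Place>", demos))
```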

SGCM: Salience-Guided Context Modeling for Question Generation
Chuyao Ding | Yu Hong | Jianmin Yao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

We tackle Paragraph-level Question Generation (abbr., PQG) in this paper. PQG is the task of automatically generating questions given paragraphs and answers. Identifying the sentences relevant to an answer is crucial for reasoning about possible questions before generation. Accordingly, we propose a salience-guided approach to enhance PQG. Specifically, we construct an auxiliary task of identifying salient sentences that manifest this relevance. Grounded on this auxiliary task and the main PQG task, we strengthen the BART encoder during training within a multitask learning framework. In particular, we use the identified salient sentences as explicit guidance to enable salience-aware attention computation in the BART decoder. We experiment on the benchmark dataset FairytaleQA. The test results show that our approach yields substantial improvements over the BART baseline, achieving Rouge-L, BLEU4, BERTScore, Q-BLEU-3 and F1 scores of about 56.56%, 19.78%, 61.19%, 54.33% and 43.55%, respectively. Both the source code and models will be made publicly available.
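A minimal multitask sketch of the salience-guided training signal, assuming a toy GRU encoder in place of BART and an assumed 0.5 loss weight; neither choice comes from the paper.

```python
# Illustrative sketch: a shared encoder trained with a generation loss plus an
# auxiliary salient-sentence identification loss (multitask learning).
import torch
import torch.nn as nn

class SalienceGuidedQG(nn.Module):
    def __init__(self, hidden=256, vocab=32000):
        super().__init__()
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)   # stand-in for the BART encoder
        self.lm_head = nn.Linear(hidden, vocab)                   # question generation head
        self.salience_head = nn.Linear(hidden, 1)                 # auxiliary sentence classifier

    def forward(self, sent_embeds):
        enc, _ = self.encoder(sent_embeds)            # [batch, n_sents, hidden]
        return self.lm_head(enc), self.salience_head(enc).squeeze(-1)

model = SalienceGuidedQG()
sent_embeds = torch.randn(2, 8, 256)                  # 2 paragraphs, 8 sentence vectors each
gen_logits, salience_logits = model(sent_embeds)

gen_targets = torch.randint(0, 32000, (2, 8))         # toy generation targets
salience_targets = torch.zeros(2, 8)
salience_targets[:, 0] = 1.0                          # pretend the first sentence is salient

gen_loss = nn.CrossEntropyLoss()(gen_logits.reshape(-1, 32000), gen_targets.reshape(-1))
sal_loss = nn.BCEWithLogitsLoss()(salience_logits, salience_targets)
loss = gen_loss + 0.5 * sal_loss                      # assumed weighting, not from the paper
loss.backward()
```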

Word-level Commonsense Knowledge Selection for Event Detection
Shuai Yang | Yu Hong | Shiming He | Qingting Xu | Jianmin Yao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Event Detection (ED) is the task of automatically extracting multi-class trigger words. Understanding word sense is crucial for ED. In this paper, we utilize context-specific commonsense knowledge to strengthen word sense modeling. Specifically, we leverage a Context-specific Knowledge Selector (CKS) to select the exact commonsense knowledge of words from a large knowledge base, i.e., ConceptNet. Context-specific selection is made according to the relevance of the knowledge to the contexts in which the words occur. On this basis, we incorporate the commonsense knowledge into the word-level representations before decoding. ChatGPT is an ideal generative CKS when the prompts are deliberately designed, though it is cost-prohibitive. To avoid heavy reliance on ChatGPT, we train an offline CKS using the predictions of ChatGPT over a small number of examples (about 9% of all). We experiment on the benchmark ACE-2005 dataset. The test results show that our approach yields substantial improvements over the BERT baseline, achieving an F1-score of about 78.3%. All models, source code and data will be made publicly available.
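A minimal sketch of context-specific selection, assuming hand-written ConceptNet-style glosses and plain bag-of-words cosine relevance in place of the trained CKS.

```python
# Illustrative sketch: score candidate commonsense glosses of a word against its
# sentence context and keep the most relevant one.
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "of", "in", "on", "to", "that", "was", "for"}

def bow(text):
    return Counter(w for w in text.lower().split() if w not in STOPWORDS)

def cosine(a, b):
    ca, cb = bow(a), bow(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def select_knowledge(sentence, candidates):
    """Return the candidate gloss most relevant to the sentence context."""
    return max(candidates, key=lambda gloss: cosine(gloss, sentence))

glosses_for_bank = [
    "bank is a financial institution that accepts money deposits",
    "bank is the sloping land beside a river",
]
print(select_knowledge("She deposited the money in the bank account yesterday.", glosses_for_bank))
```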

2023

Smart “Chef”: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction
Jiaxiang Chen | Yu Hong | Qingting Xu | Jianmin Yao
Findings of the Association for Computational Linguistics: EMNLP 2023

We tackle Aspect Term Extraction (ATE), the task of automatically extracting aspect terms from sentences. Current Pretrained Language Model (PLM) based extractors have achieved significant improvements, primarily because they benefit from context-aware encoding. However, a considerable number of sentences in ATE corpora contain uninformative or low-quality contexts. Such sentences frequently act as “troublemakers” at test time. In this study, we explore a context-oriented quality improvement method. Specifically, we propose to automatically rewrite sentences from the perspectives of virtual experts with different roles, such as a “chef” in the restaurant domain. On this basis, we perform ATE over the paraphrased sentences at test time, using the well-trained extractors without any change. In the experiments, we leverage ChatGPT to determine the virtual experts in the considered domains, and induce ChatGPT to generate paraphrases conditioned on the roles of those experts. We experiment on the benchmark SemEval datasets, including Laptop-domain L14 and Restaurant-domain R14-16. The experimental results show that our approach effectively recalls inconspicuous aspect terms like “al di la”, although it reduces precision. In addition, we show that our approach can be substantially improved by redundancy elimination and multi-role voting. More importantly, our approach can be used to expand the predictions obtained on the original sentences. This yields state-of-the-art performance (i.e., F1-scores of 86.2%, 89.3%, 77.7% and 82.7% on L14 and R14-16) without retraining or fine-tuning the baseline extractors.
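A minimal sketch of the role-prompting, multi-role voting and prediction-expansion steps; the prompt wording, helper names and toy predictions are assumptions, and the actual ChatGPT paraphrasing and ATE extractor are left out.

```python
# Illustrative sketch: build a role-conditioned paraphrase prompt, vote over
# extractor predictions on several paraphrases, then expand the baseline output.
from collections import Counter

def build_role_prompt(role, sentence):
    return (f"You are a {role}. Rewrite the following review sentence in your own words, "
            f"keeping every mentioned aspect term unchanged:\n{sentence}")

def vote(predictions_per_role, min_votes=2):
    """Keep aspect terms proposed by at least `min_votes` role-specific paraphrases."""
    counts = Counter(term for preds in predictions_per_role for term in set(preds))
    return {term for term, c in counts.items() if c >= min_votes}

def expand_predictions(original_preds, voted_terms):
    """Union the baseline predictions on the original sentence with the voted terms."""
    return set(original_preds) | voted_terms

# toy predictions from three role-conditioned paraphrases of one restaurant review
preds = [["al di la", "service"], ["al di la"], ["service", "al di la"]]
print(expand_predictions(["service"], vote(preds)))   # contains both "service" and "al di la"
```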

2022

Unregulated Chinese-to-English Data Expansion Does NOT Work for Neural Event Detection
Zhongqiu Li | Yu Hong | Jie Wang | Shiming He | Jianmin Yao | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

We leverage cross-language data expansion and retraining to enhance neural Event Detection (abbr., ED) on the English ACE corpus. Machine translation is used to expand the English ED training set from its Chinese counterpart. However, experimental results show that this strategy actually degrades performance. A survey of the translations suggests that mistakenly-aligned triggers in the expanded data negatively influence the retraining process. We refer to this phenomenon as “trigger falsification”. To overcome the issue, we apply heuristic rules to regulate the expanded data, fixing the distracting samples that contain falsified triggers. Supplementary experiments show that the rule-based regulation is beneficial, yielding an improvement of about 1.6% F1-score for ED. We additionally show that, instead of transfer learning from the translated ED data, straightforward data combination by random mixing surprisingly performs better.
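A minimal sketch of rule-based regulation under assumed heuristics (the paper's exact rules may differ): here an expanded sample is discarded when its projected trigger is missing from the translated sentence or falls outside a per-event-type trigger lexicon.

```python
# Illustrative sketch: filter translated samples whose triggers were falsified
# during cross-language projection.
def regulate(sample, trigger_lexicon):
    """sample: {'sentence': str, 'trigger': str, 'event_type': str}"""
    trigger, sentence = sample["trigger"].lower(), sample["sentence"].lower()
    if trigger not in sentence:
        return None                                    # mis-aligned trigger: discard
    if trigger not in trigger_lexicon.get(sample["event_type"], set()):
        return None                                    # falsified trigger: discard
    return sample                                      # keep the regulated sample

lexicon = {"Conflict.Attack": {"attack", "fire", "strike"}}
sample = {"sentence": "Troops opened fire on the convoy.", "trigger": "fire",
          "event_type": "Conflict.Attack"}
print(regulate(sample, lexicon))
```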

Taking Actions Separately: A Bidirectionally-Adaptive Transfer Learning Method for Low-Resource Neural Machine Translation
Xiaolin Xing | Yu Hong | Minhan Xu | Jianmin Yao | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Training Neural Machine Translation (NMT) models suffers from sparse parallel data in translation scenarios involving low-resource source languages. Existing solutions primarily rely on Parent-Child (PC) transfer learning, which transfers an NMT model well trained on high-resource languages (the Parent NMT) to low-resource languages, producing a Child NMT model by fine-tuning. A variety of PC variants have been shown to yield significant improvements for low-resource NMT. In this paper, we enhance PC-based NMT with a bidirectionally-adaptive learning strategy. Specifically, we divide the inner constituents (6 Transformer layers) of the Parent encoder into two “teams”, i.e., T1 and T2. During representation learning, T1 learns to encode low-resource languages conditioned on a bilingual shareable latent space; a generative adversarial network and masked language modeling are used for shareable-space encoding. T2, on the other hand, is transferred directly to the low-resource languages and fine-tuned together with T1 for low-resource translation. Briefly, T1 and T2 take actions separately for different goals: the former adapts to the characteristics of low-resource languages during encoding, while the latter adapts the translation experience learned from high-resource languages. We experiment on the benchmark SETIMES corpora, conducting low-resource NMT for Albanian (Sq), Macedonian (Mk), Croatian (Hr) and Romanian (Ro). Experimental results show that our method yields substantial improvements, reaching BLEU4 scores of 62.24%, 56.93%, 50.53% and 54.65% for Sq, Mk, Hr and Ro, respectively.
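A minimal PyTorch sketch of the layer split, assuming the first three encoder layers form T1 and the last three form T2; the two-stage freezing schedule is an illustration, not the paper's exact training recipe.

```python
# Illustrative sketch: split a six-layer Parent encoder into two "teams" that
# are adapted and transferred separately before joint fine-tuning.
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
parent_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

team1 = parent_encoder.layers[:3]      # T1: adapted to the low-resource source language
team2 = parent_encoder.layers[3:]      # T2: transferred translation knowledge

# Stage 1 (assumed): adapt T1 only, e.g. with MLM/adversarial objectives, keeping T2 fixed.
for layer in team2:
    for p in layer.parameters():
        p.requires_grad = False

# Stage 2 (assumed): unfreeze T2 and fine-tune both teams on low-resource parallel data.
for layer in team2:
    for p in layer.parameters():
        p.requires_grad = True
```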

2021

Feature-level Incongruence Reduction for Multimodal Translation
Zhifeng Li | Yu Hong | Yuchen Pan | Jian Tang | Jianmin Yao | Guodong Zhou
Proceedings of the Second Workshop on Advances in Language and Vision Research

Caption translation aims to translate image annotations (captions for short). Recently, Multimodal Neural Machine Translation (MNMT) has been explored as the essential solution. Besides the linguistic features in captions, MNMT allows visual (image) features to be used. The integration of multimodal features reinforces the semantic representation and considerably improves translation performance. However, MNMT suffers from incongruence between visual and linguistic features. To overcome this problem, we propose to extend the MNMT architecture with a harmonization network, which harmonizes the multimodal features (linguistic and visual) by unidirectional modal space conversion. It enables multimodal translation to be carried out in a seemingly monomodal translation pipeline. We experiment on the gold-standard Multi30k-16 and Multi30k-17 datasets. Experimental results show that, compared to the baseline, the proposed method yields improvements of up to 2.2% BLEU when translating English captions into German (En→De), 7.6% for English-to-French translation (En→Fr) and 1.5% for English-to-Czech (En→Cz). The harmonization network leads to performance competitive with the state of the art.
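A minimal sketch of unidirectional modal space conversion, assuming 2048-dimensional image-region features and a 512-dimensional text space; the single linear-plus-Tanh converter is an illustration, not the paper's harmonization network.

```python
# Illustrative sketch: convert visual features into the linguistic feature space
# so the downstream translator sees one seemingly monomodal sequence.
import torch
import torch.nn as nn

class Harmonizer(nn.Module):
    """Unidirectional modal space conversion: visual -> linguistic."""
    def __init__(self, visual_dim=2048, text_dim=512):
        super().__init__()
        self.convert = nn.Sequential(nn.Linear(visual_dim, text_dim), nn.Tanh())

    def forward(self, text_feats, visual_feats):
        harmonized = self.convert(visual_feats)                  # [batch, n_regions, text_dim]
        return torch.cat([text_feats, harmonized], dim=1)        # one "monomodal" sequence

fused = Harmonizer()(torch.randn(2, 20, 512), torch.randn(2, 36, 2048))
print(fused.shape)   # torch.Size([2, 56, 512])
```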

2020

Don’t Eclipse Your Arts Due to Small Discrepancies: Boundary Repositioning with a Pointer Network for Aspect Extraction
Zhenkai Wei | Yu Hong | Bowei Zou | Meng Cheng | Jianmin Yao
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Current aspect extraction methods suffer from boundary errors. In general, these errors lead to only a minor difference between the extracted aspects and the ground truth, yet they hurt performance severely. In this paper, we propose to utilize a pointer network for repositioning the boundaries. A recycling mechanism is used, which enables the training data to be collected without manual intervention. We conduct experiments on the benchmark laptop dataset SE14 and restaurant datasets SE14-16. Experimental results show that our method achieves substantial improvements over the baseline and outperforms state-of-the-art methods.
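A minimal sketch of boundary repositioning with a pointer-style head over token encodings; the architecture and dimensions are assumptions, and conditioning on the initially extracted span is omitted for brevity.

```python
# Illustrative sketch: predict repositioned start/end boundaries as pointers
# over the encoded sentence tokens.
import torch
import torch.nn as nn

class BoundaryPointer(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.start_scorer = nn.Linear(hidden, 1)
        self.end_scorer = nn.Linear(hidden, 1)

    def forward(self, token_states):
        # token_states: [batch, seq_len, hidden]
        start_logits = self.start_scorer(token_states).squeeze(-1)
        end_logits = self.end_scorer(token_states).squeeze(-1)
        return start_logits.argmax(-1), end_logits.argmax(-1)

tokens = torch.randn(1, 12, 256)        # encodings of a 12-token sentence
start, end = BoundaryPointer()(tokens)
print(int(start), int(end))             # repositioned aspect boundaries
```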

2018

Incorporating Image Matching Into Knowledge Acquisition for Event-Oriented Relation Recognition
Yu Hong | Yang Xu | Huibin Ruan | Bowei Zou | Jianmin Yao | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Event relation recognition is a challenging language processing task. It requires determining the relation class (such as causality) of a pair of query events under the condition that there is no reliable clue available. In this paper, we follow the traditional statistical approach, speculating the relation class of the target events based on the relation-class distributions over similar events. Minimal supervision is used during the speculation process. In particular, we incorporate image processing into the acquisition of similar event instances, including the use of images to visually represent event scenes and of neural-network-based image matching for approximate similarity calculation between events. We test our method on the ACE-R2 corpus and compare our model with fully-supervised neural network models. Experimental results show that we achieve performance comparable to CNN and slightly better than LSTM.
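A minimal sketch of the speculation step, assuming precomputed image embeddings for event scenes; nearest-neighbour voting stands in for the paper's distribution-based inference.

```python
# Illustrative sketch: approximate event similarity via image-embedding cosine
# similarity, then speculate the relation class from the most similar events.
import numpy as np
from collections import Counter

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def speculate_relation(query_vec, bank, k=5):
    """bank: list of (image_embedding, relation_label) for known event pairs."""
    ranked = sorted(bank, key=lambda item: -cosine(query_vec, item[0]))[:k]
    return Counter(label for _, label in ranked).most_common(1)[0][0]

rng = np.random.default_rng(0)
bank = [(rng.normal(size=128), lab) for lab in ["causal", "causal", "temporal", "causal", "none"]]
print(speculate_relation(rng.normal(size=128), bank, k=3))
```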

Using active learning to expand training data for implicit discourse relation recognition
Yang Xu | Yu Hong | Huibin Ruan | Jianmin Yao | Min Zhang | Guodong Zhou
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We tackle discourse-level relation recognition, the problem of determining semantic relations between text spans. Implicit relation recognition is challenging due to the lack of explicit relational clues. Increasingly popular neural network techniques have been proven effective for semantic encoding and are therefore widely employed to boost semantic relation discrimination. However, learning to predict semantic relations at a deep level heavily relies on a great deal of training data, while the scale of the publicly available data in this field is limited. In this paper, we follow Rutherford and Xue (2015) in expanding the training set with the corpus of explicitly-related arguments, by dropping the overtly presented discourse connectives. On this basis, we carry out a sampling experiment in which a simple active learning approach is used to select informative instances for data expansion. The goal is to verify whether the selective use of external data not only reduces the time consumption of retraining but also ensures better system performance. Using the expanded training data, we retrain a convolutional neural network (CNN) based classifier, a simplified version of Qin et al. (2016)'s stacking gated relation recognizer. Experimental results show that expanding the training set with small-scale, carefully-selected external data yields substantial performance gains, with improvements of about 4% in accuracy and 3.6% in F-score. This allows a weak classifier to achieve performance comparable to state-of-the-art systems.
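A minimal sketch of the selective expansion step, assuming entropy-based uncertainty sampling as the active learning criterion and a toy classifier; both are illustrations rather than the paper's exact setup.

```python
# Illustrative sketch: pick the external explicit-argument pairs on which the
# current classifier is most uncertain, and add them to the training set.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_informative(candidates, classifier, budget=100):
    """candidates: argument-pair inputs; classifier(x) -> class probabilities."""
    scored = sorted(candidates, key=lambda x: -entropy(classifier(x)))
    return scored[:budget]

# toy classifier producing fixed distributions over 4 top-level senses
toy = lambda x: [0.25, 0.25, 0.25, 0.25] if x == "ambiguous" else [0.9, 0.05, 0.03, 0.02]
print(select_informative(["ambiguous", "clear", "ambiguous"], toy, budget=2))
```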

2016

Image-Image Search for Comparable Corpora Construction
Yu Hong | Liang Yao | Mengyi Liu | Tongtao Zhang | Wenxuan Zhou | Jianmin Yao | Heng Ji
Proceedings of the Sixth Workshop on Hybrid Approaches to Translation (HyTra6)

We present a novel method for comparable corpora construction. Unlike traditional methods, which heavily rely on linguistic features, our method only takes image similarity into consideration. We use an image-image search engine to obtain similar images, together with their captions in the source and target languages. On this basis, we utilize the captions of similar images to construct sentence-level bilingual corpora. Experiments on 10,371 target captions show that our method achieves a precision of 0.85 in the top search results.
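A minimal sketch of the final pairing step, assuming image similarity scores already returned by an image-image search engine; the threshold, language pair and data are invented.

```python
# Illustrative sketch: pair source- and target-language captions whose images
# are visually similar, with no linguistic features involved.
def build_comparable_corpus(matches, threshold=0.8):
    """matches: (image_similarity, source_caption, target_caption) triples."""
    return [(src, tgt) for sim, src, tgt in matches if sim >= threshold]

matches = [
    (0.92, "A dog runs on the beach.", "一只狗在沙滩上奔跑。"),
    (0.55, "A crowded street market.", "一座雪山。"),
]
print(build_comparable_corpus(matches))
```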

2014

An Iterative Link-based Method for Parallel Web Page Mining
Le Liu | Yu Hong | Jun Lu | Jun Lang | Heng Ji | Jianmin Yao
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Effective Selection of Translation Model Training Data
Le Liu | Yu Hong | Hao Liu | Xing Wang | Jianmin Yao
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2011

Thread Cleaning and Merging for Microblog Topic Detection
Jianfeng Zhang | Yunqing Xia | Bin Ma | Jianmin Yao | Yu Hong
Proceedings of 5th International Joint Conference on Natural Language Processing

Using Cross-Entity Inference to Improve Event Extraction
Yu Hong | Jianfeng Zhang | Bin Ma | Jianmin Yao | Guodong Zhou | Qiaoming Zhu
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Jumping Distance based Chinese Person Name Disambiguation
Yu Hong | Fei Pei | Yue-hui Yang | Jian-min Yao | Qiao-ming Zhu
CIPS-SIGHAN Joint Conference on Chinese Language Processing

A Novel Method for Bilingual Web Page Acquisition from Search Engine Web Records
Yanhui Feng | Yu Hong | Zhenxiang Yan | Jianmin Yao | Qiaoming Zhu
Coling 2010: Posters

Negative Feedback: The Forsaken Nature Available for Re-ranking
Yu Hong | Qing-qing Cai | Song Hua | Jian-min Yao | Qiao-ming Zhu
Coling 2010: Posters

2006

Performance Analysis and Visualization of Machine Translation Evaluation
Jianmin Yao | Yunqian Qu | Qiang Lv | Qiaoming Zhu | Jing Zhang
International Journal of Computational Linguistics & Chinese Language Processing, Volume 11, Number 3, September 2006: Special Issue on Selected Papers from ROCLING XVII

An Equivalent Pseudoword Solution to Chinese Word Sense Disambiguation
Zhimao Lu | Haifeng Wang | Jianmin Yao | Ting Liu | Sheng Li
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

A Visualization method for machine translation evaluation results
Jian-Min Yao | Yun-Qian Qu | Qiao-Ming Zhu | Jing Zhang
Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation

2002

An Automatic Evaluation Method for Localization Oriented Lexicalised EBMT System
Jianmin Yao | Ming Zhou | Tiejun Zhao | Hao Yu | Sheng Li
COLING 2002: The 19th International Conference on Computational Linguistics

Automatic Information Transfer between English and Chinese
Jianmin Yao | Hao Yu | Tiejun Zhao | Xiaohong Li
COLING-02: Machine Translation in Asia

2000

Statistics Based Hybrid Approach to Chinese Base Phrase Identification
Tie-jun Zhao | Mu-yun Yang | Fang Liu | Jian-min Yao | Hao Yu
Second Chinese Language Processing Workshop