Guohong Fu


2024

MODDP: A Multi-modal Open-domain Chinese Dataset for Dialogue Discourse Parsing
Chen Gong | DeXin Kong | Suxian Zhao | Xingyu Li | Guohong Fu
Findings of the Association for Computational Linguistics: ACL 2024

Dialogue discourse parsing (DDP) aims to capture the relations between utterances in a dialogue. In everyday real-world scenarios, dialogues are typically multi-modal and cover open-domain topics. However, most existing widely used benchmark datasets for DDP contain only the textual modality and are domain-specific. This makes it challenging to accurately and comprehensively understand dialogues without multi-modal clues, and prevents these datasets from capturing the discourse structures of the more prevalent daily conversations. This paper proposes MODDP, the first multi-modal Chinese discourse parsing dataset derived from open-domain daily dialogues, consisting of 864 dialogues and 18,114 utterances, accompanied by 12.7 hours of video clips. We present a simple yet effective benchmark approach for multi-modal DDP and, through extensive experiments, report several benchmark results based on MODDP. The significant performance improvement obtained by introducing multiple modalities into the original textual unimodal DDP model demonstrates the necessity of integrating multi-modal information into DDP.

2023

Non-autoregressive Text Editing with Copy-aware Latent Alignments
Yu Zhang | Yue Zhang | Leyang Cui | Guohong Fu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Recent work has witnessed a paradigm shift from Seq2Seq to Seq2Edit in the field of text editing, with the aim of addressing the slow autoregressive inference problem posed by the former. Despite promising results, Seq2Edit approaches still face several challenges, such as inflexibility in generation and difficulty in generalizing to other languages. In this work, we propose a novel non-autoregressive text editing method that circumvents the above issues by modeling the edit process with latent CTC alignments. We make a crucial extension to CTC by introducing the copy operation into the edit space, thus enabling more efficient handling of textual overlap in editing. We conduct extensive experiments on GEC and sentence fusion tasks, showing that our proposed method significantly outperforms existing Seq2Edit models and achieves similar or even better results than Seq2Seq with a substantial speedup. Moreover, it demonstrates good generalizability on German and Russian. In-depth analyses reveal the strengths of our method in terms of robustness under various scenarios and its ability to generate fluent and flexible outputs.
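
To make the copy extension concrete, below is a minimal sketch of one way to fold a COPY operation into a standard CTC loss in PyTorch: the probability mass the model assigns to COPY at each encoder position is merged into the probability of the source token at that position, after which vanilla CTC marginalizes over all monotonic alignments. All names, shapes, and the folding trick itself are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of copy-aware CTC training for non-autoregressive editing.
# Hypothetical interface; the paper's exact formulation may differ.
import torch
import torch.nn.functional as F

def copy_aware_ctc_loss(logits, src_ids, tgt_ids, src_lens, tgt_lens):
    """logits: (T, B, V + 2) over [blank] + vocabulary + [COPY];
    src_ids: (T, B) source token ids in [1, V] (0 is reserved for blank)."""
    V = logits.size(-1) - 2
    log_probs = F.log_softmax(logits, dim=-1)
    vocab_lp = log_probs[..., : V + 1]           # blank + vocabulary
    copy_lp = log_probs[..., V + 1]              # (T, B) log-prob of COPY
    # Realize COPY by adding its mass to the source token aligned to each
    # encoder position: p'(src_t) = p(src_t) + p(COPY) at every step t.
    src_lp = vocab_lp.gather(-1, src_ids.unsqueeze(-1)).squeeze(-1)
    merged = torch.logaddexp(src_lp, copy_lp)
    vocab_lp = vocab_lp.scatter(-1, src_ids.unsqueeze(-1), merged.unsqueeze(-1))
    # Standard CTC then marginalizes over all monotonic latent alignments.
    return F.ctc_loss(vocab_lp, tgt_ids, src_lens, tgt_lens, blank=0)
```

Since CTC requires the input length to be at least the target length, the source is typically upsampled before encoding in editing setups of this kind.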

2022

RST Discourse Parsing with Second-Stage EDU-Level Pre-training
Nan Yu | Meishan Zhang | Guohong Fu | Min Zhang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Pre-trained language models (PLMs) have shown great potential in natural language processing (NLP), including rhetorical structure theory (RST) discourse parsing. Current PLMs are obtained by sentence-level pre-training, which differs from the basic processing unit of RST parsing, i.e., the elementary discourse unit (EDU). To this end, we propose a second-stage EDU-level pre-training approach in this work, which presents two novel tasks to continually learn effective EDU representations on top of well pre-trained language models. Concretely, the two tasks are (1) next EDU prediction (NEP) and (2) discourse marker prediction (DMP). We take a state-of-the-art transition-based neural parser as the baseline and adapt it with a light bi-gram EDU modification to effectively exploit the EDU-level pre-trained representations. Experimental results on a benchmark dataset show that our method is highly effective, yielding a 2.1-point improvement in F1-score. All code and pre-trained models will be released publicly to facilitate future studies.
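
As an illustration of the first task, the sketch below frames next EDU prediction (NEP) as binary classification of EDU pairs on top of a HuggingFace BERT encoder; the model name and setup are assumptions for exposition, not the authors' code. Discourse marker prediction (DMP) can be implemented analogously as classification over a closed set of discourse markers.

```python
# A minimal sketch of the next-EDU-prediction (NEP) objective, assuming a
# HuggingFace BERT encoder; the paper's actual architecture may differ.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class NEPHead(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.clf = nn.Linear(self.encoder.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask):
        # Encode an "[CLS] EDU_a [SEP] EDU_b [SEP]" pair and classify whether
        # EDU_b is the true next EDU following EDU_a in the document.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.clf(h[:, 0])                 # [CLS] representation

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The economy slowed ."], ["Analysts expect a rebound ."],
            return_tensors="pt", padding=True)
logits = NEPHead()(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))  # 1 = true next EDU
```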

Tracking Satisfaction States for Customer Satisfaction Prediction in E-commerce Service Chatbots
Yang Sun | Liangqing Wu | Shuangyong Song | Xiaoguang Yu | Xiaodong He | Guohong Fu
Proceedings of the 29th International Conference on Computational Linguistics

Due to the increasing use of service chatbots on E-commerce platforms in recent years, customer satisfaction prediction (CSP) is gaining more and more attention. CSP is dedicated to evaluating subjective customer satisfaction in conversational service and thus helps improve the customer service experience. However, previous methods focus on modeling customer-chatbot interaction across different turns, which struggles to represent the important dynamic satisfaction states throughout the customer journey. In this work, we investigate the problem of satisfaction state tracking and its effects on CSP in E-commerce service chatbots. To this end, we propose a dialogue-level classification model named DialogueCSP to track satisfaction states for CSP. In particular, we explore a novel two-step interaction module to represent the dynamic satisfaction states at each turn. To capture dialogue-level satisfaction states for CSP, we further introduce dialogue-aware attentions to integrate historical informative cues into the interaction module. To evaluate the proposed approach, we also build a Chinese E-commerce dataset for CSP. Experimental results demonstrate that our model significantly outperforms multiple baselines, illustrating the benefits of satisfaction state tracking for CSP.

Semantic Role Labeling as Dependency Parsing: Exploring Latent Tree Structures inside Arguments
Yu Zhang | Qingrong Xia | Shilin Zhou | Yong Jiang | Guohong Fu | Min Zhang
Proceedings of the 29th International Conference on Computational Linguistics

Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community. Recent work on SRL mainly falls into two lines: 1) BIO-based and 2) span-based. Despite their ubiquity, both share the intrinsic drawback of not considering internal argument structures, potentially hindering the model's expressiveness. The key challenge is that arguments are flat structures, with no determined subtree realizations for the words inside them. To remedy this, we propose to regard flat argument spans as latent subtrees, thereby reducing SRL to a tree parsing task. In particular, we equip our formulation with a novel span-constrained TreeCRF to make tree structures span-aware, and further extend it to the second-order case. We conduct extensive experiments on the CoNLL05 and CoNLL12 benchmarks. Results reveal that our methods outperform all previous syntax-agnostic works, achieving new state-of-the-art results under both end-to-end and gold-predicate settings.
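
To illustrate the span constraint, the sketch below builds a chart mask that rules out any span crossing a gold argument boundary, so a TreeCRF summing over the masked chart only considers trees in which each argument stays a connected unit. This is a simplified, constituency-chart illustration of the idea under assumed interfaces, not the paper's dependency-based parser.

```python
# A minimal sketch of making a TreeCRF chart span-aware: items that cross a
# gold argument boundary are masked out. Hypothetical interface.
import torch

def span_mask(n, arguments):
    """n: sentence length; arguments: inclusive (b, e) gold argument spans.
    Returns a bool tensor allowed[i, j] over chart spans [i, j]."""
    allowed = torch.ones(n, n, dtype=torch.bool).triu()
    for b, e in arguments:
        for i in range(n):
            for j in range(i, n):
                # Spans cross if they overlap without containment.
                if (i < b <= j < e) or (b < i <= e < j):
                    allowed[i, j] = False
    return allowed

# The argument span (1, 3) rules out chart items such as [0, 2].
mask = span_mask(5, [(1, 3)])
assert not mask[0, 2] and mask[1, 3]
```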

Speaker-Aware Discourse Parsing on Multi-Party Dialogues
Nan Yu | Guohong Fu | Min Zhang
Proceedings of the 29th International Conference on Computational Linguistics

Discourse parsing on multi-party dialogues is an important but difficult task in dialogue systems and conversational analysis. Speaker interactions are believed to be helpful for this task; however, most previous research ignores the interactions between different speakers. To this end, we present a speaker-aware model for this task. Concretely, we propose a speaker-context interaction joint encoding (SCIJE) approach that uses the interaction features between different speakers. In addition, we propose a second-stage pre-training task, same speaker prediction (SSP), which enhances the conversational context representations by predicting whether two utterances are from the same speaker. Experiments on two standard benchmark datasets show that the proposed model achieves the best-reported performance in the literature. We will release the code of this paper to facilitate future research.
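
As a concrete illustration, SSP can be driven entirely by automatically labeled utterance pairs sampled from the dialogues themselves; the sketch below shows one plausible way to build such pairs (all names are hypothetical, not the released code).

```python
# A minimal sketch of constructing same-speaker-prediction (SSP) pairs from a
# multi-party dialogue; illustrative only.
import random

def ssp_pairs(dialogue, k=10):
    """dialogue: list of (speaker, utterance) tuples. Returns (u1, u2, label)
    triples with label 1 iff both utterances share a speaker."""
    pairs = []
    for _ in range(k):
        (s1, u1), (s2, u2) = random.sample(dialogue, 2)
        pairs.append((u1, u2, int(s1 == s2)))
    return pairs

dialogue = [("A", "Did the build pass?"), ("B", "Yes, all green."),
            ("A", "Great, merging now."), ("C", "Hold on, one failing test.")]
print(ssp_pairs(dialogue, k=3))
```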

2021

A Discourse-Aware Graph Neural Network for Emotion Recognition in Multi-Party Conversation
Yang Sun | Nan Yu | Guohong Fu
Findings of the Association for Computational Linguistics: EMNLP 2021

Emotion recognition in multi-party conversation (ERMC) is becoming increasingly popular as an emerging research topic in natural language processing. Prior research focuses on exploring sequential information but ignores the discourse structures of conversations. In this paper, we investigate the importance of discourse structures in handling informative contextual cues and speaker-specific features for ERMC. To this end, we propose a discourse-aware graph neural network (ERMC-DisGCN) for ERMC. In particular, we design a relational convolution to leverage the self-speaker dependency of interlocutors to propagate contextual information. Furthermore, we exploit a gated convolution to select more informative cues for ERMC from dependent utterances. The experimental results show that our method outperforms multiple baselines, illustrating that discourse structures are of great value to ERMC.
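
A relational convolution of this kind can be sketched with an off-the-shelf relational graph layer, treating utterances as nodes, discourse links as edges, and same- versus different-speaker links as edge types. The example below uses torch_geometric's RGCNConv as a stand-in; the graph and dimensions are illustrative, not the paper's configuration.

```python
# A minimal sketch of relational convolution over a conversation graph with
# speaker-dependency edge types; illustrative, not the authors' model.
import torch
from torch_geometric.nn import RGCNConv

# Four utterance nodes with 16-dim features (e.g., from an utterance encoder).
x = torch.randn(4, 16)
# Edges follow discourse links; edge_type 0 = same speaker, 1 = different.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 2, 3, 3]])
edge_type = torch.tensor([1, 0, 1, 0])

conv = RGCNConv(in_channels=16, out_channels=16, num_relations=2)
h = torch.relu(conv(x, edge_index, edge_type))  # (4, 16) contextualized nodes
```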

Chinese Opinion Role Labeling with Corpus Translation: A Pivot Study
Ranran Zhen | Rui Wang | Guohong Fu | Chengguo Lv | Meishan Zhang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Opinion Role Labeling (ORL), which aims to identify the key roles of an opinion, has received increasing interest. Unlike most previous works, which focus on English, in this paper we present the first work on Chinese ORL. We construct a Chinese dataset by manually translating and projecting annotations from the standard English MPQA dataset. Then, we investigate the effectiveness of cross-lingual transfer methods, including model transfer and corpus translation. We exploit multilingual BERT with the Contextual Parameter Generator and Adapter methods to examine the potential of unsupervised cross-lingual learning. Our experiments and analyses of both bilingual and multilingual transfer establish a foundation for future research on this task.

2020

Sentence Matching with Syntax- and Semantics-Aware BERT
Tao Liu | Xin Wang | Chengguo Lv | Ranran Zhen | Guohong Fu
Proceedings of the 28th International Conference on Computational Linguistics

Sentence matching aims to identify the special relationship between two sentences and plays a key role in many natural language processing tasks. However, previous studies mainly focused on exploiting either syntactic or semantic information for sentence matching, and no prior study considers integrating both of them. In this study, we propose integrating syntax and semantics into BERT for sentence matching. In particular, we use an implicit syntax and semantics integration method that is less sensitive to errors in the parsers' output structures, so the implicit integration can alleviate the error propagation problem. The experimental results show that our approach achieves state-of-the-art or competitive performance on several sentence matching datasets, demonstrating the benefits of implicitly integrating syntactic and semantic features in sentence matching.

2019

Enhancing Opinion Role Labeling with Semantic-Aware Word Representations from Semantic Role Labeling
Meishan Zhang | Peili Liang | Guohong Fu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Opinion role labeling (ORL) is an important task for fine-grained opinion mining, which identifies important opinion arguments such as the holder and target for a given opinion trigger. The task is highly correlated with semantic role labeling (SRL), which identifies important semantic arguments such as the agent and patient for a given predicate. As predicate agents and patients usually correspond to opinion holders and targets respectively, SRL can be valuable for ORL. In this work, we propose a simple yet novel method to enhance ORL by utilizing SRL, presenting semantic-aware word representations learned from SRL. These representations are then fed into a baseline neural ORL model as basic inputs. We verify the proposed method on the benchmark MPQA corpus. Experimental results show that the proposed method is highly effective. In addition, we compare the method with two representative methods of SRL integration, finding that our method significantly outperforms both, achieving F-scores 1.47% higher than the better of the two.

Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations
Meishan Zhang | Zhenghua Li | Guohong Fu | Min Zhang
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Syntax has been demonstrated to be highly effective for neural machine translation (NMT). Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which we refer to as syntax-aware word representations (SAWRs). We then simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system and test the effectiveness of our proposed method on two benchmark datasets, for the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach brings significant BLEU score improvements over the baseline on both datasets: 1.74 points for Chinese-English translation and 0.80 points for English-Vietnamese translation. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.
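
The concatenation step is simple enough to sketch directly; the snippet below assumes pre-computed parser hidden states and illustrative dimensions, and is not the authors' implementation.

```python
# A minimal sketch of syntax-aware word representations (SAWRs): parser
# hidden states are concatenated with ordinary word embeddings before the
# NMT encoder. Hypothetical names and dimensions.
import torch
import torch.nn as nn

class SAWREmbedding(nn.Module):
    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)

    def forward(self, tokens, parser_states):
        # tokens: (B, T) ids; parser_states: (B, T, D) intermediate hidden
        # vectors from a well-trained dependency parser, kept frozen.
        return torch.cat([self.embed(tokens), parser_states.detach()], dim=-1)

emb = SAWREmbedding(vocab_size=32000, emb_dim=256)
out = emb(torch.randint(0, 32000, (2, 7)), torch.randn(2, 7, 128))
print(out.shape)  # torch.Size([2, 7, 384]) -> input to the Seq2Seq encoder
```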

Cross-Lingual Dependency Parsing Using Code-Mixed TreeBank
Meishan Zhang | Yue Zhang | Guohong Fu
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Treebank translation is a promising method for the cross-lingual transfer of syntactic dependency knowledge. The basic idea is to map dependency arcs from a source treebank to its target translation according to word alignments. This method, however, can suffer from imperfect alignment between source and target words. To address this problem, we investigate syntactic transfer by code mixing, translating only confident words in a source treebank. Cross-lingual word embeddings are leveraged to transfer syntactic knowledge to the target language from the resulting code-mixed treebank. Experiments on Universal Dependencies treebanks show that code-mixed treebanks are more effective than translated treebanks, giving highly competitive performance among cross-lingual parsing methods.
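
The arc-mapping step can be sketched as follows: an arc is transferred only when both its head and dependent are covered by confident alignment links. This is an illustrative simplification under assumed data structures, not the paper's pipeline.

```python
# A minimal sketch of projecting dependency arcs through word alignments;
# hypothetical data structures, not the authors' code.
def project_arcs(src_heads, alignment):
    """src_heads[d] = head index of source word d (-1 marks the root).
    alignment: dict from source index to target index (confident links only).
    Returns target-side arcs as {dependent: head}."""
    tgt_arcs = {}
    for dep, head in enumerate(src_heads):
        if dep in alignment and (head == -1 or head in alignment):
            tgt_arcs[alignment[dep]] = -1 if head == -1 else alignment[head]
    return tgt_arcs

# Source tree: word 1 is the root; words 0 and 2 attach to it. Word 2 has no
# confident alignment, so its arc is not transferred.
print(project_arcs([1, -1, 1], {0: 0, 1: 2}))  # {0: 2, 2: -1}
```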

2018

Transition-based Neural RST Parsing with Implicit Syntax Features
Nan Yu | Meishan Zhang | Guohong Fu
Proceedings of the 27th International Conference on Computational Linguistics

Syntax has been a useful source of information for statistical RST discourse parsing. Under the neural setting, a common approach integrates syntax via a recursive neural network (RNN), requiring discrete output trees produced by a supervised syntax parser. In this paper, we propose an implicit syntax feature extraction approach that uses hidden-layer vectors extracted from a neural syntax parser. In addition, we propose a simple transition-based model as the baseline, further enhancing it with a dynamic oracle. Experiments on the standard dataset show that our baseline model with the dynamic oracle is highly competitive. When implicit syntax features are integrated, we obtain further improvements, outperforming the explicit Tree-RNN approach.

2017

End-to-End Neural Relation Extraction with Global Optimization
Meishan Zhang | Yue Zhang | Guohong Fu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Neural networks have shown promising results for relation extraction. State-of-the-art models cast the task as an end-to-end problem, solved incrementally using a local classifier. Yet previous work using statistical models has demonstrated that global optimization can achieve better performance than local classification. We build a globally optimized neural model for end-to-end relation extraction, proposing novel LSTM features to better learn context representations. In addition, we present a novel method to integrate syntactic information that facilitates global learning yet requires little background in syntactic grammar, making it easy to extend. Experimental results show that our proposed model is highly effective, achieving the best performance on two standard benchmarks.

2016

Tweet Sarcasm Detection Using Deep Neural Network
Meishan Zhang | Yue Zhang | Guohong Fu
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Sarcasm detection has been modeled as a binary document classification task, with rich features defined manually over input documents. Traditional models employ discrete manual features to address the task, with much research effort devoted to the design of effective feature templates. We investigate the use of neural networks for tweet sarcasm detection and compare the effects of continuous automatic features with discrete manual features. In particular, we use a bi-directional gated recurrent neural network to capture syntactic and semantic information over tweets locally, and a pooling neural network to extract contextual features automatically from history tweets. Results show that neural features give improved accuracies for sarcasm detection, with different error distributions compared to discrete manual features.
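
The continuous-feature side of this comparison can be sketched in a few lines of PyTorch: a bidirectional GRU encodes the target tweet while a pooling layer summarizes history tweets. The mean pooling here is a simple stand-in for the paper's pooling network, and all hyper-parameters are illustrative.

```python
# A minimal sketch of a BiGRU tweet encoder combined with pooled history
# features; illustrative hyper-parameters, not the paper's settings.
import torch
import torch.nn as nn

class SarcasmNet(nn.Module):
    def __init__(self, vocab=20000, dim=100, hid=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.bigru = nn.GRU(dim, hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hid + dim, 2)   # tweet + pooled history

    def forward(self, tweet, history):
        # tweet: (B, T) token ids; history: (B, H, T) ids of earlier tweets.
        _, h = self.bigru(self.emb(tweet))       # h: (2, B, hid)
        tweet_vec = torch.cat([h[0], h[1]], dim=-1)
        hist_vec = self.emb(history).mean(dim=(1, 2))  # mean-pooled history
        return self.out(torch.cat([tweet_vec, hist_vec], dim=-1))

logits = SarcasmNet()(torch.randint(0, 20000, (2, 12)),
                      torch.randint(0, 20000, (2, 5, 12)))
```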

Transition-Based Neural Word Segmentation
Meishan Zhang | Yue Zhang | Guohong Fu
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Polarity Classification of Short Product Reviews via Multiple Cluster-based SVM Classifiers
Jiaying Song | Yu He | Guohong Fu
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters

2014

Improving Chinese Sentence Polarity Classification via Opinion Paraphrasing
Guohong Fu | Yu He | Jiaying Song | Chaoyue Wang
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

2013

Description of HLJU Chinese Spelling Checker for SIGHAN Bakeoff 2013
Yu He | Guohong Fu
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing

2012

A CRF Sequence Labeling Approach to Chinese Punctuation Prediction
Yanqing Zhao | Chaoyue Wang | Guohong Fu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

Chinese Tweets Segmentation based on Morphemes
Chaoyue Wang | Guohong Fu
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing

2010

Chinese Sentence-Level Sentiment Classification Based on Fuzzy Sets
Guohong Fu | Xin Wang
Coling 2010: Posters

2008

A Morpheme-based Part-of-Speech Tagger for Chinese
Guohong Fu | Jonathan J. Webster
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing

2005

Description of the HKU Chinese Word Segmentation System for Sighan Bakeoff 2005
Guohong Fu | Kang-Kwong Luke | Percy Ping-Wai Wong
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing

2003

An integrated approach for Chinese word segmentation
Guohong Fu | K.K. Luke
Proceedings of the 17th Pacific Asia Conference on Language, Information and Computation

A Two-stage Statistical Word Segmentation System for Chinese
Guohong Fu | Kang-Kwong Luke
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing