Weipeng Huang
2020
Towards Fast and Accurate Neural Chinese Word Segmentation with Multi-Criteria Learning
Weipeng Huang | Xingyi Cheng | Kunlong Chen | Taifeng Wang | Wei Chu
Proceedings of the 28th International Conference on Computational Linguistics
Ambiguous annotation criteria cause Chinese Word Segmentation (CWS) datasets to diverge in granularity. Multi-criteria Chinese word segmentation aims to capture the various annotation criteria across datasets and leverage their common underlying knowledge. In this paper, we propose a domain-adaptive segmenter that exploits the diverse criteria of various datasets. Our model is based on Bidirectional Encoder Representations from Transformers (BERT), which introduces open-domain knowledge. Private and shared projection layers are proposed to capture domain-specific knowledge and common knowledge, respectively. We also optimize computational efficiency via distillation, quantization, and compiler optimization. Experiments show that our segmenter outperforms previous state-of-the-art (SOTA) models on 10 CWS datasets with superior efficiency.
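The private/shared projection idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: all dimensions, weight names, and the additive combination of the two projections are assumptions made for clarity.

```python
import numpy as np

# Hypothetical sizes: encoder hidden size d, 4 CWS tags (B/M/E/S), 3 criteria.
d, n_tags, n_criteria = 8, 4, 3
rng = np.random.default_rng(0)

# One private projection per annotation criterion, plus one shared projection
# intended to capture knowledge common to all datasets.
W_private = [rng.normal(size=(d, n_tags)) for _ in range(n_criteria)]
W_shared = rng.normal(size=(d, n_tags))

def tag_logits(hidden, criterion_id):
    """Combine criterion-specific and shared knowledge by summing projections."""
    return hidden @ W_private[criterion_id] + hidden @ W_shared

hidden = rng.normal(size=(5, d))  # encoder output for a 5-token sentence
logits = tag_logits(hidden, criterion_id=1)
print(logits.shape)  # one score per segmentation tag per token: (5, 4)
```

At inference time, only the criterion id of the target dataset selects which private head to use; the shared projection is applied regardless.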
2019
Variational Semi-Supervised Aspect-Term Sentiment Analysis via Transformer
Xingyi Cheng | Weidi Xu | Taifeng Wang | Wei Chu | Weipeng Huang | Kunlong Chen | Junfeng Hu
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Aspect-term sentiment analysis (ATSA) is a long-standing challenge in natural language processing. It requires fine-grained semantic reasoning about a target entity that appears in the text. Because manual annotation of aspects is laborious and time-consuming, the amount of labeled data available for supervised learning is limited. This paper proposes a semi-supervised method for the ATSA problem using a Transformer-based Variational Autoencoder. The model learns the latent distribution via variational inference. By disentangling the latent representation into the aspect-specific sentiment and the lexical context, our method induces the underlying sentiment prediction for the unlabeled data, which then benefits the ATSA classifier. Our method is classifier-agnostic, i.e., the classifier is an independent module into which various supervised models can be integrated. Experimental results on SemEval 2014 Task 4 show that our method is effective with five different classifiers and outperforms them by a significant margin.
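The disentangling step in the abstract, splitting the latent representation into an aspect-specific sentiment code and a lexical-context code, can be sketched with the standard VAE reparameterization trick. This is an illustrative NumPy sketch under assumed sizes and linear "encoders"; the paper's actual model uses a Transformer.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_sent, d_ctx = 16, 3, 8  # hypothetical input and latent sizes

# Hypothetical linear encoders producing mean and log-variance for each latent.
W_mu_s, W_lv_s = rng.normal(size=(d_in, d_sent)), rng.normal(size=(d_in, d_sent))
W_mu_c, W_lv_c = rng.normal(size=(d_in, d_ctx)), rng.normal(size=(d_in, d_ctx))

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

x = rng.normal(size=d_in)  # pooled sentence/aspect representation
z_sentiment = reparameterize(x @ W_mu_s, x @ W_lv_s, rng)  # aspect sentiment
z_context = reparameterize(x @ W_mu_c, x @ W_lv_c, rng)    # lexical context
print(z_sentiment.shape, z_context.shape)
```

Because the two latents are sampled independently, a classifier can read sentiment from `z_sentiment` while the decoder reconstructs the text from both codes, which is what makes the sentiment prediction usable on unlabeled data.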
Co-authors
- Xingyi Cheng 2
- Taifeng Wang 2
- Wei Chu 2
- Kunlong Chen 2
- Weidi Xu 1