Dianhai Yu


2024

NACL: A General and Effective KV Cache Eviction Framework for LLM at Inference Time
Yilong Chen | Guoxia Wang | Junyuan Shang | Shiyao Cui | Zhenyu Zhang | Tingwen Liu | Shuohuan Wang | Yu Sun | Dianhai Yu | Hua Wu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Large Language Models (LLMs) have ignited an innovative surge of AI applications, marking a new era of exciting possibilities equipped with extended context windows. However, hosting these models is cost-prohibitive, mainly due to the extensive memory consumption of the KV Cache involved in long-context modeling. Despite several works proposing to evict unnecessary tokens from the KV Cache, most of them rely on the biased local statistics of accumulated attention scores and report performance using unconvincing metrics like perplexity on inadequate short-text evaluation. In this paper, we propose NACL, a general framework for long-context KV cache eviction that achieves more effective and efficient eviction in a single operation during the encoding phase. Owing to NACL’s efficiency, we combine the more accurate attention-score statistics of Proxy-Tokens Eviction with the diversified random eviction strategy of Random Eviction, aiming to alleviate the issue of attention bias and enhance robustness in maintaining pivotal tokens for long-context modeling tasks. Notably, our method significantly improves performance on short- and long-text tasks by 80% and 76% respectively, reducing the KV Cache by up to 5× with over 95% performance maintenance. Code available at https://github.com/PaddlePaddle/Research/tree/master/NLP/ACL2024-NACL.
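
As a rough illustration of the eviction recipe the abstract describes, the following is a minimal NumPy sketch combining the two signals it names: importance scores read off the attention rows of a few proxy tokens, plus a randomized keep-set for robustness, applied in a single pass. This is not the authors' implementation; the function name nacl_style_eviction and the parameters proxy_idx, keep_budget, and random_frac are illustrative assumptions.

```python
import numpy as np

def nacl_style_eviction(attn, proxy_idx, keep_budget, random_frac=0.2, seed=0):
    """Pick which KV-cache positions to keep for one attention head.

    attn        : (seq_len, seq_len) attention matrix from the encoding phase
    proxy_idx   : indices of "proxy" tokens whose attention rows serve as
                  less biased importance estimates than accumulated scores
    keep_budget : total number of cache positions to retain
    random_frac : fraction of the budget filled by randomly kept tokens
    """
    rng = np.random.default_rng(seed)
    seq_len = attn.shape[0]

    # Proxy-Tokens Eviction: score each position by the attention it
    # receives from the proxy tokens only, not from all accumulated rows.
    scores = attn[proxy_idx].sum(axis=0)

    n_random = int(keep_budget * random_frac)
    n_top = keep_budget - n_random
    top_keep = np.argsort(scores)[::-1][:n_top]

    # Random Eviction: also keep a random sample of the remaining positions
    # so tokens with temporarily low scores still have a chance to survive.
    rest = np.setdiff1d(np.arange(seq_len), top_keep)
    rand_keep = rng.choice(rest, size=min(n_random, rest.size), replace=False)

    return np.sort(np.concatenate([top_keep, rand_keep]))

# Toy usage: a 12-token sequence with proxies at the last two positions,
# keeping 6 of 12 cache entries in a single eviction operation.
attn = np.random.default_rng(1).random((12, 12))
print(nacl_style_eviction(attn, proxy_idx=np.array([10, 11]), keep_budget=6))
```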

2022

PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit
Hui Zhang | Tian Yuan | Junkun Chen | Xintong Li | Renjie Zheng | Yuxin Huang | Xiaojie Chen | Enlei Gong | Zeyu Chen | Xiaoguang Hu | Dianhai Yu | Yanjun Ma | Liang Huang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations

PaddleSpeech is an open-source all-in-one speech toolkit. It aims to facilitate the development and research of speech processing technologies by providing an easy-to-use command-line interface and a simple code structure. This paper describes the design philosophy and core architecture of PaddleSpeech in support of several essential speech-to-text and text-to-speech tasks. PaddleSpeech achieves competitive or state-of-the-art performance on various speech datasets and implements the most popular methods. It also provides recipes and pretrained models to quickly reproduce the experimental results in this paper. PaddleSpeech is publicly available at https://github.com/PaddlePaddle/PaddleSpeech.
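
For a sense of the "easy-to-use" interface the abstract claims, here is a short Python sketch of the executor-style API documented in the project README; the module paths (paddlespeech.cli.asr.infer, paddlespeech.cli.tts.infer) and keyword arguments are taken on trust from that documentation and may differ across releases, so treat them as assumptions.

```python
# Hedged sketch of PaddleSpeech's executor-style Python API; module paths
# and keyword arguments follow the project README but may vary by release.
from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.tts.infer import TTSExecutor

# Speech-to-text: transcribe a 16 kHz WAV file.
asr = ASRExecutor()
text = asr(audio_file="input_16k.wav")
print(text)

# Text-to-speech: synthesize a sentence to a WAV file.
tts = TTSExecutor()
tts(text="PaddleSpeech is an all-in-one speech toolkit.", output="output.wav")
```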

2018

Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network
Xiangyang Zhou | Lu Li | Daxiang Dong | Yi Liu | Ying Chen | Wayne Xin Zhao | Dianhai Yu | Hua Wu
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017), and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we extract the truly matched segment pairs with attention across the context and response. We jointly introduce these two kinds of attention in one unified neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models.
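
The two attention uses named in the abstract can be sketched in a few lines of NumPy: stacked self-attention builds representations at increasing granularities, and cross-attention between context utterance and response yields per-granularity similarity maps. All function names here are illustrative, and this toy omits masking, residual connections, and the aggregation the published model applies over the stacked maps; it is a sketch of the idea, not the DAM architecture itself.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention (Vaswani et al., 2017)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def stacked_self_attention(x, n_layers=3):
    """Representations at different granularities: layer 0 is the word-level
    input; each deeper layer mixes progressively wider segments."""
    reps = [x]
    for _ in range(n_layers):
        x = attention(x, x, x)
        reps.append(x)
    return reps

def match(context_utt, response, n_layers=3):
    """Cross-attend each granularity of an utterance with the response and
    collect (utt_len, resp_len) similarity maps for matching."""
    u_reps = stacked_self_attention(context_utt, n_layers)
    r_reps = stacked_self_attention(response, n_layers)
    maps = []
    for u, r in zip(u_reps, r_reps):
        # Cross-attention: represent the utterance in terms of the response,
        # so truly matched segment pairs produce strong similarities.
        u_cross = attention(u, r, r)
        maps.append(u_cross @ r.T)
    return np.stack(maps)  # (n_layers + 1, utt_len, resp_len)

# Toy usage: 5-token utterance, 4-token response, 16-dim embeddings.
rng = np.random.default_rng(0)
print(match(rng.random((5, 16)), rng.random((4, 16))).shape)
```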

2016

Multi-view Response Selection for Human-Computer Conversation
Xiangyang Zhou | Daxiang Dong | Hua Wu | Shiqi Zhao | Dianhai Yu | Hao Tian | Xuan Liu | Rui Yan
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Multi-Task Learning for Multiple Language Translation
Daxiang Dong | Hua Wu | Wei He | Dianhai Yu | Haifeng Wang
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Improve Statistical Machine Translation with Context-Sensitive Bilingual Semantic Embedding Model
Haiyang Wu | Daxiang Dong | Xiaoguang Hu | Dianhai Yu | Wei He | Hua Wu | Haifeng Wang | Ting Liu
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Compound Embedding Features for Semi-supervised Learning
Mo Yu | Tiejun Zhao | Daxiang Dong | Hao Tian | Dianhai Yu
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Generalization of Words for Chinese Dependency Parsing
Xianchao Wu | Jie Zhou | Yu Sun | Zhanyi Liu | Dianhai Yu | Hua Wu | Haifeng Wang
Proceedings of the 13th International Conference on Parsing Technologies (IWPT 2013)

Cross-lingual Projections between Languages from Different Families
Mo Yu | Tiejun Zhao | Yalong Bai | Hao Tian | Dianhai Yu
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2008

An Improved CRF based Chinese Language Processing System for SIGHAN Bakeoff 2007
Xihong Wu | Xiaojun Lin | Xinhao Wang | Chunyao Wu | Yaozhong Zhang | Dianhai Yu
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing

2006

Chinese Word Segmentation with Maximum Entropy and N-gram Language Model
Xinhao Wang | Xiaojun Lin | Dianhai Yu | Hao Tian | Xihong Wu
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing