2024
Continual Contrastive Spoken Language Understanding
Umberto Cappellazzo | Enrico Fini | Muqiao Yang | Daniele Falavigna | Alessio Brutti | Bhiksha Raj
Findings of the Association for Computational Linguistics: ACL 2024
Recently, neural networks have shown impressive progress across diverse fields, with speech processing being no exception. However, recent breakthroughs in this area require extensive offline training using large datasets and tremendous computing resources. Unfortunately, these models struggle to retain their previously acquired knowledge when learning new tasks continually. In this paper, we investigate the problem of learning sequence-to-sequence models for spoken language understanding (SLU) in a class-incremental learning (CIL) setting, and we propose COCONUT, a CIL method that relies on the combination of experience replay and contrastive learning. Through a modified version of the standard supervised contrastive loss, COCONUT preserves the learned representations by pulling samples from the same class closer together and pushing the others away. Moreover, we leverage a multimodal contrastive loss that helps the model learn more discriminative representations of the new data by aligning audio and text features. We also investigate different contrastive designs to combine the strengths of the contrastive loss with teacher-student architectures used for distillation. Experiments on two established SLU datasets reveal the effectiveness of our proposed approach and significant improvements over the baselines. We also show that COCONUT can be combined with methods that operate on the decoder side of the model, resulting in further improvements on the evaluation metrics.
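For reference, the standard supervised contrastive loss that COCONUT modifies (the abstract does not reproduce the modified version, so only the well-known base formulation is sketched here) can be written, for embeddings $z_i$, positives $P(i)$ from the same class, the remaining batch samples $A(i)$, and temperature $\tau$, as:

\[
\mathcal{L}_{\mathrm{sup}} \;=\; \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
\]

Samples sharing a class with $i$ are pulled together while all other samples in the batch are pushed apart, which is exactly the preserving behavior the abstract describes.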
2021
Seed Words Based Data Selection for Language Model Adaptation
Roberto Gretter | Marco Matassoni | Daniele Falavigna
Proceedings of the 1st Workshop on Automatic Spoken Language Translation in Real-World Settings (ASLTRW)
We address the problem of language model customization in applications where the ASR component needs to manage domain-specific terminology; although current state-of-the-art speech recognition technology provides excellent results for generic domains, adaptation to specialized dictionaries or glossaries is still an open issue. In this work, we present an approach for automatically selecting, from a text corpus, sentences that match, both semantically and morphologically, a glossary of terms (words or composite words) furnished by the user. The final goal is to rapidly adapt the language model of a hybrid ASR system with a limited amount of in-domain text data in order to successfully cope with the linguistic domain at hand; the vocabulary of the baseline model is expanded and tailored, reducing the resulting OOV rate. Data selection strategies based on shallow morphological seeds and semantic similarity via word2vec are introduced and discussed; the experimental setting consists of a simultaneous interpreting scenario, where ASR systems in three languages are designed to recognize the domain-specific terms (here, dentistry). Results using different metrics (OOV rate, WER, precision, and recall) show the effectiveness of the proposed techniques.
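As an illustration of the word2vec-based selection strategy described above, the sketch below scores candidate sentences by their cosine similarity to the user-provided seed words. Function names, hyperparameters, and the threshold are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of seed-word-driven sentence selection via word2vec.
# All names and thresholds are assumptions for illustration only.
from gensim.models import Word2Vec

def select_sentences(corpus_sentences, seed_words, sim_threshold=0.6):
    """Keep sentences containing words semantically close to a seed term."""
    # Train (or load) a word2vec model on the general-domain corpus;
    # each element of corpus_sentences is a list of tokens.
    model = Word2Vec(sentences=corpus_sentences, vector_size=200,
                     window=5, min_count=2, workers=4)
    seeds = [w for w in seed_words if w in model.wv]
    selected = []
    for sent in corpus_sentences:
        # Score a sentence by its best word-to-seed cosine similarity.
        score = max((model.wv.similarity(w, s)
                     for w in sent if w in model.wv for s in seeds),
                    default=0.0)
        if score >= sim_threshold:
            selected.append(sent)
    return selected
```

The sentences retained in this way are the in-domain material used to retrain and expand the vocabulary of the baseline language model.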
2020
Automatically Assess Children’s Reading Skills
Ornella Mich | Nadia Mana | Roberto Gretter | Marco Matassoni | Daniele Falavigna
Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI)
Assessing reading skills is an important task teachers have to perform at the beginning of a new school year to evaluate the starting level of the class and properly plan the next learning activities. Digital tools based on automatic speech recognition (ASR) can be very useful in supporting teachers in this task, which is currently very time-consuming and prone to human error. This paper presents a web application for automatically assessing the fluency and accuracy of oral reading in children attending Italian primary and lower secondary schools. Our system, based on ASR technology, implements Cornoldi’s MT battery, a well-known Italian test for assessing reading skills. The front-end of the system has been designed following a participatory design approach, involving end users from the beginning of the creation process. Teachers may use our system both to test students’ reading skills and to monitor their performance over time. In fact, the system offers an effective graphical visualization of the assessment results for both individual students and the entire class. The paper also presents the results of a pilot study evaluating the system’s usability with teachers.
2016
TranscRater: a Tool for Automatic Speech Recognition Quality Estimation
Shahab Jalalvand | Matteo Negri | Marco Turchi | José G. C. de Souza | Daniele Falavigna | Mohammed R. H. Qwaider
Proceedings of ACL-2016 System Demonstrations
2015
Multitask Learning for Adaptive Quality Estimation of Automatically Transcribed Utterances
José G. C. de Souza | Hamed Zamani | Matteo Negri | Marco Turchi | Daniele Falavigna
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Driving ROVER with Segment-based ASR Quality Estimation
Shahab Jalalvand | Matteo Negri | Daniele Falavigna | Marco Turchi
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
2014
Quality Estimation for Automatic Speech Recognition
Matteo Negri | Marco Turchi | José G. C. de Souza | Daniele Falavigna
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers
2013
FBK @ IWSLT 2013 – ASR tracks
Daniele Falavigna | Roberto Gretter | Fabio Brugnara | Diego Giuliani
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper reports on the participation of FBK in the IWSLT 2013 evaluation campaign on automatic speech recognition (ASR), specifically in both the English and German ASR tracks. Only primary submissions were sent for evaluation. For English, the ASR system features acoustic models trained on a portion of the TED talk recordings that was automatically selected according to the fidelity of the provided transcriptions. Two decoding steps are performed, interleaved with acoustic feature normalization and acoustic model adaptation. A final step combines the outputs obtained after rescoring the word graphs generated in the second decoding step with four different language models. The latter are trained on out-of-domain text data, in-domain data, and several sets of automatically selected data. For German, acoustic models have been trained on automatically selected portions of a broadcast news corpus called “Euronews”. Unlike for English, in this case only two decoding steps are carried out, without any rescoring procedure.
Parameter optimization for iterative confusion network decoding in weather-domain speech recognition
Shahab Jalalvand | Daniele Falavigna
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers
In this paper, we apply a set of approaches to efficiently rescore the output of automatic speech recognition on weather-domain data. Since the in-domain data are usually insufficient for training an accurate language model (LM), we use an automatic selection method to extract domain-related sentences from a general text resource. Then, an N-gram language model is trained on this set. We exploit this LM, along with a pre-trained acoustic model, to recognize the development and test instances. The recognizer generates a confusion network (CN) for each instance. Afterwards, we use a recurrent neural network language model (RNNLM), trained on the in-domain data, to iteratively rescore the CNs. Rescoring the CNs in this way requires estimating the weights of the RNNLM, N-gram LM, and acoustic model scores. Weight optimization is the critical part of this work; we propose using the minimum error rate training (MERT) algorithm together with a novel N-best list extraction method. The experiments are carried out on weather forecast domain data provided in the framework of the EUBRIDGE project.
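The weight estimation described above amounts to tuning a log-linear combination of the three scores; a generic form (the notation here is ours, not the paper's) is:

\[
\hat{W} \;=\; \arg\max_{W \in \mathrm{CN}} \Big( \lambda_{\mathrm{ac}} \log P_{\mathrm{ac}}(W) \;+\; \lambda_{\mathrm{ng}} \log P_{\mathrm{ng}}(W) \;+\; \lambda_{\mathrm{rnn}} \log P_{\mathrm{rnn}}(W) \Big)
\]

where the interpolation weights $\lambda$ are estimated with MERT on extracted N-best lists so as to minimize the recognition error on the development set.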
2002
The C-ORAL-ROM Project. New methods for spoken language archives in a multilingual romance corpus
Emanuela Cresti | Massimo Moneglia | Fernanda Bacelar do Nascimento | Antonio Moreno Sandoval | Jean Veronis | Philippe Martin | Kalid Choukri | Valerie Mapelli | Daniele Falavigna | Antonio Cid | Claude Blum
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)