2024
Code-Switching and Back-Transliteration Using a Bilingual Model
Daniel Weisberg Mitelman | Nachum Dershowitz | Kfir Bar
Findings of the Association for Computational Linguistics: EACL 2024
The challenges of automated transliteration and code-switching detection in Judeo-Arabic texts are addressed. We introduce two novel machine-learning models, one focused on transliterating Judeo-Arabic into Arabic, and another aimed at identifying non-Arabic words, predominantly Hebrew and Aramaic. Unlike prior work, our models are based on a bilingual Arabic-Hebrew language model, providing a unique advantage in capturing shared linguistic nuances. Evaluation results show that our models outperform prior solutions for the same tasks. As a practical contribution, we present a comprehensive pipeline capable of taking Judeo-Arabic text, identifying non-Arabic words, and then transliterating the Arabic portions into Arabic script. This work not only advances the state of the art but also offers a valuable toolset for making Judeo-Arabic texts more accessible to a broader Arabic-speaking audience.
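To make the pipeline's shape concrete, here is a deliberately simplified sketch, not the authors' code: a hard-coded word list and a partial character map stand in for the bilingual neural models that perform language tagging and transliteration in the actual system.

```python
# Toy sketch of the pipeline: tag each token as Arabic vs. Hebrew/Aramaic,
# then transliterate only the Arabic-tagged tokens into Arabic script.
# The real system uses a bilingual neural language model for both steps;
# the word list and character map below are illustrative placeholders.

NON_ARABIC = {"תורה", "שבת"}          # toy list of Hebrew words to leave untouched

HEB_TO_ARA = {                         # partial Judeo-Arabic -> Arabic character map
    "א": "ا", "ב": "ب", "ד": "د", "כ": "ك", "ל": "ل",
    "מ": "م", "ם": "م", "ת": "ت", "ע": "ع", "ר": "ر", "י": "ي",
}

def tag_language(token: str) -> str:
    """Label a token as 'arabic' or 'other' (Hebrew/Aramaic)."""
    return "other" if token in NON_ARABIC else "arabic"

def transliterate(token: str) -> str:
    """Map a Judeo-Arabic token (Hebrew script) to Arabic script, char by char."""
    return "".join(HEB_TO_ARA.get(ch, ch) for ch in token)

def process(text: str) -> str:
    return " ".join(
        transliterate(tok) if tag_language(tok) == "arabic" else tok
        for tok in text.split()
    )

if __name__ == "__main__":
    # "אלכתאב" (the book) is transliterated to "الكتاب"; "תורה" is Hebrew and kept as-is.
    print(process("אלכתאב תורה"))
```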
2023
A Statistical Exploration of Text Partition Into Constituents: The Case of the Priestly Source in the Books of Genesis and Exodus
Gideon Yoffe | Axel Bühler | Nachum Dershowitz | Thomas Romer | Eli Piasetzky | Israel Finkelstein | Barak Sober
Findings of the Association for Computational Linguistics: ACL 2023
We present a pipeline for a statistical stylometric exploration of a hypothesized partition of a text. Given a parameterization of the text, our pipeline: (1) detects literary features yielding the optimal overlap between the hypothesized and unsupervised partitions, (2) performs a hypothesis-testing analysis to quantify the statistical significance of the optimal overlap, while conserving implicit correlations between units of text that are more likely to be grouped, and (3) extracts the features most responsible for the classification and quantifies their importance, statistical stability, and cluster-wise abundance. We apply our pipeline to the first two books in the Bible, where one stylistic component stands out in the eyes of biblical scholars, namely, the Priestly component. We identify and explore statistically significant stylistic differences between the Priestly and non-Priestly components.
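For readers who want the shape of such a pipeline, here is a deliberately simplified sketch, not the authors' code: cluster text units by word counts, score the overlap with the hypothesized two-way (e.g., Priestly vs. non-Priestly) labels, and estimate significance with a naive permutation test. The paper's actual test preserves correlations between neighboring units, which this sketch ignores.

```python
# Simplified illustration of overlap scoring plus a naive permutation test.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def overlap_significance(units, hypothesized_labels, n_perm=1000, seed=0):
    """Score overlap between an unsupervised 2-way clustering and a hypothesized partition.

    `units` is a list of text units (strings); `hypothesized_labels` their proposed labels.
    Returns (observed overlap, permutation p-value).
    """
    X = CountVectorizer(analyzer="word").fit_transform(units)
    found = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    observed = adjusted_rand_score(hypothesized_labels, found)

    rng = np.random.default_rng(seed)
    null = [
        adjusted_rand_score(rng.permutation(hypothesized_labels), found)
        for _ in range(n_perm)
    ]
    p_value = float(np.mean([score >= observed for score in null]))
    return observed, p_value
```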
2022
Masking Morphosyntactic Categories to Evaluate Salience for Schizophrenia Diagnosis
Yaara Shriki | Ido Ziv | Nachum Dershowitz | Eiran Harel | Kfir Bar
Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology
Natural language processing tools have been shown to be effective for detecting symptoms of schizophrenia in transcribed speech. We analyze and assess the contribution of the various syntactic and morphological categories to successful machine classification of texts produced by subjects with schizophrenia and by others. Specifically, we fine-tune a language model for the classification task and mask all words belonging to each category of interest. The speech samples were generated in a controlled way by interviewing inpatients officially diagnosed with schizophrenia, along with a corresponding group of healthy controls. All participants are native Hebrew speakers. Our results show that nouns are the most significant category for classification performance.
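The masking protocol lends itself to a compact sketch. The snippet below is a minimal illustration rather than the study's code: it masks every word of a chosen category and reports the resulting drop in classification accuracy, leaving the tagger and the fine-tuned classifier abstract (the study used Hebrew-specific tools).

```python
# Minimal sketch of category masking for salience estimation.
MASK = "[MASK]"

def mask_category(tokens, pos_tags, category):
    """Replace all tokens tagged with `category` (e.g. 'NOUN') by the mask token."""
    return [MASK if tag == category else tok for tok, tag in zip(tokens, pos_tags)]

def category_salience(classify, samples, category):
    """Drop in accuracy when `category` is masked; a larger drop means a more salient category.

    `classify(tokens) -> 0/1` is the fine-tuned classifier (placeholder);
    `samples` is a list of (tokens, pos_tags, gold_label) triples.
    """
    plain = [classify(toks) == gold for toks, _, gold in samples]
    masked = [
        classify(mask_category(toks, tags, category)) == gold
        for toks, tags, gold in samples
    ]
    return sum(plain) / len(plain) - sum(masked) / len(masked)
```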
Style Classification of Rabbinic Literature for Detection of Lost Midrash Tanhuma Material
Solomon Tannor | Nachum Dershowitz | Moshe Lavee
Proceedings of the 2nd International Workshop on Natural Language Processing for Digital Humanities
Midrash collections are complex rabbinic works that consist of text in multiple languages and evolved through long processes of unstable oral and written transmission. Determining the origin of a given passage in such a compilation is not always straightforward and is often disputed by scholars, yet it is essential for understanding the passage and its relationship to other texts in the rabbinic corpus. To help solve this problem, we propose a system for classification of rabbinic literature based on its style, leveraging recently released pretrained Transformer models for Hebrew. Additionally, we demonstrate how our method can be applied to uncover lost material from the Midrash Tanhuma.
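As a rough sketch of the approach, not the authors' released code, a passage can be scored with a pretrained Hebrew Transformer carrying a two-way classification head. The model id (AlephBERT) and the label scheme below are assumptions for illustration, and the head must of course be fine-tuned on passages of known origin before the scores mean anything.

```python
# Skeleton for style-scoring a rabbinic passage with a pretrained Hebrew Transformer.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "onlplab/alephbert-base"   # assumed; any pretrained Hebrew BERT would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def tanhuma_score(passage: str) -> float:
    """Probability (after fine-tuning) that `passage` is Tanhuma-like in style."""
    inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()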
How Much Does Lookahead Matter for Disambiguation? Partial Arabic Diacritization Case Study
Saeed Esmail | Kfir Bar | Nachum Dershowitz
Computational Linguistics, Volume 48, Issue 4 - December 2022
We suggest a model for partial diacritization of deep orthographies. We focus on Arabic, where the optional indication of selected vowels by means of diacritics can resolve ambiguity and improve readability. Our partial diacritizer restores short vowels only when they contribute to ease of understanding while reading a given running text. The idea is to identify those uncertainties caused by absent vowels that require the reader to look ahead to disambiguate. To achieve this, two independent neural networks are used for predicting diacritics: one that takes the entire sentence as input and another that considers only the text that has been read thus far. Partial diacritization is then determined by retaining precisely those vowels on which the two networks disagree, preferring the reading based on the whole sentence over the more naïve reading-order diacritization. For evaluation, we prepared a new dataset of Arabic texts with both full and partial vowelization. In addition to facilitating readability, we find that our partial diacritization improves translation quality compared with either the total absence of diacritics or a random selection of them. Lastly, we study the benefit of knowing the text that follows the word in focus toward the restoration of short vowels during reading, and we measure the degree to which lookahead contributes to resolving ambiguities encountered while reading.

Epigraph: "L'Herbelot had asserted, that the most ancient Korans, written in the Cufic character, had no vowel points; and that these were first invented by Jahia ben Jamer, who died in the 127th year of the Hegira." ("Toderini's History of Turkish Literature," Analytical Review, 1789)
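The selection rule at the core of the method can be stated in a few lines: diacritize a word only where the whole-sentence model and the prefix-only (reading-order) model disagree, and in that case keep the whole-sentence prediction. A schematic sketch, with the two predictors left abstract rather than the published networks:

```python
# Schematic sketch of the disagreement rule described above.
def partial_diacritization(words, full_model, prefix_model):
    """Return words diacritized only where the two predictors disagree.

    `full_model(words, i)` predicts the diacritized form of words[i] given the
    whole sentence; `prefix_model(prefix, i)` sees only the text read so far.
    Both are placeholders for the paper's neural diacritizers.
    """
    out = []
    for i, word in enumerate(words):
        with_lookahead = full_model(words, i)                  # sees the whole sentence
        without_lookahead = prefix_model(words[: i + 1], i)    # sees only the prefix
        # Keep the vowels only when lookahead changes the reading, preferring
        # the whole-sentence prediction in that case.
        out.append(with_lookahead if with_lookahead != without_lookahead else word)
    return out
```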
Can Yes-No Question-Answering Models be Useful for Few-Shot Metaphor Detection?
Lena Dankin | Kfir Bar | Nachum Dershowitz
Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
Metaphor detection has been a challenging task in the NLP domain both before and after the emergence of transformer-based language models. The difficulty lies in the subtle semantic nuances required to detect metaphor and in the scarcity of labeled data. We explore few-shot setups for metaphor detection, and also introduce new question-answering data that can enhance classifiers trained on a small amount of data. We formulate the classification task as a question-answering one and train a question-answering model. We perform extensive few-shot experiments on several architectures and report results for several strong baselines. Thus, the answer to the question posed in the title is a definite "Yes!"
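The reformulation itself is mechanical: each labeled instance becomes a yes/no question about the target word in its sentence, so a boolean question-answering model can be reused for metaphor detection. The template and field names below are a hypothetical illustration, not the exact wording used in the paper.

```python
# Illustrative encoding of metaphor detection as yes/no question answering.
def to_boolq_instance(sentence, target_word, label=None):
    """Turn a (sentence, target word) pair into a yes/no QA example."""
    return {
        "question": f'Is the word "{target_word}" used metaphorically here?',
        "passage": sentence,
        "answer": label,          # True = "yes" (metaphor); None at inference time
    }

example = to_boolq_instance(
    "She devoured the novel in one sitting.", "devoured", label=True
)
print(example["question"])
```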
2020
Transliteration of Judeo-Arabic Texts into Arabic Script Using Recurrent Neural Networks
Ori Terner | Kfir Bar | Nachum Dershowitz
Proceedings of the Fifth Arabic Natural Language Processing Workshop
We trained a model to automatically transliterate Judeo-Arabic texts into Arabic script, enabling Arabic readers to access those writings. We employ a recurrent neural network (RNN), combined with the connectionist temporal classification (CTC) loss to deal with unequal input/output lengths. This necessitates adjustments to the training data to avoid input sequences that are shorter than their corresponding outputs. We also utilize a pretraining stage with a different loss function to improve network convergence. Since only a single source of parallel text was available for training, we take advantage of the possibility of generating data synthetically. We train a model that has the capability to memorize words in the output language, and that also utilizes context for resolving ambiguities in the transliteration. We improve on the baseline's 9.5% character error rate, achieving a 2% error rate with our best configuration. To measure the contribution of context to learning, we also tested word-shuffled data, for which the error rises to 2.5%.
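A minimal PyTorch sketch of an RNN-plus-CTC setup of this kind is given below; the alphabet sizes, layer widths, and toy batch are placeholders rather than the published configuration. Note that CTC requires each input sequence to be at least as long as its target, which is why the training data needed adjustment.

```python
# Minimal RNN + CTC sketch (placeholder sizes, not the published model).
import torch
import torch.nn as nn

NUM_HEBREW_CHARS = 40    # input alphabet (Judeo-Arabic in Hebrew script), assumed size
NUM_ARABIC_CHARS = 50    # output alphabet, assumed size; index 0 is the CTC blank

class Transliterator(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(NUM_HEBREW_CHARS, 64)
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, NUM_ARABIC_CHARS)

    def forward(self, x):                       # x: (batch, time) of char ids
        h, _ = self.rnn(self.embed(x))
        return self.proj(h).log_softmax(-1)     # (batch, time, classes)

model = Transliterator()
ctc = nn.CTCLoss(blank=0)

x = torch.randint(1, NUM_HEBREW_CHARS, (2, 12))        # toy batch: 2 words, 12 chars each
log_probs = model(x).transpose(0, 1)                   # CTC expects (time, batch, classes)
targets = torch.randint(1, NUM_ARABIC_CHARS, (2, 10))  # toy targets, shorter than inputs
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 12),
           target_lengths=torch.full((2,), 10))
loss.backward()
```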
2019
Semantic Characteristics of Schizophrenic Speech
Kfir Bar | Vered Zilberstein | Ido Ziv | Heli Baram | Nachum Dershowitz | Samuel Itzikowitz | Eiran Vadim Harel
Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology
Natural language processing tools are used to automatically detect disturbances in transcribed speech of schizophrenia inpatients who speak Hebrew. We measure topic mutation over time and show that controls maintain more cohesive speech than inpatients. We also examine differences in how inpatients and controls use adjectives and adverbs to describe content words, and show that those used by controls are more common words than those used by inpatients. We provide experimental results and show their potential for automatically detecting schizophrenia in patients by means of their speech patterns alone.
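One simple way to operationalize topic mutation over time, offered here as an illustration and not necessarily the paper's exact measure, is to embed consecutive windows of a transcript and average the similarity of adjacent windows; lower cohesion then corresponds to faster topic drift.

```python
# Illustrative topic-drift measure over a transcript; `embed` is a placeholder
# for any sentence/window embedding model.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def topic_cohesion(transcript_tokens, embed, window=20):
    """Mean similarity of adjacent windows; lower values indicate faster topic mutation."""
    windows = [
        transcript_tokens[i : i + window]
        for i in range(0, len(transcript_tokens) - window + 1, window)
    ]
    vectors = [embed(" ".join(w)) for w in windows]
    sims = [cosine(vectors[i], vectors[i + 1]) for i in range(len(vectors) - 1)]
    return float(np.mean(sims)) if sims else 1.0
```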
2014
The Tel Aviv University System for the Code-Switching Workshop Shared Task
Kfir Bar | Nachum Dershowitz
Proceedings of the First Workshop on Computational Approaches to Code Switching
2012
Language Classification and Segmentation of Noisy Documents in Hebrew Scripts
Alex Zhicharevich | Nachum Dershowitz
Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities
Deriving Paraphrases for Highly Inflected Languages from Comparable Documents
Kfir Bar | Nachum Dershowitz
Proceedings of COLING 2012
2011
Unsupervised Decomposition of a Document into Authorial Components
Moshe Koppel | Navot Akiva | Idan Dershowitz | Nachum Dershowitz
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
2010
Using Synonyms for Arabic-to-English Example-Based Translation
Kfir Bar | Nachum Dershowitz
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Student Research Workshop
An implementation of a non-structural Example-Based Machine Translation system that translates sentences from Arabic to English, using a parallel corpus aligned at the sentence level, is described. Source-language synonyms were derived automatically and used to help locate potential translation examples for fragments of a given input sentence. The smaller the parallel corpus, the greater the contribution provided by synonyms. Considering the degree of relevance of the subject matter of a potential match contributes to the quality of the final results.
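The role of the synonyms is to widen the net when searching the corpus for example fragments: a source fragment matches an example even when one of its words is replaced by a known synonym. A toy sketch of that matching step, with a placeholder synonym table in place of the automatically derived Arabic synonyms:

```python
# Toy sketch of synonym-widened fragment matching for example-based MT.
SYNONYMS = {"big": {"large", "huge"}}     # placeholder table; the real one is derived automatically

def words_match(a, b):
    return a == b or b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def fragment_matches(fragment, example_source):
    """True if `fragment` occurs in `example_source`, allowing synonym swaps."""
    n = len(fragment)
    return any(
        all(words_match(f, e) for f, e in zip(fragment, example_source[i : i + n]))
        for i in range(len(example_source) - n + 1)
    )

# e.g. fragment_matches(["a", "big", "house"],
#                       ["she", "bought", "a", "large", "house"])  -> True
```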
Tel Aviv University’s system description for IWSLT 2010
Kfir Bar | Nachum Dershowitz
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign