2024
MRL Parsing Without Tears: The Case of Hebrew
Shaltiel Shmidman | Avi Shmidman | Moshe Koppel | Reut Tsarfaty
Findings of the Association for Computational Linguistics: ACL 2024
Syntactic parsing remains a critical tool for relation extraction and information extraction, especially in resource-scarce languages where LLMs are lacking. Yet in morphologically rich languages (MRLs), where parsers need to identify multiple lexical units in each token, existing systems suffer in latency and setup complexity. Some use a pipeline to peel away the layers: first segmentation, then morphology tagging, and then syntax parsing; however, errors in earlier layers are then propagated forward. Others use a joint architecture to evaluate all permutations at once; while this improves accuracy, it is notoriously slow. In contrast, and taking Hebrew as a test case, we present a new “flipped pipeline”: decisions are made directly on the whole-token units by expert classifiers, each one dedicated to one specific task. The classifier predictions are independent of one another, and only at the end do we synthesize their predictions. This blazingly fast approach requires only a single Hugging Face call, without recourse to lexicons or linguistic resources. When trained on the same training set used in previous studies, our model achieves near-SOTA performance on a wide array of Hebrew NLP tasks. Furthermore, when trained on a newly enlarged training corpus, our model achieves a new SOTA for Hebrew POS tagging and dependency parsing. We release this new SOTA model to the community. Because our architecture does not rely on any language-specific resources, it can serve as a model for developing similar parsers for other MRLs.
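To make the architecture concrete, here is a minimal sketch, in plain PyTorch and transformers, of the flipped-pipeline idea: one shared encoder runs once per sentence, and independent expert heads each predict one layer of analysis directly over whole-token representations. The encoder name, head sizes, and scoring scheme are illustrative assumptions, not the released model's configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FlippedPipelineParser(nn.Module):
    """One encoder pass; independent expert heads; no inter-stage error propagation."""
    def __init__(self, encoder_name="bert-base-multilingual-cased",
                 n_seg=4, n_pos=17, n_feats=50):  # head sizes are illustrative
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Each expert classifier is dedicated to one task and sees only the
        # shared whole-token representations, never another head's output.
        self.seg_head = nn.Linear(hidden, n_seg)      # segmentation
        self.pos_head = nn.Linear(hidden, n_pos)      # part of speech
        self.feats_head = nn.Linear(hidden, n_feats)  # morphological features
        self.arc_head = nn.Linear(hidden, hidden)     # dependency arc scoring

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        arc_scores = self.arc_head(h) @ h.transpose(1, 2)  # token-pair scores
        # Predictions are independent; the caller synthesizes them at the end.
        return {"segmentation": self.seg_head(h), "pos": self.pos_head(h),
                "feats": self.feats_head(h), "arcs": arc_scores}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer(["גנן גידל דגן בגן"], return_tensors="pt")
outputs = FlippedPipelineParser()(batch["input_ids"], batch["attention_mask"])
```

Because every head reads the same encoder output, the whole analysis costs a single forward pass, which is where the latency advantage over pipeline and joint architectures comes from.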
MsBERT: A New Model for the Reconstruction of Lacunae in Hebrew Manuscripts
Avi Shmidman | Ometz Shmidman | Hillel Gershuni | Moshe Koppel
Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)
Hebrew manuscripts preserve thousands of textual transmissions of post-Biblical Hebrew texts from the first millennium. In many cases, the text in the manuscripts is not fully decipherable, whether due to deterioration, perforation, burns, or otherwise. Existing BERT models for Hebrew struggle to fill these gaps, due to the many orthographical deviations found in Hebrew manuscripts. We have pretrained a new dedicated BERT model, dubbed MsBERT (short for: Manuscript BERT), designed from the ground up to handle Hebrew manuscript text. MsBERT substantially outperforms all existing Hebrew BERT models at predicting missing words in fragmentary Hebrew manuscript transcriptions across multiple genres, as well as at differentiating between quoted passages and exegetical elaborations. We provide MsBERT for free download and unrestricted use, and we also provide an interactive and user-friendly website to allow manuscript scholars to leverage the power of MsBERT in their scholarly work of reconstructing fragmentary Hebrew manuscripts.
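The reconstruction task reduces to masked-token prediction, so the model can be exercised with the standard fill-mask machinery. A minimal sketch follows; the model id dicta-il/MsBERT is an assumption based on the paper's naming, so substitute the actual released checkpoint if it differs.

```python
from transformers import pipeline

# Hypothetical checkpoint id; see the paper's release page for the real one.
fill = pipeline("fill-mask", model="dicta-il/MsBERT")

# Mark the illegible span of the transcription with the mask token
# and rank the model's candidate reconstructions.
damaged_line = f"ברוך אתה {fill.tokenizer.mask_token} אלהינו"
for candidate in fill(damaged_line, top_k=5):
    print(f"{candidate['score']:.3f}  {candidate['sequence']}")
```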
2023
Do Pretrained Contextual Language Models Distinguish between Hebrew Homograph Analyses?
Avi Shmidman | Cheyn Shmuel Shmidman | Dan Bareket | Moshe Koppel | Reut Tsarfaty
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Semitic morphologically-rich languages (MRLs) are characterized by extreme word ambiguity. Because most vowels are omitted in standard texts, many of the words are homographs with multiple possible analyses, each with a different pronunciation and different morphosyntactic properties. This ambiguity goes beyond word-sense disambiguation (WSD), and may include token segmentation into multiple word units. Previous research on MRLs claimed that standardly trained pre-trained language models (PLMs) based on word-pieces may not sufficiently capture the internal structure of such tokens in order to distinguish between these analyses. Taking Hebrew as a case study, we investigate the extent to which Hebrew homographs can be disambiguated and analyzed using PLMs. We evaluate all existing models for contextualized Hebrew embeddings on novel Hebrew homograph challenge sets that we deliver. Our empirical results demonstrate that contemporary Hebrew contextualized embeddings outperform non-contextualized embeddings, and that they are most effective for disambiguating segmentation and morphosyntactic features, less so for pure word-sense disambiguation. We show that these embeddings are more effective when the number of word-piece splits is limited, and that they are more effective for 2-way and 3-way ambiguities than for 4-way ambiguity. We show that the embeddings are equally effective for homographs of both balanced and skewed distributions, whether calculated as masked or unmasked tokens. Finally, we show that these embeddings are as effective for homograph disambiguation with extensive supervised training as with a few-shot setup.
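One simple way to probe what the paper measures is to extract the contextualized vector of a homograph token in contexts that force different analyses and check whether the contexts separate in embedding space. The sketch below does this with AlephBERT; the model id, the example sentences, and the pooling scheme are illustrative assumptions rather than the paper's exact protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "onlplab/alephbert-base"  # one Hebrew PLM; an assumed choice
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed_token(sentence: str, target: str) -> torch.Tensor:
    """Mean-pool the contextual vectors of the word-pieces of `target`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError("target not found in sentence")

# Hypothetical contexts forcing two analyses of the homograph הספר
# ("the book" vs. "the barber"); near-identical vectors would suggest
# the PLM fails to distinguish the analyses.
v1 = embed_token("הספר מונח על השולחן", "הספר")
v2 = embed_token("הספר גילח את זקנו", "הספר")
print(f"cosine similarity: {torch.cosine_similarity(v1, v2, dim=0).item():.3f}")
```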
2020
A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration
Avi Shmidman | Joshua Guedalia | Shaltiel Shmidman | Moshe Koppel | Reut Tsarfaty
Findings of the Association for Computational Linguistics: EMNLP 2020
One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more frequent than the others. In such cases, there may not exist sufficient examples of the minority analyses to properly evaluate performance, nor to train effective classifiers. In this paper we address the issue of unbalanced morphological ambiguities in Hebrew. We offer a challenge set for Hebrew homographs, the first of its kind, containing substantial attestation of each analysis of 21 Hebrew homographs. We show that the current SOTA for Hebrew disambiguation performs poorly on cases of unbalanced ambiguity. Leveraging our new dataset, we achieve a new state of the art for all 21 words, improving the overall average F1 score from 0.67 to 0.95. Our resulting annotated datasets are made publicly available for further research.
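The evaluation pitfall the paper targets is easy to reproduce: on a skewed homograph, a tagger that always picks the majority analysis looks excellent on accuracy while failing completely on the minority analysis, which only a per-analysis (macro) score exposes. A toy illustration with invented labels:

```python
from sklearn.metrics import accuracy_score, f1_score

gold = ["A"] * 95 + ["B"] * 5   # one analysis far more frequent than the other
majority = ["A"] * 100          # a tagger that always predicts the majority analysis

print("accuracy:", accuracy_score(gold, majority))             # 0.95, looks fine
print("macro F1:", f1_score(gold, majority, average="macro"))  # ~0.49, exposes the failure
```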
Nakdan: Professional Hebrew Diacritizer
Avi Shmidman | Shaltiel Shmidman | Moshe Koppel | Yoav Goldberg
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
We present a system for automatic diacritization of Hebrew text. The system combines modern neural models with carefully curated declarative linguistic knowledge and comprehensive manually constructed tables and dictionaries. Besides providing state-of-the-art diacritization accuracy, the system also supports an interface for manual editing and correction of the automatic output, and has several features which make it particularly useful for the preparation of scientific editions of historical Hebrew texts. The system supports Modern Hebrew, Rabbinic Hebrew, and Poetic Hebrew, and is freely accessible for all use at http://nakdanpro.dicta.org.il.
2018
Proceedings of the Second Workshop on Stylistic Variation
Julian Brooke | Lucie Flekova | Moshe Koppel | Thamar Solorio
Proceedings of the Second Workshop on Stylistic Variation
2017
Proceedings of the Workshop on Stylistic Variation
Julian Brooke | Thamar Solorio | Moshe Koppel
Proceedings of the Workshop on Stylistic Variation
2016
Reconstructing Ancient Literary Texts from Noisy Manuscripts
Moshe Koppel | Moty Michaely | Alex Tal
Proceedings of the Fifth Workshop on Computational Linguistics for Literature
Shamela: A Large-Scale Historical Arabic Corpus
Yonatan Belinkov | Alexander Magidow | Maxim Romanov | Avi Shmidman | Moshe Koppel
Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH)
Arabic is a widely spoken language with a rich and long history spanning more than fourteen centuries. Yet existing Arabic corpora largely focus on the modern period or lack sufficient diachronic information. We develop a large-scale historical corpus of Arabic of about 1 billion words, drawn from diverse periods. We clean this corpus, process it with a morphological analyzer, and enhance it by detecting parallel passages and automatically dating undated texts. We demonstrate its utility with selected case studies showing its application to the digital humanities.
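One of the corpus-enhancement steps mentioned above, detecting parallel passages, can be approximated generically by indexing overlapping character n-gram "shingles" and flagging document pairs that share many of them. This is a minimal sketch of that generic technique, not the authors' exact method; the n-gram length and threshold are arbitrary assumptions.

```python
from collections import defaultdict

def shingles(text: str, n: int = 20) -> set:
    """All overlapping character n-grams of a whitespace-normalized text."""
    t = " ".join(text.split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def parallel_candidates(docs: dict, n: int = 20, min_shared: int = 3):
    """Return document pairs sharing at least `min_shared` shingles."""
    index = defaultdict(set)                 # shingle -> documents containing it
    for doc_id, text in docs.items():
        for sh in shingles(text, n):
            index[sh].add(doc_id)
    counts = defaultdict(int)                # (doc_a, doc_b) -> shared shingle count
    for ids in index.values():
        ids = sorted(ids)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                counts[(ids[i], ids[j])] += 1
    return [(pair, c) for pair, c in counts.items() if c >= min_shared]
```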
2014
Automatic Detection of Machine Translated Text and Translation Quality Estimation
Roee Aharoni | Moshe Koppel | Yoav Goldberg
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
2013
Automatically Identifying Pseudepigraphic Texts
Moshe Koppel | Shachar Seidman
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
Authorship Attribution of Micro-Messages
Roy Schwartz | Oren Tsur | Ari Rappoport | Moshe Koppel
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
2011
Translationese and Its Dialects
Moshe Koppel | Noam Ordan
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
Unsupervised Decomposition of a Document into Authorial Components
Moshe Koppel | Navot Akiva | Idan Dershowitz | Nachum Dershowitz
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
2007
Fully Unsupervised Discovery of Concept-Specific Relationships by Web Mining
Dmitry Davidov | Ari Rappoport | Moshe Koppel
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics