2024
Exposing propaganda: an analysis of stylistic cues comparing human annotations and machine classification
Géraud Faye | Benjamin Icard | Morgane Casanova | Julien Chanson | François Maine | François Bancilhon | Guillaume Gadek | Guillaume Gravier | Paul Égré
Proceedings of the Third Workshop on Understanding Implicit and Underspecified Language
This paper investigates the language of propaganda and its stylistic features. It presents the PPN dataset, standing for Propagandist Pseudo-News, a multisource, multilingual, multimodal dataset composed of news articles extracted from websites identified as propaganda sources by expert agencies. A limited sample from this set was randomly mixed with articles from the regular French press, with their URLs masked, to conduct a human annotation experiment using 11 distinct labels. The results show that human annotators were able to reliably discriminate between the two types of press across each of the labels. We use different NLP techniques to identify the cues used by annotators and to compare them with machine classification: first the analyzer VAGO, to detect discourse vagueness and subjectivity, and then four different classifiers, namely two based on RoBERTa, CATS using syntax, and an XGBoost model combining syntactic and semantic features.
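As a rough illustration of the last setup mentioned in the abstract (an XGBoost model over combined syntactic and semantic features), the sketch below pairs TF-IDF features with a few crude surface/stylistic cues. The feature choices and the loader `load_ppn_like_corpus()` are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: propaganda-vs-regular-press classification with XGBoost over
# combined semantic (TF-IDF) and simple surface/syntactic features.
# Feature choices are illustrative, not those of the paper.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

def surface_features(texts):
    """Crude stylistic cues: exclamation density, mean sentence length, caps ratio."""
    feats = []
    for t in texts:
        sents = [s for s in t.split(".") if s.strip()]
        words = t.split()
        feats.append([
            t.count("!") / max(len(words), 1),
            np.mean([len(s.split()) for s in sents]) if sents else 0.0,
            sum(w.isupper() for w in words) / max(len(words), 1),
        ])
    return csr_matrix(np.array(feats))

# texts: article bodies; labels: 1 = propagandist source, 0 = regular press.
texts, labels = load_ppn_like_corpus()  # hypothetical loader, not provided by the paper

tfidf = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X = hstack([tfidf.fit_transform(texts), surface_features(texts)]).tocsr()

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```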
A Multi-Label Dataset of French Fake News: Human and Machine Insights
Benjamin Icard | François Maine | Morgane Casanova | Géraud Faye | Julien Chanson | Guillaume Gadek | Ghislain Atemezing | François Bancilhon | Paul Égré
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
We present a corpus of 100 documents, named OBSINFOX, selected from 17 sources of French press considered unreliable by expert agencies, annotated using 11 labels by 8 annotators. By collecting more labels than usual, from more annotators than is typically done, we can identify features that humans consider characteristic of fake news and compare them to the predictions of automated classifiers. We present a topic and genre analysis using Gate Cloud, indicative of the prevalence of satire-like text in the corpus. We then use the subjectivity analyzer VAGO, and a neural version of it, to clarify the link between ascriptions of the label Subjective and ascriptions of the label Fake News. The annotated dataset is available online at the following URL: https://github.com/obs-info/obsinfox
Keywords: Fake News, Multi-Labels, Subjectivity, Vagueness, Detail, Opinion, Exaggeration, French Press
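As a minimal sketch of the kind of analysis the abstract describes (relating ascriptions of Subjective to ascriptions of Fake News), the snippet below computes label co-occurrence from a hypothetical CSV export of the annotations. The file name and column names are assumptions, not the repository's actual layout.

```python
# Hedged sketch: correlating "Subjective" and "Fake News" label ascriptions
# across annotators. File and column names are hypothetical; check the
# OBSINFOX repository (https://github.com/obs-info/obsinfox) for the real layout.
import pandas as pd

ann = pd.read_csv("obsinfox_annotations.csv")  # hypothetical export: one row per (document, annotator)

# Majority vote per document for the two labels of interest.
per_doc = ann.groupby("doc_id")[["Subjective", "FakeNews"]].mean() >= 0.5

# Simple association between the two binary labels.
print(pd.crosstab(per_doc["Subjective"], per_doc["FakeNews"]))
print("Phi coefficient:",
      per_doc["Subjective"].astype(int).corr(per_doc["FakeNews"].astype(int)))
```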
2023
K-pop and fake facts: from texts to smart alerting for maritime security
Maxime Prieur | Souhir Gahbiche | Guillaume Gadek | Sylvain Gatepaille | Kilian Vasnier | Valerian Justine
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
Maritime security requires full-time monitoring of the situation, based mainly on technical data (radar, AIS) but also on OSINT-like inputs (e.g., newspapers). Some threats to the operational reliability of this maritime surveillance, such as malicious actors, introduce discrepancies between hard and soft data (sensors and texts), either by tweaking their AIS emitters or by spreading false information through pseudo-newspapers. Many techniques exist to identify such false information, including knowledge base population to build a structured view of the information. This paper presents a use case for suspect data identification in a maritime setting. The proposed system, UMBAR, ingests data from sensors and texts and processes them through an information extraction step, in order to feed a Knowledge Base and finally perform coherence checks between the extracted facts.
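To make the coherence-check idea concrete, here is a minimal sketch that flags discrepancies between hard data (AIS positions) and soft data (vessel locations extracted from texts). The data structures, distance threshold, and alert logic are illustrative assumptions, not the UMBAR implementation.

```python
# Hedged sketch: flagging contradictions between AIS fixes and text-reported
# positions, in the spirit of the coherence checks described for UMBAR.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class AisFix:
    vessel_id: str
    lat: float
    lon: float
    timestamp: str

@dataclass
class ExtractedFact:
    vessel_id: str
    reported_lat: float
    reported_lon: float
    source: str

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def coherence_alerts(fixes, facts, max_km=50.0):
    """Yield an alert when a text-reported position contradicts the AIS track."""
    latest = {f.vessel_id: f for f in fixes}  # assumes fixes are time-ordered
    for fact in facts:
        fix = latest.get(fact.vessel_id)
        if fix is None:
            continue
        d = haversine_km(fix.lat, fix.lon, fact.reported_lat, fact.reported_lon)
        if d > max_km:
            yield f"Vessel {fact.vessel_id}: {fact.source} places it {d:.0f} km from its AIS position"
```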
DWIE-FR : Un nouveau jeu de données en français annoté en entités nommées
Sylvain Verdy | Maxime Prieur | Guillaume Gadek | Cédric Lopez
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 2 : travaux de recherche originaux -- articles courts
In recent years, major contributions in supervised machine learning have highlighted the need for large, high-quality annotated datasets. Research on named entity recognition in French texts faces the absence of large-scale annotated datasets with numerous, hierarchically organized entity classes. In this article, we propose an approach to obtain such a dataset, based on translating English textual data into a target language (here, French) and then annotating it. We evaluate the quality of the proposed approach and measure the performance of several machine learning models on these data.
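The following sketch illustrates a translate-then-project construction of French NER data, loosely in the spirit of the approach described above. The model names are real Hugging Face checkpoints, but the projection-by-string-match step is a deliberate simplification and should not be read as the paper's exact method.

```python
# Hedged sketch: build French NER examples from English annotated text by
# translating the sentence and its entity mentions, then re-locating the
# translated mentions in the French sentence (a simplification).
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

def project_example(text_en, entities_en):
    """entities_en: list of (mention, label) pairs annotated on the English text."""
    text_fr = translate(text_en, max_length=512)[0]["translation_text"]
    entities_fr = []
    for mention, label in entities_en:
        mention_fr = translate(mention, max_length=64)[0]["translation_text"]
        start = text_fr.lower().find(mention_fr.lower())
        if start != -1:  # keep only mentions recoverable in the translation
            entities_fr.append((mention_fr, label, start, start + len(mention_fr)))
    return text_fr, entities_fr

text, ents = project_example(
    "Emmanuel Macron visited Berlin on Monday.",
    [("Emmanuel Macron", "PERSON"), ("Berlin", "CITY")],
)
print(text, ents)
```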
2020
Arabizi Language Models for Sentiment Analysis
Gaétan Baert | Souhir Gahbiche | Guillaume Gadek | Alexandre Pauchet
Proceedings of the 28th International Conference on Computational Linguistics
Arabizi is a written form of spoken Arabic, relying on Latin characters and digits. It is informal and does not follow any conventional rules, raising many NLP challenges. In particular, Arabizi has recently emerged as the written form of Arabic used in online social networks, making it of great interest for opinion mining and sentiment analysis. Unfortunately, only a few Arabizi resources exist, and state-of-the-art language models such as BERT do not consider Arabizi. In this work, we construct and release two datasets: (i) LAD, a corpus of 7.7M tweets written in Arabizi, and (ii) SALAD, a subset of LAD manually annotated for sentiment analysis. Then, a BERT architecture is pre-trained on LAD in order to create and distribute an Arabizi language model called BAERT. We show that a language model (BAERT) pre-trained on a large corpus (LAD) in the same language (Arabizi) as that of the fine-tuning dataset (SALAD) outperforms a state-of-the-art multilingual pretrained model (multilingual BERT) on a sentiment analysis task.
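A minimal sketch of the fine-tuning step described above, using the Hugging Face Trainer: here `bert-base-multilingual-cased` stands in for the BAERT checkpoint (whose distribution details are not given here), and the toy Arabizi examples and label scheme are hypothetical stand-ins for SALAD.

```python
# Hedged sketch: fine-tune a BERT-style checkpoint for Arabizi sentiment analysis,
# mirroring the SALAD fine-tuning step. Checkpoint and data are placeholders.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

checkpoint = "bert-base-multilingual-cased"  # replace with the BAERT checkpoint if available
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Hypothetical SALAD-like data: Arabizi tweets with sentiment labels (0/1/2).
data = Dataset.from_dict({
    "text": ["ana fer7an bezaf", "makayn walou", "l match kan khayb"],
    "label": [2, 1, 0],
})
data = data.map(lambda ex: tok(ex["text"], truncation=True,
                               padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="baert-salad", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data,
)
trainer.train()
```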