Venkatasai Ojus Yenumulapalli
2024
TECHSSN1 at SemEval-2024 Task 10: Emotion Classification in Hindi-English Code-Mixed Dialogue using Transformer-based Models
Venkatasai Ojus Yenumulapalli | Pooja Premnath | Parthiban Mohankumar | Rajalakshmi Sivanaiah | Angel Deborah
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
The increase in the popularity of code-mixed languages has resulted in the need to engineer language models for them. Unlike pure languages, code-mixed languages lack clear grammatical structures, leading to ambiguous sentence constructions. This ambiguity presents significant challenges for natural language processing tasks, including syntactic parsing, word sense disambiguation, and language identification. This paper focuses on emotion recognition of conversations in Hinglish, a mix of Hindi and English, as part of Task 10 of SemEval 2024. The proposed approach explores the use of standard machine learning models such as SVM, MNB, and RF, as well as BERT-based models for Hindi-English code-mixed data, namely HingBERT, Hing-mBERT, and HingRoBERTa, for subtask A.
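As a rough illustration of the BERT-based side of this approach (not the authors' exact pipeline), the sketch below loads an L3Cube HingBERT-family checkpoint as a sequence classifier over an assumed emotion label set; the checkpoint identifier, label set, and preprocessing are assumptions, and the classification head would still need fine-tuning on the task data before its predictions mean anything.

```python
# Minimal sketch: emotion classification of a Hinglish utterance with a
# HingBERT-family encoder. Checkpoint name and label set are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]  # assumed labels

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/hing-roberta")
model = AutoModelForSequenceClassification.from_pretrained(
    "l3cube-pune/hing-roberta", num_labels=len(EMOTIONS)
)  # classification head is freshly initialized; fine-tune before real use

def predict_emotion(utterance: str) -> str:
    """Return the highest-scoring emotion label for a code-mixed utterance."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return EMOTIONS[int(logits.argmax(dim=-1))]

print(predict_emotion("yaar aaj ka din bahut accha tha!"))
```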
2023
TechSSN1 at LT-EDI-2023: Depression Detection and Classification using BERT Model for Social Media Texts
Venkatasai Ojus Yenumulapalli | Vijai Aravindh R | Rajalakshmi Sivanaiah | Angel Deborah S
Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion
Depression is a severe mental health disorder characterized by persistent feelings of sadness and anxiety and a decline in cognitive functioning, resulting in drastic changes in a person's psychological and physical well-being. However, depression is completely curable when treated at a suitable time, with treatment leading to the rejuvenation of the individual. The objective of this paper is to devise a technique for detecting signs of depression in English social media comments and classifying them by intensity into severe, moderate, and not depressed categories. The paper illustrates three approaches developed while working on the problem. Of these, the BERT model proved to be the most suitable, with a macro F1 score of 0.407, which gave us the 11th rank overall.
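A minimal sketch of the BERT-based classification described above is given below, assuming a generic bert-base-uncased checkpoint with a three-way classifier head; the checkpoint, preprocessing, and training details are illustrative assumptions rather than the submitted configuration, and the head must be fine-tuned on the task data before use.

```python
# Minimal sketch: three-way depression-severity classification of an English
# social media comment with a BERT encoder. Checkpoint and setup are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["not depressed", "moderate", "severe"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # classification head is freshly initialized; fine-tune before real use

def classify(comment: str) -> str:
    """Map an English social media comment to a depression-severity label."""
    inputs = tokenizer(comment, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("I haven't felt like myself for weeks and nothing seems to help."))
```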
Co-authors
- Rajalakshmi Sivanaiah 2
- Pooja Premnath 1
- Parthiban Mohankumar 1
- Angel Deborah 1
- Vijai Aravindh R 1