Tawunrat Chalothorn

2024

MrRank: Improving Question Answering Retrieval System through Multi-Result Ranking Model
Danupat Khamnuansin | Tawunrat Chalothorn | Ekapol Chuangsuwanich
Findings of the Association for Computational Linguistics: ACL 2024

Large Language Models (LLMs) often struggle with hallucinations and outdated information. Information Retrieval (IR) systems can address this by augmenting LLMs with up-to-date knowledge; however, existing IR techniques have shortcomings that create a performance bottleneck. Given the wide array of available IR systems, combining diverse approaches is a viable strategy, yet prior attempts have had limited success. In this work, we propose an approach that leverages learning-to-rank techniques to combine heterogeneous IR systems, and we demonstrate the method on two Retrieval Question Answering (ReQA) tasks. Our empirical results show a significant performance improvement, outperforming previous approaches and achieving state-of-the-art results on ReQA SQuAD.

Financial Product Ontology Population with Large Language Models
Chanatip Saetia | Jiratha Phruetthiset | Tawunrat Chalothorn | Monchai Lertsutthiwong | Supawat Taerungruang | Pakpoom Buabthong
Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing

Ontology population, which aims to extract structured data from unstructured text to enrich domain-specific ontologies, typically faces challenges of data scarcity and linguistic complexity, particularly in specialized fields such as retail banking. In this study, we investigate the application of large language models (LLMs) to populate domain-specific ontologies of retail banking products from Thai corporate documents. We compare traditional span-based approaches with LLM-based generative methods under different prompting techniques. Our findings reveal that while span-based methods struggle with data scarcity and complex linguistic structure, LLM-based generative approaches substantially outperform them, achieving a 61.05% F1 score, with the largest improvement coming from providing examples in the prompts. These results highlight the potential of LLMs for ontology population tasks, offering a scalable and efficient solution for structured information extraction, especially in low-resource language settings.

2023

Enhancing Word Discrimination and Matching in Query-by-Example Spoken Term Detection with Acoustic Word Embeddings
Pantid Chantangphol | Theerat Sakdejayont | Tawunrat Chalothorn
Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)

2020

Combining Thai EDUs: Principle and Implementation
Chanatip Saetia | Supawat Taerungruang | Tawunrat Chalothorn
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

2014

TJP: Identifying the Polarity of Tweets from Contexts
Tawunrat Chalothorn | Jeremy Ellman
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

2013

TJP: Using Twitter to Analyze the Polarity of Contexts
Tawunrat Chalothorn | Jeremy Ellman
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)