Closed
Labels
correction (for corrections submitted to the anthology), metadata (Correction to paper metadata)
Description
JSON data block
{
"anthology_id": "2026.findings-eacl.110",
"abstract": "Large Language Models (LLMs) depend on retrieval for factual grounding in Retrieval-Augmented Generation (RAG), placing Information Retrieval (IR) at the core of modern Question Answering (QA) systems. While lexical, dense, and hybrid paradigms have been extensively benchmarked in English, their relative effectiveness for Vietnamese remains insufficiently characterized, especially under realistic multi-domain settings. Existing studies are typically confined to single domains or curated datasets, limiting cross-domain comparability and obscuring paradigm-level trade-offs. We introduce the first domain-normalized, multi-domain benchmark for Vietnamese IR under a unified and reproducible evaluation protocol, spanning six domains and ten datasets across education, legal, healthcare, customer support, lifestyle reviews, and open-domain knowledge. We evaluate lexical, neural-sparse, late-interaction, dense, and hybrid paradigms across diverse Vietnamese-specific and multilingual embedding backbones, and release two QA datasets, EduCoQA and CSConDa, constructed from authentic counseling and customer-service interactions. Beyond reporting benchmark performance, we derive systematic insights into lexical–semantic hybridization, specialization versus robustness trade-offs, and the limited predictive value of model scale for retrieval effectiveness. All datasets and evaluation scripts are publicly available at <url>https://github.com/longstnguyen/ViRE</url>.",
"authors": [
{
"first": "Long S. T.",
"last": "Nguyen",
"id": "long-nguyen"
},
{
"first": "Tho T.",
"last": "Quan",
"id": "tho-quan"
}
],
"authors_old": "Long S. T. Nguyen | Tho Quan",
"authors_new": "Long S. T. Nguyen | Tho T. Quan"
}