Commit d60df82

Manually correct unparseable abstracts
1 parent 808be01 commit d60df82

1 file changed

Lines changed: 2 additions & 2 deletions

File tree

data/xml/2026.eacl.xml
Original file line number | Diff line number | Diff line change
@@ -119,7 +119,7 @@
119 119
<author><first>Chris</first><last>Biemann</last><affiliation>U Hamburg</affiliation></author>
120 120
<author id="martin-semmann" orcid="0000-0001-5316-3696"><first>Martin</first><last>Semmann</last><affiliation>Universität Hamburg</affiliation></author>
121 121
<pages>165-191</pages>
122-
<abstract>Since many real-world documents combine textual and tabular data, robust Retrieval Augmented Generation (RAG) systems are essential for effectively accessing and analyzing such content to support complex reasoning tasks. Therefore, this paper introduces <tex-math>\textbf{$T^2$-RAGBench}</tex-math>, a benchmark comprising <tex-math>\textbf{23,088}</tex-math> question-context-answer triples, designed to evaluate RAG methods on real-world text-and-table data. Unlike typical QA datasets that operate under <tex-math>\textit{Oracle Context}</tex-math> settings, <tex-math>\textbf{$T^2$-RAGBench}</tex-math> challenges models to first retrieve the correct context before conducting numerical reasoning. Existing QA datasets containing text-and-table data typically contain context-dependent questions, which may yield multiple correct answers depending on the provided context. To address this, we transform SOTA datasets into a context-independent format, validated by experts as 91.3% context-independent questions, enabling reliable RAG evaluation. Our comprehensive evaluation identifies <tex-math>\textit{Hybrid BM25}</tex-math> , a technique that combines dense and sparse vectors, as the most effective approach for text-and-table data. However, results demonstrate that <tex-math>\textbf{$T^2$-RAGBench}</tex-math> remains challenging even for SOTA LLMs and RAG methods. Further ablation studies examine the impact of embedding models and corpus size on retrieval performance. <tex-math>\textbf{$T^2$-RAGBench}</tex-math> provides a realistic and rigorous benchmark for existing RAG methods on text-and-table data. Code and dataset are available online: <url>https://github.com/uhh-hcds/g4kmu-paper</url></abstract>
122+
<abstract>Since many real-world documents combine textual and tabular data, robust Retrieval Augmented Generation (RAG) systems are essential for effectively accessing and analyzing such content to support complex reasoning tasks. Therefore, this paper introduces <b><tex-math>T^2-RAGBench</tex-math></b>, a benchmark comprising <tex-math>\textbf{23,088}</tex-math> question-context-answer triples, designed to evaluate RAG methods on real-world text-and-table data. Unlike typical QA datasets that operate under <tex-math>\textit{Oracle Context}</tex-math> settings, <b><tex-math>T^2-RAGBench</tex-math></b> challenges models to first retrieve the correct context before conducting numerical reasoning. Existing QA datasets containing text-and-table data typically contain context-dependent questions, which may yield multiple correct answers depending on the provided context. To address this, we transform SOTA datasets into a context-independent format, validated by experts as 91.3% context-independent questions, enabling reliable RAG evaluation. Our comprehensive evaluation identifies <tex-math>\textit{Hybrid BM25}</tex-math> , a technique that combines dense and sparse vectors, as the most effective approach for text-and-table data. However, results demonstrate that <b><tex-math>T^2-RAGBench</tex-math></b> remains challenging even for SOTA LLMs and RAG methods. Further ablation studies examine the impact of embedding models and corpus size on retrieval performance. <b><tex-math>T^2-RAGBench</tex-math></b> provides a realistic and rigorous benchmark for existing RAG methods on text-and-table data. Code and dataset are available online: <url>https://github.com/uhh-hcds/g4kmu-paper</url></abstract>
123 123
<url hash="7a0e5f15">2026.eacl-long.8</url>
124 124
<attachment hash="0ef55fdb" type="checklist">2026.eacl-long.8.checklist.pdf</attachment>
125 125
<bibkey>strich-etal-2026-t2</bibkey>
@@ -1276,7 +1276,7 @@
1276 1276
<author><first>Abhisek</first><last>Tiwari</last></author>
1277 1277
<author id="javaid-nabi" orcid="0009-0004-4664-3201"><first>Javaid</first><last>Nabi</last><affiliation>Samsung Research India</affiliation></author>
1278 1278
<pages>2181-2205</pages>
1279-
<abstract>Despite advances in large language models (LLMs), Task-Oriented Dialogue (TOD) systems often fall short in delivering personalized, context-rich responses, especially in low-resource, code-mixed, and multimodal settings like Hinglish (Hindi-English). To bridge this gap, we introduce <tex-math>\textit{HiVisTask}</tex-math>, the first Hinglish multimodal, multidomain, persona-based TOD dataset that captures user-agent interactions across text and visual modalities. We also propose <tex-math>\textit{G$^{3}$ TOD}</tex-math>, a generalizable framework that enhances personalization using three structured knowledge graphs: entity context, user persona, and commonsense reasoning, all extracted from conversation history. Extensive experiments with LLMs (e.g., LLaMA3.2, Phi3, GPT4, Mistral7b, Qwen3, Gemma3) show that <tex-math>\textit{G$^{3}$ TOD}</tex-math> consistently outperforms both standard and ablated baselines. We observe substantial gains across evaluation metrics (both quantitative: BLEU <tex-math>\uparrow</tex-math> and qualitative: Human Eval <tex-math>\uparrow</tex-math>) over existing models. The observed improvements strongly underscore the value of structured and selective contextualization in generating personalized and engaging multimodal responses.</abstract>
1279+
<abstract>Despite advances in large language models (LLMs), Task-Oriented Dialogue (TOD) systems often fall short in delivering personalized, context-rich responses, especially in low-resource, code-mixed, and multimodal settings like Hinglish (Hindi-English). To bridge this gap, we introduce <i>HiVisTask</i>, the first Hinglish multimodal, multidomain, persona-based TOD dataset that captures user-agent interactions across text and visual modalities. We also propose <i><tex-math>G^3 TOD</tex-math></i>, a generalizable framework that enhances personalization using three structured knowledge graphs: entity context, user persona, and commonsense reasoning, all extracted from conversation history. Extensive experiments with LLMs (e.g., LLaMA3.2, Phi3, GPT4, Mistral7b, Qwen3, Gemma3) show that <i><tex-math>G^3 TOD</tex-math></i> consistently outperforms both standard and ablated baselines. We observe substantial gains across evaluation metrics (both quantitative: BLEU ↑ and qualitative: Human Eval ↑) over existing models. The observed improvements strongly underscore the value of structured and selective contextualization in generating personalized and engaging multimodal responses.</abstract>
1280 1280
<url hash="616744f4">2026.eacl-long.96</url>
1281 1281
<attachment hash="b4c21524" type="checklist">2026.eacl-long.96.checklist.pdf</attachment>
1282 1282
<bibkey>agrahari-etal-2026-know</bibkey>

0 commit comments