FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue

Alon Albalak
Connor Pryor
Jay Pujara
Lise Getoor
Luke Yoffe
Pegah Jandaghimeibodi
William Wang
Yi-Lin Tuan
EMNLP 2022

Abstract

Task transfer, leveraging knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI.
This work explores conversational task transfer by introducing FETA: a benchmark for FEw-sample TAsk transfer in open-domain dialogue.
FETA contains two underlying sets of conversations, annotated with 10 and 7 tasks respectively, enabling the study of intra-dataset task transfer, i.e., task transfer without domain adaptation.
We utilize three popular language models and three learning algorithms to analyze the transferability between all 132 intra-dataset source-target task pairs (10 x 9 + 7 x 6) and create a baseline for future work.
We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer.
In addition to task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as learning settings such as continual and multitask learning.