Measuring Domain Portability and Error Propagation in Biomedical QA

Stefan Hosein
Ryan McDonald
BioASQ (2019)

Abstract

In this work, we present Google's submission to the BioASQ 7 biomedical question answering (QA) task (specifically Task 7b, Phase B). The core of our systems is a BERT QA model, specifically the model of [1]. In this report, and via our submissions, we investigate two research questions. First, we study how portable QA systems are to a new domain when they have been pre-trained and fine-tuned on general texts, e.g., Wikipedia. We measure this via two submissions: the first is a non-adapted model that uses a public pre-trained BERT model and is fine-tuned on the Natural Questions data set [4]; the second takes this non-adapted model and further fine-tunes it on the BioASQ training data. Second, we study the impact of error propagation in end-to-end retrieval and QA systems. Again, we test this via two submissions: the first uses human-annotated relevant documents and snippets as input to the QA model, and the second uses predicted documents and snippets. Our main findings are that domain-specific fine-tuning can benefit biomedical QA. However, the biggest quality bottleneck is the retrieval stage, where metrics drop by over 10 points absolute when the QA model is given non-gold inputs.
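
As an illustrative aside (not part of the paper itself), the domain-adaptation step described above, i.e., further fine-tuning a general-domain BERT QA model on BioASQ-style (question, snippet, answer) triples, can be sketched with the Hugging Face transformers library. The checkpoint name, hyperparameters, and toy example below are assumptions for illustration only; the paper's actual system uses a Natural Questions fine-tuned model following [1].

    # Minimal sketch of the domain-adaptation step, using Hugging Face
    # transformers (NOT the authors' codebase). The SQuAD checkpoint stands
    # in for a general-domain QA model; the paper uses Natural Questions.
    import torch
    from transformers import BertTokenizerFast, BertForQuestionAnswering

    name = "bert-large-uncased-whole-word-masking-finetuned-squad"
    tokenizer = BertTokenizerFast.from_pretrained(name)
    model = BertForQuestionAnswering.from_pretrained(name)

    # Toy BioASQ-style training triple (hypothetical example).
    question = "Which protein does rituximab target?"
    snippet = "Rituximab is a monoclonal antibody that targets CD20."
    start_char = snippet.find("CD20")
    end_char = start_char + len("CD20")

    enc = tokenizer(question, snippet, return_tensors="pt",
                    return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping")[0]  # model forward rejects this key

    # Map the character-level answer span to token positions within the
    # snippet segment (sequence id 1; the question is sequence id 0).
    seq_ids = enc.sequence_ids(0)
    start_tok = end_tok = None
    for i, (s, e) in enumerate(offsets.tolist()):
        if seq_ids[i] != 1:
            continue
        if start_tok is None and s <= start_char < e:
            start_tok = i
        if s < end_char <= e:
            end_tok = i
    assert start_tok is not None and end_tok is not None

    # One gradient step of span-extraction fine-tuning on the triple.
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    model.train()
    out = model(**enc,
                start_positions=torch.tensor([start_tok]),
                end_positions=torch.tensor([end_tok]))
    out.loss.backward()
    optimizer.step()

In a full training loop this step would iterate over all BioASQ factoid/list training pairs for a small number of epochs, which is the standard recipe for the kind of in-domain fine-tuning whose benefit the abstract reports.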