Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment

Brian Gordon
Dani Lischinski
Daniel Cohen-Or
arXiv (2023)

Abstract

While existing image-text alignment models reach high-quality binary assessments, they fall short of pinpointing the exact source of misalignment.
In this paper, we present a method that provides detailed textual and visual explanations of detected misalignments between image-text pairs.
We leverage large language models to automatically construct a training set that holds plausible misaligned captions for a given image and corresponding textual explanations and visual indicators. We also introduce a new human curated test set comprising ground-truth textual and visual misalignment annotations. Empirical results show that fine-tuning vision language models on our training set enables them to articulate misalignments and visually indicate them within images, outperforming strong baselines both on the binary alignment classification and the explanation generation tasks.