Generation and Comprehension of Unambiguous Object Descriptions

Junhua Mao
Alexander Toshev
Oana Camburu
Computer Vision and Pattern Recognition (2016)

Abstract

We propose a method that can generate an unambiguous
description (known as a referring expression) of a specific
object or region in an image, and which can also comprehend
or interpret such an expression to infer which object
is being described. We show that our method outperforms
previous methods that generate descriptions of objects
without taking into account other potentially ambiguous
objects in the scene. Our model is inspired by recent
successes of deep learning methods for image captioning,
but while image captioning is difficult to evaluate, our task
allows for easy objective evaluation. We also present a new
large-scale dataset for referring expressions, based on MS-COCO.
We have released the dataset and a toolbox for visualization
and evaluation; see https://github.com/mjhucla/Google_Refexp_toolbox.
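
The comprehension half of the task can be read as a ranking problem: score each candidate region by how likely a captioning-style model would be to produce the given expression for it, then pick the highest-scoring region. The following is a minimal Python sketch of that idea; the `log_prob` scorer, the `Region` box format, and the toy stand-in model are illustrative assumptions, not the paper's actual CNN-LSTM implementation.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical types for illustration only.
Region = Tuple[int, int, int, int]  # (x, y, w, h) bounding box
# Scorer returning log p(expression tokens | image region).
ScoreFn = Callable[[Sequence[str], Region], float]


def comprehend(expression: str,
               candidate_regions: List[Region],
               log_prob: ScoreFn) -> Region:
    """Pick the candidate region that best explains the expression.

    Comprehension is cast as ranking: evaluate the likelihood of the
    expression for each candidate region and return the argmax.
    """
    tokens = expression.lower().split()
    return max(candidate_regions, key=lambda region: log_prob(tokens, region))


# Toy stand-in scorer (an assumption, not the paper's model): prefers
# the largest region whenever the expression mentions "big".
def toy_log_prob(tokens: Sequence[str], region: Region) -> float:
    x, y, w, h = region
    area = float(w * h)
    return area if "big" in tokens else -area


if __name__ == "__main__":
    regions = [(0, 0, 50, 50), (60, 0, 120, 90)]
    print(comprehend("the big dog", regions, toy_log_prob))
    # -> (60, 0, 120, 90)
```

Because the same scoring model can both rank regions (comprehension) and decode expressions (generation), this framing is also what makes the task objectively evaluable: a predicted region either matches the ground-truth object or it does not.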
