Modeling Uncertainty with Hedged Instance Embedding

Seong Joon Oh
Jiyan Pan
ICLR 2019

Abstract

Instance embeddings are an efficient and versatile representation of images that facilitates applications such as recognition, verification, retrieval, and clustering. Many
metric learning methods represent the input as a single point in the embedding
space, and the distance between points is often used as a proxy for match confidence.
However, this representation can fail to capture uncertainty, which arises when the input is
ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue by
explicitly modeling the uncertainty, "hedging" the location of each input in the
embedding space. We introduce the hedged instance embedding (HIB) in which
embeddings are modeled as random variables and the model is trained under the
variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto,
2018). Empirical results on our new N-digit MNIST dataset show that our method
leads to the desired behavior of “hedging its bets” across the embedding space
upon encountering ambiguous inputs. This results in improved performance for
image matching and classification tasks, more structure in the learned embedding
space, and an ability to compute a per-exemplar uncertainty measure which is
correlated with downstream performance.
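
The following is a minimal PyTorch sketch of the idea described in the abstract: each image is mapped to a Gaussian in embedding space, match probabilities are estimated by Monte Carlo sampling of embeddings, and a KL term toward a unit Gaussian acts as the VIB-style regularizer. The layer sizes, the sigmoid match model, and the hyperparameters (number of samples K, weight beta) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class StochasticEmbedding(nn.Module):
    """Maps an input to the mean and (diagonal) log-variance of a Gaussian embedding."""

    def __init__(self, in_dim: int = 784, emb_dim: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, emb_dim)
        self.logvar_head = nn.Linear(256, emb_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu_head(h), self.logvar_head(h)


def sample_embeddings(mu, logvar, num_samples: int = 8):
    """Reparameterized samples z = mu + sigma * eps, shape (K, batch, emb_dim)."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn(num_samples, *mu.shape, device=mu.device)
    return mu.unsqueeze(0) + std.unsqueeze(0) * eps


def match_probability(z1, z2, a: float = 1.0, b: float = 0.0):
    """Monte Carlo estimate of p(match) = E[ sigmoid(-a * ||z1 - z2|| + b) ]."""
    dist = torch.norm(z1 - z2, dim=-1)          # (K, batch)
    return torch.sigmoid(-a * dist + b).mean(dim=0)


def hib_loss(model, x1, x2, is_match, beta: float = 1e-3, num_samples: int = 8):
    """Soft contrastive log-loss plus a KL regularizer toward a unit Gaussian."""
    mu1, logvar1 = model(x1)
    mu2, logvar2 = model(x2)
    z1 = sample_embeddings(mu1, logvar1, num_samples)
    z2 = sample_embeddings(mu2, logvar2, num_samples)
    p = match_probability(z1, z2).clamp(1e-6, 1 - 1e-6)
    nll = -(is_match * torch.log(p) + (1 - is_match) * torch.log(1 - p)).mean()
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch, for both inputs.
    kl = 0.5 * (mu1.pow(2) + logvar1.exp() - logvar1 - 1).sum(-1).mean() \
       + 0.5 * (mu2.pow(2) + logvar2.exp() - logvar2 - 1).sum(-1).mean()
    return nll + beta * kl


# Usage on random data standing in for flattened 28x28 digit images.
model = StochasticEmbedding()
x1, x2 = torch.rand(32, 784), torch.rand(32, 784)
labels = torch.randint(0, 2, (32,)).float()
loss = hib_loss(model, x1, x2, labels)
loss.backward()
```

Under this sketch, the predicted variance provides a per-exemplar uncertainty measure (e.g., the trace of the covariance), in the spirit of the uncertainty measure mentioned in the abstract.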