Attention-based Extraction of Structured Information from Street View Imagery

Zbigniew Wojna
Alex Gorban
Dar-Shyang Lee
Qian Yu
Julian Ibarz
ICDAR (2017), 8 pages

Abstract

We present a neural network model, based on CNNs, RNNs, and attention mechanisms, which achieves 84.04% accuracy on the challenging French Street Name Signs (FSNS) dataset, significantly outperforming the previous state of the art (Smith’16), which achieved 72.46%. Furthermore, our new method is much simpler and more general than the previous approach. To demonstrate the generality of our model, we also apply it to two datasets derived from Google Street View, in which the goals are to extract business names from store fronts and to extract structured date/time information from parking signs. Finally, we study the speed/accuracy tradeoff that results from cutting pretrained Inception CNNs at different depths and using them as feature extractors for the attention mechanism. The resulting model is not only accurate but also efficient, allowing it to be used at scale on a variety of challenging real-world text extraction problems.
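
To make the architecture concrete, the sketch below shows one plausible reading of the abstract: a pretrained Inception-style CNN truncated at a configurable depth serves as the feature extractor, and an RNN decoder with additive attention over the resulting feature map emits one character per step. This is a hypothetical PyTorch illustration, not the authors' implementation (which was in TensorFlow); the function and parameter names (`truncated_inception`, `cut_at`, `AttentionDecoder`, the vocabulary size) are all assumptions, and only the cut-point layer names follow torchvision's `inception_v3`.

```python
# Hypothetical sketch of the attention-based OCR architecture described in the
# abstract. Not the authors' code; names and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision.models import inception_v3


def truncated_inception(cut_at: str = "Mixed_5d") -> nn.Sequential:
    """Keep Inception v3 layers up to and including `cut_at`.

    Cutting earlier yields a cheaper, shallower feature extractor; cutting
    later yields slower but richer features (the speed/accuracy tradeoff
    studied in the paper). Layer names follow recent torchvision versions.
    """
    backbone = inception_v3(weights=None, aux_logits=False)
    layers = []
    for name, module in backbone.named_children():
        layers.append(module)
        if name == cut_at:
            break
    return nn.Sequential(*layers)


class AttentionDecoder(nn.Module):
    """LSTM decoder with additive (Bahdanau-style) attention over the
    flattened CNN feature map, emitting one character logit per step."""

    def __init__(self, feat_dim: int, hidden: int, vocab: int):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTMCell(hidden + feat_dim, hidden)
        self.att_feat = nn.Linear(feat_dim, hidden)
        self.att_state = nn.Linear(hidden, hidden)
        self.att_score = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> attention memory of H*W spatial cells (B, HW, C)
        b, c, h, w = feats.shape
        memory = feats.flatten(2).transpose(1, 2)
        keys = self.att_feat(memory)                       # (B, HW, hidden)
        hx = feats.new_zeros(b, self.out.in_features)
        cx = feats.new_zeros(b, self.out.in_features)
        logits = []
        for t in range(targets.size(1)):
            # Additive attention: score each spatial cell against the state.
            scores = self.att_score(
                torch.tanh(keys + self.att_state(hx).unsqueeze(1)))
            alpha = scores.softmax(dim=1)                  # (B, HW, 1)
            context = (alpha * memory).sum(dim=1)          # (B, C)
            step_in = torch.cat([self.embed(targets[:, t]), context], dim=-1)
            hx, cx = self.rnn(step_in, (hx, cx))           # teacher forcing
            logits.append(self.out(hx))
        return torch.stack(logits, dim=1)                  # (B, T, vocab)


# Usage (shapes only): cutting at Mixed_5d on a 299x299 input gives a
# (B, 288, 35, 35) feature map; the vocabulary size 134 is illustrative.
cnn = truncated_inception("Mixed_5d")
decoder = AttentionDecoder(feat_dim=288, hidden=256, vocab=134)
images = torch.randn(2, 3, 299, 299)
targets = torch.zeros(2, 10, dtype=torch.long)             # teacher-forcing inputs
print(decoder(cnn(images), targets).shape)                 # torch.Size([2, 10, 134])
```

Varying `cut_at` (e.g. `Mixed_5b` vs. `Mixed_6e`) is one way to reproduce, under these assumptions, the depth-versus-speed experiment the abstract describes: shallower cuts shrink the backbone's compute while later cuts hand the attention mechanism higher-level features.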