MIXED CASE CONTEXTUAL ASR USING CAPITALIZATION MASKS

Quoc-Nam Le-The
INTERSPEECH 2020 (2020)

Abstract

End-to-end (E2E) mixed case automatic speech recognition (ASR) systems
that directly predict words in the written domain are attractive: they
are simple to build, require no explicit capitalization model, support
streaming capitalization with no effort beyond that required for
streaming ASR, and are small.
However, these systems produce multiple versions of the same word with
different capitalizations, and even different word segmentations for
different case variants when wordpieces (WP) are predicted, which
leads to several problems for contextual ASR. In particular,
the size and time to build contextual models grows considerably
with the number of variants per word. In this paper, we propose
separating orthographic recognition from capitalization, so that the
ASR system first predicts a word, then predicts its capitalization in
the form of a capitalization mask. We show that the use of capitalization
masks achieves the same low error rate as traditional mixed
case ASR, while reducing the size and compilation time of contextual models.
Furthermore, we observe significant improvements in capitalization quality.
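As a rough illustration of the proposed decomposition (the abstract does not specify the exact mask encoding, so the per-character binary string below is an assumption for illustration only), a capitalization mask can be thought of as a pattern applied to a lowercase hypothesis after orthographic recognition:

```python
def apply_capitalization_mask(word: str, mask: str) -> str:
    """Apply a per-character capitalization mask to a lowercase word.

    mask is a string of '0'/'1' characters, one per letter of `word`;
    '1' means uppercase that character. This encoding is a hypothetical
    sketch, not the paper's actual representation.
    """
    return "".join(c.upper() if m == "1" else c for c, m in zip(word, mask))


# The ASR system first predicts the word, then its mask:
print(apply_capitalization_mask("iphone", "010000"))  # -> iPhone
print(apply_capitalization_mask("google", "100000"))  # -> Google
```

Separating the two predictions means a contextual model only needs one entry per orthographic word rather than one per case variant, which is consistent with the reported reduction in model size and compilation time.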
