Audio-Visual Speech Recognition is Worth 32x32x8 Voxels

Dmitriy (Dima) Serdyuk
ASRU (2021)

Abstract

Audio-visual automatic speech recognition (AV-ASR) introduces the video modality into the speech recognition
process, often relying on information conveyed by the motion
of the speaker's mouth.
The use of the visual signal requires extracting visual features,
which are then combined with the acoustic features to build an AV-ASR
system~\cite{Makino2019-zd}. This is traditionally done with some form of 3D
convolutional network (e.g., VGG), as widely used in the computer vision community. Recently,
image transformers~\cite{Dosovitskiy2020-nh} have been introduced to
extract visual features useful for image classification tasks.
In this work, we propose to replace the 3D convolutional visual frontend
typically used for AV-ASR and lip-reading tasks with a video transformer
frontend. We train our systems on a large-scale dataset composed of
YouTube videos and evaluate performance on the publicly available
LRS3-TED set, as well as on a large set of YouTube videos. On a
lip-reading task, the transformer-based frontend shows superior
performance compared to a strong convolutional baseline. On an AV-ASR
task, the transformer frontend performs as well as a VGG frontend for
clean audio, but outperforms the VGG frontend when the audio is
corrupted by noise.
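
By analogy with the "An Image is Worth 16x16 Words" title, the 32x32x8 voxels in the title presumably refer to the spatio-temporal patch size consumed by the video transformer frontend. The sketch below is a minimal illustration, not the authors' code: it assumes a 32-frame, 128x128 grayscale mouth-region crop and shows how such a clip could be split into non-overlapping 32x32x8 voxel patches and flattened into tokens for a transformer; all shapes are illustrative assumptions.

import numpy as np

def extract_voxel_patches(video, patch_hw=32, patch_t=8):
    """Split a video of shape (T, H, W, C) into non-overlapping
    patch_t x patch_hw x patch_hw voxel patches, flattened into tokens."""
    t, h, w, c = video.shape
    assert t % patch_t == 0 and h % patch_hw == 0 and w % patch_hw == 0
    # Reshape into a grid of patches, then flatten each patch into a vector.
    video = video.reshape(
        t // patch_t, patch_t,
        h // patch_hw, patch_hw,
        w // patch_hw, patch_hw, c)
    video = video.transpose(0, 2, 4, 1, 3, 5, 6)  # (nT, nH, nW, pt, ph, pw, c)
    tokens = video.reshape(-1, patch_t * patch_hw * patch_hw * c)
    return tokens  # (num_tokens, voxels_per_token)

# Example: a 32-frame, 128x128 grayscale crop yields
# (32/8) * (128/32) * (128/32) = 64 tokens of 8*32*32*1 = 8192 voxels each.
clip = np.random.rand(32, 128, 128, 1).astype(np.float32)
tokens = extract_voxel_patches(clip)
print(tokens.shape)  # (64, 8192)

In a transformer frontend, each flattened patch would then be linearly projected to the model dimension and combined with positional information before the self-attention layers; the projection size and positional scheme here are not specified by the abstract.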
