Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video

Dmitriy (Dima) Serdyuk
Interspeech (2022) (to appear)

Abstract

Audio-visual automatic speech recognition (AV-ASR) extends speech recognition by introducing the video modality.
In particular, the information contained in the motion of the speaker's mouth is used to augment the audio features.
The video modality is traditionally processed with a 3D convolutional neural network (e.g., a 3D version of VGG).
Recently, image transformer networks~\cite{Dosovitskiy2020-nh} demonstrated the ability to extract rich visual features for the image classification task.
In this work, we propose to replace the 3D convolution with a video transformer as the video feature extractor.
We train our baselines and the proposed model on a large-scale corpus of YouTube videos.
We then evaluate performance on a labeled subset of YouTube as well as on the public LRS3-TED corpus.
Our best video-only model achieves 34.9\% WER on YTDEV18 and 19.3\% WER on LRS3-TED, which are 10\% and 9\% relative improvements over the convolutional baseline.
After fine-tuning our model, we achieve state-of-the-art audio-visual recognition performance on LRS3-TED (1.6\% WER).
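
Below is a minimal sketch of what a transformer-based video front-end can look like, in the spirit of the approach described above. It is not the exact architecture from the paper: the patch size, embedding width, number of layers, per-frame (rather than spatio-temporal) attention, and mean pooling are illustrative assumptions chosen to keep the example self-contained.

# Sketch of a transformer-based video front-end (illustrative hyperparameters,
# not the configuration used in the paper).
import torch
import torch.nn as nn


class VideoTransformerFrontEnd(nn.Module):
    """Embeds each frame as a sequence of patches and encodes it with a
    transformer, producing one visual feature vector per video frame."""

    def __init__(self, patch_size=16, in_channels=3, embed_dim=256,
                 num_layers=6, num_heads=4, image_size=128):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Linear patch embedding implemented as a strided 2D convolution.
        self.patch_embed = nn.Conv2d(in_channels, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learned positional embedding for the patch sequence.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)
        patches = self.patch_embed(frames)            # (b*t, d, h', w')
        patches = patches.flatten(2).transpose(1, 2)  # (b*t, num_patches, d)
        encoded = self.encoder(patches + self.pos_embed)
        # Mean-pool patch tokens into a single feature per frame; the resulting
        # per-frame features would be fused with audio features downstream.
        frame_features = encoded.mean(dim=1)          # (b*t, d)
        return frame_features.reshape(b, t, -1)       # (b, time, d)


# Usage example: 8 frames of 128x128 RGB mouth-region crops, batch of 2.
features = VideoTransformerFrontEnd()(torch.randn(2, 8, 3, 128, 128))
print(features.shape)  # torch.Size([2, 8, 256])

In this sketch the transformer plays the role the 3D convolution plays in the baseline: it maps raw frames to a sequence of per-frame visual features that an AV-ASR model can combine with audio features.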
