Journal article
Deep audio-visual speech recognition
- Abstract:
- The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem -- unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release two new datasets for audio-visual speech recognition: LRS2-BBC, consisting of thousands of natural sentences from British television; and LRS3-TED, consisting of hundreds of hours of TED and TEDx talks obtained from YouTube. The models that we train surpass the performance of all previous work on lip reading benchmark datasets by a significant margin.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Accepted manuscript (PDF, 2.3 MB)
- Publisher copy:
- 10.1109/TPAMI.2018.2889052
Authors
- Publisher:
- IEEE
- Journal:
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- Volume:
- 44
- Issue:
- 12
- Pages:
- 8717-8727
- Publication date:
- 2018-12-21
- Acceptance date:
- 2018-12-21
- DOI:
- 10.1109/TPAMI.2018.2889052
- EISSN:
- 1939-3539
- ISSN:
- 0162-8828
- Language:
- English
- Keywords:
- Pubs id:
- pubs:963662
- UUID:
- uuid:430e1ab8-42f6-418d-b2f0-012faaecffaa
- Local pid:
- pubs:963662
- Source identifiers:
- 963662
- Deposit date:
- 2019-01-18
Terms of use
- Copyright holder:
- IEEE
- Copyright date:
- 2018
- Rights statement:
- © 2018 IEEE
- Notes:
- This is the accepted manuscript version of the article. The final version is available online from IEEE at: https://doi.org/10.1109/TPAMI.2018.2889052