
Conference item

You said that?

Abstract:
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on tens of hours of unlabelled videos. We also show results of re-dubbing videos using speech from a different person.
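
The joint face-audio embedding described in the abstract can be illustrated with a minimal sketch. Everything below (module names, layer sizes, input shapes) is an illustrative assumption, not the authors' published architecture: one CNN encodes a still image of the target face, a second CNN encodes a short window of audio features, the two embeddings are concatenated, and a transposed-convolution decoder produces one synthesised frame; sliding the audio window over the speech segment would yield the output video frame by frame.

```python
# Minimal sketch (PyTorch) of an encoder-decoder CNN with a joint
# face + audio embedding. All shapes and module names are assumptions
# for illustration only, not the authors' architecture.
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Encodes a 112x112 RGB still image of the target face."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 56x56
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 28x28
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 14x14
            nn.Conv2d(256, 256, 4, stride=2, padding=1), nn.ReLU(), # 7x7
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, dim),
        )

    def forward(self, img):
        return self.net(img)

class AudioEncoder(nn.Module):
    """Encodes a short window of audio features (e.g. MFCCs, 12 x 35)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, feats):
        return self.net(feats)

class FrameDecoder(nn.Module):
    """Decodes the joint embedding into a 112x112 synthesised face frame."""
    def __init__(self, dim=512):
        super().__init__()
        self.fc = nn.Linear(dim, 256 * 7 * 7)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1), nn.ReLU(),  # 14x14
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 28x28
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 56x56
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 112x112
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 7, 7)
        return self.net(x)

class TalkingFaceModel(nn.Module):
    """Concatenates face and audio embeddings, then decodes a frame."""
    def __init__(self):
        super().__init__()
        self.face_enc = FaceEncoder()
        self.audio_enc = AudioEncoder()
        self.decoder = FrameDecoder()

    def forward(self, face_img, audio_feats):
        z = torch.cat([self.face_enc(face_img), self.audio_enc(audio_feats)], dim=1)
        return self.decoder(z)

if __name__ == "__main__":
    model = TalkingFaceModel()
    face = torch.rand(1, 3, 112, 112)  # one still image of the target face
    audio = torch.rand(1, 1, 12, 35)   # one short window of audio features
    frame = model(face, audio)         # -> (1, 3, 112, 112) synthesised frame
    print(frame.shape)
```

In practice such a model would be trained frame by frame against ground-truth video, which is consistent with the abstract's note that only tens of hours of unlabelled talking-face video are needed.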
Publication status:
Published
Peer review status:
Peer reviewed

Authors


Institution:
University of Oxford
Division:
MPLS Division
Department:
Engineering Science
Role:
Author

Institution:
University of Oxford
Division:
MPLS Division
Department:
Engineering Science
Role:
Author

Institution:
University of Oxford
Division:
MPLS Division
Department:
Engineering Science
Oxford college:
Brasenose College
Role:
Author


Publisher:
British Machine Vision Association and Society for Pattern Recognition
Host title:
28th British Machine Vision Conference, 2017, Imperial College London, 4th-7th September 2017
Journal:
British Machine Vision Conference, 2017
Publication date:
2017-09-04
Acceptance date:
2017-07-01


Pubs id:
pubs:742559
UUID:
uuid:79911294-b5c8-48f2-a716-7f66e9685b28
Local pid:
pubs:742559
Source identifiers:
742559
Deposit date:
2017-11-03
