Conference item

You said that?

Abstract:

We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained...
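
As an illustration only, below is a minimal sketch of an encoder-decoder with a joint face/audio embedding of the kind the abstract describes. The framework (PyTorch), every name (TalkingFaceGenerator, face_encoder, audio_encoder), all layer shapes, and the concatenation-based fusion are hypothetical stand-ins, not the authors' published architecture.

    # Minimal sketch of a joint face/audio embedding encoder-decoder.
    # All sizes, names, and the fusion strategy are illustrative assumptions,
    # not the architecture published in the paper.
    import torch
    import torch.nn as nn

    class TalkingFaceGenerator(nn.Module):
        def __init__(self, embed_dim=256):
            super().__init__()
            # Identity encoder: still image of the target face -> embedding.
            self.face_encoder = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, embed_dim),
            )
            # Audio encoder: a short window of speech features
            # (e.g. an MFCC patch) -> embedding.
            self.audio_encoder = nn.Sequential(
                nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, embed_dim),
            )
            # Decoder: joint embedding -> one synthesised 64x64 RGB frame.
            self.decoder = nn.Sequential(
                nn.Linear(2 * embed_dim, 128 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, face_image, audio_features):
            # Concatenate the two embeddings into a joint representation and
            # decode it into one frame; a full video would be produced by
            # sliding the audio window forward one frame at a time.
            z = torch.cat([self.face_encoder(face_image),
                           self.audio_encoder(audio_features)], dim=1)
            return self.decoder(z)

    # Example: one still face (3x64x64) and one audio window (1x20x12 patch).
    model = TalkingFaceGenerator()
    frame = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 20, 12))
    print(frame.shape)  # torch.Size([1, 3, 64, 64])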

Publication status:
Published
Peer review status:
Peer reviewed
Version:
Publisher's version

Authors

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science

Institution: University of Oxford
Division: MPLS Division
Department: Engineering Science
Oxford college: Brasenose College
Publisher:
The British Machine Vision Association and Society for Pattern Recognition
Publication date:
2017-09-04
Acceptance date:
2017-07-01
Pubs id:
pubs:742559
URN:
uri:79911294-b5c8-48f2-a716-7f66e9685b28
UUID:
uuid:79911294-b5c8-48f2-a716-7f66e9685b28
Local pid:
pubs:742559
