Conference item
VoxCeleb: a large-scale speaker identification dataset
- Abstract:
- Most existing datasets for speaker identification contain samples obtained under quite constrained conditions, and are usually hand-annotated, hence limited in size. The goal of this paper is to generate a large scale text-independent speaker identification dataset collected ‘in the wild’. We make two contributions. First, we propose a fully automated pipeline based on computer vision techniques to create the dataset from open-source media. Our pipeline involves obtaining videos from YouTube; performing active speaker verification using a two-stream synchronization Convolutional Neural Network (CNN), and confirming the identity of the speaker using CNN based facial recognition. We use this pipeline to curate VoxCeleb which contains hundreds of thousands of ‘real world’ utterances for over 1,000 celebrities. Our second contribution is to apply and compare various state of the art speaker identification techniques on our dataset to establish baseline performance. We show that a CNN based architecture obtains the best performance for both identification and verification.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Accepted manuscript (PDF, 1.4MB)
- Publisher copy:
- 10.21437/Interspeech.2017-950
Authors
+ Engineering and Physical Sciences Research Council
- Grant:
- Seebibyte EP/M013774/1
- Publisher:
- ISCA
- Host title:
- Proceedings Interspeech 2017
- Journal:
- Interspeech 2017
- Pages:
- 2616-2620
- Publication date:
- 2017-01-01
- Acceptance date:
- 2017-05-22
- DOI:
- 10.21437/Interspeech.2017-950
- ISSN:
- 1990-9772
- Keywords:
- Pubs id:
- pubs:744138
- UUID:
- uuid:3dc3662e-0043-402b-8c37-6952ac9a9523
- Local pid:
- pubs:744138
- Source identifiers:
- 744138
- Deposit date:
- 2017-11-09
Terms of use
- Copyright holder:
- ISCA
- Copyright date:
- 2017
- Notes:
- Copyright © 2017 ISCA. This is the accepted manuscript version of the paper. The final version is available online from ISCA at: https://doi.org/10.21437/Interspeech.2017-950