CSTR VCTK Corpus 

English Multi-speaker Corpus for CSTR Voice Cloning Toolkit 

Overview
This CSTR VCTK Corpus includes speech data uttered by 109 English speakers with various accents. Each speaker reads out about 400 sentences, selected from a newspaper, the rainbow passage, and an elicitation paragraph used for the speech accent archive.

The newspaper texts were taken from Herald Glasgow, with permission from Herald & Times Group. Each speaker has a different set of newspaper texts, selected by a greedy algorithm that increases contextual and phonetic coverage. The rainbow passage and elicitation paragraph are the same for all speakers. The rainbow passage can be found at the International Dialects of English Archive (http://web.ku.edu/~idea/readings/rainbow.htm). The elicitation paragraph is identical to the one used for the speech accent archive (http://accent.gmu.edu). Details of the speech accent archive can be found at http://www.ualberta.ca/~aacl2009/PDFs/WeinbergerKunath2009AACL.pdf

All speech data was recorded using an identical recording setup: an omni-directional microphone (DPA 4035) at a 96 kHz sampling frequency and 24 bits, in a hemi-anechoic chamber at the University of Edinburgh. All recordings were converted to 16 bits, down-sampled to 48 kHz using STPK, and manually end-pointed.

This corpus is aimed at HMM-based text-to-speech synthesis systems, especially speaker-adaptive HMM-based speech synthesis that uses average voice models trained on multiple speakers together with speaker adaptation technologies.
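The 96 kHz/24-bit to 48 kHz/16-bit conversion described above can be sketched as follows. This is a hypothetical illustration only: the released corpus was resampled with STPK, whereas this sketch uses naive factor-2 decimation (no anti-alias filter) on a synthetic tone, and the function name `downsample_to_16bit` is our own.

```python
import numpy as np

SRC_RATE, DST_RATE = 96_000, 48_000  # corpus recording rate -> release rate

def downsample_to_16bit(samples: np.ndarray, src_rate: int = SRC_RATE,
                        dst_rate: int = DST_RATE) -> np.ndarray:
    """Decimate float samples in [-1, 1] and quantise to 16-bit PCM.

    Naive decimation for illustration; the actual corpus used STPK,
    which applies proper anti-alias filtering before down-sampling.
    """
    factor = src_rate // dst_rate            # 2 for 96 kHz -> 48 kHz
    decimated = samples[::factor]            # keep every 2nd sample
    return np.clip(decimated * 32767, -32768, 32767).astype(np.int16)

# One second of a 440 Hz tone at the original 96 kHz rate
t = np.arange(SRC_RATE) / SRC_RATE
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

pcm = downsample_to_16bit(tone)
print(pcm.dtype, len(pcm))  # prints: int16 48000  (1 s of audio at 48 kHz)
```

In practice one would read the released 48 kHz/16-bit wav files directly rather than reproduce this conversion.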

Download
http://www.udialogue.org/download/VCTK-Corpus.tar.gz (10.4GB)

COPYING
This corpus is licensed under Open Data Commons Attribution License (ODC-By) v1.0.
http://opendatacommons.org/licenses/by/1.0/
http://opendatacommons.org/licenses/by/summary/

ACKNOWLEDGEMENTS
The CSTR VCTK Corpus was constructed by:
Christophe Veaux (University of Edinburgh)
Junichi Yamagishi (University of Edinburgh)
Kirsten MacDonald

The research leading to these results was partly funded by EPSRC grants EP/I031022/1 (NST) and EP/J002526/1 (CAF), by the RSE-NSFC grant (61111130120), and by JST CREST (uDialogue).

Original source
http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html