
Improved Prosodic Clustering for Multispeaker and Speaker-Independent Phoneme-Level Prosody Control

  • Conference paper
Speech and Computer (SPECOM 2021)

Abstract

This paper presents a method for phoneme-level prosody control of F0 and duration in a multispeaker text-to-speech setup, based on prosodic clustering. An autoregressive attention-based model is used, incorporating multispeaker architecture modules in parallel to a prosody encoder. Several improvements over the basic single-speaker method are proposed that increase the prosodic control range and coverage. More specifically, we employ data augmentation, F0 normalization, balanced clustering for duration, and speaker-independent prosodic clustering. These modifications enable fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining each speaker's identity. The model is also fine-tuned on unseen speakers with limited amounts of data and is shown to maintain its prosody control capabilities, verifying that the speaker-independent prosodic clustering is effective. Experimental results confirm that the model maintains high output speech quality and that the proposed method allows efficient prosody control within each speaker's range despite the variability that a multispeaker setting introduces.
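Two of the ingredients named above, per-speaker F0 normalization and balanced clustering of phoneme durations, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are hypothetical, and balanced clustering is approximated here by equal-occupancy quantile binning, one simple way to obtain clusters of similar size.

```python
import numpy as np

def normalize_f0(f0, speaker_ids):
    """Z-normalize F0 per speaker, so cluster boundaries can be
    shared across speakers with different pitch ranges."""
    f0 = np.asarray(f0, dtype=float)
    speaker_ids = np.asarray(speaker_ids)
    out = np.empty_like(f0)
    for spk in np.unique(speaker_ids):
        mask = speaker_ids == spk
        mu, sigma = f0[mask].mean(), f0[mask].std()
        out[mask] = (f0[mask] - mu) / (sigma if sigma > 0 else 1.0)
    return out

def balanced_duration_clusters(durations, n_clusters):
    """Assign each phoneme duration to one of n_clusters
    equal-occupancy bins via quantile edges."""
    durations = np.asarray(durations, dtype=float)
    # Interior quantile edges split the data into n_clusters bins
    # that each hold roughly the same number of samples.
    edges = np.quantile(durations, np.linspace(0, 1, n_clusters + 1)[1:-1])
    return np.searchsorted(edges, durations, side="right")
```

After normalization, each speaker's F0 values have zero mean, so a single set of cluster tokens can index prosody for every speaker; the balanced bins avoid rarely used duration clusters.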

M. Christidou and A. Vioni contributed equally.



Author information

Correspondence to Myrsini Christidou.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Christidou, M. et al. (2021). Improved Prosodic Clustering for Multispeaker and Speaker-Independent Phoneme-Level Prosody Control. In: Karpov, A., Potapova, R. (eds) Speech and Computer. SPECOM 2021. Lecture Notes in Computer Science, vol 12997. Springer, Cham. https://doi.org/10.1007/978-3-030-87802-3_11


  • DOI: https://doi.org/10.1007/978-3-030-87802-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87801-6

  • Online ISBN: 978-3-030-87802-3

  • eBook Packages: Computer Science (R0)
