The TensorFlow version of multi-speaker TTS training with feedback constraint.
Updated Oct 12, 2020 · Python
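In the feedback-constraint setup this description refers to, a frozen speaker-verification encoder is usually used to score the synthesized speech during training, and the resulting embedding-similarity term is added to the reconstruction loss. Below is a minimal TensorFlow sketch of that idea, not the repo's actual code; `acoustic_model`, `speaker_encoder`, and `FEEDBACK_WEIGHT` are hypothetical names.

```python
# Minimal sketch of a feedback-constraint loss (hypothetical, not the repo's code).
import tensorflow as tf

FEEDBACK_WEIGHT = 0.1  # assumed weighting of the feedback term


def feedback_constraint_loss(acoustic_model, speaker_encoder,
                             text_inputs, target_mel, reference_mel):
    """Mel reconstruction loss plus a speaker-embedding feedback term."""
    # Acoustic model predicts a mel-spectrogram conditioned on the reference speaker.
    pred_mel = acoustic_model([text_inputs, reference_mel], training=True)

    # Standard mel-spectrogram reconstruction loss (L1).
    recon_loss = tf.reduce_mean(tf.abs(pred_mel - target_mel))

    # A frozen speaker-verification encoder embeds both the reference
    # and the synthesized mel-spectrogram.
    ref_embed = tf.math.l2_normalize(speaker_encoder(reference_mel, training=False), axis=-1)
    syn_embed = tf.math.l2_normalize(speaker_encoder(pred_mel, training=False), axis=-1)

    # Feedback constraint: pull the synthesized speech toward the
    # reference speaker's embedding (1 - cosine similarity).
    feedback_loss = tf.reduce_mean(1.0 - tf.reduce_sum(ref_embed * syn_embed, axis=-1))

    return recon_loss + FEEDBACK_WEIGHT * feedback_loss
```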
In this repo, I developed a step-by-step pipeline for a standard multi-speaker text-to-speech system 😄 In general, I used PortaSpeech as the acoustic model and iSTFTNet as the vocoder...
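As a rough illustration of that two-stage layout, the sketch below shows an acoustic model producing a mel-spectrogram from phoneme inputs and a speaker embedding, followed by a vocoder producing the waveform. The `infer` interfaces are assumptions, not the repo's actual API.

```python
# Hypothetical two-stage TTS inference: acoustic model -> vocoder.
import numpy as np


def synthesize(phonemes, speaker_embedding, acoustic_model, vocoder,
               sample_rate=22050):
    """Phoneme IDs -> mel-spectrogram -> waveform for one speaker."""
    # Stage 1: acoustic model (e.g. a PortaSpeech-style network)
    # predicts a mel-spectrogram conditioned on the speaker embedding.
    mel = acoustic_model.infer(phonemes, speaker_embedding)
    # Stage 2: vocoder (e.g. an iSTFTNet-style network) converts the
    # mel-spectrogram into an audio waveform.
    waveform = vocoder.infer(mel)
    return np.asarray(waveform, dtype=np.float32), sample_rate
```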
This is a TTS model based on VITS that can control the output speech emotion through natural-language prompts and the speaker identity through reference audio.
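A heavily hedged sketch of what that description implies is shown below: a VITS-style model conditioned on an emotion embedding derived from a natural-language prompt and a speaker embedding derived from reference audio. The encoder objects and their `encode`/`infer` calls are hypothetical, not this repo's API.

```python
# Hypothetical inference for prompt-controlled emotion and reference-based speaker identity.
import numpy as np


def emotional_tts(text, emotion_prompt, reference_audio,
                  prompt_encoder, speaker_encoder, vits_model):
    """Synthesize `text`, steering emotion by a prompt and voice by reference audio."""
    emotion_embedding = prompt_encoder.encode(emotion_prompt)    # e.g. "speak sadly and slowly"
    speaker_embedding = speaker_encoder.encode(reference_audio)  # timbre of the reference speaker
    waveform = vits_model.infer(text,
                                emotion=emotion_embedding,
                                speaker=speaker_embedding)
    return np.asarray(waveform, dtype=np.float32)
```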