I see that it's possible to train both the variance and acoustic models with multiple languages and multiple speakers.
I have a few questions:
- In a multi-dictionary or multi-language setup, does training on one language affect performance in the others?
- From my experiments with multi-speaker models, speakers with less training data seem to be influenced by voices with more training data, especially when the generated pitch goes beyond their original training range. Is this expected behavior?
Also, is there any prior evaluation or research on multi-speaker and multi-language performance for these models?