In the EMIME project we studied unsupervised cross-lingual speaker adaptation. We employed an HMM-based statistical framework for both speech recognition and synthesis, which provides transformation mechanisms to adapt the synthesised voice in TTS (text-to-speech) using the voice recognised in ASR (automatic speech recognition). An important application of this research is personalised speech-to-speech translation, which uses the voice of the speaker in the input language to utter the translated sentences in the output language. In mobile environments this enhances users' interaction across language barriers by making the output speech sound more like the original speaker, even if he or she cannot speak the output language.
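The shared HMM framework matters because speaker adaptation in such systems is commonly realised as an affine (MLLR-style) transform of the acoustic model's Gaussian mean vectors, so a transform estimated on the recognition side can in principle be reused to adapt the synthesis voice. The sketch below illustrates only this generic transform idea, not the EMIME code; the matrices and function name are hypothetical stand-ins.

```python
import numpy as np

# Illustrative sketch (not the EMIME implementation): MLLR-style speaker
# adaptation applies an affine transform to each Gaussian mean vector of
# an HMM acoustic model: mu' = A @ mu + b. Because ASR and TTS share the
# same HMM framework, a transform estimated during recognition can be
# applied to the synthesis model to make the TTS voice resemble the speaker.

def adapt_means(means, A, b):
    """Apply an affine transform to each row (one Gaussian mean) of `means`."""
    return means @ A.T + b

# Toy example: three 2-dimensional mean vectors and a mild transform.
means = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
A = np.array([[1.1, 0.0],
              [0.0, 0.9]])   # per-dimension rescaling (hypothetical values)
b = np.array([0.2, -0.1])    # bias shift (hypothetical values)

adapted = adapt_means(means, A, b)
```

In a real system A and b would be estimated from the speaker's recognised speech via maximum-likelihood regression rather than chosen by hand.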
Subrahmanya Pavankumar Dubagunta
Petr Motlicek, Hynek Hermansky, Sriram Ganapathy, Amrutha Prasad