Ask any question about EPFL courses, lectures, exercises, research, news, etc., or try the example questions below.
DISCLAIMER: The Graph chatbot is not designed to give explicit or definitive answers to your questions. Instead, it turns your questions into API requests that are distributed to the various IT services officially administered by EPFL. Its sole purpose is to collect and recommend relevant references to content that you can explore to help answer your questions.
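To illustrate the fan-out pattern described above, here is a minimal sketch in Python; the endpoint URLs, query parameters, and response fields are assumptions for illustration only, not the chatbot's actual API contracts.

```python
# Minimal sketch: fan a free-text question out to several (hypothetical) EPFL
# service endpoints and aggregate the returned references.
import requests

# Hypothetical endpoint names and URLs -- assumptions, not documented EPFL APIs.
SERVICE_ENDPOINTS = {
    "courses": "https://search.epfl.ch/api/courses",
    "publications": "https://search.epfl.ch/api/publications",
    "news": "https://search.epfl.ch/api/news",
}

def collect_references(question: str, limit: int = 5) -> list[dict]:
    """Turn a question into per-service API requests and merge the results."""
    references = []
    for service, url in SERVICE_ENDPOINTS.items():
        try:
            resp = requests.get(url, params={"q": question, "limit": limit}, timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # skip services that are unavailable
        # Assumed response shape: {"results": [{"title": ..., "url": ...}, ...]}
        for item in resp.json().get("results", []):
            references.append({
                "service": service,
                "title": item.get("title"),
                "link": item.get("url"),
            })
    return references

if __name__ == "__main__":
    for ref in collect_references("Who teaches signal processing this semester?"):
        print(f"[{ref['service']}] {ref['title']} -> {ref['link']}")
```

The key point is that the chatbot only aggregates and recommends references returned by these services; it does not generate answers itself.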
This paper reports a study on short-time subharmonic pitch breaks in vocal fold vibration, which are found to be a common feature of the human voice in spoken language. The observed pitch breaks correspond to a change in periodicity of the electrolaryngogr ...
Change in voice quality (VQ) is one of the first precursors of Parkinson's disease (PD). Specifically, impacted phonation and articulation cause the patient to have a breathy, husky semi-whisper and hoarse voice. A goal of this paper is to characterize a V ...
The existence of highly developed and fully automated telephone networks in industrialized countries and the pervasiveness of speech communication technology have resulted in the ever-increasing use of the human voice as an instrument in the commission of ...
As public speech resources become increasingly available, there is growing interest in preserving the privacy of speakers through methods that anonymize speaker information in speech while preserving the spoken linguistic content. In this pap ...
Speech recognition applications embedded on a PDA are already available on the market. The usual hardware for this kind of system is a single microphone mounted on the PDA, giving good results in quiet environments. However, the recognition rate falls ...
Emotional sounds are processed within a large cortico-subcortical network, of which the auditory cortex, the voice area, and the amygdala are the core regions. Using 7T fMRI, we have compared the effect of emotional valence (positive, neutral, and negative ...
[...] Primates are social animals whose communication is based on their conspecifics' vocalizations and facial expressions. Although a lot of work to date has studied the unimodal representation of vocal or facial information, little is known about the way ...
Mobile service robots are going to play an increasing role in human society. Voice-enabled interaction with service robots becomes very important if such robots are to be deployed in real-world environments and accepted by the vast majority of pot ...
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7T fMRI we have investigated the impact of the categories of sound (vocalizations; non-vocalizations), emotional valence (p ...
This paper describes a multimodal approach for speaker verification. The system consists of two classifiers, one using visual features and the other using acoustic features. A lip tracker is used to extract visual information from the speaking face which p ...