Where sound position influences sound object representations: A 7-T fMRI study
Related publications (33)
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7-T fMRI, we investigated the impact of sound category (vocalizations; non-vocalizations), emotional valence (p ...
Learning to play an instrument at an advanced age may help to counteract or slow down age-related cognitive decline. However, studies investigating the neural underpinnings of these effects are still scarce. One way to investigate the effects of brain plas ...
Excitatory projection neurons of the neocortex are thought to play important roles in perceptual and cognitive functions of the brain by directly connecting diverse cortical and subcortical areas. However, many aspects of the anatomical and func ...
Recently, flexible and soft bioelectronic interfaces have been proposed as a solution to improve existing neural interfaces, which currently present a mechanical mismatch with soft tissue. These are devices fabricated with thin polymeric or elastomeric bac ...
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functi ...
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vo ...
The speech signal conveys information on different time scales, from the short (20–40 ms), or segmental, time scale, associated with phonological and phonetic information, to the long (150–250 ms), or suprasegmental, time scale, associated with syllabic and prosodic info ...
Certain brain disorders, resulting from brainstem infarcts, traumatic brain injury, stroke, and amyotrophic lateral sclerosis, limit verbal communication even though the patient is fully aware. People who cannot communicate due to neurological disorders wou ...
Over the last decade, technological advances in the field of functional magnetic resonance imaging (fMRI) have made it possible to obtain localized measures of brain activity in real-time. This allows for applications such as online quality control of the ...