A remote microphone (RM) system can be used in combination with wearable binaural communication devices, such as hearing aids (HAs), to improve speech intelligibility. Typically, the speaker is equipped with a body-worn microphone that picks up their voice at a high signal-to-noise ratio (SNR). However, if this signal is played diotically through the receivers (i.e. the same signal in both ears), the cues that enable the auditory system to locate the sound source are bypassed. This can impair the listener's sense of immersion in the environment and their ability to follow a conversation, especially for hearing-impaired (HI) listeners. Auditory sound source localization in humans relies on the interpretation of various cues related to the physical propagation of sound from the source to the listener's eardrums, and is mainly enabled by the binaural structure of the auditory system.

Previous work successfully developed a method providing a simplified spatialization of the RM signal, which enabled normal-hearing (NH) and HI listeners to locate sound sources in azimuth while preserving speech intelligibility. However, this method yielded common spatial-hearing perceptual artefacts, such as in-head localization and front/back confusion.

This thesis is devoted to the investigation and perceptual evaluation of audio playback solutions, compatible with wearable devices, aimed at enhancing the realism of this spatialization. In particular, the goal is to improve the externalization of a sound source, i.e. to ensure it is perceived outside the head, and possibly at a distance corresponding to its physical location in the environment. Early reflections (ERs) in a room play a key role in this respect. Hence, several signal processing approaches to include ERs in the binaural synthesis were investigated. Subsequently, a subjective listening study was conducted in which NH listeners and aided HI listeners evaluated the perceived auditory distance under various binaural rendering strategies. The results show that superimposing ERs with the considered methods significantly improves the perception of auditory distance for both NH and HI listeners. The study also provides insights into auditory distance perception in aided HI listeners with severe-to-profound hearing loss. A follow-up study with NH listeners showed that a complete implementation of these strategies might improve auditory distance perception while preserving spatial awareness. In these studies, the subjects' heads were fixed.

Previous studies have shown that head movements, coupled with head-tracking, can contribute to the auditory externalization of a virtual sound source. The next part of this thesis reports the development of a head-tracking algorithm compatible with wearable devices, relying solely on two 3-axis accelerometers. While the algorithm shows promising results, its limitations still yield mismatches in the estimation. Consequently, a subjective listening test was conducted with NH listeners to study the effect of head-tracking artefacts on the perceived externalization and on azimuth localization performance for a sound source. The results suggest that auditory externalization is affected neither by a large latency nor by an amplitude mismatch. Latency did not degrade localization performance either, unlike the amplitude mismatch.
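The abstract does not detail the rendering strategies themselves; the following is a minimal illustrative sketch of how a few early reflections could be superimposed on a crudely spatialized (ITD/ILD-based) RM signal. The sampling rate, head radius, ILD model, and reflection delays, gains, and azimuths are assumptions chosen for illustration, not the methods evaluated in the thesis.

```python
import numpy as np

fs = 16000  # assumed sampling rate (Hz)

def apply_itd_ild(x, azimuth_deg, fs, head_radius=0.0875, c=343.0):
    """Crude azimuth spatialization of a mono signal using a Woodworth-style
    ITD and a broadband ILD (positive azimuth = source to the right).
    Illustrative only, not the thesis method."""
    az = np.deg2rad(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))      # interaural time difference (s)
    ild_db = 6.0 * abs(np.sin(az))                 # assumed broadband ILD (dB)
    delay = int(round(abs(itd) * fs))
    near = x * 10 ** (ild_db / 40.0)               # ear facing the source
    far = np.concatenate([np.zeros(delay), x])[:len(x)] * 10 ** (-ild_db / 40.0)
    left, right = (near, far) if azimuth_deg <= 0 else (far, near)
    return np.stack([left, right])                 # binaural signal, shape (2, N)

def add_early_reflections(direct, x, fs, reflections):
    """Superimpose early reflections, each given as (delay_s, gain, azimuth_deg),
    onto the binaural direct path."""
    out = direct.copy()
    for delay_s, gain, az in reflections:
        er = apply_itd_ild(x, az, fs) * gain
        n = int(round(delay_s * fs))
        out[:, n:] += er[:, :out.shape[1] - n]
    return out

# Example: direct sound at 30 degrees to the right, three lateral ERs within 20 ms
x = np.random.randn(fs)                            # stand-in for the RM speech signal
direct = apply_itd_ild(x, 30, fs)
rendered = add_early_reflections(direct, x, fs,
                                 [(0.006, 0.5, -60), (0.011, 0.4, 75), (0.019, 0.3, -40)])
```

The key idea this sketch conveys is simply that a handful of delayed, attenuated, laterally placed copies of the RM signal are added to the direct binaural path, which is what provides the distance-related cues discussed above.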
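The thesis head-tracking algorithm itself is not described in the abstract. As one plausible gyroscope-free approach, the sketch below estimates relative head yaw by differential accelerometry from two ear-mounted 3-axis accelerometers: gravity and common linear acceleration cancel in the difference of the two readings, leaving the tangential and centripetal terms of the head rotation. The axis convention, the inter-ear distance d, and the crude fusion of the two rate estimates are assumptions for illustration only.

```python
import numpy as np

def estimate_yaw(acc_left, acc_right, fs, d=0.18):
    """Relative yaw from two ear-mounted 3-axis accelerometers (differential
    accelerometry). Inputs: arrays of shape (N, 3) expressed in a common head
    frame (x: front, y: left, z: up), in m/s^2. Illustrative sketch only,
    not the thesis algorithm."""
    diff = acc_left - acc_right                      # rigid-body differential term
    alpha = -diff[:, 0] / d                          # yaw angular acceleration (rad/s^2)
    omega = np.cumsum(alpha) / fs                    # one integration -> angular rate
    omega_mag = np.sqrt(np.maximum(0.0, -diff[:, 1] / d))       # rate magnitude from centripetal term
    omega = np.sign(omega) * 0.5 * (np.abs(omega) + omega_mag)  # crude fusion (assumption)
    yaw_deg = np.degrees(np.cumsum(omega) / fs)      # second integration -> yaw angle
    return yaw_deg                                   # drifts without further correction
```

In such a scheme, accelerometer noise and the double integration inevitably cause drift and scaling errors in the estimated angle, which illustrates the kind of latency and amplitude mismatch artefacts whose perceptual effects are studied in the listening test described above.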
Hervé Lissek, Gilles André Courtois, Vincent Pierre Olivier Grimaldi