
# Dalia Salem Hassan Fahmy El Badawy

This person is no longer with EPFL

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related research domains (1)

Related publications (7)

Non-negative matrix factorization

Non-negative matrix factorization (NMF or NNMF), also called non-negative matrix approximation, is a group of algorithms in multivariate analysis and linear algebra in which a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Moreover, in applications such as the processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered.
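The factorization can be sketched with the classic multiplicative-update rules (a minimal NumPy implementation of the Lee–Seung Frobenius-norm updates; the rank, iteration count, and test matrix below are illustrative choices, not part of any specific publication):

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ~= W @ H using Lee-Seung
    multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; ratios keep entries non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W likewise
    return W, H

# Example: a rank-2 non-negative matrix is recovered to high accuracy.
V = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))  # small residual
```

The multiplicative form of the updates is what preserves non-negativity: starting from positive W and H, every update multiplies by a ratio of non-negative quantities.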

Dalia Salem Hassan Fahmy El Badawy

Inspired by the human ability to localize sounds, even with only one ear, as well as to recognize objects using active echolocation, we investigate the role of sound scattering and prior knowledge in regularizing ill-posed inverse problems in acoustics. In particular, we study direction of arrival estimation with one microphone, acoustic imaging with a small number of microphones, and microphone array localization. Not only are these problems ill-posed, they are also non-convex in the variables of interest when formulated as optimization problems. To restore well-posedness, we thus use sound scattering, which we construe as a physical form of regularization. We additionally use standard regularization in the form of appropriate priors on the variables. The non-convexity is then handled with tools such as linearization or semidefinite relaxation.
We begin with direction of arrival estimation. While conventional approaches require at least two microphones, we show how to estimate the direction of one or more sound sources using only one. This is made possible thanks to regularization by sound scattering which we achieve by compact structures made from LEGO that scatter the sound in a direction-dependent manner. We also impose a prior on the source spectra where we assume they can be sparsely represented in a learned dictionary. Using algorithms based on non-negative matrix factorization, we show how to use the LEGO devices and a speaker-independent dictionary to successfully localize one or two simultaneous speakers.
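As a toy illustration of why direction-dependent scattering makes single-microphone localization possible, consider matching an observed magnitude spectrum against per-direction spectral signatures. Everything below — the signature matrix `T`, the dimensions, and the white-source assumption — is invented for illustration; the actual method uses NMF with learned speech dictionaries rather than flat spectra:

```python
import numpy as np

rng = np.random.default_rng(1)
F, D = 64, 12                       # frequency bins, candidate directions
T = rng.random((F, D)) + 0.1        # hypothetical direction-dependent magnitude responses

d_true = 7
source = np.ones(F)                 # white source: flat magnitude spectrum
y = T[:, d_true] * source           # observed magnitude spectrum at the single microphone

# Each direction leaves a distinct spectral signature; for a white source,
# the best-matching normalized signature reveals the direction of arrival.
scores = (T / np.linalg.norm(T, axis=0)).T @ (y / np.linalg.norm(y))
d_hat = int(np.argmax(scores))
print(d_hat)  # 7
```

Without the scatterer, `T` would have identical columns and all directions would score equally — the ill-posedness the thesis removes by physical regularization.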
Next, we study acoustic imaging of 2D shapes using a small number of microphones. Unlike in echolocation where the source is known, we show how to image an unknown object using an unknown source. In this case, we enforce a prior on the object using a total variation norm penalty but no priors on the source. We also show how to use microphones embedded in the ears of a dummy head to benefit from the diversity encoded in the head-related transfer function. We then propose an algorithm to jointly reconstruct the shape of the object and the sound source spectrum. We demonstrate the effectiveness of our approach using numerical and real experiments with speech and noise sources.
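Total variation penalties favor piecewise-constant reconstructions, which is why they suit imaging of shapes. A minimal 1D sketch (the thesis applies a TV penalty to 2D object shapes; here we only denoise a piecewise-constant signal by gradient descent on a smoothed TV objective, and the step size, smoothing parameter, and signal are all illustrative):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.005, n_iter=5000, eps=1e-3):
    """Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|,
    with |d| smoothed to sqrt(d^2 + eps) so plain gradient descent applies."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)       # derivative of the smoothed absolute value
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(20), 2 * np.ones(20), np.zeros(20)])
y = x_true + 0.3 * rng.standard_normal(x_true.size)
x_hat = tv_denoise_1d(y)

# The TV-regularized estimate is closer to the piecewise-constant truth than the noisy data.
print(np.linalg.norm(y - x_true), np.linalg.norm(x_hat - x_true))
```

The penalty suppresses small oscillations (noise) while preserving the two large jumps, which is exactly the behavior wanted for reconstructing object boundaries.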
Finally, the need to know the microphone positions in acoustic imaging and a number of other applications led us to study microphone localization. We assume that the positions of the loudspeakers are also unknown and that the devices are not synchronized. In this case, the times of arrival from the loudspeakers to the microphones are shifted by unknown source emission times and unknown sensor capture times. We thus propose an objective that is timing-invariant, allowing us to localize the setup without first having to estimate the unknown timing information. We also propose an approach to handle missing data and show how to include side information, such as knowledge of some of the distances between the devices. We derive a semidefinite relaxation of the objective which provides a good initialization for subsequent refinement using the Levenberg-Marquardt algorithm. Using numerical and real experiments, we show that we can localize unsynchronized devices even in near-minimal configurations.
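The timing-invariance idea can be illustrated in a few lines. If each measured time of arrival is the true propagation time plus an unknown per-source emission offset and an unknown per-microphone capture offset, then double-centering the TOA matrix annihilates both rank-one offset terms. This only illustrates the invariance principle — the actual objective and relaxation in the thesis are constructed differently, and all positions and offsets below are random:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 6
mics = rng.random((m, 2))
srcs = rng.random((n, 2))
c = 343.0                                       # speed of sound, m/s
D = np.linalg.norm(mics[:, None] - srcs[None, :], axis=-1)

capture = rng.random(m)                         # unknown per-microphone clock offsets
emit = rng.random(n)                            # unknown per-source emission times
T = D / c + capture[:, None] + emit[None, :]    # measured times of arrival

Jm = np.eye(m) - np.ones((m, m)) / m            # centering matrices: J @ 1 = 0
Jn = np.eye(n) - np.ones((n, n)) / n

# Double-centering kills both rank-one timing terms, leaving a
# quantity that depends only on the geometry.
print(np.allclose(Jm @ T @ Jn, Jm @ (D / c) @ Jn))  # True
```

Because `Jm @ 1 = 0` and `1.T @ Jn = 0`, the offset terms vanish exactly, so any objective built on the double-centered matrix is insensitive to the unknown timings.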

Ivan Dokmanic, Dalia Salem Hassan Fahmy El Badawy

Conventional approaches to sound source localization require at least two microphones. It is known, however, that people with unilateral hearing loss can also localize sounds. Monaural localization is possible thanks to the scattering by the head, though it hinges on learning the spectra of the various sources. We take inspiration from this human ability to propose algorithms for accurate sound source localization using a single microphone embedded in an arbitrary scattering structure. The structure modifies the frequency response of the microphone in a direction-dependent way, giving each direction a signature. While knowing those signatures is sufficient to localize sources of white noise, localizing speech is much more challenging: it is an ill-posed inverse problem which we regularize by prior knowledge in the form of learned non-negative dictionaries. We demonstrate a monaural speech localization algorithm based on non-negative matrix factorization that does not depend on sophisticated, designed scatterers. In fact, we show experimental results with ad hoc scatterers made of LEGO bricks. Even with these rudimentary structures we can accurately localize arbitrary speakers; that is, we do not need to learn a dictionary for the particular speaker to be localized. Finally, we discuss multi-source localization and the related limitations of our approach.
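A sketch of dictionary-based direction scoring, with every quantity invented for illustration (random "signatures" `T` and a random "learned" dictionary `A`; the real algorithm operates on STFT frames of speech with dictionaries trained on a speech corpus): for each candidate direction, fit the observation with the direction-filtered dictionary under a non-negativity constraint, and keep the direction with the smallest residual.

```python
import numpy as np

def nnls_mu(A, y, n_iter=500, eps=1e-9):
    """Non-negative coefficients h approximately minimizing ||A h - y||^2,
    via multiplicative updates (the dictionary stays fixed)."""
    h = np.ones(A.shape[1])
    for _ in range(n_iter):
        h *= (A.T @ y) / (A.T @ A @ h + eps)
    return h

rng = np.random.default_rng(2)
F, D, K = 48, 8, 5
T = rng.random((F, D)) + 0.1          # hypothetical direction signatures
A = rng.random((F, K))                # hypothetical learned spectral dictionary

d_true = 3
h_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])   # sparse dictionary activations
y = T[:, d_true] * (A @ h_true)                 # observed magnitude spectrum

# Score each direction by how well the direction-filtered dictionary
# explains the observation; the true direction gives the smallest residual.
residuals = []
for d in range(D):
    Ad = T[:, d, None] * A            # dictionary as seen through direction d
    h = nnls_mu(Ad, y)
    residuals.append(np.linalg.norm(Ad @ h - y))
d_hat = int(np.argmin(residuals))
print(d_hat)
```

The prior matters because an unconstrained fit could explain any filtered spectrum; restricting the source to non-negative combinations of plausible speech spectra is what breaks the ambiguity between source and direction.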

Ivan Dokmanic, Dalia Salem Hassan Fahmy El Badawy (2018)

We propose a method for sensor array self-localization using a set of sources at unknown locations. The sources produce signals whose times of arrival are registered at the sensors. We look at the general case where neither the emission times of the sources nor the reference time frames of the receivers are known. Unlike previous work, our method directly recovers the array geometry, instead of first estimating the timing information. The key component is a new loss function which is insensitive to the unknown timings. We cast the problem as a minimization of a non-convex functional of the Euclidean distance matrix of microphones and sources subject to certain non-convex constraints. After convexification, we obtain a semidefinite relaxation which gives an approximate solution; subsequent refinement on the proposed loss via the Levenberg-Marquardt scheme gives the final locations. Our method achieves state-of-the-art performance in terms of reconstruction accuracy, speed, and the ability to work with a small number of sources and receivers. It can also handle missing measurements and exploit prior geometric and temporal knowledge, for example if either the receiver offsets or the emission times are known, or if the array contains compact subarrays with known geometry.
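The last step of any such pipeline — turning estimated pairwise distances into coordinates, up to a rigid motion — can be sketched with classical multidimensional scaling. This is a standard tool, not the paper's algorithm (which must first cope with the unknown timings); the six planar points below are random:

```python
import numpy as np

def classical_mds(D2, dim=2):
    """Recover point coordinates (up to rotation/translation/reflection)
    from a matrix of squared pairwise distances via classical MDS."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    G = -0.5 * J @ D2 @ J                     # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]           # keep the top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(0)
X = rng.random((6, 2))                        # 6 devices in the plane
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
X_hat = classical_mds(D2)

# Reconstructed pairwise distances match the originals exactly (noiseless case).
D2_hat = ((X_hat[:, None, :] - X_hat[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_hat))  # True
```

With noisy or incomplete distance estimates this eigenvalue truncation is only an initialization, which is why a semidefinite relaxation followed by local refinement (as in the paper) performs better in practice.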