In this thesis, timing is everything. In the first part, we mean this literally, as we tackle systems that encode information using timing alone. In the second part, we adopt the standard, metaphorical interpretation of this saying and show the importance of choosing the right time to sample a system when efficiency is a key consideration.

Time encoding machines, or, alternatively, integrate-and-fire neurons, encode their inputs using spikes, whose timings depend on the input and therefore carry information about it. These devices can be made more power-efficient than their clocked counterparts and have thus been studied in signal processing, event-based vision, computational neuroscience and machine learning. However, their timing-based spiking output has so far often been treated as a nuisance to be tolerated rather than as a potential advantage. In this thesis, we show that this timing-based output equips spiking devices with capabilities that are out of reach for classical encoding and processing systems.

We first discover the benefits of time encoding for multi-channel encoding and recovery of a signal: with time encoding, the clock-alignment problem is easy to solve, whereas it causes difficulties in the classical sampling scenario. Then, we study the time encoding of low-dimensional signals and see that the asynchrony of spikes allows for a lower sample complexity than synchronous sampling. Thanks to this same asynchrony, time encoding of video results in an entanglement between spatial sampling density and temporal resolution, a relationship that is not present in frame-based video. Finally, we show that the all-or-none nature of spikes allows training spiking neural networks in a layer-by-layer fashion, a feat that is impossible with clocked, artificial neural networks because of the credit assignment problem.

The second part of this thesis shows that choosing the right timing of samples can be crucial for efficiency in nanoscale magnetic sensing. We consider a stochastic process in which each sample at time t follows a Bernoulli distribution, and which is characterized by oscillation frequencies that we wish to recover. We look for an optimal way to sample this process so that the variance of the frequency estimates is minimized under constraints on the measurement time. The models we assume stem from nanoscale magnetic sensing, where the number of parameters to be estimated grows with the number of spins one is trying to sense. We present an adaptive approach to choosing samples in both the single-spin and two-spin cases and compare its performance to that of classical sampling approaches.

In both parts of the thesis, we move away from classical amplitude sampling and consider cases where timing takes the forefront and amplitude information is merely binary, to show that timing can carry information and can control the amount of information gained.
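As a rough illustration of the encoding mechanism discussed in the first part, the sketch below implements a minimal integrate-and-fire time encoder in Python. The bias, threshold, and test signal are arbitrary assumptions chosen for the example and are not the parameters used in the thesis; the point is only that spike times, not amplitudes, carry the information about the input.

```python
import numpy as np

def integrate_and_fire_encode(signal, dt, bias=1.0, threshold=0.01):
    """Encode a sampled signal into spike times with a minimal
    integrate-and-fire time encoding machine (illustrative only).

    The integrator accumulates (signal + bias); whenever the accumulated
    value reaches `threshold`, a spike time is recorded and the integrator
    is reduced by the threshold.
    """
    spike_times = []
    integral = 0.0
    for k, x in enumerate(signal):
        integral += (x + bias) * dt
        if integral >= threshold:
            spike_times.append(k * dt)
            integral -= threshold
    return np.array(spike_times)

# Example: encode a slow sinusoid; spikes are denser where the input is larger,
# so the output spike times alone reflect the input amplitude.
t = np.arange(0.0, 1.0, 1e-4)
x = 0.5 * np.sin(2 * np.pi * 3 * t)
spikes = integrate_and_fire_encode(x, dt=1e-4)
print(f"{len(spikes)} spikes, first few times: {spikes[:5]}")
```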
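To make the setting of the second part concrete, the following sketch assumes a simple single-frequency model in which each measurement at time t returns 1 with probability (1 + cos(2*pi*f*t))/2, and compares the spread of a grid-based maximum-likelihood frequency estimate for two hypothetical measurement schedules with the same budget. The model, the schedules, and the estimator are illustrative assumptions, not the adaptive algorithm developed in the thesis; they only show that where the samples are placed in time changes how much information is gained about the frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(f_true, times, shots):
    """Draw Bernoulli outcome counts at the given measurement times."""
    p = 0.5 * (1 + np.cos(2 * np.pi * f_true * times))
    return rng.binomial(shots, p)

def mle_frequency(times, counts, shots, f_grid):
    """Grid-based maximum-likelihood estimate of the oscillation frequency."""
    p = 0.5 * (1 + np.cos(2 * np.pi * np.outer(f_grid, times)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    loglik = counts * np.log(p) + (shots - counts) * np.log(1 - p)
    return f_grid[np.argmax(loglik.sum(axis=1))]

f_true, shots = 1.3, 50
f_grid = np.linspace(1.0, 1.6, 2001)
short_times = np.linspace(0.05, 0.5, 10)  # early, short evolution times
long_times = np.linspace(0.5, 5.0, 10)    # same budget, later samples

for label, times in [("short", short_times), ("long", long_times)]:
    estimates = [mle_frequency(times, simulate(f_true, times, shots), shots, f_grid)
                 for _ in range(200)]
    print(label, "schedule, std of estimate:", np.std(estimates))
```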