
# Martin Hasler

## Biography

After a PhD and a postdoc in theoretical physics, Martin Hasler has pursued research in electrical circuit and filter theory. His current interests are the applications of nonlinear dynamics in engineering and biology; in particular, he is interested in information processing in biological and technological networks. He is best known for his work on communications using chaos and on the synchronization of networks of dynamical systems.

He joined EPFL in 1974, became a titular professor in 1984 and a full professor in 1998. In 2002, he was acting Dean of the School of Computer and Communication Sciences. He was elected Fellow of the IEEE in 1993 and was the general chair of ISCAS 2000 in Geneva. He was Associate Editor of the IEEE Transactions on Circuits and Systems from 1991 to 1993 and Editor-in-Chief from 1993 to 1995, and he served as Vice-President for Technical Activities of the IEEE Circuits and Systems Society from 2002 to 2005. He is a member of the Research Council of the Swiss National Science Foundation.


## Related research domains (34)

## Related publications (183)

### Chaos theory

Chaos theory is an interdisciplinary area of scientific study and branch of mathematics focused on the underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. Such systems were once thought to be in completely random states of disorder and irregularity. Chaos theory states that within the apparent randomness of chaotic complex systems there are underlying patterns, interconnections, constant feedback loops, repetition, self-similarity, fractals, and self-organization.
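Sensitive dependence on initial conditions can be demonstrated in a few lines of code. The logistic map at r = 4 is a standard chaotic example; this is a generic illustration, not drawn from the publications on this page:

```python
def logistic_trajectory(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1 - x); at r = 4 it is chaotic."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two trajectories starting one billionth apart...
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
# ...stay close for the first few iterations, then diverge completely.
```

The tiny initial difference grows roughly exponentially (the map's Lyapunov exponent is ln 2), so after a few dozen iterations the two trajectories are entirely uncorrelated.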

### Nonlinear system

In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

### Computer simulation

Computer simulation is the process of mathematical modelling, performed on a computer, which is designed to predict the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering.

We consider dynamical systems whose parameters are switched within a discrete set of values at equal time intervals. Similar to the blinking of the eye, switching is fast and occurs stochastically and independently for different time intervals. There are two time scales present in such systems, namely the time scale of the dynamical system and the time scale of the stochastic process. If the stochastic process is much faster, we expect the blinking system to follow the averaged system where the dynamical law is given by the expectation of the stochastic variables. We prove that, with high probability, the trajectories of the two systems stick together for a certain period of time. We give explicit bounds that relate the probability, the switching frequency, the precision, and the length of the time interval to each other. We discover the apparent presence of a soft upper bound for the time interval, beyond which it is almost impossible to keep the two trajectories together. This comes as a surprise in view of the known perturbation analysis results. From a probability theory perspective, our results are obtained by directly deriving large deviation bounds. They are more conservative than those derived by using the action functional approach, but they are explicit in the parameters of the blinking system.
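The averaging effect described above can be sketched numerically. The following toy model (an illustration under assumed dynamics, not the paper's own example) switches the decay rate of dx/dt = -a(t)·x between two values at equal intervals and compares the result with the averaged system:

```python
import random

def blinking(a_values, tau, T, dt=1e-3, x0=1.0, seed=0):
    """Euler-integrate dx/dt = -a(t)*x, where the parameter a(t) is
    redrawn uniformly from a_values every tau time units ('blinking')."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    a = rng.choice(a_values)
    next_switch = tau
    while t < T:
        if t >= next_switch:
            a = rng.choice(a_values)
            next_switch += tau
        x += dt * (-a * x)
        t += dt
    return x

def averaged(a_bar, T, dt=1e-3, x0=1.0):
    """Euler-integrate the averaged system dx/dt = -a_bar*x."""
    x, t = x0, 0.0
    while t < T:
        x += dt * (-a_bar * x)
        t += dt
    return x

# Fast switching: the blinking trajectory tracks the averaged one closely.
x_blink = blinking([1.0, 3.0], tau=1e-3, T=1.0)
x_avg = averaged(2.0, T=1.0)
```

With switching much faster than the system's own time scale, the two final states are close, consistent with the averaging regime the abstract describes; slowing the switching lets the trajectories drift apart.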

Martin Hasler, Leonidas Georgopoulos

We propose an algorithm to learn from data distributed over a network of arbitrarily connected machines without exchanging the data points. Parts of the dataset are processed locally at each machine, and a consensus communication algorithm is then employed to consolidate the results. This iterative two-stage process converges as if the entire dataset had been on a single machine. The principal contribution of this paper is the proof of convergence of the distributed learning process in the general case where the learning algorithm is a contraction. Moreover, we derive the distributed update equation of a feed-forward neural network with back-propagation in order to verify the theoretical results. We employ a toy classification example and a real-world binary classification dataset.
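The consensus step can be sketched as follows. This is a generic neighborhood-averaging illustration of consensus communication, not the authors' exact update rule; the function name and ring topology are assumptions for the example:

```python
def consensus_round(values, neighbors):
    """One synchronous consensus round: each node replaces its value with
    the average of its own value and its neighbors' values. On a regular
    graph this update preserves the global mean, and repeated rounds drive
    every node toward that mean."""
    return [
        (values[i] + sum(values[j] for j in neighbors[i]))
        / (1 + len(neighbors[i]))
        for i in range(len(values))
    ]

# Four machines in a ring, each holding one locally computed result.
values = [1.0, 2.0, 3.0, 6.0]                        # global mean is 3.0
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(40):
    values = consensus_round(values, neighbors)
# Every node now holds (approximately) the global mean,
# without any node ever seeing the others' raw data.
```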

We study stochastically blinking dynamical systems as in the companion paper (Part I). We analyze the asymptotic properties of the blinking system as time goes to infinity. The trajectories of the averaged and blinking system cannot stick together forever, but the trajectories of the blinking system may converge to an attractor of the averaged system. There are four distinct classes of blinking dynamical systems. Two properties differentiate them: single or multiple attractors of the averaged system and their invariance or noninvariance under the dynamics of the blinking system. In the case of invariance, we prove that the trajectories of the blinking system converge to the attractor(s) of the averaged system with high probability if switching is fast. In the noninvariant single attractor case, the trajectories reach a neighborhood of the attractor rapidly and remain close most of the time with high probability when switching is fast. In the noninvariant multiple attractor case, the trajectory may escape to another attractor with small probability. Using the Lyapunov function method, we derive explicit bounds for these probabilities. Each of the four cases is illustrated by a specific example of a blinking dynamical system. From a probability theory perspective, our results are obtained by directly deriving large deviation bounds. They are more conservative than those derived by using the action functional approach, but they are explicit in the parameters of the blinking system.
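The asymptotic behavior described above can be illustrated with a toy bistable system (an assumed example, not one of the paper's four cases): dx/dt = a(t)·x - x³ with a(t) blinking between 0.5 and 1.5, so the averaged system (a = 1) has attractors at x = +1 and x = -1.

```python
import random

def blinking_bistable(tau, T, dt=1e-3, x0=0.5, seed=0):
    """Euler-integrate dx/dt = a(t)*x - x**3, where a(t) is redrawn
    uniformly from {0.5, 1.5} every tau time units. The averaged
    system (a = 1) has stable equilibria at x = +1 and x = -1."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    a = rng.choice([0.5, 1.5])
    next_switch = tau
    while t < T:
        if t >= next_switch:
            a = rng.choice([0.5, 1.5])
            next_switch += tau
        x += dt * (a * x - x**3)
        t += dt
    return x

# Fast switching: starting from x0 = 0.5, the trajectory settles into a
# small neighborhood of the averaged system's attractor at x = +1.
x_final = blinking_bistable(tau=1e-3, T=10.0)
```

With fast switching the trajectory reaches and hovers near the averaged attractor, matching the "noninvariant attractor" picture: it stays close most of the time rather than converging exactly, because the instantaneous equilibria keep blinking between sqrt(0.5) and sqrt(1.5).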