This paper extends a recent time-domain feedback analysis of perceptron learning networks to recurrent networks and studies the robustness of the training phase in the presence of uncertainties. In particular, a bound is established on the step-size parameter that guarantees the training algorithm behaves as a robust filter in the sense of H∞-theory. The paper also shows that the training scheme can be interpreted as a feedback interconnection of two major blocks: a time-variant lossless (i.e., energy-preserving) feedforward block and a time-variant dynamic feedback block. The l2-stability of this feedback structure is then analyzed using the small-gain and mean-value theorems.
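As a rough illustration of the two criteria invoked above, the following LaTeX sketch states the generic H∞ robustness bound and the classical small-gain condition for an interconnection of two blocks. The symbols used here (e_i for estimation errors, v_i for disturbances, γ for the energy-gain level, T_1 and T_2 for the feedforward and feedback blocks) are generic placeholders assumed for this sketch, not the paper's notation or its exact statements.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% (1) H-infinity robustness (generic form): the map from the
%     disturbance sequence {v_i} to the estimation-error sequence
%     {e_i} has l2-induced energy gain at most gamma^2.
\[
  \sup_{\{v_i\}\neq 0}
    \frac{\sum_i \lVert e_i \rVert^2}{\sum_i \lVert v_i \rVert^2}
    \;\le\; \gamma^2
  \qquad \text{(H$_\infty$ robustness bound)}
\]
% (2) Small-gain condition (classical form): the feedback loop of two
%     l2-stable blocks T_1 and T_2 is l2-stable whenever the product
%     of their l2-induced gains is strictly less than one.
\[
  \lVert T_1 \rVert_{\ell_2}\,\lVert T_2 \rVert_{\ell_2} \;<\; 1
  \qquad \text{(small-gain condition)}
\]
\end{document}

Note that when the feedforward block is lossless, its l2-induced gain equals one, so a small-gain argument of this type reduces to requiring the feedback block's gain to be strictly less than one; this is the kind of condition through which a bound on the step-size parameter can enter.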