This paper extends a recent time-domain feedback analysis of Perceptron learning networks to recurrent networks and provides a study of the robustness performance of the training phase in the presence of uncertainties. In particular, a bound is established on the step-size parameter in order to guarantee that the training algorithm will behave as a robust filter in the sense of H∞-theory. The paper also establishes that the training scheme can be interpreted in terms of a feedback interconnection that consists of two major blocks: a time-variant lossless (i.e., energy-preserving) feedforward block and a time-variant dynamic feedback block. The l2-stability of the feedback structure is then analyzed by using the small-gain and the mean-value theorems.
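To make the role of the step-size bound concrete, the following sketch shows a gradient-style update in which the per-sample step size is kept below a normalized bound of the form 1/||x||², the kind of condition under which LMS-type updates are known to exhibit H∞ robustness. This is an illustrative instance only; the function name, the scaling factor, and the specific bound are assumptions for the sketch, not the bound derived in the paper.

```python
import numpy as np

def robust_train(X, d, mu_scale=0.5, epochs=50):
    """Train a linear unit with a per-sample bounded step size.

    Illustrative sketch: each step size mu is chosen strictly below
    1 / ||x||^2 (via mu_scale < 1), a normalized bound of the type
    associated with H-infinity-robust LMS updates. The paper derives
    its own condition for recurrent Perceptron networks.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            mu = mu_scale / (np.dot(x, x) + 1e-12)  # mu < 1/||x||^2
            e = target - np.dot(w, x)               # a priori estimation error
            w = w + mu * e * x                      # bounded-step update
    return w
```

With `mu_scale` in (0, 2) the noiseless update is energy-contracting, so the weight-error norm does not grow from sample to sample; choosing it above that range can amplify disturbances, which is the failure mode the H∞ bound rules out.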