
Publication

# Predictors and predictands of linear response in spatially extended systems

Abstract

The goal of response theory, in each of its many statistical mechanical formulations, is to predict the perturbed response of a system from knowledge of the unperturbed state and of the applied perturbation. A recent angle on the problem focuses on providing a method to predict the change in one observable of the system using the change in a second observable as a surrogate for the actual forcing. Such a viewpoint addresses the very relevant problem of causal links within complex systems when only incomplete information is available. We present here a method for quantifying and ranking the predictive ability of observables and use it to investigate the response of a paradigmatic spatially extended system, the Lorenz '96 model. We perturb the system locally and then study to what extent a given local observable can predict the behaviour of a separate local observable. We show that this approach can reveal insights into the way a signal propagates inside the system. We also show that the procedure becomes more efficient if one considers multiple acting forcings and, correspondingly, multiple observables as predictors of the observable of interest.
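The setup described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's actual procedure: it integrates the Lorenz '96 model, adds a small extra forcing at a single site, and compares the resulting change in two local observables (the site values themselves). The site count, forcing strength, and integration parameters are arbitrary choices made for illustration.

```python
import numpy as np

def rk4_step(x, F, dt=0.05):
    """One RK4 step of the Lorenz '96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F_i,
    with cyclic indices; F may be a scalar or a per-site array."""
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(x0, F, n_steps):
    """Integrate and return the trajectory, shape (n_steps + 1, n_sites)."""
    traj = [x0]
    x = x0
    for _ in range(n_steps):
        x = rk4_step(x, F)
        traj.append(x)
    return np.array(traj)

rng = np.random.default_rng(0)
n = 36
x0 = 8.0 + 0.01 * rng.standard_normal(n)

# Spin up onto the attractor with uniform forcing F = 8.
spinup = run(x0, 8.0, 2000)[-1]

# Apply a small extra forcing at site 0 only.
F_pert = np.full(n, 8.0)
F_pert[0] += 0.5

unpert = run(spinup, 8.0, 2000)
pert = run(spinup, F_pert, 2000)

# Change in two local observables: the time-mean of x at the forced
# site and at a distant site (a crude proxy for signal propagation).
resp = (pert - unpert).mean(axis=0)
print("mean response at forced site 0:", resp[0])
print("mean response at distant site 18:", resp[18])
```

Because the model is chaotic, single-trajectory differences mix the forced response with chaotic divergence; the paper's statistical treatment of predictors and predictands is precisely what such a naive comparison lacks.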



Related concepts (16)

System

A system is a set of elements interacting with one another according to certain principles or rules: for example, a molecule, the solar system, a beehive, a human society, a political party, or an army.

Observable

An observable is the quantum-mechanical counterpart of a physical quantity in classical mechanics, such as position, momentum, spin, or energy.

Dynamical prediction

Dynamical prediction is a method devised by Newton and Leibniz. Newton applied it successfully to the motion of the planets and their satellites; it has since become the leading method of…

Related publications (12)


High-energy particle physics is going through a crucial moment of its history, one in which it can finally aspire to give a precise answer to some of the fundamental questions it was conceived to address. On the one hand, the theoretical picture describing the elementary strong and electroweak interactions below the TeV scale, the Standard Model, has been consolidated over the decades by the observation and precise characterization of its constituents. On the other hand, the enormous technological capabilities now available, and the skills accumulated over decades of collider experiments of ever-increasing complexity, make it plausible for the first time to address complicated and conceptually deep questions like the ones at hand. The best incarnation of this level of sophistication is the CERN Large Hadron Collider (LHC), the most powerful experimental apparatus ever built, which is designed to shed light on the true nature of fundamental interactions at energies never attained before, and which has already started to open a new era in physics with the recent discovery of the long-sought Higgs boson, a true milestone for human knowledge and one of the most important discoveries of the modern era. The knowledge that has been and will be gained in these crucial years would of course not be conceivable without a deep interplay between theoretical and experimental efforts. In particular, on the theoretical side, not only are there large groups of researchers devoted to building possible extensions of the Standard Model, work that draws the guidelines of current and future experiments, but there is also a vast community whose research aims at precise predictions of all the physical observables that could be measured at colliders, and at the systematic improvement of the approximations that currently constrain such predictions.
On top of representing the state of the art of our understanding of the properties that govern elementary-particle interactions and of the formalisms that describe them, developments in this line of research have an immediate and significant impact on experiments. Firstly, these detailed calculations are the very theoretical predictions against which experimental data are compared, so they are crucial in establishing the validity of the theories according to which they are performed. Secondly, the signals one wants to extract from data at modern colliders are so tiny and difficult to single out that the experimental searches themselves need to be supplemented by detailed theoretical modelling and simulation. In this respect, high-precision computations play an essential role in all analysis strategies devised by experimental collaborations, and in many aspects of detector calibration. Clearly, for theoretical computations to be useful in experimental analyses and simulations, the predictions they yield should be reliable for all possible configurations of the particles to be detected. Thus the key requirement for present-day theoretical collider physics is not so much the computation of observables with high precision in a limited region of phase space, but rather the capability of combining ('matching') in a consistent way different approaches, each of which is reliable in a particular kinematic regime. From this perspective, matching techniques represent one of the most promising and successful theoretical frameworks currently available, and are considered eminently valuable tools on both the theoretical and the experimental sides.
Matched computations are based on a perturbation-theory approach for describing configurations in which the scattering products are well separated and/or highly energetic: in particular, the precision currently attained for all but a few of the relevant Standard Model processes is next-to-leading order (NLO) in powers of the strong quantum-chromodynamics (QCD) coupling constant αS. For configurations in which the particles emerging from the collision are close to each other and/or have low energy, the perturbative expansion can be shown to break down, and a complementary method, such as the parton shower Monte Carlo (PSMC), must be employed instead. The task of matching is precisely that of giving a prediction that interpolates between the two approaches in a smooth and theoretically consistent way. This thesis focuses on MC@NLO, a high-energy physics formalism capable of matching computations performed at NLO in QCD to PSMC generators, in such a way as to retain the virtues of both approaches while discarding their mutual deficiencies. In particular, the thesis reports on the work successfully achieved in extending MC@NLO from its original numerical implementation, tailored to the HERWIG PSMC, to the other main PSMC programs currently employed by experimental collaborations, PYTHIA and Herwig++, confirming the advocated universality of the method. Differences among the various realizations are explained in detail, both at the formal level and through the simulation of various Standard Model reactions.
Moreover, we describe how the MC@NLO framework has been developed so as to make its implementation automatic with respect to the physics process being simulated: beyond yielding an enormous increase in its potential for present and future collider phenomenology, and upgrading the standard of precision for high-energy computations to the NLO+PSMC level, this development allows for the first time the application of the MC@NLO formalism to a large number of relevant and highly complicated reactions, through an implementation that is also easily usable by people well outside the community of experts in QCD calculations. As an example of this new version, called aMC@NLO, recent results are presented for complex scattering processes involving four or five final-state particles. Finally, possible extensions of the framework to theories beyond the Standard Model, such as the supersymmetric version of QCD, are briefly introduced.

Software engineering strives to provide solutions for building applications that match, as closely as possible, what they should be according to the requirements and the needs of end users. Simulation of system behaviour is a common such application, used to virtually reproduce and often predict real-world behaviour. Simulation is one of the most widely used operational research tools across a large variety of engineering and scientific domains: transport, telecommunications, medicine, chemical processes, physics, etc. The complexity of such applications tracks the increasing complexity of the systems themselves. In this context, it is relevant to bring together different tools and formalisms, such as Markov chains and Petri nets, to improve existing approaches and thus meet simulation performance needs. The principal objective of this thesis is to bring together techniques from software engineering and safety engineering in order to improve the state of the art of modelling and simulation of dynamic systems in an industrial context. In addressing this objective, this work first defines the essential limitations of the formalisms, methods and tools in use, regarding, on the one hand, software engineering modelling and simulation techniques and, on the other hand, existing risk-analysis methodologies. This work is conducted with respect to the problem of hazard identification, considering the behaviour of complex systems and their interaction with the human operator. In software engineering, it is well known that Petri nets and high-level nets have characteristics that make them attractive for system simulation and behaviour prediction, such as their natural graphical representation and their well-defined semantics. They are well suited to the description of complex situations involving concurrency (interleaving or true concurrency, depending on the underlying semantics), conflict and confusion.
However, the absence of structuring capabilities has been one of the main criticisms raised against Petri nets and high-level nets, and there have been many attempts to introduce structuring principles into nets of this kind [BCM88] [Kie89] [JR91]. The attractive characteristics of Petri and high-level nets have also prompted researchers to enrich these formalisms with object-oriented features. The CO-OPN (Concurrent Object-Oriented Petri Net) approach brings together the power of Petri/high-level nets and of object-orientation techniques; it was devised to offer an adequate framework for the specification and design of large-scale concurrent systems [BG91]. CO-OPN, though a powerful modelling tool, has so far been used only in a limited way to simulate systems. This work aims to provide a CO-OPN extension that allows more realistic system simulation. CO-OPN's current simulation semantics is well suited to modelling nearly closed systems and software components, which require loose coupling with the external world; but when modelling more realistic problems such as industrial processes, where human interaction is a relevant event, this approach is not sufficient to capture all attributes of the system's activity. Moreover, the CO-OPN interpretation process does not allow interaction with objects' internal states. This work provides a new solution to overcome these simulation limitations of CO-OPN, together with a set of prototypes to assist in the simulation of dynamic systems. Furthermore, this work has been conducted in a Risk Analysis (RA) context, a domain in which research on computer-based simulation is of the utmost interest. Classical approaches address complex workplace hazards only in a limited way, using checklists or sequence models. Moreover, the use of singly oriented methods, such as AEA (man-oriented), FMEA (machine-oriented) or HAZOP (process-oriented), is not sufficient to cope with the increasing sophistication of industrial processes.
The automation of part of the analysis process, together with the multi-oriented approach allowed by dynamic modelling, may significantly enhance the completeness of the analysis and reduce the analysis time. This work, based on the object-oriented Petri net formalism CO-OPN, proposes an alternative multi-oriented approach in which the limitations of existing methods are examined in order to develop a dynamic model, MORM (Man-machine Occupational Risk Modelling). A real industrial system (a metal wire-making process) has been specified in order to implement the different steps of the approach (system identification, model application, system simulation, system analysis).
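For readers unfamiliar with the underlying formalism, the basic firing rule of a place/transition Petri net can be sketched in a few lines. This is a generic illustration only; it does not reflect CO-OPN's object-oriented extensions or the MORM model, and the toy workflow names are invented for the example.

```python
# Minimal place/transition Petri net: a transition fires when every
# input place holds at least one token, consuming one token from each
# input place and producing one token in each output place.

def enabled(marking, transition):
    """A transition is enabled if all its input places are marked."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Return the new marking after firing an enabled transition."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Toy workflow: an operator request moves a machine from idle to busy,
# and completion returns it to idle while recording the finished job.
t_start = (["idle", "request"], ["busy"])
t_done = (["busy"], ["idle", "done"])

marking = {"idle": 1, "request": 1}
assert enabled(marking, t_start)
marking = fire(marking, t_start)   # {'idle': 0, 'request': 0, 'busy': 1}
marking = fire(marking, t_done)    # machine idle again, one job done
print(marking)
```

The appeal noted in the abstract is visible even at this scale: concurrency and conflict are expressed directly by which transitions are simultaneously enabled, with no extra machinery.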

Federico Belloni, Volker Gass, Christophe Dominique Paccolat, Camille Sébastien Pirat, Muriel Richard, Reto Wiesendanger

Within the frame of ESA's General Support Technology Programme, two In-Orbit Demonstration (IOD) missions using CubeSat technologies were studied. These IODs aim to alleviate the technical risk inherent in the new technologies required for Active Debris Removal (ADR) of large space objects by using small, low-cost CubeSat systems. Rendezvous and docking with uncooperative debris has been only partially demonstrated and still raises technological issues. Our studies show that CubeSat missions are appropriate for testing technologies such as navigation and Rendezvous (RV) sensors or capture systems. Guidance, Navigation and Control (GNC), communications and power technologies have already been successfully miniaturised, and the corresponding equipment is now available to the CubeSat community. This extends the range of feasible CubeSat missions from the initial flight of simple sensors to more complex systems. This paper presents two CubeSat ADR experiments and demonstrates how mission design and GNC can serve the verification of navigation sensor performance as well as the validation of uncooperative debris capture using a net. Each mission is composed of a Chaser and a Target, the former an 8-Unit (8U) CubeSat and the latter a 4U, launched together in a 12U deployer. Both satellites are 3-axis attitude controlled. The Chaser additionally has 3-Degrees-of-Freedom (DoF) translation capability using 1 mN cold-gas thrusters. Both CubeSats will carry GNSS receivers to assist in the determination of range and relative velocity. The global position and attitude data of the Target will be transmitted to the Chaser over an inter-satellite link that can additionally measure the corresponding range; this system provides a reference for validating the RV sensors. The relative position and velocity to be controlled are fully observable, so a linear quadratic regulator is appropriate to ensure robust and optimal control.
Based on the mission design, various close-inspection configurations are demonstrated. To emphasise the feasibility of such missions, a system approach is briefly addressed. Both missions are analysed using a 6-DoF simulator, and the performance and absolute errors of the GNC, as well as fuel consumption, are provided. Power consumption, telecommunication capability and thermal aspects are shown for the sake of completeness. Current issues and limitations of CubeSat GNC are discussed, as well as conclusions regarding the feasibility of such missions.
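As an illustration of why full observability of the relative state makes a linear quadratic regulator a natural choice, the following sketch computes an LQR gain for relative translation along one axis, modelled as a double integrator. The weighting matrices are arbitrary illustrative values, not mission parameters from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Relative translation along one axis as a double integrator:
# state x = [position, velocity], input u = thrust acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Illustrative weights: penalise position error more than velocity,
# and make control effort expensive (small cold-gas thrusters).
Q = np.diag([10.0, 1.0])
R = np.array([[100.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# LQR guarantees a stable closed loop: all eigenvalues of A - B K
# lie in the left half-plane, so the relative state decays to zero.
eig = np.linalg.eigvals(A - B @ K)
print("gain K:", K)
print("closed-loop eigenvalues:", eig)
```

Full-state feedback of this form is only applicable because, as the abstract notes, the relative position and velocity are fully observable via the GNSS and inter-satellite-link measurements.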