Concept: Dynamic prediction

Summary

Dynamic prediction is a method invented by Newton and Leibniz. Newton applied it successfully to the motion of the planets and their satellites. It has since become the great prediction method of applied mathematics, and its scope is universal: everything that is material, everything that is in motion, can be studied with the tools of dynamical systems theory. But one should not conclude that knowing a system requires knowing its dynamics; otherwise very little could be known. The other great method of prediction is simply the ordinary use of reason: if one knows predictive laws, one can make deductions that lead to predictions.
Dynamic prediction
The fundamental principle of all prediction is determinism: the future is determined by the past. To predict the future, one must observe the present and know laws that determine the future as a function of the present.
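This idea can be made concrete with a toy dynamical system. The sketch below (an illustration, not from the source) integrates the equation of motion of a simple pendulum: given the present state and the law of motion, every future state is determined.

```python
import math

# Illustrative sketch: deterministic prediction for a simple pendulum.
# Given the present state (angle, angular velocity) and the dynamical law,
# the future state follows by integration.

def pendulum_step(theta, omega, dt, g=9.81, length=1.0):
    """One explicit Euler step of d(theta)/dt = omega, d(omega)/dt = -(g/L) sin(theta)."""
    return theta + dt * omega, omega - dt * (g / length) * math.sin(theta)

def predict(theta0, omega0, t, dt=1e-4):
    """Integrate the present state forward to time t."""
    theta, omega = theta0, omega0
    for _ in range(int(t / dt)):
        theta, omega = pendulum_step(theta, omega, dt)
    return theta, omega

# Starting from rest at a small angle, the pendulum oscillates with period
# close to 2*pi*sqrt(L/g), about 2.006 s for L = 1 m, so after one period
# the predicted state is close to the initial one.
theta, omega = predict(theta0=0.1, omega0=0.0, t=2.006)
```

The explicit Euler scheme is the crudest possible integrator; any standard method (Runge-Kutta, symplectic integrators) would improve accuracy, but the point is only that the present state plus the law yields the future.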


Related concepts (60)

Scientific method

The scientific method refers to the set of canons that guide, or should guide, the process of producing scientific knowledge, whether through observations, experiments, or reasoning…

Science

[Image caption: Allegory of Science by Jules Blanchard, on the forecourt of the Hôtel de Ville in Paris.]
Science (from the Latin scientia, "knowledge") is, in its primary sense, "the sum of knowledge…"

Forecasting

In a more restrictive sense, in contemporary epistemology, forecasting is distinguished from prediction, which derives from a scientific law or theory…

Related courses (135)

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce physiological basis, signal acquisition solutions (sensors) and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital sign monitoring and diagnosis purposes.

PHYS-202: Analytical mechanics (for SPH)

Presentation of the methods of analytical mechanics (Lagrange and Hamilton equations) and introduction to the notions of normal modes and stability.

CS-423: Distributed information systems

This course introduces the key concepts and algorithms from the areas of information retrieval, data mining and knowledge bases, which constitute the foundations of today's Web-based distributed information systems.

Related people (190)

Related publications (100)

Related units (80)

Related lectures (299)

Our brain continuously self-organizes to construct and maintain an internal representation of the world based on the information arriving through sensory stimuli. Remarkably, cortical areas related to different sensory modalities appear to share the same functional unit, the neuron, and develop through the same learning mechanism, synaptic plasticity. This motivates the conjecture of a unifying theory to explain cortical representational learning across sensory modalities. In this thesis we present theories and computational models of learning and optimization in neural networks, postulating functional properties of synaptic plasticity that support the apparent universal learning capacity of cortical networks.

In the past decades, a variety of theories and models have been proposed to describe receptive field formation in sensory areas. They include normative models such as sparse coding, and bottom-up models such as spike-timing-dependent plasticity. We bring together candidate explanations by demonstrating that a single principle is sufficient to explain receptive field development. First, we show that many representative models of sensory development are in fact implementing variations of a common principle: nonlinear Hebbian learning. Second, we reveal that nonlinear Hebbian learning is sufficient for receptive field formation through sensory inputs. A surprising result is that our findings are independent of specific details and allow for robust predictions of the learned receptive fields. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.

The Hebbian learning theory substantiates that synaptic plasticity can be interpreted as an optimization procedure implementing stochastic gradient descent. In stochastic gradient descent, inputs arrive sequentially, as in sensory streams. However, individual data samples carry very little information about the correct learning signal, and it becomes a fundamental problem to know how many samples are required for reliable synaptic changes. Through estimation theory, we develop a novel adaptive learning-rate model that adapts the magnitude of synaptic changes based on the statistics of the learning signal, enabling an optimal use of data samples. Our model has a simple implementation and demonstrates improved learning speed, making it a promising candidate for large artificial neural network applications. The model also makes predictions on how cortical plasticity may be modulated for optimal learning.

The optimal sampling size for reliable learning allows us to estimate optimal learning times for a given model. We apply this theory to derive analytical bounds on the time needed to optimize synaptic connections. First, we show this optimization problem to have exponentially many saddle-nodes, which lead to small gradients and slow learning. Second, we show that the number of input synapses to a neuron modulates the magnitude of the initial gradient, determining the duration of learning. Our final result reveals that the learning duration increases supra-linearly with the number of synapses, suggesting an effective limit on synaptic connections and receptive field sizes in developing neural networks.
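As a rough illustration of the family of rules the abstract describes, the sketch below implements a generic nonlinear Hebbian update with weight normalization on synthetic Gaussian inputs. The data, the nonlinearity f(y) = y³, the dimensions, and the learning rate are assumptions chosen for illustration, not the thesis's actual models; the point is that the weight vector aligns with the dominant structure of the input statistics.

```python
import numpy as np

# Hedged sketch of a nonlinear Hebbian rule:
#   dw ∝ f(w·x) x,  followed by renormalization of w,
# where f is a nonlinear gain. Inputs arrive one at a time, as in a
# sensory stream (SGD-like online learning).

rng = np.random.default_rng(0)

# Toy "sensory" input: much more variance along the first axis than the rest.
n_samples, dim = 5000, 5
X = rng.normal(size=(n_samples, dim)) * np.array([2.0, 0.5, 0.5, 0.5, 0.5])

def nonlinear_hebbian(X, eta=1e-3, f=lambda y: y**3):
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for x in X:                 # samples arrive sequentially
        y = w @ x               # postsynaptic activity
        w += eta * f(y) * x     # Hebbian update with nonlinear gain
        w /= np.linalg.norm(w)  # normalization keeps weights bounded
    return w

w = nonlinear_hebbian(X)
# w ends up aligned (up to sign) with the high-variance input direction,
# a toy analogue of a receptive field shaped by input statistics.
```

The normalization step plays the stabilizing role that, in more careful models, is taken by homeostatic or Oja-type terms; without it the cubic gain would make the weights diverge.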

When assessing the economic viability of a wind farm, the estimation of the on-site wind power potential is perhaps the most important step. The most common way of evaluating the wind power potential of an area of interest consists of making on-site measurements for a period of one year. In order to take account of the inter-annual variation of wind speed, the one year of data are normally correlated with data recorded at a reference site where long-term data (typically > 10 years) are available. A correlation analysis is formulated for the concurrent data sets at the reference and prediction sites. This correlation is then used to transform the long-term wind speed at the reference site to the long-term wind speed that would have been expected at the prediction site had long-term measurements been made at this site. An alternative approach is also used, which consists of establishing site-to-site relationships using a numerical model to simulate meteorological situations which are typical for the area of interest. These relationships are then used to transpose the known long-term wind statistics of the reference site to the prediction site. Such an approach is applied in this work to the region of Chasseral & Mt-Crosin. The wind data available for a period of 16 years at Chasseral are transposed to the Mt-Crosin site where they are then compared to the data measured at the location of the installed wind farm. Over complex terrain, the linearised models traditionally used for wind power potential assessment fail to reproduce accurate wind fields. Therefore, to be applied to mountainous terrain such as that found in Switzerland, the approach relying on numerical simulation requires the development and validation of a numerical tool capable of simulating wind fields over complex topography. 
As the numerical model would have to deal with relatively steep slopes requiring a fine horizontal (50-100 m) and vertical resolution (∼5-10 m in the lowest levels), a fluid dynamics model was used which solves the complete set of Navier-Stokes equations with κ-ε turbulence closure. The standard version of the model used (CFX4) is modified in a novel way to extend its field of application so that atmospheric phenomena could be simulated which are typical of the meso-scale. The modified version solves the flow equations with the anelastic approximation (deep Boussinesq) and assuming a background rotation of the wind field (with the high altitude wind field following the geostrophic approximation). In the first part of this work, the numerical model is validated. The results obtained in this phase show that for meteorological situations for which the wind at the ground is coupled to the high altitude wind, the numerical model is able to satisfactorily reproduce: the flow in the surface layer, reproducing the effects associated with the ground roughness, roughness change, or heat flux through the ground; the flow in the Ekman layer together with the interaction between the free flow thermal stability conditions and the boundary layer; the linear and non-linear effects associated with the perturbation induced by a mountain in a stably stratified flow. In the second stage of this work, an extension of the standard Measure-Correlate-Predict method is presented to calculate the wind speed distribution at the prediction site, from transposition relationships and from the wind statistics at the reference site. The validity of the underlying assumptions is confirmed using concurrent data sets that were collected at both the reference and prediction sites. To evaluate the accuracy that can be achieved with the transposition assumptions, a back-prediction is performed using the transposition relationships obtained using the observations. 
Different types of transposition relationships have been investigated. Finally, the transposition methodology is applied to calculate the wind speed conditions at Mt-Crosin from the Chasseral data, using the transposition relationships calculated by the numerical model for a range of meteorological situations typical of the area considered. The Mt-Crosin to Chasseral sector wind speed ratios calculated by the numerical model tend to slightly underestimate those observed. The mean wind speeds obtained from the transposition are underestimated by 7% to 18% at the three measuring mast locations on Mt-Crosin. The yearly energy output that can be produced by a wind turbine in these conditions is underestimated by 8% to 36%. For a further period, the actual energy production of the three installed wind turbines has been compared with the model prediction at hub height, which showed that the transposition results underestimate the actual yearly production by 22% to 24%. From the transposition of the long-term data at Chasseral (16 years), with the relationships obtained by the numerical model, a wind power potential of between 470 MWh/year (Côte Est) and 596 MWh/year (Côte Nord) is predicted using the characteristics of a Vestas V44 wind turbine. From the work presented here, it appears that for well-exposed sites such as those located along the Jura crest, the methods developed are able to give a wind power potential prediction with similar accuracy to a one-year measurement campaign performed on site.
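The classical Measure-Correlate-Predict step described above can be sketched as follows. The linear site-to-site relation and the synthetic Weibull-distributed wind speeds are illustrative assumptions, not the thesis's numerical model: one concurrent year is used to fit the relation, which is then applied to the reference site's long-term record.

```python
import numpy as np

# Hedged sketch of Measure-Correlate-Predict (MCP):
# 1) measure one concurrent year at both sites,
# 2) correlate: fit a site-to-site relation,
# 3) predict: transpose the long-term reference record.

rng = np.random.default_rng(1)

# Synthetic "true" relation between sites: v_pred ≈ 0.8 * v_ref + 1.0 (assumed).
v_ref_concurrent = rng.weibull(2.0, size=8760) * 7.0   # one year, hourly [m/s]
v_pred_concurrent = 0.8 * v_ref_concurrent + 1.0 + rng.normal(0.0, 0.5, 8760)

# Step 2: least-squares fit v_pred = a * v_ref + b on the concurrent year.
a, b = np.polyfit(v_ref_concurrent, v_pred_concurrent, 1)

# Step 3: transpose a 16-year reference record to the prediction site.
v_ref_longterm = rng.weibull(2.0, size=16 * 8760) * 7.0
v_pred_longterm = a * v_ref_longterm + b

long_term_mean = v_pred_longterm.mean()
```

A single linear regression is the simplest MCP variant; the thesis's contribution is precisely to replace such purely statistical relations with transposition relationships computed by a flow model over complex terrain, and to do so per sector and meteorological situation.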

The goal of this master thesis is to analyze the structure of commercial activities in the cities of Geneva and Barcelona. The study implemented a network-based approach to predict the location of commercial activities, on the basis of which a tentative inference of urban centrality indices was carried out. This was done by using a 'grouping algorithm' and by calculating a 'location quality index', both presented and described in the 'Network-based predictions of retail store commercial categories and optimal locations' paper published by Pablo Jensen (2006). In the first part of the work, an existing prototype of the algorithm – implemented during a semester project as VBScript in the GIS software 'Manifold' – was optimized. Time gains of a factor of 10 for the 'link computation' and more stable results for the 'grouping algorithm' were achieved. A spatial representation of the grouping results was elaborated and made it possible to recognize the dominance areas of the activity groups in both cities. The maps created with the 'location quality index' (Q-index) made it possible to identify preference areas and location patterns for the different categories of commercial activities. It became apparent that the initial definition or choice of the categories is very important for obtaining clear results. Indeed, the results on Geneva with only 18 categories performed less well than those with 45 and 48 categories on Geneva and Barcelona respectively. Furthermore, tests were carried out on the basis of hectometric referenced data, such as the database of the Swiss federal census of enterprises. The results obtained are very similar to those from the postal-address-based data (precision 2-3 m).
This shows that the algorithm used here could be applied to the whole of Switzerland in a possible future study to determine the overall structure of commercial activities in the country, to identify possible central locations, or to propose it as a useful tool for cantonal or regional economic development agencies to determine optimal locations for large incoming foreign industrial or commercial groups. The results of the comparisons between the Q-index and centrality indices showed that an approximation of the latter by the former is not possible. But the Q-index (describing location preference per category of commercial activity) can be used as a complementary indication to centrality (which describes accessibility in a broad sense). As a perspective, it would be particularly interesting to work on a definition of a Q-index for groups of categories or even a global integrated one: it would then probably be possible to use this index as a surrogate for the valuation of landed property in particular.
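To make the notion of a location quality index concrete, here is a deliberately simplified sketch: it scores a candidate site for a given category by summing assumed attraction/repulsion coefficients over neighbouring stores within a radius. All coordinates, categories, radii, and coefficients below are invented for illustration; Jensen's actual index is derived from observed inter-category pair correlations, not hand-set values.

```python
import math

# Hedged, toy version of a location-quality score: nearby complementary
# stores raise the score, nearby competitors lower it.

# (x, y, category) of existing stores, in metres (invented data).
stores = [
    (0, 0, "bakery"), (50, 30, "cafe"), (60, -20, "cafe"),
    (400, 400, "bakery"), (420, 380, "hardware"),
]

# Assumed interaction coefficients: cafes attract bakeries, bakeries repel
# each other (competition), hardware stores are neutral.
interaction = {("bakery", "cafe"): 1.0,
               ("bakery", "bakery"): -1.0,
               ("bakery", "hardware"): 0.0}

def quality(x, y, category, radius=100.0):
    """Sum interaction coefficients over stores within `radius` of (x, y)."""
    score = 0.0
    for sx, sy, scat in stores:
        if math.hypot(sx - x, sy - y) <= radius:
            score += interaction.get((category, scat), 0.0)
    return score

q_near_cafes = quality(20, 10, "bakery")    # near two cafes and one rival
q_near_rival = quality(410, 390, "bakery")  # near a rival bakery only
```

Even this toy version shows why category definitions matter so much in the study: the score is only as meaningful as the category-to-category coefficients, which in the real method are estimated from the observed spatial statistics of each category.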

2009