
MOOC: Advanced statistical physics

Description

This course covers non-equilibrium statistical processes and the treatment of fluctuation-dissipation relations by Einstein, Boltzmann and Kubo. Moreover, the fundamentals of Markov processes, stochastic differential and Fokker-Planck equations, the mesoscopic master equation, etc., will be treated in detail. Prior knowledge of statistical physics is highly recommended but not required.
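As a toy illustration of the Langevin and diffusion material covered in the course, the sketch below integrates the driftless overdamped Langevin equation with the Euler-Maruyama scheme and checks the Einstein diffusion law numerically. Function names and parameter values are illustrative, not taken from the course:

```python
import math
import random

def simulate_brownian(n_steps=1000, dt=1e-2, D=1.0, seed=0):
    """Euler-Maruyama integration of the driftless overdamped Langevin
    equation dx = sqrt(2 D) dW, i.e. free Brownian motion."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_steps):
        # each Wiener increment is Gaussian with variance dt
        x += math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return x

# Diffusion check: over an ensemble of walkers, <x^2> should approach 2*D*t
t = 1000 * 1e-2
msd = sum(simulate_brownian(seed=s) ** 2 for s in range(2000)) / 2000
print(f"measured MSD = {msd:.2f}, theory 2*D*t = {2.0 * t:.2f}")
```

With these parameters the mean-square displacement comes out close to the theoretical value 2*D*t = 20, up to the statistical error of a 2000-walker ensemble.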


Related publications (11)

Related concepts (210)

Related courses (185)

X-ray tube

An X-ray tube is a vacuum tube that converts electrical input power into X-rays. The availability of this controllable source of X-rays created the field of radiography, the imaging of partly opaque objects with penetrating radiation. In contrast to other sources of ionizing radiation, X-rays are only produced as long as the X-ray tube is energized. X-ray tubes are also used in CT scanners, airport luggage scanners, X-ray crystallography, material and structure analysis, and for industrial inspection.

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}.$$
The parameter $\mu$ is the mean or expectation of the distribution (and also its median and mode), while the parameter $\sigma$ is its standard deviation. The variance of the distribution is $\sigma^2$. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
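The density above is easy to sanity-check numerically; a minimal sketch (the helper `normal_pdf` is illustrative, not part of any library mentioned here):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Sanity checks: the standard normal density peaks at the mean
# with value 1/sqrt(2*pi), and integrates to 1 over the real line.
peak = normal_pdf(0.0)
area = sum(normal_pdf(-8.0 + i * 0.001) * 0.001 for i in range(16000))
print(round(peak, 4), round(area, 4))
```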

X-ray generator

An X-ray generator is a device that produces X-rays. Together with an X-ray detector, it is commonly used in a variety of applications including medicine, X-ray fluorescence, electronic assembly inspection, and measurement of material thickness in manufacturing operations. In medical applications, X-ray generators are used by radiographers to acquire x-ray images of the internal structures (e.g., bones) of living organisms, and also in sterilization. An X-ray generator generally contains an X-ray tube to produce the X-rays.

PHYS-467: Machine learning for physicists

Machine learning and data analysis are becoming increasingly central in the sciences, including physics. In this course, fundamental principles and methods of machine learning will be introduced and practiced…

COM-406: Foundations of Data Science

We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and…

PHYS-100: Advanced physics I (mechanics)

General Physics I (advanced) covers the mechanics of point particles and of rigid bodies. Learning mechanics means learning to cast a physical phenomenon in mathematical form, by modelling…

In multiple testing problems where the components come from a mixture of noise and true effects, we first test for the existence of non-zero components, and then identify the true alternatives at a fixed significance level $\alpha$. Two parameters, namely the fraction of non-null components $\varepsilon$ and the effect size $\mu$, characterise the two-point mixture model under the global alternative. As the number of hypotheses $m$ goes to infinity, we are interested in an asymptotic framework where the fraction of non-null components vanishes and the true effects need to be sizable to be detected. Donoho and Jin give an explicit form of the asymptotic detection boundary for the Gaussian mixture model under the classic calibration of the mixture parameters. We prove analogous results for the Cauchy mixture distribution as an example of a heavy-tailed case. This requires a different formulation of the parameters, which reflects the added difficulties.
We also propose a multiple testing procedure based on a filtering approach that can discover the true alternatives.
Benjamini and Hochberg (BH) compare the observed $p$-values to a linear threshold curve, reject the null hypotheses from the minimum up to the last up-crossing, and prove that the false discovery rate (FDR) is controlled.
However, there is an intrinsic difference in heavy-tailed settings. Were we to use the BH procedure we would get a highly variable positive false discovery rate (pFDR). In our study we analyse the distribution of the $p$-values and devise a new multiple testing procedure to combine the usual case and the heavy-tailed case based on the empirical properties of the $p$-values. The filtering approach is designed to eliminate most $p$-values that are more likely to be uniform, while preserving most of the true alternatives. Based on the filtered $p$-values, we estimate the mode $\vartheta$ and define the rejection region $\mathscr{R}(\vartheta, \delta)=\left[ \vartheta -\delta/2, \vartheta +\delta/2 \right]$ such that the most informative $p$-values are included. The length $\delta$ is chosen by controlling the data-dependent estimation of FDR at a desired level.
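The classical BH step-up rule that the abstract contrasts against can be sketched in a few lines; this is the standard procedure, not the thesis's filtering method, and the example p-values are made up for illustration:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the last rank with p_(k) <= alpha * k / m
    (the last up-crossing of the linear threshold curve)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank  # remember the last up-crossing
    return {order[r] for r in range(k)}

# Toy example: three small p-values among mostly-null ones
pvals = [0.001, 0.008, 0.012, 0.041, 0.27, 0.6, 0.74, 0.9]
rejected = benjamini_hochberg(pvals, alpha=0.05)
print(sorted(rejected))  # → [0, 1, 2]
```

Note that index 3 (p = 0.041) survives the per-test level $\alpha = 0.05$ but not the BH threshold $\alpha \cdot 4/8 = 0.025$, which is exactly the multiplicity correction at work.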

Andreas Loukas, Nikolaos Karalias

Combinatorial optimization (CO) problems are notoriously challenging for neural networks, especially in the absence of labeled instances. This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality. Inspired by Erdős' probabilistic method, we use a neural network to parametrize a probability distribution over sets. Crucially, we show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem. The probabilistic proof of existence is then derandomized to decode the desired solutions. We demonstrate the efficacy of this approach to obtain valid solutions to the maximum clique problem and to perform local graph clustering. Our method achieves competitive results on both real datasets and synthetic hard instances.

2020

In this thesis, we study the stochastic heat equation (SHE) on bounded domains and on the whole Euclidean space $\mathbb{R}^d$. We confirm the intuition that, as the bounded domain increases to the whole space, the two solutions become arbitrarily close to one another. Both vanishing Dirichlet and Neumann boundary conditions are considered.

We first study the nonlinear SHE in any space dimension with multiplicative correlated noise and bounded initial data. We prove that the solutions to the SHE on an increasing sequence of domains converge exponentially fast to the solution to the SHE on $\mathbb{R}^d$. Uniform convergence on compact sets is obtained for all $p$-moments. The conditions that need to be imposed on the noise are the same as those required to ensure the existence of a random field solution. A Gronwall-type iteration argument is used together with uniform bounds on the solutions, which are, surprisingly, valid for the entire sequence of increasing domains.

We then study the SHE in space dimension $d \ge 2$ with additive white noise and bounded initial data. Even though both solutions need to be considered as distributions, their difference is proved to be smooth. In fact, the order of smoothness depends only on the regularity of the boundary of the increasing sequence of domains. We prove that the Fourier transform, in the sense of distributions, of the solution to the SHE on $\mathbb{R}^d$ does not have any locally mean-square integrable representative. Therefore, convergence is studied in local versions of Sobolev spaces. Again, an exponential rate is obtained.

Finally, we study the Anderson model for the SHE with correlated noise and initial data given by a measure. We obtain a special expression for the second moment of the difference between the solution on $\mathbb{R}^d$ and that on a bounded domain. The contribution of the initial condition is made explicit. For example, exponentially fast convergence on compact sets is obtained for any initial condition with polynomial growth. More interestingly, from a given convergence rate, we can decide whether some initial data is admissible.

Lectures in this MOOC (31)

Brownian Motion: Fundamentals and Applications

Explores the fundamentals of Brownian motion, including particle positions and distribution functions.

Statistical Mechanics: Velocity and Langevin Formulation

Covers the statistical mechanics of velocity and Langevin formulation, discussing the Langevin force and thermal equilibrium.

Statistical Mechanics: Langevin Hypothesis

Covers the basics of statistical mechanics, focusing on the Langevin hypothesis and its implications.

Stochastic Calculus: Foundations and Applications

Explores the foundation of stochastic calculus, emphasizing deterministic and memoryless processes.

Stochastic Processes: Review and Properties

Covers the review of random variables, probability density functions, variance, and Gaussian processes.