
# Localization errors of the stochastic heat equation

Abstract

In this thesis, we study the stochastic heat equation (SHE) on bounded domains and on the whole Euclidean space $\mathbb{R}^d$. We confirm the intuition that, as the bounded domain increases to the whole space, the two solutions become arbitrarily close to one another. Both vanishing Dirichlet and Neumann boundary conditions are considered.

We first study the nonlinear SHE in any space dimension with multiplicative correlated noise and bounded initial data. We prove that the solutions to the SHE on an increasing sequence of domains converge exponentially fast to the solution to the SHE on $\mathbb{R}^d$. Uniform convergence on compact sets is obtained for all $p$-moments. The conditions that need to be imposed on the noise are the same as those required to ensure the existence of a random field solution. A Gronwall-type iteration argument is used together with uniform bounds on the solutions, which are, surprisingly, valid for the entire sequence of increasing domains.

We then study the SHE in space dimension $d \ge 2$ with additive white noise and bounded initial data. Even though both solutions need to be considered as distributions, their difference is proved to be smooth. In fact, the order of smoothness depends only on the regularity of the boundary of the increasing sequence of domains. We prove that the Fourier transform, in the sense of distributions, of the solution to the SHE on $\mathbb{R}^d$ does not have any locally mean-square integrable representative. Therefore, convergence is studied in local versions of Sobolev spaces. Again, an exponential rate is obtained.

Finally, we study the Anderson model for the SHE with correlated noise and initial data given by a measure. We obtain an explicit expression for the second moment of the difference between the solution on $\mathbb{R}^d$ and that on a bounded domain. The contribution of the initial condition is made explicit. For example, exponentially fast convergence on compact sets is obtained for any initial condition with polynomial growth. More interestingly, from a given convergence rate, we can decide whether some initial data is admissible.
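The localization phenomenon described in the abstract can be illustrated numerically. Below is a minimal one-dimensional sketch (the thesis treats the harder cases, e.g. $d \ge 2$ and correlated noise): two finite-difference solutions of the additive-noise SHE with zero Dirichlet data are driven by the same space-time white noise on nested intervals, and their difference on a fixed compact set stays small. All function names and parameters are illustrative, not taken from the thesis.

```python
import numpy as np

def dirichlet_laplacian(u, dx):
    """Discrete Laplacian on a grid of interior points, with zero
    Dirichlet data imposed via ghost zeros at both ends."""
    out = np.empty_like(u)
    out[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    out[0] = u[1] - 2.0 * u[0]
    out[-1] = u[-2] - 2.0 * u[-1]
    return out / dx**2

def simulate_nested(L_small=2.0, L_big=4.0, dx=0.1, T=0.1, dt=5e-4, seed=0):
    """Explicit Euler for du = u_xx dt + dW, u(0, x) = 1, on the nested
    intervals (-L_small, L_small) and (-L_big, L_big), with zero Dirichlet
    boundary data, driving both with the SAME space-time white noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(-L_big + dx, L_big - dx / 2, dx)  # interior grid, big domain
    inner = np.abs(x) < L_small - dx / 2            # interior grid, small domain
    u_big = np.ones_like(x)
    u_small = np.ones(int(inner.sum()))
    for _ in range(round(T / dt)):
        # Space-time white noise increment, shared by both solutions
        xi = np.sqrt(dt / dx) * rng.standard_normal(x.size)
        u_big += dt * dirichlet_laplacian(u_big, dx) + xi
        u_small += dt * dirichlet_laplacian(u_small, dx) + xi[inner]
    return x, inner, u_small, u_big

x, inner, u_small, u_big = simulate_nested()
K = np.abs(x[inner]) <= 1.0 + 1e-9          # the compact set [-1, 1]
gap = np.max(np.abs(u_big[inner] - u_small)[K])
print(f"max difference on [-1, 1]: {gap:.4f}")
```

Because the two solutions share the noise at the common grid points, their difference solves a deterministic heat equation whose only source is the boundary mismatch, which is why it is small well inside the smaller domain.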


Related concepts (21)

Euclidean space

Euclidean space is the fundamental space of geometry, intended to represent physical space. Originally, in Euclid's Elements, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean spaces of any positive integer dimension.

Fourier transform

In physics and mathematics, the Fourier transform (FT) is a transform that converts a function into a form that describes the frequencies present in the original function. The output of the transform is a complex-valued function of frequency.
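For reference, one common convention for the transform on $\mathbb{R}^d$ (normalizations vary between fields):

$$\hat{f}(\xi) = \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i\, x \cdot \xi}\, dx.$$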

Uniform convergence

In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions (f_n) converges uniformly to a limiting function f on a set E if, for every ε > 0, there exists an N such that |f_n(x) − f(x)| < ε for all n ≥ N and every x in E.
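Equivalently, in symbols:

$$\lim_{n \to \infty} \sup_{x \in E} |f_n(x) - f(x)| = 0,$$

whereas pointwise convergence only requires $f_n(x) \to f(x)$ at each fixed $x$, with $N$ allowed to depend on $x$.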

Related publications (7)

Thomas Marie Jean-Baptiste Humeau

We study various aspects of stochastic partial differential equations driven by Lévy white noise. This driving noise, which is a generalization of Gaussian white noise, can be viewed either as a generalized random process or as an independently scattered random measure. After unifying these approaches and establishing appropriate stochastic integral representations, we show that a necessary and sufficient condition for a Lévy white noise to have values in the space of tempered Schwartz distributions is that the underlying Lévy measure have a positive absolute moment.
In the case of a linear stochastic partial differential equation with a general differential operator and driven by a symmetric pure jump Lévy white noise, we show that when the mild solution is locally Lebesgue integrable, then it is equal to the generalized solution, and that a random field representation exists for the generalized solution if and only if the fundamental solution of the operator has certain integrability properties. In that case, we show that the random field representation is equal to the mild solution. For this purpose, a new stochastic Fubini theorem is proved. These results are applied to the linear stochastic heat and wave equations driven by a symmetric alpha-stable noise.
We then study the non-linear stochastic heat equation driven by a general type of Lévy white noise, possibly with heavy tails and non-summable small jumps. Our framework includes in particular the alpha-stable noise. In the case of the equation on the whole space, we show that the law of the solution that we construct does not depend on the space variable. We then show, for various domains $D$, that the solution $u$ to the stochastic heat equation is such that $t \mapsto u(t,\cdot)$ has a càdlàg version in a fractional Sobolev space of order $r < -d/2$. Finally, we show that the partial functions have a continuous version under some optimal moment conditions. In the alpha-stable case, we show that for the choices of alpha for which this moment condition is not satisfied, the sample paths of the partial functions are unbounded on any non-empty open subset.
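Schematically, the mild solution discussed above is the stochastic convolution of the heat kernel $G$ with the driving Lévy white noise (the notation here is illustrative, not taken from the thesis):

$$u(t,x) = \int_{\mathbb{R}^d} G(t,\, x-y)\, u_0(y)\, dy + \int_0^t\! \int_{\mathbb{R}^d} G(t-s,\, x-y)\, \dot{L}(s,y)\, dy\, ds.$$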

This thesis focuses on developing efficient algorithmic tools for processing large datasets. In many modern data analysis tasks, the sheer volume of available datasets far outstrips our ability to process them. This scenario commonly arises in tasks including parameter tuning of machine learning models (e.g., Google Vizier) and training neural networks. These tasks often require solving numerical linear algebraic problems on large matrices, making the classical primitives prohibitively expensive. Hence, there is a crucial need to efficiently compress the available datasets. In other settings, even collecting the input dataset is extremely expensive, making it vital to design optimal data sampling strategies. This is common in applications such as MRI acquisition and spectrum sensing.
The fundamental questions above are often dual to each other, and hence can be addressed using the same set of core techniques. Indeed, exploiting structured Fourier sparsity is a recurring source of efficiency in this thesis, leading to both fast numerical linear algebra methods and sample efficient data acquisition schemes.
One of the main results that we present in this thesis is the first Sublinear-time Model-based Sparse FFT algorithm that achieves a nearly optimal sample complexity for the recovery of every signal whose Fourier transform is well approximated by a small number of blocks (such structure is common in spectrum sensing, for example). Our method matches, in sublinear time, the result of Baraniuk et al. (2010), which started the field of model-based compressed sensing. Another highlight of this thesis is the first Dimension-independent Sparse FFT algorithm, which computes the Fourier transform of a sparse signal in sublinear runtime in any dimension. This is the first algorithm that, just like the FFT of Cooley and Tukey, is dimension-independent and avoids the curse of dimensionality inherent to all previously known techniques. Finally, we give a Universal Sampling Scheme for the reconstruction of structured Fourier signals from continuous measurements. Our approach matches the classical results of Slepian, Pollak, and Landau (1960s) on the reconstruction of bandlimited signals via Prolate Spheroidal Wave Functions and extends these results to a wide class of Fourier structure types.
Besides having classical applications in signal processing and data analysis, Fourier techniques have been at the core of many machine learning tasks such as Kernel Matrix Approximation. The second half of this thesis is dedicated to finding compressed and low-rank representations of kernel matrices, which are the primary means of computation with large kernel matrices in machine learning. We build on Fourier techniques and achieve spectral approximation guarantees to the Gaussian kernel using an optimal number of samples, significantly improving upon the classical Random Fourier Features of Rahimi and Recht (2008). Finally, we present a nearly-optimal Oblivious Subspace Embedding for high-degree Polynomial kernels which leads to nearly-optimal embeddings of the high-dimensional Gaussian kernel. This is the first result that does not suffer from an exponential loss in the degree of the polynomial kernel or the dimension of the input point set, providing exponential improvements over the prior works, including the TensorSketch of Pagh (2013) and application of the celebrated Fast Multipole Method of Greengard and Rokhlin (1986) to the kernel approximation problem.
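The Random Fourier Features baseline mentioned above (Rahimi and Recht, 2008) is simple enough to sketch. The following is an illustrative implementation for the Gaussian kernel, not code from the thesis; all names and parameter choices are ours:

```python
import numpy as np

def rff_features(X, n_features=2000, sigma=1.0, seed=0):
    """Map points X of shape (n, d) to random Fourier features z(X) so that
    z(x) @ z(y) approximates the Gaussian kernel exp(-||x-y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Frequencies drawn from the kernel's spectral density N(0, sigma^{-2} I)
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the feature inner products with the exact kernel on random data
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
Z = rff_features(X, n_features=4000, sigma=2.0, seed=0)
K_approx = Z @ Z.T
sq = np.sum(X**2, axis=1)
K_exact = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * 2.0**2))
err = np.max(np.abs(K_approx - K_exact))
print(f"max entrywise error: {err:.3f}")
```

The approximation error decays like $1/\sqrt{D}$ in the number of features $D$; the spectral guarantees obtained in the thesis are strictly stronger than this entrywise bound.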

It is shown that the Euclidean group of translations, when treated as a Lie group, generates translations not only in Euclidean space but on any space, curved or not. Translations are then not necessarily vectors (straight lines); they can be any curve compatible with the parameterization of the considered space. In particular, attention is drawn to the fact that one and only one finite and free module of the Lie algebra of the group of translations can generate both modulated and non-modulated lattices, the modulated character being given only by the parameterization of the space in which the lattice is generated. Moreover, it is shown that the diffraction pattern of a structure is directly linked to the action of that free and finite module. In the Fourier transform of a whole structure, the Fourier transform of the electron density of one unit cell (i.e. the structure factor) appears concretely, whether the structure is modulated or not. Thus, there exists a neat separation: the geometrical aspect on the one hand and the action of the group on the other, without requiring additional dimensions.
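The separation described above is, schematically, the standard factorization of the diffraction amplitude of a (non-modulated) crystal: with lattice translations $\mathbf{R}$ and unit-cell electron density $\rho_{\mathrm{cell}}$,

$$\hat{\rho}(\mathbf{q}) = \hat{\rho}_{\mathrm{cell}}(\mathbf{q}) \sum_{\mathbf{R}} e^{-i\, \mathbf{q} \cdot \mathbf{R}},$$

where $\hat{\rho}_{\mathrm{cell}}$ is the structure factor and the lattice sum carries the action of the translation group.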