
Publication

# Sur quelques équations aux dérivées partielles en lien avec le lemme de Poincaré

(On some partial differential equations related to the Poincaré lemma)

Abstract

In this thesis, we study two distinct problems. The first is the linear system of partial differential equations obtained by taking a k-form, applying the exterior derivative d, and adding the wedge product with a 1-form a. The study of this differential operator is linked to that of multiplication by a 2-form, namely the linear algebraic system obtained by taking the wedge product of a k-form with da, the exterior derivative of a. We establish links between the partial differential equation and this linear system. The second problem is a generalization of the symmetric-gradient and curl equations. The symmetric-gradient equation takes a vector field, applies the gradient, and adds the transpose of the gradient, whereas the curl equation subtracts the transpose of the gradient. Both can be seen as equations of the form A grad u + (grad u)^t A, where A is symmetric in the case of the symmetric gradient and skew-symmetric in the case of the curl equation. We generalize to the case where A satisfies no symmetry assumption and, more significantly, add a Dirichlet condition on the boundary.
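The two problems described in the abstract can be written compactly as follows (a notational sketch; the right-hand sides f, g and the boundary datum u_0 denote generic data not named in the abstract):

```latex
% First problem: a linear PDE for the k-form \omega, with a given 1-form a
\[
  d\omega + a \wedge \omega = f
\]
% Associated algebraic system: multiplication by the 2-form da
\[
  \omega \wedge da = g
\]
% Second problem: generalized symmetric-gradient/curl equation with a
% Dirichlet boundary condition, A an arbitrary (not necessarily
% symmetric or skew-symmetric) matrix
\[
  A\,\nabla u + (\nabla u)^{t} A = f \quad \text{in } \Omega,
  \qquad u = u_0 \quad \text{on } \partial\Omega
\]
```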



Related concepts (27)

Partial differential equation

In mathematics, a partial differential equation (PDE) is an equation that relates a multivariable function to its partial derivatives. The function is often thought of as an "unknown" to be solved for.
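For instance, the heat equation is a classical PDE: it relates the time derivative of an unknown function u(x, t) to its second spatial derivative,

```latex
\[
  \frac{\partial u}{\partial t} = \alpha \,\frac{\partial^2 u}{\partial x^2},
\]
```

where the constant \(\alpha\) is the thermal diffusivity of the medium.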

Equation

In mathematics, an equation is a mathematical formula that expresses the equality of two expressions by connecting them with the equals sign (=). The word equation and its cognates in other languages may have subtly different meanings.

Derivative

In mathematics, the derivative measures the sensitivity to change of a function's output with respect to its input. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: it measures how quickly the position changes as time advances.
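As a minimal numerical illustration (not from this page), the derivative at a point can be approximated by a central difference:

```python
# Central-difference approximation: f'(x) ≈ (f(x+h) - f(x-h)) / (2h).
def derivative(f, x, h=1e-6):
    """Numerical approximation of f'(x) via central differences."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example: f(x) = x**2 has derivative f'(x) = 2x, so f'(3) should be ~6.
approx = derivative(lambda x: x * x, 3.0)
```

For smooth functions the central difference has error O(h^2), which is why it is preferred over the one-sided difference (f(x+h) - f(x)) / h with error O(h).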

Related publications (46)


We introduce a new class of wavelets that behave like a given differential operator L. Our construction is inspired by the derivative-like behavior of classical wavelets. Within our framework, the wavelet coefficients of a signal y are the samples of a smoothed version of L{y}. For a linear system characterized by an operator equation L{y} = x, the operator-like wavelet transform essentially deconvolves the system output y and extracts the "innovation" signal x. The main contributions of the thesis include:

- **Exponential-spline wavelets.** We consider the system L described by a linear differential equation and build wavelets that mimic the behavior of L. The link between the wavelet and the operator is an exponential B-spline function; its integer shifts span the multiresolution space. The construction that we obtain is non-stationary in the sense that the wavelets and the scaling functions depend on the scale. We propose a generalized version of Mallat's fast filterbank algorithm with scale-dependent filters to efficiently perform the decomposition and reconstruction in the new wavelet basis.
- **Activelets in fMRI.** As a practical biomedical imaging application, we study the problem of activity detection in event-related fMRI. For the differential system that links the measurements and the stimuli, we use a linear approximation of the balloon/windkessel model for the hemodynamic response. The corresponding wavelets (we call them activelets) are specially tuned for temporal fMRI signal analysis. We assume that the stimuli are sparse in time and extract the activity-related signal by optimizing a criterion with a sparsity regularization term. We test the method with synthetic fMRI data, and then apply it to a high-resolution fMRI retinotopy dataset to demonstrate its applicability to real data.
- **Operator-like wavelets.** Finally, we generalize the operator-like wavelet construction to a wide class of differential operators L in multiple dimensions. We give conditions that L must satisfy to generate a valid multiresolution analysis, and show that Matérn and polyharmonic wavelets are particular cases of our construction.
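A toy illustration of the derivative-like behavior the abstract starts from (this is the simplest classical case, not the thesis construction): the Haar wavelet detail coefficients of a smooth signal are scaled samples of its derivative, i.e. the operator-like behavior with L = d/dx.

```python
# Hypothetical sketch: one level of Haar detail coefficients behaves like
# a (smoothed, scaled) first derivative of the sampled signal.
import math

def haar_details(signal):
    """One level of the Haar wavelet transform: detail coefficients."""
    return [(signal[2 * i + 1] - signal[2 * i]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

# Sample a smooth signal y(x) = sin(x); its derivative L{y} is cos(x).
n, dx = 64, 0.05
y = [math.sin(i * dx) for i in range(n)]
d = haar_details(y)

# Each detail coefficient is ~ (dx / sqrt(2)) * cos(x) at the midpoint of
# its sample pair, so rescaling recovers samples of the derivative.
scaled = [c * math.sqrt(2) / dx for c in d]
```

Here `scaled[i]` closely tracks `cos((2*i + 0.5) * dx)`, matching the statement that wavelet coefficients are samples of a smoothed version of L{y}.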

In this thesis we describe a path integral formalism to evaluate approximations to the probability density function for the location and orientation of one end of a continuum polymer chain at thermodynamic equilibrium with a heat bath. We concentrate on those systems for which the associated energy density is at most quadratic in its variables. Our main motivation is to exploit continuum elastic rod models for the approximate computation of DNA looping probabilities. We first re-derive, for a polymer chain system, an expression for the second-order correction term due to quadratic fluctuations about a unique minimal energy configuration. The result, originally stated for a quantum mechanical system by G. Papadopoulos (1975), relies on an elegant algebraic argument that carries over to the real-valued path integrals of interest here. The conclusion is that the appropriate expression can be evaluated in terms of the energy of the minimizer and the inverse square root of the determinant of a matrix satisfying a certain non-linear system of differential equations. We then construct a change of variables which establishes a mapping between the solutions of the aforementioned non-linear Papadopoulos equations and a matrix satisfying an initial value problem for the classic linear system of Jacobi equations associated with the second variation of the energy functional. This conclusion is trivial if no cross-term is present in the second variation, but ceases to be so otherwise; cross-terms are always present in the application of rod models to DNA. We can therefore conclude that the second-order fluctuation correction term to the probability density function for a chain is always given by the inverse square root of the determinant of a matrix of solutions to the Jacobi equations. We believe this conclusion to be original for the real-valued case when the second variation involves cross-terms. Similar results are known for quantum mechanical systems, and, in this context, a connection between the so-called Van Hove–Morette determinant, which involves partial derivatives of the classical action with respect to the boundary values of the configuration variable, and the Jacobi determinant has also been established.

We next apply the formula described above to the specific context of rods, for which the configuration space is that of framed curves, or curves in R^3 × SO(3). An immediate application of our theory is possible if the rod model encompasses bend, twist, stretch and shear. However, the constrained case, where the rod is considered to be inextensible and unshearable, is more standard in polymer physics. In this last case, our results are more delicate, as the Lagrangian description breaks down and the Hamiltonian formulation must be invoked. It is known that the unconstrained local minimizers approach constrained minimizers as the coefficients in the shear and extension terms of the energy are sent to infinity. Here we observe that the Hamiltonian form of the unconstrained Jacobi system similarly has a limit, so that the fluctuation correction in the path integral can still be expressed as the square root of the determinant of a matrix solution of a set of Jacobi equations appropriate to the constrained problem. As in reality DNA and other biological macromolecules are certainly at least slightly shearable and extensible, the limit of the fluctuation correction is undoubtedly physically appropriate.

The above theory provides a computationally highly tractable approach to the estimation of the appropriate probability density functions. For application to sequence-dependent models of DNA, the associated system of equations has non-constant coefficients, which is of little consequence for a numerical treatment but precludes closed-form expressions. On the other hand, the theory also applies to simplified homogeneous models. Accordingly, we conclude by applying our approach in a completely analytic, closed-form way to the computation of the approximate probability density function for a uniform, non-isotropic, intrinsically straight and untwisted rod to form a circular loop.
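A minimal numerical sketch of the determinant-from-Jacobi-equations idea, in the simplest scalar setting (a Gelfand–Yaglom-style toy case, not the rod model): for the fluctuation operator -d²/dt² + w² with zero boundary conditions, the determinant is proportional to y(T), where y solves the Jacobi equation y'' = w² y with y(0) = 0, y'(0) = 1; for constant w this equals sinh(wT)/w.

```python
# Hypothetical scalar example: compute the fluctuation determinant of
# -d^2/dt^2 + w^2 by integrating the Jacobi equation y'' = w^2 * y
# with initial conditions y(0) = 0, y'(0) = 1; the result is y(T),
# which for constant w has the closed form sinh(w*T)/w.
import math

def jacobi_determinant(w, T, steps=100000):
    dt = T / steps
    y, yp = 0.0, 1.0                 # y(0) = 0, y'(0) = 1
    for _ in range(steps):
        yp += dt * (w * w) * y       # semi-implicit Euler step
        y += dt * yp
    return y

w, T = 2.0, 1.0
numeric = jacobi_determinant(w, T)
exact = math.sinh(w * T) / w         # closed form for constant w
```

The point, as in the abstract, is that a single initial value problem for the (linear) Jacobi system replaces the direct evaluation of an infinite-dimensional determinant.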

The focus of this thesis is on developing efficient algorithms for two important problems arising in model reduction, estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix and solving large-scale linear matrix equations, by extracting and exploiting underlying low-rank properties.

Reliable and efficient algorithms for estimating the smallest eigenvalue of a parameter-dependent Hermitian matrix $A(\mu)$ for many parameter values $\mu$ are important in a variety of applications. Most notably, they play a crucial role in \textit{a posteriori} error estimation for reduced basis methods for parametrized partial differential equations. We propose a novel subspace approach, which builds upon the current state of the art, the Successive Constraint Method (SCM), and improves it by additionally incorporating the sampled smallest eigenvectors and implicitly exploiting their smoothness properties. Like SCM, our approach provides rigorous lower and upper bounds for the smallest eigenvalues on the parameter domain $D$. We present theoretical and experimental evidence that our approach represents a significant improvement over SCM, in the sense that the bounds are often much tighter at negligible additional cost. We have successfully applied the approach to the computation of coercivity and inf-sup constants, as well as of $\varepsilon$-pseudospectra.

Solving an $m \times n$ linear matrix equation $A_1 X B_1^T + \cdots + A_K X B_K^T = C$ as an $mn \times mn$ linear system typically limits the feasible values of $m, n$ to a few hundred at most. We propose a new approach that exploits the fact that the solution $X$ can often be well approximated by a low-rank matrix, and computes it by combining greedy low-rank techniques with Galerkin projection and preconditioned gradients. This can be implemented so that only linear systems of size $m \times m$ and $n \times n$ need to be solved. Moreover, these linear systems inherit the sparsity of the coefficient matrices, which makes it possible to address linear matrix equations as large as $m = n = O(10^5)$. Numerical experiments demonstrate that the proposed methods perform well for generalized Lyapunov equations, as well as for standard Lyapunov equations. Finally, we combine the ideas used for matrix equations and parameter-dependent eigenvalue problems, and propose a low-rank reduced basis approach for solving parameter-dependent Lyapunov equations.
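A toy illustration of the matrix-equation structure such solvers exploit (a hypothetical diagonal case, not the greedy low-rank algorithm): for a Lyapunov equation A X + X A^T + C = 0 with diagonal A = diag(a_1, ..., a_n), the solution is entrywise X_ij = -C_ij / (a_i + a_j), so it can be computed without ever forming the n² × n² vectorized system.

```python
# Hypothetical example: Lyapunov equation A X + X A^T + C = 0 with
# diagonal A. Entrywise, a_i*X_ij + X_ij*a_j + C_ij = 0, so
# X_ij = -C_ij / (a_i + a_j) whenever a_i + a_j != 0.

def solve_diagonal_lyapunov(a, C):
    n = len(a)
    return [[-C[i][j] / (a[i] + a[j]) for j in range(n)] for i in range(n)]

def residual(a, C, X):
    """Largest entry of A X + X A^T + C; ~0 for the exact solution."""
    n = len(a)
    return max(abs(a[i] * X[i][j] + X[i][j] * a[j] + C[i][j])
               for i in range(n) for j in range(n))

a = [-1.0, -2.0, -3.0]          # stable diagonal A (a_i + a_j < 0)
C = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
X = solve_diagonal_lyapunov(a, C)
```

In the general sparse, non-diagonal case no such formula exists, which is where the thesis's combination of greedy low-rank updates, Galerkin projection and preconditioned gradients comes in.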