
Publication: Inclusions différentielles et problèmes variationnels (Differential inclusions and variational problems)

Abstract

In this thesis we deal with three different but connected questions.

Firstly (cf. Chapter 2), we make a systematic study of the generalized notions of convexity for sets: we study the notions of polyconvex, quasiconvex and rank one convex sets. These notions are nowadays well known in the context of functions, but not in the context of sets. Following the classical approach, we give precise definitions of generalized convex sets and study several issues in this generalized sense, such as the concept of convex hull, Carathéodory and separation theorems, and the notion of extreme point.

Secondly, we study differential inclusions of the form

Du(x) ∈ E, a.e. x ∈ Ω. (1)

The method we use to solve this kind of problem is the Baire categories method developed by Dacorogna-Marcellini [14]. Known sufficient conditions for solvability are connected to the generalized convex hulls of the set E. In Chapter 3 we compute the rank one convex hull of some sets of matrices in order to obtain, in Chapter 4, existence results. Namely, we consider the problem of finding u : Ω ⊂ Rn → RN, with a Dirichlet boundary condition, such that

Φ(Du(x)) ∈ {α, β}, a.e. x ∈ Ω,

Φ being an arbitrary quasi-affine function. We also consider the problem of finding u : Ω ⊂ Rn → Rn satisfying a differential inclusion on the singular values λ1(Du) ≤ … ≤ λn(Du) of Du ∈ Rn×n.

Finally, in Chapter 5, we deal with several minimizing problems of the form (P): minimize ∫Ω f(Du(x)) dx among maps u with prescribed Dirichlet boundary data. Denoting by Qf the quasiconvex envelope of f, we verify that solving the equation

Qf(Du(x)) = f(Du(x)), a.e. x ∈ Ω, (2)

is, under some conditions, sufficient to ensure the existence of a solution of (P). The differential inclusions considered in Chapter 4 are helpful to solve some equations of the form (2) and thus allow us to solve problems of type (P). In particular, we consider problem (P) with f(ξ) = g(Φ(ξ)) for all ξ ∈ RN×n, Φ being an arbitrary quasi-affine function.
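For context, the classical notions for functions that Chapter 2 transposes to sets can be recalled as follows. These are the standard definitions for a function f : R^{N×n} → R, stated here as background rather than quoted from the thesis:

```latex
% Standard generalized convexity notions for f : \mathbb{R}^{N\times n} \to \mathbb{R}
% (background; the thesis develops the analogous notions for sets).
\begin{itemize}
  \item \textbf{Polyconvexity:} $f(\xi) = g(T(\xi))$ for some convex function $g$,
        where $T(\xi)$ denotes the vector of all minors of $\xi$.
  \item \textbf{Quasiconvexity:} for every bounded open set $D \subset \mathbb{R}^n$,
        every $\xi$ and every $\varphi \in W_0^{1,\infty}(D;\mathbb{R}^N)$,
        \[ f(\xi) \le \frac{1}{\operatorname{meas} D}
           \int_D f\bigl(\xi + D\varphi(x)\bigr)\,dx . \]
  \item \textbf{Rank one convexity:} $t \mapsto f(\xi + t\, a \otimes b)$ is convex
        for every $\xi$ and every $a \in \mathbb{R}^N$, $b \in \mathbb{R}^n$.
\end{itemize}
% One always has: convex $\Rightarrow$ polyconvex $\Rightarrow$ quasiconvex
% $\Rightarrow$ rank one convex.
```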



Related concepts (17)

Quasiconvex function

In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set.

Convex set

In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points in the subset, the subset contains the whole line segment that joins them.

Extreme point

In mathematics, an extreme point of a convex set S in a real or complex vector space is a point in S which does not lie in any open line segment joining two points of S.

Related publications (4)

In this thesis we study calculus of variations for differential forms. In the first part we develop the framework of direct methods of calculus of variations in the context of minimization problems for functionals of one or several differential forms of the type $\int_{\Omega} f(d\omega), \quad \int_{\Omega} f(d\omega_{1}, \ldots, d\omega_{m}) \quad \text{ and } \quad \int_{\Omega} f(d\omega, \delta\omega).$ We introduce the appropriate convexity notions in each case: \emph{ext. polyconvexity}, \emph{ext. quasiconvexity} and \emph{ext. one convexity} for functionals of the type $\int_{\Omega} f(d\omega)$; \emph{vectorial ext. polyconvexity}, \emph{vectorial ext. quasiconvexity} and \emph{vectorial ext. one convexity} for functionals of the type $\int_{\Omega} f(d\omega_{1}, \ldots, d\omega_{m})$; and \emph{ext-int. polyconvexity}, \emph{ext-int. quasiconvexity} and \emph{ext-int. one convexity} for functionals of the type $\int_{\Omega} f(d\omega, \delta\omega)$. We study their interrelationships and the connections of these convexity notions with the classical notions of polyconvexity, quasiconvexity and rank one convexity in classical vectorial calculus of variations. We also study weak lower semicontinuity and weak continuity of these functionals in appropriate spaces, address coercivity issues and obtain existence theorems for minimization problems for functionals of one differential form.

In the second part we study different boundary value problems for linear, semilinear and quasilinear Maxwell type operators for differential forms. We study existence and derive interior regularity and $L^{2}$ boundary regularity estimates for the linear Maxwell operator $\delta (A(x)\,d\omega) = f$ with different boundary conditions, and for the related Hodge Laplacian type system $\delta (A(x)\,d\omega) + d\delta\omega = f$ with appropriate boundary data. We also deduce, as a corollary, some existence and regularity results for div-curl type first order systems. We further deduce existence results for semilinear boundary value problems \begin{align*} \left\lbrace \begin{gathered} \delta ( A(x)\, d\omega ) + f(\omega) = \lambda\omega \text{ in } \Omega, \\ \nu \wedge \omega = 0 \text{ on } \partial\Omega, \end{gathered} \right. \end{align*} and, lastly, we briefly discuss existence results for the quasilinear Maxwell operator \begin{align*} \delta ( A(x, d\omega) ) = f, \end{align*} with different boundary data.
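As background for the Hodge Laplacian type system above (a standard identity from Hodge theory, not specific to this thesis): when $A(x)$ is the identity, the two displayed operators combine into the Hodge Laplacian.

```latex
% Standard Hodge-theoretic identity (assuming A(x) = \mathrm{Id}):
% d is the exterior derivative and \delta its formal adjoint (codifferential).
\Delta \omega = (d\delta + \delta d)\,\omega ,
% so the system \delta(d\omega) + d\delta\omega = f is precisely \Delta\omega = f.
```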

Martin Jaggi, Anastasiia Koloskova

Decentralized optimization methods enable on-device training of machine learning models without a central coordinator. In many scenarios, communication between devices is energy-demanding and time-consuming and forms the bottleneck of the entire system. We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators to the communicated messages. By combining our scheme with a new variance reduction technique that progressively reduces, over the iterations, the adverse effect of the injected quantization noise, we obtain a scheme that converges linearly on strongly convex decentralized problems while using compressed communication only. We prove that our method can solve these problems without any increase in the number of communications compared to a baseline that does not perform any communication compression, while still allowing for a significant compression factor that depends on the conditioning of the problem and the topology of the network. We confirm our theoretical findings in numerical experiments.
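One common randomized compression operator of the kind described above is unbiased rand-k sparsification: keep k random coordinates of the message and rescale them so the compressor is unbiased. The sketch below is illustrative only (the paper may use other operators such as quantization), and the helper name `rand_k` is an assumption, not the paper's API:

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsification: keep k random coordinates,
    rescaled by d/k so that E[rand_k(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)  # coordinates to transmit
    out[idx] = x[idx] * (d / k)                 # rescale for unbiasedness
    return out

# Each node would compress the message it sends to its neighbours:
rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
q = rand_k(x, k=3, rng=rng)  # sparse, unbiased surrogate for x
```

Only k of the d coordinates are communicated, giving a d/k compression factor per message; the injected variance is what the paper's variance reduction technique progressively suppresses.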

2020

Nicolas Henri Bernard Flammarion, Loucas Pillaud-Vivien, Aditya Vardhan Varre

Motivated by the recent successes of neural networks that have the ability to fit the data perfectly \emph{and} generalize well, we study the noiseless model in the fundamental least-squares setup. We assume that an optimum predictor fits the inputs and outputs perfectly, $\langle \theta_* , \phi(X) \rangle = Y$, where $\phi(X)$ stands for a possibly infinite-dimensional non-linear feature map. To solve this problem, we consider the estimator given by the last iterate of stochastic gradient descent (SGD) with constant step size. In this context, our contribution is twofold: (i) \emph{from a (stochastic) optimization perspective}, we exhibit an archetypal problem where we can show explicitly the convergence of the SGD final iterate for a non-strongly convex problem with constant step size, whereas usual results use some form of averaging, and (ii) \emph{from a statistical perspective}, we give explicit non-asymptotic convergence rates in the over-parameterized setting and leverage a \emph{fine-grained} parameterization of the problem to exhibit polynomial rates that can be faster than $O(1/T)$. The link with reproducing kernel Hilbert spaces is established.
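A minimal sketch of this setup, under simplifying assumptions not taken from the paper (finite-dimensional features, a full-rank under-parameterized instance, ad hoc step size and iteration count): constant step-size SGD on a noiseless least-squares problem, keeping the last iterate rather than an average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noiseless least squares: y_i = <theta_star, phi_i> exactly.
n, d = 20, 5
Phi = rng.standard_normal((n, d))   # rows play the role of phi(X_i)
theta_star = rng.standard_normal(d)
y = Phi @ theta_star                # no label noise: the optimum interpolates

# Constant step-size SGD; we keep the LAST iterate, not an average.
gamma, T = 0.01, 20_000
theta = np.zeros(d)
for _ in range(T):
    i = rng.integers(n)             # sample one observation
    theta -= gamma * (Phi[i] @ theta - y[i]) * Phi[i]
```

Because the model interpolates, the stochastic gradient vanishes at the optimum, so the last iterate itself can converge with a constant step size, which is the phenomenon the paper analyzes in the general, possibly infinite-dimensional setting.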

2021