In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include a linear function $cx$ (where $c$ is a real number), a quadratic function $cx^2$ ($c$ as a nonnegative real number) and an exponential function $ce^x$ ($c$ as a nonnegative real number). In simple terms, a convex function refers to a function whose graph is shaped like a cup $\cup$ (or a straight line like a linear function), while a concave function's graph is shaped like a cap $\cap$.
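As an informal illustration of the second-derivative criterion above, the following sketch approximates $f''$ by central differences on a grid and checks that it stays nonnegative; the grid size, step, tolerance, and test functions are illustrative choices made here, not part of any standard routine.

```python
import math

def second_derivative_nonnegative(f, a, b, n=1000, h=1e-4, tol=1e-6):
    """Check f''(x) >= -tol on an (n+1)-point grid over [a, b] via central differences."""
    for i in range(n + 1):
        x = a + (b - a) * i / n
        f2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
        if f2 < -tol:
            return False
    return True

print(second_derivative_nonnegative(lambda x: x * x, -5.0, 5.0))  # True: (x^2)'' = 2
print(second_derivative_nonnegative(math.exp, -5.0, 5.0))         # True: (e^x)'' = e^x > 0
print(second_derivative_nonnegative(math.sin, 0.0, math.pi))      # False: sin'' = -sin <= 0 there
```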
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.
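A quick numerical illustration of Jensen's inequality, $f(\mathbb{E}[X]) \leq \mathbb{E}[f(X)]$, under arbitrary illustrative choices of the convex function ($f(x) = x^2$) and the sample distribution:

```python
import random

def jensen_gap(f, samples):
    """E[f(X)] - f(E[X]) for the empirical distribution of the samples."""
    mean = sum(samples) / len(samples)
    return sum(f(x) for x in samples) / len(samples) - f(mean)

# For f(x) = x**2 this gap is exactly the sample variance, hence nonnegative.
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(jensen_gap(lambda x: x * x, xs) >= 0.0)  # True
```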
Let $X$ be a convex subset of a real vector space and let $f: X \to \mathbb{R}$ be a function.
Then $f$ is called convex if and only if any of the following equivalent conditions hold:
For all $0 \leq t \leq 1$ and all $x_1, x_2 \in X$: $f(t x_1 + (1-t) x_2) \leq t f(x_1) + (1-t) f(x_2)$.
The right hand side represents the straight line between $(x_1, f(x_1))$ and $(x_2, f(x_2))$ in the graph of $f$ as a function of $t$; increasing $t$ from $0$ to $1$ or decreasing $t$ from $1$ to $0$ sweeps this line.
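The sketch below traces this parametrization numerically for one illustrative choice of $f$, $x_1$, and $x_2$: as $t$ runs over a grid, the chord value $t f(x_1) + (1-t) f(x_2)$ never falls below the graph value $f(t x_1 + (1-t) x_2)$.

```python
import math

f = math.exp          # an illustrative convex function
x1, x2 = -1.0, 2.0    # arbitrary points in its domain

for k in range(11):
    t = k / 10
    graph = f(t * x1 + (1 - t) * x2)     # value on the graph of f
    chord = t * f(x1) + (1 - t) * f(x2)  # value on the chord between (x2, f(x2)) and (x1, f(x1))
    assert graph <= chord + 1e-12
print("the chord never falls below the graph for the sampled t")
```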
This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees
The first part is devoted to Monge and Kantorovitch problems, discussing the existence and the properties of the optimal plan. The second part introduces the Wasserstein distance on measures and ...
Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).
The content of this course corresponds to that of the Analyse I course, as it is taught to EPFL students during their first semester. Each chapter of the course corresponds ...
In mathematics, a concave function is the negative of a convex function. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap, or upper convex. A real-valued function $f$ on an interval (or, more generally, a convex set in vector space) is said to be concave if, for any $x$ and $y$ in the interval and for any $\alpha \in [0,1]$, $f((1-\alpha)x + \alpha y) \geq (1-\alpha) f(x) + \alpha f(y)$. A function is called strictly concave if the inequality is strict for any $\alpha \in (0,1)$ and $x \neq y$. For a function $f: \mathbb{R} \to \mathbb{R}$, this second definition merely states that for every $z$ strictly between $x$ and $y$, the point $(z, f(z))$ on the graph of $f$ is above the straight line joining the points $(x, f(x))$ and $(y, f(y))$.
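A minimal sketch of the stated relationship, using the illustrative choice $f = \log$ on $(0, \infty)$: the chords of a concave function lie below its graph, while the chords of its negation (a convex function) lie above.

```python
import math

def chord_gap(f, x, y, t):
    """Chord value minus graph value at t*x + (1-t)*y; nonnegative when f is convex."""
    return t * f(x) + (1 - t) * f(y) - f(t * x + (1 - t) * y)

x, y, t = 1.0, 4.0, 0.3  # arbitrary test points in (0, inf)

print(chord_gap(math.log, x, y, t) <= 0.0)                # True: log is concave, chords lie below
print(chord_gap(lambda s: -math.log(s), x, y, t) >= 0.0)  # True: -log is convex, chords lie above
```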
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector.
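The following sketch spells out the three properties for the Euclidean 2-norm on $\mathbb{R}^3$; the vectors and scalar are arbitrary test data chosen for illustration.

```python
import math

def norm2(v):
    """Euclidean 2-norm of a vector given as a list of floats."""
    return math.sqrt(sum(x * x for x in v))

u = [1.0, -2.0, 3.0]
v = [0.5, 4.0, -1.0]
c = -2.5

# Absolute homogeneity (commuting with scaling): ||c*u|| = |c| * ||u||
print(math.isclose(norm2([c * x for x in u]), abs(c) * norm2(u)))
# Triangle inequality: ||u + v|| <= ||u|| + ||v||
print(norm2([a + b for a, b in zip(u, v)]) <= norm2(u) + norm2(v))
# Zero only at the origin: the zero vector has norm 0, and a nonzero vector does not
print(norm2([0.0, 0.0, 0.0]) == 0.0 and norm2(u) > 0.0)
```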
Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
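As a toy instance of continuous optimization, the sketch below runs gradient descent on the convex quadratic $f(x) = (x-3)^2$, whose unique minimizer is $x = 3$; the step size and iteration count are arbitrary choices made for this illustration.

```python
def grad_f(x):
    """Gradient of the convex quadratic f(x) = (x - 3)**2."""
    return 2.0 * (x - 3.0)

x = 0.0       # arbitrary starting point
step = 0.1    # arbitrary fixed step size
for _ in range(200):
    x -= step * grad_f(x)

print(round(x, 6))  # approximately 3.0, the unique minimizer of f
```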
This code is used for developing the project entitled “Study on conformal antennas, proof of concept prototype for a UAV”, from the aspects of theory, design, and implementation. This code aims to speed up the investigation of an arbitrary phased array ant ...
This paper develops a fast algorithm for computing the equilibrium assignment with the perturbed utility route choice (PURC) model. Without compromise, this allows the significant advantages of the PURC model to be used in large-scale applications. We form ...
We study very weak solutions to scalar Euler-Lagrange equations associated with quadratic convex functionals. We investigate whether $W^{1,1}$ solutions are necessarily $W^{1,2}$, which would make the theories of Nash and Schauder applicable. We answer this question positively for a suitable clas ...