
Concept: Saddle point

Summary

In mathematics, a saddle point or minimax point is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point) but which is not a local extremum of the function. A typical example is a critical point that is a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis. However, a saddle point need not take this form. For example, the function f(x,y) = x^2 + y^3 has a critical point at (0, 0) that is a saddle point, since it is neither a relative maximum nor a relative minimum, yet it has neither a relative maximum nor a relative minimum along the y-direction.
The name derives from the fact that the prototypical example in two dimensions is a surface that curves up in one direction and curves down in a different direction, resembling a riding saddle or a mountain pass between two peaks forming a landform saddle.
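The classification sketched above can be checked numerically with the second-derivative test: at a critical point, eigenvalues of the Hessian with mixed signs indicate a saddle, while a singular Hessian (as for x^2 + y^3) leaves the test inconclusive. A minimal sketch:

```python
import numpy as np

# Second-derivative test at a critical point: the sign pattern of the
# Hessian's eigenvalues classifies the point.
# f(x, y) = x^2 - y^2 has a critical point at (0, 0) with Hessian:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eigs = np.linalg.eigvalsh(H)     # eigenvalues in ascending order
print(eigs)                      # [-2.  2.] -> mixed signs: saddle point

# f(x, y) = x^2 + y^3 at (0, 0): f_yy = 6y vanishes at the origin, so the
# Hessian is singular and the second-derivative test is inconclusive --
# the point is still a saddle, as the text explains, but higher-order
# terms decide.
H2 = np.array([[2.0, 0.0],
               [0.0, 0.0]])
print(np.linalg.eigvalsh(H2))    # [0.  2.] -> degenerate, test fails
```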

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related concepts (19)

Critical point is a wide term used in many branches of mathematics. When dealing with functions of a real variable, a critical point is a point in the domain of the function where the function is either not differentiable or the derivative is equal to zero.

In mathematical analysis, the maximum and minimum of a function are, respectively, the largest and smallest value taken by the function. Known generically as extrema, they may be defined either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema).

In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables.
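When second derivatives are not available in closed form, the Hessian can be approximated by central finite differences. A minimal sketch (the helper `numerical_hessian` is a hypothetical name, not part of any library or course material):

```python
import numpy as np

def numerical_hessian(f, p, h=1e-5):
    """Central-difference approximation of the Hessian of f at point p.
    (Hypothetical helper for illustration only.)"""
    p = np.asarray(p, dtype=float)
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            # Standard 4-point stencil for d^2 f / (dx_i dx_j)
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

# The saddle z = x^2 - y^2 again: curvature up in x, down in y.
f = lambda v: v[0]**2 - v[1]**2
print(numerical_hessian(f, [0.0, 0.0]))  # ~[[ 2, 0], [ 0, -2]]
```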

Related courses (19)

MATH-106(a): Analysis II

Study the fundamental concepts of analysis, and the differential and integral calculus of real-valued functions of several variables.

MATH-106(b): Analysis II

Study the fundamental concepts of analysis and the differential and integral calculus of real-valued functions of several variables.

BIO-341: Dynamical systems in biology

This course introduces dynamical systems for modeling simple biological networks. Qualitative analysis of nonlinear dynamical models is developed alongside numerical simulations. The emphasis is on biological examples.

Related publications (6)

Jan Felix Binder, Alfredo Pasquarello

Using a density-functional scheme, we study the migration of a single O atom in a (110) plane between two adjacent bond-center sites in bulk Si. The minimum energy migration path is found through the nudged elastic band method within a generalized gradient approximation for the electronic structure. The energy barrier is then also evaluated within a hybrid functional scheme. We achieve for the transition barrier a best estimate of 2.3 eV in the generalized gradient approximation and of 2.7 eV in the hybrid functional scheme, both in fair agreement with the commonly accepted experimental value of 2.53 eV. The transition is characterized by a saddle point which does not occur at the midpoint between the two bond-center sites and by a pattern of displacements extending up to the second nearest-neighbor Si atoms. The atomistic mechanism of oxygen migration is analyzed from three complementary viewpoints involving the evolution of the structure, the Wannier centers, and the single-particle energies and wave functions. The diffusion process can be separated into two distinct parts. In one part, the exchange of the Si atoms in the first-neighbor shell of the diffusing O atom occurs through the formation of a threefold coordinated O center and an overcoordinated Si atom. In the other part, the Si-Si bond flips its position through the creation of occupied and unoccupied Si dangling bonds which give rise to states in the band gap.
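The nudged elastic band (NEB) method used in this work finds a minimum-energy path between two minima by relaxing a chain of images under the true force perpendicular to the path plus spring forces along it; the highest image approximates the saddle point. A toy sketch on an invented 2-D double-well potential (nothing like the paper's density-functional setup; potential and parameters are made up, with minima at (-1, 0) and (1, 0) and a barrier of 1):

```python
import numpy as np

# Invented double-well: minima at (+-1, 0), saddle at (0, 0), barrier = 1.
def V(p):
    x, y = p
    return (x**2 - 1.0)**2 + 5.0 * (y - 0.5 * np.sin(np.pi * x))**2

def gradV(p):
    x, y = p
    gy = 10.0 * (y - 0.5 * np.sin(np.pi * x))
    gx = 4.0 * x * (x**2 - 1.0) - gy * 0.5 * np.pi * np.cos(np.pi * x)
    return np.array([gx, gy])

n_img = 11
path = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)  # straight initial band
k_spring, step = 1.0, 0.005
for _ in range(4000):
    new = path.copy()
    for i in range(1, n_img - 1):
        tau = path[i + 1] - path[i - 1]
        tau /= np.linalg.norm(tau)
        # true force perpendicular to the band...
        f_perp = -gradV(path[i])
        f_perp -= np.dot(f_perp, tau) * tau
        # ...plus spring force along it to keep images evenly spaced
        f_spr = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
        new[i] = path[i] + step * (f_perp + f_spr)
    path = new

barrier = max(V(p) for p in path) - V(path[0])
print(round(barrier, 2))  # close to 1.0 for this toy potential
```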


We consider minimizing a nonconvex, smooth function f on a Riemannian manifold M. We show that a perturbed version of the Riemannian gradient descent algorithm converges to a second-order stationary point (and hence is able to escape saddle points on the manifold). The rate of convergence depends as $1/\epsilon^2$ on the accuracy $\epsilon$, which matches a rate known only for unconstrained smooth minimization. The convergence rate depends polylogarithmically on the manifold dimension $d$, hence is almost dimension-free. The rate also has a polynomial dependence on the parameters describing the curvature of the manifold and the smoothness of the function. While the unconstrained problem (Euclidean setting) is well-studied, our result is the first to prove such a rate for nonconvex, manifold-constrained problems.
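The escape mechanism behind such results can be caricatured in the plain Euclidean setting (this is only a simplified sketch, not the paper's Riemannian algorithm; the function, stepsizes, and perturbation radius are invented): when the gradient is tiny, inject a small random perturbation so the iterate leaves the saddle's unstable direction.

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.1, tol=1e-3, radius=1e-2,
                 steps=2000, seed=0):
    """Euclidean perturbed gradient descent sketch (hypothetical helper):
    near a stationary point, perturb randomly; otherwise take a GD step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            x = x + rng.uniform(-radius, radius, size=x.shape)  # kick
        else:
            x = x - eta * g
    return x

# Invented test function f(x, y) = x^2 + (y^2 - 1)^2 / 4:
# saddle at the origin, minima at (0, +-1). Plain GD started exactly at
# the saddle never moves; the perturbed version escapes to a minimum.
grad = lambda v: np.array([2.0 * v[0], v[1] * (v[1]**2 - 1.0)])
x = perturbed_gd(grad, [0.0, 0.0])
print(x)  # lands near one of the minima (0, +-1), not the saddle
```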

2019

Volkan Cevher, Maria-Luiza Vladarean

We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth. We propose an adaptive version of the Condat-Vũ algorithm, which alternates between primal gradient steps and dual proximal steps. The method achieves stepsize adaptivity through a simple rule involving $\|A\|$ and the norm of recently computed gradients of $f$. Under standard assumptions, we prove an $\mathcal{O}(k^{-1})$ ergodic convergence rate. Furthermore, when $f$ is also locally strongly convex and $A$ has full row rank, we show that our method converges with a linear rate. Numerical experiments are provided to illustrate the practical performance of the algorithm.

2021
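The primal-dual scheme described in the abstract above can be sketched on a toy instance (a minimal, non-adaptive Condat-Vũ-style iteration with fixed stepsizes; the paper's contribution is choosing them adaptively, and the problem data here are invented): take $f(x) = \tfrac12\|x-b\|^2$, $g = \lambda\|\cdot\|_1$, $A = I$, so $g^*$ is the indicator of the box $\|y\|_\infty \le \lambda$, its prox is a clip, and the saddle-point solution $x^*$ is the soft-thresholding of $b$.

```python
import numpy as np

b = np.array([2.0, 0.3, -1.5])
lam = 0.5
tau = sigma = 0.5        # tau * (sigma * ||A||^2 + L/2) <= 1 holds (L = 1)
x = np.zeros(3)
y = np.zeros(3)
for _ in range(2000):
    x_new = x - tau * ((x - b) + y)                      # primal gradient step
    y = np.clip(y + sigma * (2 * x_new - x), -lam, lam)  # dual prox (clip)
    x = x_new
print(x.round(4))  # approaches soft-threshold(b, 0.5) = (1.5, 0, -1)
```

Because $f$ here is strongly convex and $A = I$ has full row rank, the linear-rate regime of the abstract applies, and the iterates converge quickly to the soft-thresholded solution.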