In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form
$$I(\lambda) = \int_C f(z)\, e^{\lambda g(z)}\, dz,$$
where C is a contour in the complex plane and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold:
C′ passes through one or more zeros of the derivative g′(z),
the imaginary part of g(z) is constant on C′.
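For a concrete sketch of these two conditions, consider the Fresnel-type integral $\int_0^\infty e^{i\lambda t^2}\,dt$: the only zero of the derivative of $g(t) = it^2$ is $t = 0$, and rotating the ray of integration to $t = e^{i\pi/4}s$ makes the imaginary part of $g$ identically zero while turning the integrand into a decaying Gaussian. A minimal numerical sketch (assuming NumPy and SciPy; the closed form $\tfrac{1}{2}\sqrt{\pi/\lambda}\,e^{i\pi/4}$ is standard):

```python
import numpy as np
from scipy.integrate import quad

lam = 50.0

# Original integral: I = ∫_0^∞ exp(iλ t²) dt, highly oscillatory on the real
# axis. On the steepest-descent ray t = e^{iπ/4} s we have g(t) = it² = -s²,
# so Im g ≡ 0 and the integral becomes a Gaussian:
#   I = e^{iπ/4} ∫_0^∞ exp(-λ s²) ds.
rot = np.exp(1j * np.pi / 4)
val, _ = quad(lambda s: np.exp(-lam * s**2), 0, np.inf)
I_deformed = rot * val

# Known closed form for comparison.
I_exact = rot * 0.5 * np.sqrt(np.pi / lam)
print(I_deformed, I_exact)
```

The direct oscillatory integral converges very slowly under quadrature, while the deformed Gaussian integral is trivial; this is the practical payoff of the deformation.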
The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it had occurred in an unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property. Siegel (1932) described some other unpublished notes of Riemann, where Riemann had used this method to derive the Riemann–Siegel formula.
The method of steepest descent is a method to approximate a complex integral of the form
$$I(\lambda) = \int_C f(z)\, e^{\lambda g(z)}\, dz$$
for large λ, where f(z) and g(z) are analytic functions of z. Because the integrand is analytic, the contour C can be deformed into a new contour C′ without changing the integral. In particular, one seeks a new contour on which the imaginary part of g(z) is constant, say $\operatorname{Im} g(z) = \operatorname{Im} g(z_0)$ for a fixed point $z_0$ on C′. Then
$$I(\lambda) = e^{i\lambda \operatorname{Im} g(z_0)} \int_{C'} f(z)\, e^{\lambda \operatorname{Re} g(z)}\, dz,$$
and the remaining integral can be approximated with other methods like Laplace's method.
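The final Laplace step can be sketched on a classical example: $n! = \int_0^\infty e^{n\ln t - t}\,dt$ has its maximum at $t = n$, and Laplace's method yields Stirling's approximation $\sqrt{2\pi n}\,(n/e)^n$. A minimal check, assuming only the Python standard library:

```python
import math

# Laplace's method applied to n! = ∫_0^∞ exp(n ln t - t) dt: the exponent is
# maximized at t = n, and expanding to second order there gives Stirling's
# approximation sqrt(2πn) (n/e)^n.
n = 20
exact = math.factorial(n)
laplace = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
print(laplace / exact)  # relative error is about 1/(12n)
```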
The method is called the method of steepest descent because for analytic , constant phase contours are equivalent to steepest descent contours.
If $g(z) = X(z) + iY(z)$ is an analytic function of $z = x + iy$, it satisfies the Cauchy–Riemann equations
$$\frac{\partial X}{\partial x} = \frac{\partial Y}{\partial y}, \qquad \frac{\partial X}{\partial y} = -\frac{\partial Y}{\partial x}.$$
Then
$$\nabla X \cdot \nabla Y = \frac{\partial X}{\partial x}\frac{\partial Y}{\partial x} + \frac{\partial X}{\partial y}\frac{\partial Y}{\partial y} = 0,$$
so contours of constant phase are also contours of steepest descent.
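This orthogonality of the gradients of the real and imaginary parts can be checked numerically with finite differences; the function $g(z) = z^3 + z$ and the sample point below are arbitrary illustrative choices:

```python
import numpy as np

# The Cauchy–Riemann equations imply ∇X · ∇Y = 0 for g = X + iY analytic.
# Finite-difference check at a sample point for a sample analytic function.
def g(z):
    return z**3 + z

z0, h = 0.7 + 0.4j, 1e-6
gx = (g(z0 + h) - g(z0 - h)) / (2 * h)          # ∂g/∂x (real step)
gy = (g(z0 + 1j * h) - g(z0 - 1j * h)) / (2 * h)  # ∂g/∂y (imaginary step)
grad_X = np.array([gx.real, gy.real])  # gradient of X = Re g
grad_Y = np.array([gx.imag, gy.imag])  # gradient of Y = Im g
print(np.dot(grad_X, grad_Y))  # ≈ 0 by the Cauchy–Riemann equations
```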
Let f, S : Cⁿ → C and C ⊂ Cⁿ. If
$$M = \sup_{x \in C} \operatorname{Re} S(x) < \infty,$$
where $\operatorname{Re}(\cdot)$ denotes the real part, and there exists a positive real number λ₀ such that
$$\int_C \left| f(x)\, e^{\lambda_0 S(x)} \right| dx < \infty,$$
then the following estimate holds:
$$\left| \int_C f(x)\, e^{\lambda S(x)}\, dx \right| \leqslant \mathrm{const} \cdot e^{\lambda M}, \qquad \forall \lambda \in \mathbb{R},\ \lambda \geqslant \lambda_0.$$
Proof of the simple estimate:
$$\left| \int_C f(x)\, e^{\lambda S(x)}\, dx \right| \leqslant \int_C |f(x)|\, e^{\lambda \operatorname{Re} S(x)}\, dx = \int_C |f(x)|\, e^{\lambda_0 \operatorname{Re} S(x)}\, e^{(\lambda - \lambda_0) \operatorname{Re} S(x)}\, dx \leqslant \underbrace{\left( \int_C \left| f(x)\, e^{\lambda_0 S(x)} \right| dx \right) e^{-\lambda_0 M}}_{\mathrm{const}} \cdot e^{\lambda M}.$$
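The estimate can be illustrated numerically; the one-dimensional contour and the functions f, S below are sample choices, not taken from the text, with $M = \sup \operatorname{Re} S = 0$ and $\lambda_0 = 1$:

```python
import numpy as np
from scipy.integrate import quad

# Sample illustration of |∫_C f e^{λS} dx| ≤ const · e^{λM} on C = [0, 2]
# with f(x) = 1 and S(x) = -(x - 1)², so M = sup Re S = 0.
f = lambda x: 1.0
S = lambda x: -(x - 1.0) ** 2
M, lam0 = 0.0, 1.0

# const = e^{-λ0 M} ∫ |f e^{λ0 S}| dx, exactly as in the proof above.
const, _ = quad(lambda x: abs(f(x)) * np.exp(lam0 * S(x)), 0, 2)
const *= np.exp(-lam0 * M)

bounds_hold = []
for lam in (1.0, 10.0, 100.0):
    I, _ = quad(lambda x: f(x) * np.exp(lam * S(x)), 0, 2)
    # tiny slack guards against floating-point equality at λ = λ0
    bounds_hold.append(abs(I) <= const * np.exp(lam * M) + 1e-12)
print(bounds_hold)
```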
Let x be a complex n-dimensional vector, and
$$S''_{xx}(x) = \left( \frac{\partial^2 S(x)}{\partial x_i\, \partial x_j} \right), \qquad 1 \leqslant i,\ j \leqslant n,$$
denote the Hessian matrix for a function S(x).
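The Hessian $S''_{xx}$ can be approximated by central finite differences; the function $S(x) = x_1^2 x_2 + x_2^3$ below is a sample choice whose Hessian is known in closed form:

```python
import numpy as np

# Sample S : C^2 -> C with analytic Hessian [[2*x2, 2*x1], [2*x1, 6*x2]].
def S(x):
    return x[0] ** 2 * x[1] + x[1] ** 3

def hessian(S, x, h=1e-4):
    """Central finite-difference Hessian of S at the complex point x."""
    n = len(x)
    H = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (S(x + ei + ej) - S(x + ei - ej)
                       - S(x - ei + ej) + S(x - ei - ej)) / (4 * h * h)
    return H

x0 = np.array([1.0 + 0.5j, 2.0 - 1.0j])
print(hessian(S, x0))
```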