In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker. Consequently, the method is also known as the Brent–Dekker method.
Modern improvements on Brent's method include Chandrupatla's method, which is simpler and faster for functions that are flat around their roots; Ridders' method, which performs exponential interpolation instead of quadratic interpolation, giving a simpler closed-form formula for the iterations; and the ITP method, which is a hybrid between regula falsi and bisection that achieves optimal worst-case and asymptotic guarantees.
The idea of combining the bisection method with the secant method goes back to Dekker (1969).
Suppose that we want to solve the equation f(x) = 0. As with the bisection method, we need to initialize Dekker's method with two points, say a0 and b0, such that f(a0) and f(b0) have opposite signs. If f is continuous on [a0, b0], the intermediate value theorem guarantees the existence of a solution between a0 and b0.
Three points are involved in every iteration:
bk is the current iterate, i.e., the current guess for the root of f.
ak is the "contrapoint," i.e., a point such that f(ak) and f(bk) have opposite signs, so the interval [ak, bk] contains the solution. Furthermore, |f(bk)| should be less than or equal to |f(ak)|, so that bk is a better guess for the unknown solution than ak.
bk−1 is the previous iterate (for the first iteration, we set bk−1 = a0).
Two provisional values for the next iterate are computed. The first is given by linear interpolation, i.e. the secant method: s = bk − f(bk)(bk − bk−1)/(f(bk) − f(bk−1)) if f(bk) ≠ f(bk−1), and s = m otherwise. The second is given by the bisection method: m = (ak + bk)/2, the midpoint of the current interval. If the secant result s lies strictly between bk and m, it becomes the next iterate (bk+1 = s); otherwise the midpoint is used (bk+1 = m). A minimal code sketch of this iteration is given below.
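The following is a minimal, illustrative Python sketch of Dekker's iteration as described above. The function name dekker, the tolerance handling, and the termination test are assumptions made for this example, not part of any particular library.

```python
def dekker(f, a, b, tol=1e-12, max_iter=100):
    """Illustrative sketch of Dekker's method: secant step when it falls
    inside the bracket, bisection midpoint otherwise."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    if abs(fa) < abs(fb):            # keep b as the better guess
        a, b, fa, fb = b, a, fb, fa
    b_prev, fb_prev = a, fa          # previous iterate b_{k-1} starts at a_0

    for _ in range(max_iter):
        if fb == 0 or abs(b - a) < tol:
            return b
        m = (a + b) / 2              # provisional value from bisection
        if fb != fb_prev:            # provisional value from the secant method
            s = b - fb * (b - b_prev) / (fb - fb_prev)
        else:
            s = m
        # Take the secant value only if it lies strictly between b and m
        b_new = s if min(b, m) < s < max(b, m) else m
        f_new = f(b_new)
        # Choose the new contrapoint so the interval still brackets the root
        if fa * f_new >= 0:
            a, fa = b, fb
        b_prev, fb_prev = b, fb
        b, fb = b_new, f_new
        if abs(fa) < abs(fb):        # swap if the contrapoint is the better guess
            a, b, fa, fb = b, a, fb, fa
    return b
```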
In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
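As an illustration of this trial-and-error idea, here is a short Python sketch of one plain false-position iteration; the name false_position, the tolerance, and the stopping test on |f(c)| are choices made for this example only.

```python
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Method of false position: like bisection, but the test value is where
    the straight line through (a, f(a)) and (b, f(b)) crosses zero."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # zero of the secant line, inside [a, b]
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:        # root lies in [a, c]: move the right endpoint
            b, fb = c, fc
        else:                  # root lies in [c, b]: move the left endpoint
            a, fa = c, fc
    return c
```

Unlike bisection, one endpoint can stay fixed for many iterations, which is why modified variants of the method are used in practice.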
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years. For finding a zero of a function f, the secant method is defined by the recurrence relation xn = xn−1 − f(xn−1)(xn−1 − xn−2)/(f(xn−1) − f(xn−2)). As can be seen from this formula, two initial values x0 and x1 are required.
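A short Python sketch of this recurrence follows; the function name secant and the stopping criteria are illustrative assumptions, and the example approximates sqrt(2) as the positive zero of x^2 − 2.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_n = x_{n-1} - f(x_{n-1})(x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # secant line is horizontal; cannot continue
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: approximate sqrt(2) as the positive root of x^2 - 2
print(secant(lambda x: x * x - 2, 1.0, 2.0))   # ≈ 1.4142135623730951
```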
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound).
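In practice, such approximations are usually obtained through a library routine rather than a hand-written loop; for instance, SciPy exposes Brent's method as scipy.optimize.brentq. The bracketing interval and the test function below are chosen purely for illustration.

```python
import math
from scipy.optimize import brentq

# f(x) = cos(x) - x changes sign on [0, 1]: f(0) = 1 > 0 and f(1) = cos(1) - 1 < 0,
# so the interval brackets a root and Brent's method is applicable.
root = brentq(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)   # ≈ 0.739085 (the unique fixed point of cos)
```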