Curvature of Riemannian manifolds

In mathematics, specifically differential geometry, the infinitesimal geometry of Riemannian manifolds with dimension greater than 2 is too complicated to be described by a single number at a given point. Riemann introduced an abstract and rigorous way to define curvature for these manifolds, now known as the Riemann curvature tensor. Similar notions have found applications throughout the differential geometry of surfaces and other objects. The curvature of a pseudo-Riemannian manifold can be expressed in the same way with only slight modifications.
Rate of convergence

In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence $(x_k)$ that converges to $L$ is said to have order of convergence $q \geq 1$ and rate of convergence $\mu$ if

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu.$$

The rate of convergence $\mu$ is also called the asymptotic error constant. Note that this terminology is not standardized and some authors will use rate where this article uses order.
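The order $q$ can also be estimated numerically from consecutive errors, using the ratio of successive error logarithms. A minimal Python sketch of that idea, where the Newton iteration for $\sqrt{2}$ serves only as an illustrative test sequence:

```python
import math

def estimate_order(xs, limit):
    """Estimate the order of convergence q from consecutive errors.

    Uses the standard ratio q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}),
    where e_k = |x_k - limit|.
    """
    e = [abs(x - limit) for x in xs]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# Illustrative test sequence: Newton's method for sqrt(2),
# which converges quadratically (q = 2).
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(estimate_order(xs, math.sqrt(2.0)))  # estimates approach 2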
Stochastic gradient descent

Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data).
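A minimal Python sketch of the idea, taking one-sample gradient steps on a toy least-squares problem; the data, learning rate, and epoch count are illustrative choices, not part of the article:

```python
import random

def sgd_linear_regression(data, lr=0.01, epochs=200):
    """Minimal SGD for 1-D least squares: fit y ~ w*x + b.

    Each update uses the gradient of the loss on ONE randomly chosen
    sample instead of the full-data gradient.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            err = (w * x + b) - y      # residual on this sample
            w -= lr * 2.0 * err * x    # d/dw of (w*x + b - y)^2
            b -= lr * 2.0 * err        # d/db of (w*x + b - y)^2
    return w, b

# Toy data from y = 3x + 1; SGD should recover w ~ 3, b ~ 1.
data = [(x / 10.0, 3.0 * (x / 10.0) + 1.0) for x in range(20)]
print(sgd_linear_regression(data))
```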
Constant curvature

In mathematics, constant curvature is a concept from differential geometry. Here, curvature refers to the sectional curvature of a space (more precisely a manifold) and is a single number determining its local geometry. The sectional curvature is said to be constant if it has the same value at every point and for every two-dimensional tangent plane at that point. For example, a sphere is a surface of constant positive curvature.
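For reference, the standard simply connected model spaces realize each sign of constant curvature, with the sphere and hyperbolic space of radius $r$ scaling as

$$K = \begin{cases} +1/r^2 & \text{sphere of radius } r, \\ 0 & \text{Euclidean space,} \\ -1/r^2 & \text{hyperbolic space of radius } r. \end{cases}$$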
Pseudoconvex function

In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold on the entire domain of the function, not only for nearby points.
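For differentiable $f$, pseudoconvexity means $\nabla f(x) \cdot (y - x) \geq 0 \implies f(y) \geq f(x)$. A standard one-dimensional example is

$$f(x) = x + x^3,$$

which is pseudoconvex but not convex: $f'(x) = 1 + 3x^2 > 0$, so $f'(x)(y - x) \geq 0$ forces $y \geq x$ and hence $f(y) \geq f(x)$ because $f$ is increasing, yet $f''(x) = 6x < 0$ for $x < 0$.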
Concave function

In mathematics, a concave function is the negative of a convex function. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap, or upper convex. A real-valued function $f$ on an interval (or, more generally, a convex set in a vector space) is said to be concave if, for any $x$ and $y$ in the interval and for any $\alpha \in [0, 1]$,

$$f(\alpha x + (1 - \alpha)y) \geq \alpha f(x) + (1 - \alpha) f(y).$$

A function is called strictly concave if

$$f(\alpha x + (1 - \alpha)y) > \alpha f(x) + (1 - \alpha) f(y)$$

for any $\alpha \in (0, 1)$ and $x \neq y$. For a function $f$, this second definition merely states that for every $z$ strictly between $x$ and $y$, the point $(z, f(z))$ on the graph of $f$ is above the straight line joining the points $(x, f(x))$ and $(y, f(y))$.
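A quick worked instance via the second-derivative test: the natural logarithm is strictly concave on its domain, since

$$f(x) = \ln x, \qquad f''(x) = -\frac{1}{x^2} < 0 \quad \text{for all } x > 0.$$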
Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
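In standard form, such a problem is usually written as

$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \leq 0, \quad i = 1, \dots, m, \\ & a_j^\top x = b_j, \quad j = 1, \dots, p, \end{aligned}$$

where $f_0, \dots, f_m$ are convex functions and the equality constraints are affine.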
Convex set

In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points in the subset, the subset contains the whole line segment that joins them. Equivalently, a convex set or a convex region is a subset that intersects every line in a single line segment (possibly empty). For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example, a crescent shape, is not convex. The boundary of a convex set in the plane is always a convex curve.
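In symbols, a set $C$ is convex when it contains the segment between any two of its points:

$$x, y \in C \text{ and } t \in [0, 1] \implies t x + (1 - t) y \in C.$$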
Bisection method

In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods.
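A minimal Python sketch of the method; the cubic test function and the tolerance are illustrative choices:

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b], where f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:      # sign change on [a, m]: root lies there
            b = m
        else:                # otherwise the root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# Example: root of x^3 - x - 2 on [1, 2] (true root ~ 1.5213797).
print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))
```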
Secant method

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function $f$. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years. For finding a zero of a function $f$, the secant method is defined by the recurrence relation

$$x_n = x_{n-1} - f(x_{n-1}) \, \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})}.$$

As can be seen from this formula, two initial values $x_0$ and $x_1$ are required.
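The recurrence translates directly into code. A minimal Python sketch, reusing the cubic from the bisection example as an illustrative test function:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Iterate the secant recurrence until successive iterates agree to tol."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant line; choose other starting values")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # the recurrence above
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example: root of x^3 - x - 2 with starting values 1 and 2.
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```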