Numerical methods for partial differential equations: Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs). In principle, specialized methods exist for hyperbolic, parabolic, or elliptic partial differential equations. One such approach is the finite difference method, in which functions are represented by their values at certain grid points and derivatives are approximated through differences in these values.
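As a rough illustration of that idea (a minimal sketch, not taken from the entry; the grid, the test function sin(x), and the step size are chosen here purely for the example), the standard central difference approximates a second derivative from neighbouring grid values:

```python
import numpy as np

# Hypothetical example: approximate u''(x) for u(x) = sin(x) on a uniform grid.
n = 101
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
u = np.sin(x)

# Central second difference at the interior grid points:
# u''(x_i) ~ (u[i-1] - 2*u[i] + u[i+1]) / h**2
d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

# For u = sin(x) the exact second derivative is -sin(x); check the error.
print(np.max(np.abs(d2u + np.sin(x[1:-1]))))  # O(h**2), roughly 1e-4 here
```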
Linear algebra: Linear algebra is the branch of mathematics concerning linear equations such as $a_1 x_1 + \cdots + a_n x_n = b$, linear maps such as $(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n$, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions.
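For concreteness, a small sketch (the 2x2 system below is a made-up example) showing both views at once: a matrix A encodes a system of linear equations A x = b and, equally, the linear map x -> A x.

```python
import numpy as np

# Hypothetical system: 2*x1 + 1*x2 = 3 and 1*x1 + 3*x2 = 5, in matrix form A @ x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # the unique solution of A @ x = b
print(x)                    # [0.8 1.4]

# The same matrix also represents a linear map x -> A @ x; applying it recovers b.
print(A @ x)                # [3. 5.]
```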
Relaxation (iterative method): In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems. Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations. They are also used to solve the linear equations arising in linear least-squares problems and systems of linear inequalities, such as those arising in linear programming. They have also been developed for solving nonlinear systems of equations.
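Below is a minimal sketch of one classic relaxation scheme, Gauss-Seidel, assuming a diagonally dominant matrix so that the sweeps converge; the 3x3 system is hypothetical and chosen only to illustrate the update.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, sweeps=50):
    """Gauss-Seidel relaxation: repeatedly solve the i-th equation for x[i],
    always using the most recently updated values of the other unknowns."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical diagonally dominant system (guarantees convergence of the sweeps).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))   # agrees with np.linalg.solve(A, b) to high accuracy
```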
Jordan matrix: In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and one 1), where each block along the diagonal, called a Jordan block, has the following form:
$$J_{\lambda,n} = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
Every Jordan block is specified by its dimension n and its eigenvalue $\lambda$, and is denoted as $J_{\lambda,n}$. It is an $n \times n$ matrix of zeroes everywhere except for the diagonal, which is filled with $\lambda$, and the superdiagonal, which is composed of ones.
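A short numerical sketch (the helper name jordan_block is hypothetical) that builds such a block following the description above: lambda on the diagonal, ones on the superdiagonal, zeros elsewhere.

```python
import numpy as np

def jordan_block(lam, n):
    """Hypothetical helper: build the n x n Jordan block J_{lam,n}
    (lam on the diagonal, ones on the superdiagonal, zeros elsewhere)."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

print(jordan_block(5.0, 3))
# [[5. 1. 0.]
#  [0. 5. 1.]
#  [0. 0. 5.]]
```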
Precondition: In computer programming, a precondition is a condition or predicate that must always be true just prior to the execution of some section of code or before an operation in a formal specification. If a precondition is violated, the effect of the section of code becomes undefined and thus may or may not carry out its intended work. Security problems can arise due to incorrect preconditions. Often, preconditions are simply included in the documentation of the affected section of code.
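A toy sketch of the idea (the function real_sqrt is hypothetical): the precondition is stated in the documentation and, here, also enforced with an assertion so that a violation fails loudly instead of leading to undefined behaviour.

```python
import math

def real_sqrt(x: float) -> float:
    """Precondition: x >= 0. The result is the non-negative square root of x."""
    assert x >= 0, "precondition violated: x must be non-negative"
    return math.sqrt(x)

print(real_sqrt(2.0))    # fine: the precondition holds
# real_sqrt(-1.0)        # would trip the assertion; unchecked, the behaviour is undefined
```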
Periodogram: In signal processing, a periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898. Today, the periodogram is a component of more sophisticated methods (see spectral estimation). It is the most common tool for examining the amplitude vs frequency characteristics of FIR filters and window functions. FFT spectrum analyzers are also implemented as a time-sequence of periodograms. There are at least two different definitions in use today.
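A minimal sketch using one common convention, |DFT(x)|^2 scaled by 1/(fs*N); the 50 Hz test signal and that particular scaling are assumptions made for the example, since, as noted above, more than one definition is in use.

```python
import numpy as np

# Hypothetical signal: a 50 Hz sinusoid sampled at 1 kHz, plus a little noise.
fs = 1000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50.0 * t) + 0.1 * np.random.randn(t.size)

# Periodogram under one common convention: |DFT(x)|^2 / (fs * N).
N = x.size
X = np.fft.rfft(x)
pxx = (np.abs(X) ** 2) / (fs * N)
freqs = np.fft.rfftfreq(N, d=1.0 / fs)

print(freqs[np.argmax(pxx)])   # peak frequency, close to 50 Hz
```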
Linear system of divisors: In algebraic geometry, a linear system of divisors is an algebraic generalization of the geometric notion of a family of curves; the dimension of the linear system corresponds to the number of parameters of the family. Linear systems arose first in the form of linear systems of algebraic curves in the projective plane and, through gradual generalisation, assumed a more general form, so that one can speak of linear equivalence of divisors D on a general scheme or even a ringed space $(X, \mathcal{O}_X)$.
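For reference, the usual definition of the linear equivalence mentioned above, stated here as a sketch for divisors on a variety X (this formulation is not part of the entry itself):

```latex
% Linear equivalence of divisors and the complete linear system |D|:
D \sim D' \iff D - D' = \operatorname{div}(f)\ \text{for some nonzero rational function } f \text{ on } X,
\qquad
|D| = \{\, D' \ge 0 : D' \sim D \,\}.
```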
Iterative deepening depth-first search: In computer science, iterative deepening search or more specifically iterative deepening depth-first search (IDS or IDDFS) is a state space/graph search strategy in which a depth-limited version of depth-first search is run repeatedly with increasing depth limits until the goal is found. IDDFS is optimal like breadth-first search, but uses much less memory; at each iteration, it visits the nodes in the search tree in the same order as depth-first search, but the cumulative order in which nodes are first visited is effectively breadth-first.
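A compact sketch of the strategy (the example graph, the neighbors callback, and the depth cap are hypothetical; the sketch assumes an acyclic search space, since it does not track visited nodes):

```python
def iddfs(start, goal, neighbors, max_depth=50):
    """Iterative deepening DFS: run depth-limited DFS with limits 0, 1, 2, ...
    until the goal is found or the depth cap is reached."""
    def dls(node, depth):
        if node == goal:
            return [node]
        if depth == 0:
            return None
        for nxt in neighbors(node):
            path = dls(nxt, depth - 1)
            if path is not None:
                return [node] + path
        return None

    for limit in range(max_depth + 1):
        path = dls(start, limit)
        if path is not None:
            return path
    return None

# Hypothetical graph given as an adjacency dict.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F'], 'F': []}
print(iddfs('A', 'F', lambda n: graph[n]))   # ['A', 'C', 'E', 'F']
```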
Primary decomposition: In mathematics, the Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means that every ideal can be decomposed as an intersection, called a primary decomposition, of finitely many primary ideals (which are related to, but not quite the same as, powers of prime ideals). The theorem was first proven by Emanuel Lasker (1905) for the special case of polynomial rings and convergent power series rings, and was proven in its full generality by Emmy Noether (1921).
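Two standard worked examples, included here for illustration rather than taken from the entry, one in a polynomial ring and one in the integers:

```latex
% In k[x, y]: (x) is prime and (x, y)^2 is (x, y)-primary.
(x^2,\, xy) \;=\; (x) \,\cap\, (x, y)^2
% In the integers: 4\mathbb{Z} and 3\mathbb{Z} are primary
% (their radicals are the prime ideals 2\mathbb{Z} and 3\mathbb{Z}).
12\mathbb{Z} \;=\; 4\mathbb{Z} \,\cap\, 3\mathbb{Z}
```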
Coordinate conditions: In general relativity, the laws of physics can be expressed in a generally covariant form. In other words, the description of the world as given by the laws of physics does not depend on our choice of coordinate systems. However, it is often useful to fix upon a particular coordinate system, in order to solve actual problems or make actual predictions. A coordinate condition selects such coordinate system(s). The Einstein field equations do not determine the metric uniquely, even if one knows what the metric tensor equals everywhere at an initial time.
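One widely used example of such a condition, given here for illustration (it is not mentioned in the entry above), is the harmonic, or de Donder, gauge:

```latex
% Harmonic (de Donder) coordinate condition:
g^{\mu\nu}\,\Gamma^{\lambda}{}_{\mu\nu} = 0
% Equivalently, each coordinate function x^\lambda satisfies the curved-space wave equation:
\frac{1}{\sqrt{-g}}\,\partial_{\mu}\!\left(\sqrt{-g}\,g^{\mu\nu}\,\partial_{\nu} x^{\lambda}\right) = 0
```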