Euler method
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for the numerical integration of ordinary differential equations and the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770).
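The update rule behind the method is y_{n+1} = y_n + h·f(t_n, y_n). A minimal Python sketch (the ODE y' = y, the step size, and the step count below are illustrative choices, not from the article):

    def euler(f, t0, y0, h, n_steps):
        """Advance the ODE y' = f(t, y) from (t0, y0) by n_steps steps of size h."""
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + h * f(t, y)   # forward Euler update: y_{n+1} = y_n + h*f(t_n, y_n)
            t = t + h
        return t, y

    # Example: y' = y with y(0) = 1; the exact solution is e^t.
    t_end, y_end = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
    print(y_end)   # ≈ 2.7048, close to e ≈ 2.7183 (the gap reflects first-order accuracy)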
Derivative
In mathematics, the derivative measures the sensitivity to change of a function's output with respect to its input. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point.
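That slope interpretation can be approximated numerically by a difference quotient. A small Python sketch (the position function s(t) = t² is an invented example):

    def forward_difference(f, x, h=1e-6):
        """Approximate f'(x) by the slope of a secant line over a small step h."""
        return (f(x + h) - f(x)) / h

    # Position s(t) = t**2 has velocity s'(t) = 2t; check at t = 3.
    print(forward_difference(lambda t: t**2, 3.0))   # ≈ 6.0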
Estimation
Estimation (or estimating) is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available. Typically, estimation involves "using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter".
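A minimal Python illustration of that last quoted sentence (the population and sample here are simulated, purely for the example):

    import random

    random.seed(0)
    population = [random.gauss(100, 15) for _ in range(100_000)]  # hypothetical population
    sample = random.sample(population, 50)

    # The sample mean is a statistic used to estimate the population mean.
    estimate = sum(sample) / len(sample)
    true_value = sum(population) / len(population)
    print(estimate, true_value)   # the estimate is usable even though it is inexact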
Linear combination
In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.
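As a concrete sketch (my own example, not from the article), forming the linear combination a·x + b·y for vectors:

    def linear_combination(coeffs, vectors):
        """Return a_1*v_1 + ... + a_n*v_n for scalar coeffs and equal-length vectors."""
        return [sum(a * v[j] for a, v in zip(coeffs, vectors))
                for j in range(len(vectors[0]))]

    # 2*x + 3*y with x = (1, 0, 2) and y = (0, 1, 1) gives (2, 3, 7).
    print(linear_combination([2, 3], [[1, 0, 2], [0, 1, 1]]))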
Unbiased estimation of standard deviation
In statistics, and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation (a measure of statistical dispersion) of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics, since its need is avoided by standard procedures such as the use of significance tests and confidence intervals, or by using Bayesian analysis.
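A short simulation sketch of the underlying issue (assuming a normal population with σ = 1 and small samples of size 5, both invented for illustration): even with Bessel's correction, the sample standard deviation systematically underestimates σ, which is the bias that unbiased estimation addresses.

    import random
    import statistics

    random.seed(1)
    n, trials = 5, 20_000
    # statistics.stdev divides by n - 1 (Bessel's correction), which unbiases the
    # *variance*; the square root is nonetheless biased low as an estimate of sigma.
    mean_s = sum(statistics.stdev([random.gauss(0, 1) for _ in range(n)])
                 for _ in range(trials)) / trials
    print(mean_s)   # ≈ 0.94 for n = 5, below the true sigma of 1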
Phase-contrast X-ray imaging
Phase-contrast X-ray imaging or phase-sensitive X-ray imaging is a general term for different technical methods that use information about changes in the phase of an X-ray beam passing through an object in order to create images of it. Standard X-ray imaging techniques such as radiography or computed tomography (CT) rely on a decrease of the X-ray beam's intensity (attenuation) when traversing the sample, which can be measured directly with the assistance of an X-ray detector.
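As a back-of-the-envelope sketch of the attenuation that standard techniques measure (the Beer–Lambert law; the attenuation coefficient and sample thickness below are invented values for illustration):

    import math

    def transmitted_intensity(i0, mu, thickness):
        """Beer-Lambert law: beam intensity after traversing a homogeneous sample."""
        return i0 * math.exp(-mu * thickness)

    # Hypothetical values: mu = 0.5 per cm, sample 2 cm thick.
    print(transmitted_intensity(1.0, 0.5, 2.0))   # ≈ 0.37 of the incident intensity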
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation σ to the mean μ (or its absolute value, |μ|), and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay.
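A direct Python transcription of the definition (the replicate measurements are invented):

    import statistics

    def coefficient_of_variation(data):
        """CV = standard deviation / |mean|, reported here as a percentage (%RSD)."""
        return 100 * statistics.stdev(data) / abs(statistics.fmean(data))

    # Hypothetical replicate assay measurements:
    print(coefficient_of_variation([9.8, 10.1, 10.0, 9.9, 10.2]))   # ≈ 1.6 %RSD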
Biconjugate gradient method
In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations Ax = b. Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A*. Choose an initial guess x_0, two other vectors x_0* and b*, and a preconditioner M:

    r_0 ← b − A x_0
    r_0* ← b* − x_0* A
    p_0 ← M⁻¹ r_0
    p_0* ← r_0* M⁻¹

for k = 0, 1, 2, … do:

    α_k ← (r_k* M⁻¹ r_k) / (p_k* A p_k)
    x_{k+1} ← x_k + α_k p_k
    x_{k+1}* ← x_k* + ᾱ_k p_k*
    r_{k+1} ← r_k − α_k A p_k
    r_{k+1}* ← r_k* − ᾱ_k p_k* A
    β_k ← (r_{k+1}* M⁻¹ r_{k+1}) / (r_k* M⁻¹ r_k)
    p_{k+1} ← M⁻¹ r_{k+1} + β_k p_k
    p_{k+1}* ← r_{k+1}* M⁻¹ + β̄_k p_k*

In the above formulation, the computed r_k and r_k* satisfy r_k = b − A x_k and r_k* = b* − x_k* A, and thus are the respective residuals corresponding to x_k and x_k*, as approximate solutions to the systems Ax = b and x* A = b*; A* is the adjoint, and ᾱ is the complex conjugate.
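The following Python sketch implements the unpreconditioned variant (M = I) for a real matrix, so the conjugations drop out and multiplication by A* becomes multiplication by the transpose; it is an illustration of the idea, not the article's pseudocode verbatim, and the test system is invented:

    import numpy as np

    def bicg(A, b, x0=None, tol=1e-10, max_iter=200):
        """Unpreconditioned BiCG for a real (not necessarily symmetric) matrix A."""
        x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
        r = b - A @ x                  # residual of A x = b
        r_hat = r.copy()               # shadow residual; r_hat = r is a common choice
        p, p_hat = r.copy(), r_hat.copy()
        for _ in range(max_iter):
            Ap = A @ p
            alpha = (r_hat @ r) / (p_hat @ Ap)
            x = x + alpha * p
            r_next = r - alpha * Ap
            r_hat_next = r_hat - alpha * (A.T @ p_hat)   # the A* (here: transpose) multiply
            if np.linalg.norm(r_next) < tol:
                return x
            beta = (r_hat_next @ r_next) / (r_hat @ r)
            p = r_next + beta * p
            p_hat = r_hat_next + beta * p_hat
            r, r_hat = r_next, r_hat_next
        return x

    A = np.array([[4.0, 1.0], [2.0, 3.0]])   # nonsymmetric test system
    b = np.array([1.0, 2.0])
    print(bicg(A, b))              # ≈ [0.1, 0.6]
    print(np.linalg.solve(A, b))   # reference solution for comparison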
Conical combination
Given a finite number of vectors x_1, x_2, …, x_k in a real vector space, a conical combination, conical sum, or weighted sum of these vectors is a vector of the form λ_1 x_1 + λ_2 x_2 + ⋯ + λ_k x_k, where λ_1, λ_2, …, λ_k are non-negative real numbers. The name derives from the fact that a conical sum of vectors defines a cone (possibly in a lower-dimensional subspace). The set of all conical combinations for a given set S is called the conical hull of S and denoted cone(S) or coni(S). That is,

    coni(S) = { λ_1 x_1 + ⋯ + λ_k x_k : x_i ∈ S, λ_i ≥ 0, k ∈ ℕ }.

By taking k = 0, it follows that the zero vector (origin) belongs to all conical hulls (since the summation becomes an empty sum).
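A small sketch (my own example) that forms a conical combination and enforces the non-negativity constraint:

    def conical_combination(weights, vectors):
        """Return w_1*v_1 + ... + w_k*v_k, requiring every weight to be non-negative."""
        if any(w < 0 for w in weights):
            raise ValueError("conical combinations require non-negative weights")
        dim = len(vectors[0])
        return [sum(w * v[j] for w, v in zip(weights, vectors)) for j in range(dim)]

    # 1.5*(1, 0) + 2*(1, 1) = (3.5, 2.0) lies in the cone spanned by the two vectors.
    print(conical_combination([1.5, 2.0], [(1, 0), (1, 1)]))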
Gaussian function
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form f(x) = exp(−x²) and with parametric extension f(x) = a exp(−(x − b)² / (2c²)) for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell".
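A direct Python transcription of the parametric form (the particular a, b, c values are just for illustration):

    import math

    def gaussian(x, a=1.0, b=0.0, c=1.0):
        """Parametric Gaussian: a is the peak height, b the center, c the RMS width."""
        return a * math.exp(-(x - b) ** 2 / (2 * c ** 2))

    print(gaussian(0.0))   # 1.0, the peak value a at x = b
    print(gaussian(1.0))   # ≈ 0.6065 = exp(-1/2), the value one width c from the center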