Time complexity
In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
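As a concrete illustration of operation counting, the minimal sketch below (not part of the entry above; the function name and inputs are illustrative) counts comparisons in a linear search, so doubling the input size roughly doubles the worst-case count:

```python
# A minimal sketch, assuming each comparison counts as one elementary operation.
def linear_search(items, target):
    count = 0                      # number of comparisons performed
    for item in items:
        count += 1
        if item == target:
            return count
    return count                   # worst case: count == len(items), i.e. O(n)

# Doubling the input roughly doubles the worst-case operation count.
print(linear_search(list(range(1000)), -1))   # 1000
print(linear_search(list(range(2000)), -1))   # 2000
```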
Hermite polynomials
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence. The polynomials arise in: signal processing as Hermitian wavelets for wavelet transform analysis; probability, such as the Edgeworth series, as well as in connection with Brownian motion; combinatorics, as an example of an Appell sequence, obeying the umbral calculus; numerical analysis as Gaussian quadrature; physics, where they give rise to the eigenstates of the quantum harmonic oscillator, and where they also occur in some cases of the heat equation (when the term $xu_x$ is present); and systems theory in connection with nonlinear operations on Gaussian noise.
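For reference, one common convention (the physicists' Hermite polynomials; the entry above does not fix a convention) can be sketched via the Rodrigues formula, the first few members, and the three-term recurrence:

```latex
% Physicists' Hermite polynomials (assumed convention, not specified in the entry above)
\begin{align*}
  H_n(x)     &= (-1)^n e^{x^2} \frac{\mathrm{d}^n}{\mathrm{d}x^n} e^{-x^2}, \\
  H_0(x)     &= 1, \qquad H_1(x) = 2x, \qquad H_2(x) = 4x^2 - 2, \\
  H_{n+1}(x) &= 2x\,H_n(x) - 2n\,H_{n-1}(x).
\end{align*}
```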
Approximation error
The approximation error in a data value is the discrepancy between an exact value and some approximation to it. This error can be expressed as an absolute error (the numerical amount of the discrepancy) or as a relative error (the absolute error divided by the data value). An approximation error can occur for a variety of reasons, among them limited computing machine precision or measurement error (e.g. the length of a piece of paper is 4.53 cm but the ruler only allows you to estimate it to the nearest 0.1 cm).
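A minimal sketch of the two error measures, using the ruler example above and assuming the length is read off as 4.5 cm (the measured value is not stated in the entry):

```python
# Absolute vs. relative error; the measured value 4.5 cm is an assumed reading.
exact = 4.53      # exact value (cm)
approx = 4.5      # approximation read off the ruler (cm)

absolute_error = abs(exact - approx)            # 0.03 cm
relative_error = absolute_error / abs(exact)    # ~0.0066, i.e. about 0.66%

print(f"absolute error: {absolute_error:.4f} cm")
print(f"relative error: {relative_error:.4%}")
```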
Laguerre polynomials
In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are solutions of Laguerre's differential equation:
$$x\,y'' + (1 - x)\,y' + n\,y = 0,$$
which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer. Sometimes the name Laguerre polynomials is used for solutions of
$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0,$$
where n is still a non-negative integer. Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials, after their inventor Nikolay Yakovlevich Sonin).
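For illustration, a minimal sketch (not from the entry above) that evaluates the generalized Laguerre polynomial using its standard three-term recurrence, a well-known identity rather than anything stated in the entry:

```python
# Evaluate L_n^{(alpha)}(x) via the recurrence
#   (k+1) L_{k+1} = (2k + 1 + alpha - x) L_k - (k + alpha) L_{k-1}
def laguerre(n, x, alpha=0.0):
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 + alpha - x          # L_0 and L_1
    for k in range(1, n):
        prev, curr = curr, ((2*k + 1 + alpha - x) * curr - (k + alpha) * prev) / (k + 1)
    return curr

# alpha = 0 gives the ordinary Laguerre polynomials, e.g. L_2(x) = (x^2 - 4x + 2)/2.
print(laguerre(2, 1.0))   # -0.5
```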
Chebyshev polynomials
The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions: the Chebyshev polynomials of the first kind are defined by
$$T_n(\cos\theta) = \cos(n\theta).$$
Similarly, the Chebyshev polynomials of the second kind are defined by
$$U_n(\cos\theta)\,\sin\theta = \sin\big((n+1)\theta\big).$$
That these expressions define polynomials in $\cos\theta$ may not be obvious at first sight, but follows by rewriting $\cos(n\theta)$ and $\sin\big((n+1)\theta\big)$ using de Moivre's formula or by using the angle sum formulas for $\cos$ and $\sin$ repeatedly.
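As a concrete check (a minimal sketch, not from the entry above), the polynomials of the first kind can be evaluated with the standard recurrence $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$ and compared against the trigonometric definition:

```python
import math

# Evaluate T_n(x) via the recurrence, then compare with T_n(cos t) = cos(n t).
def chebyshev_t(n, x):
    if n == 0:
        return 1.0
    prev, curr = 1.0, x          # T_0 and T_1
    for _ in range(1, n):
        prev, curr = curr, 2 * x * curr - prev
    return curr

x = 0.3
for n in range(5):
    trig = math.cos(n * math.acos(x))      # valid for |x| <= 1
    print(n, chebyshev_t(n, x), trig)      # the two columns agree
```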
Round-off error
In computing, a roundoff error, also called rounding error, is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic. Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error.
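A minimal illustration (not from the entry above) of round-off error in binary floating-point arithmetic, where 0.1 has no exact finite representation:

```python
# Exact arithmetic would give 0.1 + 0.2 == 0.3; rounded binary arithmetic does not.
exact = 0.3
computed = 0.1 + 0.2
print(computed == exact)          # False
print(computed - exact)           # ~5.55e-17, the rounding error

# Errors can also accumulate: summing 0.1 ten thousand times drifts away from 1000.
total = 0.0
for _ in range(10_000):
    total += 0.1
print(total - 1000.0)             # small but nonzero
```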
Stratified sampling
In statistics, stratified sampling is a method of sampling from a population which can be partitioned into subpopulations. In statistical surveys, when subpopulations within an overall population vary, it could be advantageous to sample each subpopulation (stratum) independently. Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should define a partition of the population.
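A minimal sketch of proportional stratified sampling; the population, the strata labels "A", "B", "C", and the sample size are made-up illustrations, not taken from the entry above:

```python
import random
from collections import defaultdict

# Illustrative population of 1000 individuals split across three strata.
population = ([("A", i) for i in range(600)]
              + [("B", i) for i in range(300)]
              + [("C", i) for i in range(100)])
total_sample_size = 50

# Group individuals into their strata, then sample each stratum independently,
# in proportion to its share of the population.
strata = defaultdict(list)
for stratum, individual in population:
    strata[stratum].append(individual)

sample = []
for stratum, members in strata.items():
    k = round(total_sample_size * len(members) / len(population))
    sample.extend((stratum, x) for x in random.sample(members, k))

print(len(sample), sample[:5])
```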
Simple random sample
In statistics, a simple random sample (or SRS) is a subset of individuals (a sample) chosen from a larger set (a population) in which each individual is chosen randomly, all with the same probability. It is a process of selecting a sample in a random way. In SRS, each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. A simple random sample is an unbiased sampling technique. Simple random sampling is a basic type of sampling and can be a component of other more complex sampling methods.
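A minimal sketch of drawing a simple random sample; the population and sample size are made-up illustrations:

```python
import random

population = list(range(1, 101))   # 100 individuals
k = 10

# random.sample draws k distinct individuals without replacement, with every
# subset of size k equally likely -- i.e. a simple random sample.
srs = random.sample(population, k)
print(srs)
```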
Stirling's approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre. One way of stating the approximation involves the logarithm of the factorial:
$$\ln(n!) = n\ln n - n + O(\ln n),$$
where the big O notation means that, for all sufficiently large values of $n$, the difference between $\ln(n!)$ and $n\ln n - n$ will be at most proportional to the logarithm.
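For a numerical check (a minimal sketch, not from the entry above), the gap between $\ln(n!)$ and the leading term $n\ln n - n$ indeed grows only logarithmically:

```python
import math

# Compare ln(n!) with n*ln(n) - n; math.lgamma(n + 1) computes ln(n!) accurately.
for n in (5, 10, 100, 1000):
    exact = math.lgamma(n + 1)
    stirling = n * math.log(n) - n
    print(n, exact, stirling, exact - stirling)  # difference grows like a logarithm
```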
Discretization error
In numerical analysis, computational physics, and simulation, discretization error is the error resulting from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost. Discretization error is the principal source of error in methods of finite differences and the pseudo-spectral method of computational physics.
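A minimal illustration (not from the entry above): approximating the derivative of sin with a forward difference on spacing h shows the discretization error shrinking as the lattice is refined, at the cost of more function evaluations:

```python
import math

x = 1.0
exact = math.cos(x)                 # true derivative of sin at x
for h in (0.1, 0.05, 0.025, 0.0125):
    approx = (math.sin(x + h) - math.sin(x)) / h   # forward difference
    print(h, abs(approx - exact))   # halving h roughly halves the error
```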