Trace (linear algebra)
In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of the elements on the main diagonal (from the upper left to the lower right) of A. The trace is only defined for a square matrix (n × n). It can be proven that the trace of a matrix is the sum of its (complex) eigenvalues (counted with multiplicities). It can also be proven that tr(AB) = tr(BA) for any two matrices A and B of compatible dimensions, i.e. such that both products AB and BA are defined. This implies that similar matrices have the same trace.
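As an illustration (a minimal NumPy sketch, not part of the article text; the matrices are arbitrary examples), the trace and the identity tr(AB) = tr(BA) can be checked numerically:

```python
import numpy as np

# Two rectangular matrices of compatible shapes (3x2 and 2x3),
# so that both AB (3x3) and BA (2x2) are defined.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
B = np.array([[7.0, 8.0, 9.0], [0.0, 1.0, 2.0]])

# The trace is the sum of the main-diagonal entries.
print(np.trace(A @ B))   # 92.0
print(np.trace(B @ A))   # 92.0  -- same value, since tr(AB) = tr(BA)
```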
Island of stability
In nuclear physics, the island of stability is a predicted set of isotopes of superheavy elements that may have considerably longer half-lives than known isotopes of these elements. It is predicted to appear as an "island" in the chart of nuclides, separated from known stable and long-lived primordial radionuclides. Its theoretical existence is attributed to stabilizing effects of predicted "magic numbers" of protons and neutrons in the superheavy mass region.
Mathematical economics
Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Often, these applied methods are beyond simple geometry, and may include differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, or other computational methods. Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.
Markov decision process
In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes.
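For illustration, the sketch below runs value iteration, a standard dynamic-programming method, on a hypothetical two-state, two-action MDP; the transition matrices, rewards, and discount factor are invented for the example:

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions.
# P[a][s, s'] = transition probability, R[a][s] = expected immediate reward.
P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),
     1: np.array([[0.2, 0.8], [0.1, 0.9]])}
R = {0: np.array([1.0, 0.0]),
     1: np.array([0.0, 2.0])}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality update.
V = np.zeros(2)
for _ in range(500):
    V = np.max([R[a] + gamma * P[a] @ V for a in (0, 1)], axis=0)

# Greedy policy with respect to the converged values.
policy = np.argmax([R[a] + gamma * P[a] @ V for a in (0, 1)], axis=0)
print(V, policy)
```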
Root-finding algorithms
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to the real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. Since, in general, the zeros of a function cannot be computed exactly or expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals (or disks, for complex roots); an interval or disk output is equivalent to an approximate output together with an error bound.
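As a concrete example of such an algorithm, the following sketch implements bisection, which returns a small isolating interval (equivalently, an approximation together with an error bound) under the assumption that the function is continuous and changes sign on the starting interval:

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: assumes f is continuous and f(a), f(b) have opposite signs.
    Returns an interval [a, b] of width <= tol that contains a root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must bracket a root"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m, m          # exact root found
        if fa * fm < 0:
            b, fb = m, fm        # root lies in the left half
        else:
            a, fa = m, fm        # root lies in the right half
    return a, b

# Example: the root of x^2 - 2 (i.e. sqrt(2)) bracketed in [1, 2].
print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))
```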
Frequency domain
In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how the signal is distributed within different frequency bands over a range of frequencies. A frequency-domain representation consists of both the magnitude and the phase of a set of sinusoids (or other basis waveforms) at the frequency components of the signal.
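For illustration, a minimal NumPy sketch (the signal and sampling rate are arbitrary assumptions) computes a frequency-domain representation of a sampled signal via the discrete Fourier transform, exposing the magnitude and phase of its frequency components:

```python
import numpy as np

# Sample a signal containing 5 Hz and 12 Hz sinusoids for 1 second at 100 Hz.
fs = 100.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t + 0.3)

# Discrete Fourier transform: each bin gives the magnitude and phase
# of one sinusoidal frequency component of the signal.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
magnitude = np.abs(X)
phase = np.angle(X)

# The two largest peaks appear at (approximately) 5 Hz and 12 Hz.
print(freqs[np.argsort(magnitude)[-2:]])
```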
Protestant work ethic
The Protestant work ethic, also known as the Calvinist work ethic or the Puritan work ethic, is a work ethic concept in sociology, economics, and historiography. It holds that diligence, discipline, and frugality are a result of a person's subscription to the values espoused by the Protestant faith, particularly Calvinism. The phrase was coined in 1905 by Max Weber in his book The Protestant Ethic and the Spirit of Capitalism.
Laplace transform
In mathematics, the Laplace transform, named after its discoverer Pierre-Simon Laplace (/ləˈplɑːs/), is an integral transform that converts a function of a real variable (usually t, in the time domain) to a function of a complex variable s (in the complex frequency domain, also known as the s-domain or s-plane). The transform has many applications in science and engineering because it is a tool for solving differential equations. In particular, it transforms ordinary differential equations into algebraic equations and convolution into multiplication.
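As a small illustration (a sketch using SymPy's laplace_transform; the example function e^(-a t) is an arbitrary choice), the transform and the derivative rule L{f'(t)} = s F(s) - f(0), which turns differentiation into multiplication by s, can be checked symbolically:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

# Laplace transform of e^{-a t}: F(s) = 1/(s + a)
print(sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True))

# Derivative rule: L{f'(t)} = s*F(s) - f(0), which is what turns an ODE in t
# into an algebraic equation in s.  Check it for f(t) = e^{-a t}, f(0) = 1:
lhs = sp.laplace_transform(sp.diff(sp.exp(-a * t), t), t, s, noconds=True)
rhs = s * (1 / (s + a)) - 1
print(sp.simplify(lhs - rhs))  # 0
```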
Jacobi method
In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
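A minimal sketch of the iteration (written with NumPy; the example system is an arbitrary strictly diagonally dominant one) might look like this:

```python
import numpy as np

def jacobi(A, b, x0=None, iterations=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
    where D is the diagonal part of A.  Convergence is guaranteed
    when A is strictly diagonally dominant."""
    D = np.diag(A)             # diagonal entries of A
    R = A - np.diagflat(D)     # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(iterations):
        x = (b - R @ x) / D    # solve each diagonal equation for its unknown
    return x

# Strictly diagonally dominant example system.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])
print(jacobi(A, b))            # close to np.linalg.solve(A, b)
```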
Generalized minimal residual method
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975.
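For a practical illustration, GMRES is available in SciPy as scipy.sparse.linalg.gmres; the sketch below solves a small, arbitrary nonsymmetric system and checks the residual:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# A small nonsymmetric system; GMRES minimizes the residual ||b - A x||
# over a growing Krylov subspace built by the Arnoldi iteration.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 2.0],
              [0.0, 0.5, 5.0]])
b = np.array([1.0, 2.0, 3.0])

x, info = gmres(A, b)
print(x, info)                     # info == 0 means the tolerance was reached
print(np.linalg.norm(b - A @ x))   # residual norm is (near) zero
```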