In mathematics, the Wronskian (or Wrońskian) is a determinant introduced by the Polish mathematician Józef Hoene-Wroński. It is used in the study of differential equations, where it can sometimes show the linear independence of a set of solutions.
The Wronskian of two differentiable functions f and g is W(f, g) = f g′ − g f′.
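As a simple worked illustration (not part of the original text), take f = cos and g = sin:

\[
W(\cos x, \sin x) = \cos x \cdot (\sin x)' - \sin x \cdot (\cos x)' = \cos^2 x + \sin^2 x = 1,
\]

which is nonzero everywhere, consistent with cos and sin being linearly independent.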
More generally, for n real- or complex-valued functions f1, ..., fn, which are n − 1 times differentiable on an interval I, the Wronskian W(f1, ..., fn) is a function on I defined by

\[
W(f_1,\ldots,f_n)(x) = \det\begin{bmatrix}
f_1(x) & f_2(x) & \cdots & f_n(x)\\
f_1'(x) & f_2'(x) & \cdots & f_n'(x)\\
\vdots & \vdots & \ddots & \vdots\\
f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)
\end{bmatrix}.
\]
This is the determinant of the matrix constructed by placing the functions in the first row, the first derivatives of the functions in the second row, and so on through the (n − 1)th derivative, thus forming a square matrix.
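The construction translates directly into a few lines of computer algebra. The following is a minimal sketch using SymPy; the helper name wronskian_det and the example functions are illustrative and not taken from the article (SymPy also provides a built-in wronskian helper, but the explicit construction below mirrors the definition).

```python
import sympy as sp

x = sp.symbols('x')

def wronskian_det(functions, var=x):
    """Build the matrix described above (k-th derivatives in row k) and take its determinant."""
    n = len(functions)
    rows = [[sp.diff(f, var, k) for f in functions] for k in range(n)]
    return sp.Matrix(rows).det()

# exp(x), exp(2x), exp(3x) solve y''' - 6y'' + 11y' - 6y = 0;
# their Wronskian simplifies to 2*exp(6*x), which never vanishes.
print(sp.simplify(wronskian_det([sp.exp(x), sp.exp(2*x), sp.exp(3*x)])))
```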
When the functions fi are solutions of a linear differential equation, the Wronskian can be found explicitly using Abel's identity, even if the functions fi are not known explicitly. (See below.)
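For concreteness (the general statement appears below), in the second-order case y″ + p(x) y′ + q(x) y = 0, Abel's identity reads

\[
W(y_1, y_2)(x) = W(y_1, y_2)(x_0)\,\exp\!\left(-\int_{x_0}^{x} p(t)\,dt\right),
\]

so the Wronskian of two solutions is determined, up to a constant factor, by the coefficient p alone.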
If the functions fi are linearly dependent, then so are the columns of the Wronskian (since differentiation is a linear operation), and the Wronskian vanishes. Thus, one may show that a set of differentiable functions is linearly independent on an interval by showing that their Wronskian does not vanish identically. It may, however, vanish at isolated points.
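For example, for two distinct exponentials (a standard worked example, not from the original text):

\[
W(e^{\lambda_1 x}, e^{\lambda_2 x}) = e^{\lambda_1 x}\,\lambda_2 e^{\lambda_2 x} - e^{\lambda_2 x}\,\lambda_1 e^{\lambda_1 x} = (\lambda_2-\lambda_1)\,e^{(\lambda_1+\lambda_2)x},
\]

which is never zero when λ1 ≠ λ2, so the two functions are linearly independent on every interval.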
A common misconception is that W = 0 everywhere implies linear dependence, but Peano (1889) pointed out that the functions x² and x·|x| have continuous derivatives and their Wronskian vanishes everywhere, yet they are not linearly dependent in any neighborhood of 0. There are several extra conditions which combine with vanishing of the Wronskian in an interval to imply linear dependence.
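A short verification (added for clarity): with f(x) = x² and g(x) = x·|x| one has g′(x) = 2|x| for every x, so

\[
W(f,g)(x) = x^2\cdot 2|x| - x|x|\cdot 2x = 0 \quad\text{for all } x,
\]

yet g/f equals +1 for x > 0 and −1 for x < 0, so no single constant c satisfies g = c·f on any interval containing 0.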
Maxime Bôcher observed that if the functions are analytic, then the vanishing of the Wronskian in an interval implies that they are linearly dependent.
Bôcher also gave several other conditions for the vanishing of the Wronskian to imply linear dependence; for example, if the Wronskian of n functions is identically zero and the n Wronskians of n − 1 of them do not all vanish at any point, then the functions are linearly dependent.
In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an (m + 1) × (n + 1) matrix with entries V_{i,j} = x_i^j, the jth power of the number x_i, for all zero-based indices i and j. Most authors define the Vandermonde matrix as the transpose of the above matrix. The determinant of a square Vandermonde matrix (when m = n) is called a Vandermonde determinant or Vandermonde polynomial. Its value is

\[
\det(V) = \prod_{0 \le i < j \le n} (x_j - x_i).
\]

This is non-zero if and only if all x_i are distinct (no two are equal), making the Vandermonde matrix invertible.
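A small worked example (added here for illustration), with nodes 1, 2, 3:

\[
\det\begin{bmatrix}1 & 1 & 1\\ 1 & 2 & 4\\ 1 & 3 & 9\end{bmatrix} = (2-1)(3-1)(3-2) = 2,
\]

the same value that appeared as the constant factor in the Wronskian of e^x, e^{2x}, e^{3x} above.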
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

\[
a_0(x)\,y + a_1(x)\,y' + a_2(x)\,y'' + \cdots + a_n(x)\,y^{(n)} = b(x),
\]

where a_0(x), ..., a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y^{(n)} are the successive derivatives of an unknown function y of the variable x. Such an equation is an ordinary differential equation (ODE).
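A simple concrete instance (added for illustration) is the homogeneous constant-coefficient case with a_0 = 1, a_1 = 0, a_2 = 1, and b = 0:

\[
y'' + y = 0, \qquad y(x) = c_1\cos x + c_2\sin x,
\]

whose solutions cos x and sin x are exactly the pair used in the Wronskian example above.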