In linear algebra, an augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices.
Given the matrices $A$ and $B$, where
$$A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & 0 & 1 \\ 5 & 2 & 2 \end{bmatrix}, \qquad B = \begin{bmatrix} 4 \\ 3 \\ 1 \end{bmatrix},$$
the augmented matrix $(A|B)$ is written as
$$(A|B) = \left[\begin{array}{ccc|c} 1 & 3 & 2 & 4 \\ 2 & 0 & 1 & 3 \\ 5 & 2 & 2 & 1 \end{array}\right].$$
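The construction is just a horizontal stacking of columns, which is easy to carry out numerically. The following is a minimal sketch in Python with NumPy (not part of the original article; the values are those of the example above):

```python
import numpy as np

# A and B from the example above
A = np.array([[1, 3, 2],
              [2, 0, 1],
              [5, 2, 2]])
B = np.array([[4],
              [3],
              [1]])

# Appending the columns of B to A gives the augmented matrix (A|B)
augmented = np.hstack((A, B))
print(augmented)
# [[1 3 2 4]
#  [2 0 1 3]
#  [5 2 2 1]]
```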
This is useful when solving systems of linear equations.
For a given number of unknowns, the number of solutions of a system of linear equations depends only on the rank of the matrix representing the system and the rank of the corresponding augmented matrix. Specifically, according to the Rouché–Capelli theorem, a system of linear equations is inconsistent (has no solutions) if the rank of the augmented matrix is greater than the rank of the coefficient matrix; if, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters, where k is the difference between the number of variables and the rank; hence in such a case there are infinitely many solutions.
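The theorem reduces solvability to a rank comparison, which can be checked directly. Below is a minimal sketch in Python with NumPy; the function name classify_system is ours, introduced only for illustration:

```python
import numpy as np

def classify_system(A, b):
    """Classify the solutions of A x = b via the Rouché–Capelli theorem."""
    rank_coeff = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.hstack((A, b.reshape(-1, 1))))
    n = A.shape[1]  # number of unknowns
    if rank_aug > rank_coeff:
        return "inconsistent: no solutions"
    if rank_coeff == n:
        return "unique solution"
    # Otherwise the general solution has n - rank free parameters.
    return f"infinitely many solutions with {n - rank_coeff} free parameter(s)"

A = np.array([[1.0, 3.0],
              [2.0, 6.0]])
print(classify_system(A, np.array([4.0, 8.0])))  # infinitely many solutions
print(classify_system(A, np.array([4.0, 9.0])))  # inconsistent: no solutions
```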
An augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
Let $C$ be the square 2×2 matrix
$$C = \begin{bmatrix} 1 & 3 \\ -5 & 0 \end{bmatrix}.$$
To find the inverse of $C$ we create $(C|I)$, where $I$ is the 2×2 identity matrix. We then reduce the part of $(C|I)$ corresponding to $C$ to the identity matrix, using only elementary row operations on $(C|I)$:
$$(C|I) = \left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ -5 & 0 & 0 & 1 \end{array}\right] \quad\longrightarrow\quad (I|C^{-1}) = \left[\begin{array}{cc|cc} 1 & 0 & 0 & -\tfrac{1}{5} \\ 0 & 1 & \tfrac{1}{3} & \tfrac{1}{15} \end{array}\right],$$
the right part of which is the inverse of the original matrix.
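The same reduction can be carried out programmatically. The sketch below implements a plain Gauss–Jordan elimination with partial pivoting on $(C|I)$; it is meant only to illustrate the idea, not to replace a library routine such as numpy.linalg.inv:

```python
import numpy as np

def invert_via_augmentation(C):
    """Row-reduce (C|I) to (I|C^{-1}) and return the right half."""
    n = C.shape[0]
    M = np.hstack((C.astype(float), np.eye(n)))  # build (C|I)
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))      # partial pivoting
        M[[i, p]] = M[[p, i]]                    # swap in the pivot row
        M[i] /= M[i, i]                          # scale so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]           # clear column i in other rows
    return M[:, n:]                              # right half is the inverse

C = np.array([[1.0, 3.0],
              [-5.0, 0.0]])
print(invert_via_augmentation(C))
# [[ 0.         -0.2       ]
#  [ 0.33333333  0.06666667]]
```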
Consider the system of equations
$$\begin{aligned} x + y + 2z &= 3 \\ x + y + z &= 1 \\ 2x + 2y + 2z &= 2. \end{aligned}$$
The coefficient matrix is
$$\begin{bmatrix} 1 & 1 & 2 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix},$$
and the augmented matrix is
$$\left[\begin{array}{ccc|c} 1 & 1 & 2 & 3 \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 \end{array}\right].$$
Since both of these have the same rank, namely 2, there exists at least one solution; and since their rank is less than the number of unknowns, the latter being 3, there are an infinite number of solutions.
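The rank claim is easy to confirm numerically (a minimal check using the matrices just shown):

```python
import numpy as np

coeff = np.array([[1, 1, 2],
                  [1, 1, 1],
                  [2, 2, 2]])
aug = np.hstack((coeff, np.array([[3], [1], [2]])))

print(np.linalg.matrix_rank(coeff))  # 2
print(np.linalg.matrix_rank(aug))    # 2 -> consistent, 3 - 2 = 1 free parameter
```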
In contrast, consider the system
$$\begin{aligned} x + y + 2z &= 3 \\ x + y + z &= 1 \\ 2x + 2y + 2z &= 5. \end{aligned}$$
The coefficient matrix is
$$\begin{bmatrix} 1 & 1 & 2 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix},$$
and the augmented matrix is
$$\left[\begin{array}{ccc|c} 1 & 1 & 2 & 3 \\ 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 5 \end{array}\right].$$
In this example the coefficient matrix has rank 2 while the augmented matrix has rank 3; so this system of equations has no solution.
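The same rank check applies; this sketch additionally makes the contradiction explicit by eliminating the dependent row:

```python
import numpy as np

aug = np.array([[1, 1, 2, 3],
                [1, 1, 1, 1],
                [2, 2, 2, 5]])

print(np.linalg.matrix_rank(aug[:, :3]))  # 2: rank of the coefficient part
print(np.linalg.matrix_rank(aug))         # 3: rank of the augmented matrix

# Subtracting twice row 2 from row 3 leaves the impossible equation 0 = 3.
print(aug[2] - 2 * aug[1])                # [0 0 0 3]
```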
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,
$$\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$$
is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.
In linear algebra, a matrix is in echelon form if it has the shape resulting from a Gaussian elimination. A matrix being in row echelon form means that Gaussian elimination has operated on the rows, and column echelon form means that Gaussian elimination has operated on the columns. In other words, a matrix is in column echelon form if its transpose is in row echelon form. Therefore, only row echelon forms are considered in the remainder of this article. The similar properties of column echelon form are easily deduced by transposing all the matrices.
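Computer algebra systems expose this reduction directly. For instance, SymPy's Matrix.rref (an outside tool, not mentioned in the text) returns the reduced row echelon form together with the indices of the pivot columns; the matrix below is the augmented matrix from the first example above:

```python
from sympy import Matrix

M = Matrix([[1, 3, 2, 4],
            [2, 0, 1, 3],
            [5, 2, 2, 1]])

rref_form, pivots = M.rref()
print(rref_form)  # the reduced row echelon form of M
print(pivots)     # (0, 1, 2): the pivot columns
```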
In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GLn(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form.
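As a concrete illustration of the left-multiplication fact (the numbers are arbitrary, chosen only for this example):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Elementary matrix: the 2x2 identity with one extra entry, encoding
# the row operation "add 2 * (row 0) to row 1"
E = np.array([[1, 0],
              [2, 1]])

# Left multiplication applies the row operation to A
print(E @ A)
# [[1 2]
#  [5 8]]
```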