In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices, even faster algorithms exist.
Strassen's algorithm works for any ring, such as ordinary plus/multiply, but not for all semirings, such as min-plus or Boolean algebra, where the naive algorithm and so-called combinatorial matrix multiplication still work. The restriction arises because Strassen's construction relies on subtraction, which a semiring need not provide.
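To make the ring/semiring distinction concrete, here is a minimal Python sketch (not from the source; `matmul_semiring` is a name of our own choosing) of the naive algorithm parameterized by the two semiring operations. It runs unchanged over plus/multiply and over min-plus, whereas Strassen's construction would need the subtraction that min-plus lacks.

```python
# A minimal sketch (not from the source): the naive O(n^3) algorithm only
# needs the two semiring operations, so swapping (+, *) for (min, +) gives
# min-plus ("tropical") matrix multiplication, used e.g. for shortest paths.
def matmul_semiring(A, B, add, mul, zero):
    """Naive matrix product over an arbitrary semiring.

    add/mul are the semiring operations, zero is the additive identity.
    """
    n, m, p = len(A), len(B), len(B[0])
    C = [[zero] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = zero
            for k in range(m):
                acc = add(acc, mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

INF = float("inf")

# Ordinary (+, *) ring: the usual matrix product.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_semiring(A, B, lambda x, y: x + y, lambda x, y: x * y, 0))
# [[19, 22], [43, 50]]

# Min-plus semiring: entry (i, j) is the cheapest two-edge path i -> k -> j.
D = [[0, 3], [INF, 0]]
print(matmul_semiring(D, D, min, lambda x, y: x + y, INF))
# [[0, 3], [inf, 0]]
```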
Volker Strassen first published this algorithm in 1969 and thereby proved that the $O(n^3)$ general matrix multiplication algorithm was not optimal. The Strassen algorithm's publication resulted in more research about matrix multiplication that led to both asymptotically lower bounds and improved computational upper bounds.
Let $A$, $B$ be two square matrices over a ring $\mathcal{R}$, for example matrices whose entries are integers or the real numbers. The goal of matrix multiplication is to calculate the matrix product $C = AB$. The following exposition of the algorithm assumes that all of these matrices have sizes that are powers of two (i.e., $A,\, B,\, C \in \operatorname{Matr}_{2^n \times 2^n}(\mathcal{R})$), but this is only conceptually necessary: if the matrices $A$, $B$ are not of type $2^n \times 2^n$, the "missing" rows and columns can be filled with zeros to obtain matrices with sizes that are powers of two, though real implementations of the algorithm do not do this in practice.
The Strassen algorithm partitions $A$, $B$ and $C$ into equally sized block matrices

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}, \quad C = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}$$

with $A_{ij}, B_{ij}, C_{ij} \in \operatorname{Matr}_{2^{n-1} \times 2^{n-1}}(\mathcal{R})$. The naive algorithm would be:

$$\begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{bmatrix}$$
This construction does not reduce the number of multiplications: 8 multiplications of matrix blocks are still needed to calculate the $C_{ij}$ matrices, the same number of multiplications needed when using standard matrix multiplication.
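Where the block scheme above needs eight block products, Strassen's recursion achieves the same result with seven (the standard combinations $M_1, \dots, M_7$). A minimal NumPy sketch of the recursion follows; it assumes power-of-two sizes as in the exposition above, and the function name `strassen` and the `cutoff` parameter are our own choices, not from the source.

```python
# A minimal sketch, not a reference implementation: the recursion on
# 2^n x 2^n blocks described above, with Strassen's seven products
# replacing the eight block multiplications of the naive scheme.
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose sizes are powers of two."""
    n = A.shape[0]
    if n <= cutoff:                 # small blocks: the naive product wins
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    # The seven Strassen products.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Reassemble the four blocks of C from the seven products.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
assert np.allclose(strassen(A, B), A @ B)   # matches the naive product
```

The `cutoff` reflects the practical point from the opening paragraph: below some block size the naive product is faster, so implementations switch over rather than recursing all the way down.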
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network).
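As a small illustration of the path-counting remark (our example, not from the source): the $(i, j)$ entry of the $k$-th power of a graph's adjacency matrix counts the walks of length $k$ from vertex $i$ to vertex $j$.

```python
# Counting walks in a graph by powering its adjacency matrix.
import numpy as np

# Directed triangle 0 -> 1 -> 2 -> 0, plus an extra edge 0 -> 2.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]])

walks2 = np.linalg.matrix_power(adj, 2)
print(walks2[0, 2])  # walks of length 2 from 0 to 2: only 0 -> 1 -> 2, so 1
```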
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface").
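As a hedged example of reaching BLAS from a high-level language (assuming SciPy's BLAS bindings are available; these bindings are not part of the BLAS specification itself), the Level 3 routine `dgemm` computes $\alpha AB + \beta C$:

```python
# Calling a BLAS Level 3 routine through SciPy's bindings.
import numpy as np
from scipy.linalg.blas import dgemm

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

C = dgemm(alpha=1.0, a=A, b=B)   # alpha * A @ B, with beta = 0 by default
print(C)
# [[19. 22.]
#  [43. 50.]]
```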
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, $\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$ is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "$2 \times 3$ matrix", or a matrix of dimension $2 \times 3$. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.