Concept

Computational complexity of matrix multiplication

In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest time in which it can be performed is of major practical relevance. Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n^3 field operations to multiply two n × n matrices over that field (Θ(n^3) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication".

The optimal number of field operations needed to multiply two square n × n matrices, up to constant factors, is still unknown; this is a major open question in theoretical computer science. The best announced bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.371552), given by Williams, Xu, Xu, and Zhou in a preprint. This improves on the bound of O(n^2.371866) given in a preprint by Duan, Wu, and Zhou. However, these and similar improvements to Strassen's algorithm are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices far too large to handle on present-day computers. The best time bound confirmed by peer review is O(n^2.3728596).

If A and B are n × n matrices over a field, then their product AB is also an n × n matrix over that field, defined entrywise as

(AB)_{ij} = Σ_{k=1}^{n} A_{ik} B_{kj}.

The simplest approach to computing the product of two n × n matrices A and B is to evaluate these arithmetic expressions directly, as the definition prescribes.
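To make the discussion above concrete, here is a minimal sketch in Python of the two algorithms mentioned: the schoolbook method that evaluates the entrywise definition in Θ(n^3) field operations, and Strassen's algorithm, which replaces the 8 block products of the naive divide-and-conquer scheme with 7, giving a running time of O(n^log2 7) ≈ O(n^2.807). This is an illustrative sketch, not material from the page's sources: the function names are ours, plain Python lists stand in for matrices, and the Strassen version assumes the dimension n is a power of two so the block splitting stays simple.

```python
def schoolbook(A, B):
    """Entrywise definition: (AB)[i][j] = sum_k A[i][k] * B[k][j]  -- Theta(n^3)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C


def _add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]


def _sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]


def strassen(A, B):
    """Strassen's scheme; this sketch assumes n is a power of two."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split each matrix into four m x m blocks.
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # Strassen's 7 block products (the naive block method would need 8).
    M1 = strassen(_add(A11, A22), _add(B11, B22))
    M2 = strassen(_add(A21, A22), B11)
    M3 = strassen(A11, _sub(B12, B22))
    M4 = strassen(A22, _sub(B21, B11))
    M5 = strassen(_add(A11, A12), B22)
    M6 = strassen(_sub(A21, A11), _add(B11, B12))
    M7 = strassen(_sub(A12, A22), _add(B21, B22))
    # Recombine the four blocks of the product.
    C11 = _add(_sub(_add(M1, M4), M5), M7)
    C12 = _add(M3, M5)
    C21 = _add(M2, M4)
    C22 = _add(_sub(_add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom


if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    # Both algorithms compute the same product.
    assert schoolbook(A, B) == strassen(A, B) == [[19, 22], [43, 50]]
```

The saving of a single multiplication per 2 × 2 block split is what compounds, under recursion, into the sub-cubic exponent; the asymptotically faster algorithms cited above push the exponent lower still, but with constants far too large to matter in practice.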

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.
Related courses (15)
CS-233: Introduction to machine learning
Machine learning and data analysis are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analyzed ...
CS-433: Machine learning
Machine learning methods are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analyzed and pr ...
MGT-499: Statistics and data science
This class provides a hands-on introduction to statistics and data science, with a focus on causal inference, applications to sustainability issues using Python, and dissemination of scientific results ...
Related lectures (139)
Dynamic Programming: Longest Common Subsequence
Explores dynamic programming with a focus on the Longest Common Subsequence problem and its efficient solutions.
Composite of Applications
Covers the concept of composite of applications and the importance of certain hypotheses in matrix multiplication.
Recursion Trees and Matrix Multiplication
Covers recursion trees, the maximum subarray problem, and matrix multiplication approaches.
Related publications (52)

COMMUNICATION LOWER BOUNDS AND OPTIMAL ALGORITHMS FOR MULTIPLE TENSOR-TIMES-MATRIX COMPUTATION

Laura Grigori

Multiple tensor-times-matrix (Multi-TTM) is a key computation in algorithms for computing and operating with the Tucker tensor decomposition, which is frequently used in multidimensional data analysis. We establish communication lower bounds that determine ...
Philadelphia, 2024

A 16-bit Floating-Point Near-SRAM Architecture for Low-power Sparse Matrix-Vector Multiplication

David Atienza Alonso, Giovanni Ansaloni, Grégoire Axel Eggermann, Marco Antonio Rios

State-of-the-art Artificial Intelligence (AI) algorithms, such as graph neural networks and recommendation systems, require floating-point computation of very large matrix multiplications over sparse data. Their execution in resource-constrained scenarios, ...
2023

Preconditioning techniques for generalized Sylvester matrix equations

Yannis Dirk Voet

Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, as they recently arose in stochastic Galerkin finite element discretizations and isogeometric analysis. In th ...
2023
Related concepts (2)
Matrix multiplication algorithm
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph (a short sketch of this application follows the list below). Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network).
Computational complexity of mathematical operations
The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm. The first table lists the complexity of mathematical operations on integers.
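As mentioned in the first related concept above, one seemingly unrelated application of matrix multiplication is counting walks through a graph: if A is the adjacency matrix, then the (i, j) entry of A^k counts the walks of length k from vertex i to vertex j. The sketch below is only an illustration under that standard fact; the helper names `matmul` and `count_walks` are ours, not part of any cited material.

```python
def matmul(A, B):
    """Schoolbook product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def count_walks(adj, length):
    """Return the matrix whose (i, j) entry counts walks of `length` steps."""
    n = len(adj)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(length):
        power = matmul(power, adj)  # power becomes adj^length
    return power


if __name__ == "__main__":
    # Directed 3-cycle 0 -> 1 -> 2 -> 0: every walk of length 3 returns home.
    adj = [[0, 1, 0],
           [0, 0, 1],
           [1, 0, 0]]
    print(count_walks(adj, 3))  # identity matrix: one closed walk per vertex
```

In the example, the graph is a directed 3-cycle, so A^3 is the identity matrix: each vertex has exactly one closed walk of length 3 and no walks of that length to any other vertex.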
Related MOOCs (9)
Algebra (part 1)
A French-language MOOC on linear algebra, accessible to everyone, taught rigorously and requiring no prerequisites.
Algebra (part 2)
A French-language MOOC on linear algebra, accessible to everyone, taught rigorously and requiring no prerequisites.
