Concept

Basic Linear Algebra Subprograms

Summary
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. These routines are the de facto standard low-level building blocks of linear algebra libraries, with bindings for both C (the "CBLAS interface") and Fortran (the "BLAS interface"). Although the BLAS specification is general, implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits; they typically take advantage of special floating-point hardware such as vector registers or SIMD instructions.

BLAS originated as a Fortran library in 1979, and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. That Fortran library is known as the reference implementation (sometimes confusingly referred to as "the BLAS library"); it is in the public domain but is not optimized for speed.

Most computing libraries that offer linear algebra routines conform to the common BLAS interface, so code written against it is largely portable across BLAS implementations, such as cuBLAS (NVIDIA GPUs), rocBLAS (AMD GPUs), and OpenBLAS. This interoperability makes it possible to run the same code across heterogeneous computing architectures, such as those found in some advanced cluster deployments. Examples of CPU-based BLAS implementations include OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and the Intel Math Kernel Library (MKL). AMD maintains a fork of BLIS that is optimized for AMD platforms, and ATLAS is a portable library that automatically tunes itself for an arbitrary architecture.
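As a concrete illustration, the sketch below calls BLAS routines from Python through SciPy's low-level scipy.linalg.blas wrappers (one possible binding among many; which BLAS implementation actually executes depends on how SciPy was built, and the input values are assumptions of this example):

```python
# Minimal sketch: calling BLAS routines via SciPy's low-level wrappers.
# Which BLAS implementation runs (OpenBLAS, MKL, reference BLAS, ...)
# depends on how this SciPy installation was built.
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Level 1: dot product (ddot) and scaled vector addition (daxpy: a*x + y).
# Note that daxpy may update y in place.
print(blas.ddot(x, y))          # 32.0
print(blas.daxpy(x, y, a=2.0))  # [ 6.  9. 12.]

# Level 3: general matrix-matrix multiplication (dgemm: alpha * A @ B)
A = np.arange(6.0).reshape(2, 3)
B = np.arange(6.0).reshape(3, 2)
print(blas.dgemm(alpha=1.0, a=A, b=B))  # [[10. 13.]
                                        #  [28. 40.]]
```

In practice most code reaches BLAS indirectly: NumPy's @ operator, for instance, dispatches dense matrix products to whatever BLAS library NumPy was built against.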
Related courses (27)
CS-233: Introduction to machine learning
Machine learning and data analysis are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analy ...
MATH-111(a): Linear Algebra
The objective of the course is to introduce the basic notions of linear algebra and its applications.
PHYS-216: Mathematical methods for physicists
This course complements the Analysis and Linear Algebra courses by providing further mathematical background and practice required for 3rd year physics courses, in particular electrodynamics and quant ...
Related lectures (32)
Linear Algebra Basics: Matrix Representations and Transformations
Explores linear algebra basics, emphasizing matrix representations of transformations and the importance of choosing appropriate bases.
Linear Algebra: Vector Spaces
Covers vector spaces, subspaces, and operations in linear algebra, emphasizing the relationship between vectors.
Data Science with Python: Modules and Numpy
Introduces Python modules and NumPy for efficient array operations and linear algebra in data science.
Related publications (34)

Randomized low-rank approximation and its applications

Ulf David Persson

In this thesis we will present and analyze randomized algorithms for numerical linear algebra problems. An important theme in this thesis is randomized low-rank approximation. In particular, we will study randomized low-rank approximation of matrix functio ...
EPFL, 2024
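The thesis abstract above centers on randomized low-rank approximation; purely as a hedged illustration of the general idea (the basic randomized SVD in the style of Halko, Martinsson, and Tropp, not the thesis's own algorithms, with all sizes below assumed for the example):

```python
# Hedged sketch of randomized low-rank approximation (basic randomized
# SVD): sketch the range of A with a random test matrix, then do a
# small exact SVD inside that subspace.
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # random sketch
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                      # small (rank + oversample) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank]

# Example on an exactly rank-40 matrix (illustrative only)
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, rank=40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # ~ machine eps
```

The random sketch A @ Omega captures the dominant range of A with high probability, so the expensive deterministic SVD is only performed on a small projected matrix.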

TimeEvolver: A program for time evolution with improved error bound

Sebastian Zell

We present TimeEvolver, a program for computing time evolution in a generic quantum system. It relies on well-known Krylov subspace techniques to tackle the problem of multiplying the exponential of a large sparse matrix iH, where H is the Hamiltonian, with ...
Elsevier, 2022
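TimeEvolver itself is a standalone package; the sketch below is only a hedged illustration of the Krylov subspace technique the abstract mentions, approximating exp(-1j*t*H) @ psi by projecting H onto a small Krylov subspace (the matrix sizes, tolerance, and random Hamiltonian are assumptions of this example):

```python
# Hedged illustration (not TimeEvolver itself) of the Krylov-subspace
# technique: approximate exp(-1j*t*H) @ psi by projecting H onto the
# small Krylov space span{psi, H psi, ..., H^(m-1) psi}.
import numpy as np
from scipy.linalg import expm

def krylov_expm_action(H, psi, t, m=30):
    n = len(psi)
    m = min(m, n)
    V = np.zeros((n, m + 1), dtype=complex)  # orthonormal Krylov basis
    h = np.zeros((m + 1, m), dtype=complex)  # projected Hessenberg matrix
    beta = np.linalg.norm(psi)
    V[:, 0] = psi / beta
    for j in range(m):                       # Arnoldi iteration
        w = H @ V[:, j]
        for i in range(j + 1):               # Gram-Schmidt against the basis
            h[i, j] = np.vdot(V[:, i], w)
            w = w - h[i, j] * V[:, i]
        h[j + 1, j] = np.linalg.norm(w)
        if h[j + 1, j] < 1e-12:              # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / h[j + 1, j]
    # Exponentiate the small m x m projection instead of the large H
    return beta * (V[:, :m] @ expm(-1j * t * h[:m, :m])[:, 0])

# Example: random dense Hermitian "Hamiltonian" (illustrative only)
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
H = (M + M.T) / 2
psi = rng.standard_normal(100).astype(complex)
approx = krylov_expm_action(H, psi, t=0.3)
exact = expm(-0.3j * H) @ psi
print(np.linalg.norm(approx - exact))  # small for sufficiently large m
```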

Oblivious Sketching of High-Degree Polynomial Kernels

Mikhail Kapralov, Amir Zandieh

Kernel methods are fundamental tools in machine learning that allow detection of non-linear dependencies between data without explicitly constructing feature vectors in high dimensional spaces. A major disadvantage of kernel methods is their poor scalabili ...
Association for Computing Machinery, 2020
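As a hedged, toy illustration of the kernel idea described in the abstract above (a degree-3 polynomial kernel, not the paper's oblivious sketching algorithm): the high-dimensional feature space is never constructed, because the kernel evaluates its inner products directly.

```python
# Toy illustration of the kernel trick for a degree-q polynomial kernel:
# k(x, y) = (x . y)^q equals the inner product of implicit degree-q
# tensor-product features, which are never built explicitly.
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
q = 3

# Explicit degree-3 feature map: all products x_i * x_j * x_k (2^3 = 8 dims)
phi = lambda v: np.einsum('i,j,k->ijk', v, v, v).ravel()
print(phi(x) @ phi(y))  # 1331.0, via explicit high-dimensional features
print((x @ y) ** q)     # 1331.0, same value from the kernel alone
```

The explicit feature map has d^q coordinates for d-dimensional inputs, which is exactly the scalability problem that motivates sketching methods such as the paper's.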
Related concepts (2)
Numerical linear algebra
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of.
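As a small illustration of these rounding effects (the specific numbers below are assumptions of the example, not from the source):

```python
# Floating-point arithmetic cannot represent most real numbers exactly,
# and ill-conditioned matrix problems amplify the resulting small errors.
import numpy as np

# 0.1 and 0.2 have no exact binary floating-point representation
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441

# Solving a nearly singular (ill-conditioned) 2x2 system: the exact
# solution is x = [1, 1], but rounding errors are amplified by the
# conditioning of A, so the computed answer can deviate noticeably.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])
b = np.array([2.0, 2.0 + 1e-12])
print(np.linalg.solve(A, b))
```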
Strassen algorithm
In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size.
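A minimal sketch of Strassen's recursion (assuming square matrices whose size is a power of two, with a crossover to the standard product at small sizes, reflecting the point above that the naive algorithm often wins on small matrices):

```python
# Minimal sketch of Strassen's algorithm. Assumptions: square matrices,
# size a power of two; small blocks fall back to the standard product.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # the naive/vendor multiply wins at small sizes
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive half-size products instead of eight
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
print(np.allclose(strassen(A, B), A @ B))  # True (up to rounding)
```

Each recursion level replaces eight half-size products with seven, which is what yields the O(n^log2(7)) ≈ O(n^2.807) complexity.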