This work is concerned with the computation of the action of a matrix function f(A), such as the matrix exponential or the matrix square root, on a vector b. For a general matrix A, this can be done by computing the compression of A onto a suitable Krylov ...
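The standard realization of this idea builds an orthonormal Krylov basis V_m with the Arnoldi process and approximates f(A)b ≈ β V_m f(V_m^* A V_m) e_1, where β = ||b||. The following is a minimal NumPy/SciPy sketch of that generic approach, not the paper's specific method; the function name and the breakdown tolerance are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fA_b(A, b, m, f=expm):
    """Approximate f(A) b via the order-m Krylov compression H_m = V_m^* A V_m."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * beta:    # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # f(A) b ~= beta * V_m f(H_m) e_1
    return beta * V[:, :m] @ f(Hm)[:, 0]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / 20
b = rng.standard_normal(200)
approx = arnoldi_fA_b(A, b, m=30)
print(np.linalg.norm(approx - expm(A) @ b))  # small Krylov approximation error
```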
For a high-dimensional problem, a randomized Gram-Schmidt (RGS) algorithm is beneficial in terms of both computational cost and numerical stability. We apply this random-sketching-based dimension reduction technique to Krylov subspace methods, e.g., to the ge ...
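As a rough illustration of the sketching idea, not the exact RGS algorithm of the paper, the sketch below orthogonalizes a block of vectors with respect to a sketched inner product: each vector is sketched once, and all inner products are taken in the small dimension s. A dense Gaussian sketch is used for simplicity; practical implementations favor fast maps such as the subsampled randomized Hadamard transform.

```python
import numpy as np

def randomized_gram_schmidt(W, s, rng):
    """Orthonormalize the columns of W against the sketched inner product
    <S x, S y>, so that S @ Q has (nearly) orthonormal columns."""
    n, m = W.shape
    S = rng.standard_normal((s, n)) / np.sqrt(s)  # Gaussian sketch (illustrative)
    Q = np.zeros((n, m))
    SQ = np.zeros((s, m))
    for j in range(m):
        w = W[:, j].copy()
        sw = S @ w                       # one sketch per vector
        coeffs = SQ[:, :j].T @ sw        # inner products in dimension s only
        w -= Q[:, :j] @ coeffs
        sw -= SQ[:, :j] @ coeffs         # sketch updated by linearity of S
        nrm = np.linalg.norm(sw)
        Q[:, j] = w / nrm
        SQ[:, j] = sw / nrm
    return Q, S

rng = np.random.default_rng(1)
W = rng.standard_normal((10000, 50))
Q, S = randomized_gram_schmidt(W, s=400, rng=rng)
SQ = S @ Q
print(np.linalg.norm(SQ.T @ SQ - np.eye(50)))  # ~0: sketched orthonormality
```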
In this thesis we will present and analyze randomized algorithms for numerical linear algebra problems. An important theme is randomized low-rank approximation. In particular, we will study randomized low-rank approximation of matrix functio ...
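One standard instance of randomized low-rank approximation is the Halko-Martinsson-Tropp randomized SVD; the thesis may analyze different variants, but a minimal sketch conveys the idea (the oversampling parameter p and the test matrix below are illustrative choices).

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Rank-k approximation via a Gaussian range finder (Halko-Martinsson-Tropp)."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))  # random test matrix, p = oversampling
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for approx. range of A
    B = Q.T @ A                              # small (k+p) x n problem
    U_b, sig, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], sig[:k], Vt[:k]

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # exact rank 40
U, s, Vt = randomized_svd(A, k=40, rng=rng)
print(np.linalg.norm(A - U @ (s[:, None] * Vt)))  # ~machine precision
```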
A key challenge across many disciplines is to extract meaningful information from data that is often obscured by noise. These datasets are typically represented as large matrices. Given the current trend of ever-increasing data volumes, with datasets grow ...
In the rapidly evolving landscape of machine learning research, neural networks stand out with their ever-expanding number of parameters and reliance on increasingly large datasets. The financial cost and computational resources required for the training p ...
Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, which now arises in an increasingly large number of applications. In this work, we consider algebraic parame ...
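For contrast with the multiterm case, the classical two-term Sylvester equation AX + XB = C admits a direct Bartels-Stewart solver, available in SciPy; a quick sketch with random data follows. No analogous direct method exists for the multiterm generalization, which is what motivates specialized solvers.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n, m = 50, 30
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# Direct Bartels-Stewart solve of A X + X B = C (two-term case only).
X = solve_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))  # residual check, ~1e-12
```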
The finite element method is a well-established method for the numerical solution of partial differential equations (PDEs), both linear and nonlinear. However, the repeated reassembly of finite element matrices for nonlinear PDEs is frequently pointed ...
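To make the cost concrete, here is a hypothetical 1D sketch (the coefficient k(u) = 1 + u² and all sizes are invented for illustration): because the element matrices depend on the current iterate u, every Newton or Picard step triggers a fresh assembly loop.

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_stiffness(u, h):
    """Assemble a 1D P1 stiffness matrix with the solution-dependent
    coefficient k(u) = 1 + u^2. For a nonlinear PDE this routine is
    called again in every Newton/Picard iteration."""
    n = len(u)
    rows, cols, vals = [], [], []
    for e in range(n - 1):                           # loop over elements
        k = 1.0 + (0.5 * (u[e] + u[e + 1])) ** 2     # coefficient at element midpoint
        loc = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for a in range(2):
            for b in range(2):
                rows.append(e + a)
                cols.append(e + b)
                vals.append(loc[a, b])
    return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()

n = 200
h = 1.0 / (n - 1)
u = np.linspace(0.0, 1.0, n)
A = assemble_stiffness(u, h)   # re-assembled from scratch whenever u changes
print(A.nnz, "nonzeros")
```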
State-of-the-art Artificial Intelligence (AI) algorithms, such as graph neural networks and recommendation systems, require floating-point computation of very large matrix multiplications over sparse data. Their execution in resource-constrained scenarios, ...
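As a minimal illustration of the workload in question (sizes, density, and precision are arbitrary choices), a sparse-times-dense product in single precision with SciPy:

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(4)
# Large sparse operand in CSR format, multiplied in float32 as a stand-in
# for the reduced-precision arithmetic of resource-constrained accelerators.
A = sparse_random(5000, 5000, density=1e-3, format="csr",
                  dtype=np.float32, random_state=rng)
B = rng.standard_normal((5000, 64)).astype(np.float32)
C = A @ B                      # sparse-dense product, float32 throughout
print(C.dtype, C.shape)
```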
The locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm is a popular approach for computing a few smallest eigenvalues and the corresponding eigenvectors of a large Hermitian positive definite matrix A. In this work, we propose a mix ...
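Baseline uniform-precision LOBPCG is available in SciPy; below is a small sketch on a 1D Laplacian, with a Jacobi preconditioner standing in for the step that a mixed-precision variant might execute in lower precision. The preconditioner and all sizes are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(5)
n, k = 2000, 5
# 1D Laplacian: a Hermitian positive definite test matrix
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
X = rng.standard_normal((n, k))       # random initial block of k vectors
M = diags(1.0 / A.diagonal())         # Jacobi preconditioner (illustrative)
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
print(np.sort(vals))                  # approximations to the k smallest eigenvalues
```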
This work is concerned with approximating matrix functions for banded matrices, hierarchically semiseparable matrices, and related structures. We develop a new divide-and-conquer method based on (rational) Krylov subspace methods for performing low-rank up ...
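The structural fact such methods exploit is that, for a low-rank perturbation, the correction f(A + UV^T) - f(A) is itself numerically low-rank. The sketch below verifies the singular value decay for f = exp and a banded A; it demonstrates the property, not the paper's divide-and-conquer algorithm.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags

rng = np.random.default_rng(6)
n = 300
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).toarray()  # banded matrix
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
U = u / np.linalg.norm(u)
V = v / np.linalg.norm(v)

# The correction f(A + U V^T) - f(A) has rapidly decaying singular values,
# so only the correction needs a (rational) Krylov approximation instead of
# recomputing f(A) from scratch.
correction = expm(A + U @ V.T) - expm(A)
sing = np.linalg.svd(correction, compute_uv=False)
print((sing > 1e-10 * sing[0]).sum(), "numerically significant singular values")
```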