
# Alice Cortinovis

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. Please verify the information with EPFL's official sources.


## Courses taught by this person

No results

## Related units (3)

## People doing similar research (60)

## Related research domains (7)

**Algorithm**

In mathematics and computer science, an algorithm (ˈælɡərɪðəm) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.

**Low-rank approximation**

In mathematics, low-rank approximation is a minimization problem in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable).

**Matrix (mathematics)**

In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object.

## Related publications (8)

**Alice Cortinovis, Daniel Kressner, Stefano Massei**

This work is concerned with approximating matrix functions for banded matrices, hierarchically semiseparable matrices, and related structures. We develop a new divide-and-conquer method based on (rational) Krylov subspace methods for performing low-rank updates of matrix functions. Our convergence analysis of the newly proposed method proceeds by establishing relations to best polynomial and rational approximation. When only the trace or the diagonal of the matrix function is of interest, we demonstrate, both in practice and in theory, that convergence can be faster. For the special case of a banded matrix, we show that the divide-and-conquer method reduces to a much simpler algorithm, which proceeds by computing matrix functions of small submatrices. Numerical experiments confirm the effectiveness of the newly developed algorithms for computing large-scale matrix functions from a wide variety of applications.
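The banded special case admits a concrete illustration: functions of banded matrices exhibit off-diagonal decay, so each diagonal entry of f(A) can be approximated from a small principal submatrix around it. The following NumPy sketch illustrates that idea only, not the paper's algorithm; the `half_window` parameter and function names are hypothetical.

```python
import numpy as np

def sym_expm(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def banded_expm_diag(A, half_window=10):
    """Approximate diag(exp(A)) for a symmetric banded matrix A by taking,
    for each index i, the exponential of the small window A[lo:hi, lo:hi]
    centered at i and reading off its central diagonal entry."""
    n = A.shape[0]
    d = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        Fi = sym_expm(A[lo:hi, lo:hi])
        d[i] = Fi[i - lo, i - lo]
    return d
```

For a tridiagonal matrix with modest off-diagonal entries, the window approximation agrees with the exact diagonal to high accuracy, and for a fixed window size the cost grows only linearly in the dimension.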

In this thesis we propose and analyze algorithms for several numerical linear algebra tasks: finding low-rank approximations of matrices, computing matrix functions, and estimating the trace of matrices.

In the first part, we consider algorithms for building low-rank approximations of a matrix from some rows or columns of the matrix itself. We prove a priori error bounds for a greedy algorithm for cross approximation and we develop a faster and more efficient variant of an existing algorithm for column subset selection. Moreover, we present a new deterministic polynomial-time algorithm that gives a cross approximation which is quasi-optimal in the Frobenius norm.

The second part of the thesis is concerned with matrix functions. We develop a divide-and-conquer algorithm for computing functions of matrices that are banded, hierarchically semiseparable, or have some other off-diagonal low-rank structure. An important building block of our approach is an existing algorithm for updating the function of a matrix that undergoes a low-rank modification (update), for which we present new convergence results. The convergence analysis of our divide-and-conquer algorithm is related to polynomial or rational approximation of the function.

In the third part we consider the problem of approximating the trace of a matrix that is available only indirectly, through matrix-vector multiplications. We analyze a stochastic algorithm, the Hutchinson trace estimator, for which we prove tail bounds for symmetric (indefinite) matrices. We then apply our results to the computation of (log-)determinants of symmetric positive definite matrices.
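The Hutchinson estimator analyzed in the third part can be sketched in a few lines: for a Rademacher probe vector z, the quadratic form zᵀAz is an unbiased estimate of tr(A), so averaging over independent probes converges to the trace. A minimal NumPy sketch, with function and parameter names that are illustrative rather than from the thesis:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, seed=None):
    """Estimate tr(A) given only a matrix-vector product routine.

    matvec: callable mapping a vector x to A @ x
    n:      dimension of the (square) matrix A
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        # Rademacher probe: i.i.d. +/-1 entries, so E[z z^T] = I
        z = rng.choice([-1.0, 1.0], size=n)
        # E[z^T A z] = tr(A); average over samples for the estimate
        total += z @ matvec(z)
    return total / num_samples
```

For a diagonal matrix the estimate is exact, since zᵀAz = Σᵢ aᵢᵢzᵢ² = tr(A) for any ±1 vector; in general the variance depends on the off-diagonal mass, which is what the thesis's tail bounds quantify for symmetric (indefinite) matrices.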

**Alice Cortinovis, Daniel Kressner, Ulf David Persson**

This paper is concerned with two improved variants of the Hutch++ algorithm for estimating the trace of a square matrix that is implicitly given through matrix-vector products. Hutch++ combines randomized low-rank approximation in a first phase with stochastic trace estimation in a second phase. In turn, Hutch++ only requires O(ε⁻¹) matrix-vector products to approximate the trace within a relative error ε with high probability, provided that the matrix is symmetric positive semidefinite. This compares favorably with the O(ε⁻²) matrix-vector products needed when using stochastic trace estimation alone. In Hutch++, the number of matrix-vector products is fixed a priori and distributed in a prescribed fashion among the two phases. In this work, we derive an adaptive variant of Hutch++ which outputs an estimate of the trace that is within some prescribed error tolerance with a controllable failure probability, while splitting the matrix-vector products in a near-optimal way among the two phases. For the special case of a symmetric positive semidefinite matrix, we present another variant of Hutch++, called Nyström++, which utilizes the so-called Nyström approximation and requires only one pass over the matrix, as compared to two passes with Hutch++. We extend the analysis of Hutch++ to Nyström++. Numerical experiments demonstrate the effectiveness of our two new algorithms.