
# Eigendecomposition of a matrix

## Summary

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
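As an illustration, the decomposition A = QΛQ⁻¹ can be checked numerically; the following is a minimal sketch using NumPy with a small symmetric matrix chosen for the example:

```python
import numpy as np

# Example matrix (chosen for illustration): real and symmetric,
# hence diagonalizable with an orthonormal eigenbasis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric/Hermitian matrices and returns
# eigenvalues in ascending order with orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# Spectral decomposition: A = Q diag(eigvals) Q^T, since Q^{-1} = Q^T here.
A_rec = Q @ np.diag(eigvals) @ Q.T
print(np.allclose(A, A_rec))  # True
```

Because A here is symmetric, this is the spectral decomposition mentioned above; for a general diagonalizable matrix, `np.linalg.eig` and a matrix inverse would be needed instead.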
## Fundamental theory of matrix eigenvectors and eigenvalues

### Eigenvalue, eigenvector and eigenspace

A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form
$$\mathbf{A} \mathbf{v} = \lambda \mathbf{v}$$

for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount by which they elongate or shrink is the eigenvalue. The above equation is called the eigenvalue equation.
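The eigenvalue equation can be verified numerically for each eigenpair; a small illustrative NumPy check (the matrix is a hypothetical example):

```python
import numpy as np

# Hypothetical example matrix with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector for the matching eigenvalue,
# so A v = lambda v must hold column by column.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```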



## Related courses (30)

MATH-111(en): Linear algebra (english)

The purpose of the course is to introduce the basic notions of linear algebra and its applications.

EE-470: Power systems dynamics

This course focuses on the dynamic behavior of a power system. It presents the basic definitions, concepts, and models for angular stability analysis with reference to transient stability, steady-state stability, and long-term stability. Fundamentals related to voltage stability are introduced as well.

MICRO-455: Applied machine learning

Real-world engineering applications must cope with large datasets of dynamic variables, which cannot be well approximated by classical or deterministic models. This course gives an overview of methods from Machine Learning for the analysis of non-linear, highly noisy and multidimensional data.

## Related concepts (44)

Eigenvalues and eigenvectors

In linear algebra, an eigenvector (ˈaɪgənˌvɛktər) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it.

Matrix (mathematics)

In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

Singular value decomposition

In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any m × n matrix.
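The relationship can be seen numerically: for a symmetric positive-definite matrix (a hypothetical example below), the singular values coincide with the eigenvalues:

```python
import numpy as np

# Hypothetical symmetric positive-definite matrix (eigenvalues 2 and 4).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

U, s, Vt = np.linalg.svd(A)        # singular values, descending order
eigvals, _ = np.linalg.eigh(A)     # eigenvalues, ascending order

# For this class of matrices the two spectra agree (up to ordering).
print(np.allclose(s, eigvals[::-1]))  # True
```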

## Related lectures (73)

Solar energy has seen tremendous advances in the past years. For thin-film photovoltaics, which use less of the expensive semiconductor materials, insufficient light absorption can be a limiting factor. It is hoped that by using diffractive optics to improve the light absorption, the cost per watt could fall. Correspondingly, the optics of such structures need to compensate for the low absorption by high (structural) resonance, which is challenging to calculate. To estimate optimal structures, a numerical method should be able to assess feasible structures with widely varying geometries quickly. Modal methods allow for an efficient analysis of structures with varying height through the separation of the eigenvalue and boundary value problems. First, the thesis aspires to further develop modal methods for the calculation of optical properties of layered structures containing weakly absorbing metals and semiconductors. Second, the thesis aims to calculate absorption enhancements in idealized, prototypical structures by applying the newly developed methods. The calculations depend only on material parameters and contain no additional assumptions. These absorption enhancements are not tied to a priori assumptions such as mode couplings; they follow solely from the physics of the structure investigated.

The first part of the thesis is concerned with the methodical improvements. A first emphasis is put on studying peculiar properties of the eigenvalue problem, and on new developments of methods to solve it within a layer. Furthermore, it shows several variants for the numerical implementation of the eigenvalue problem. This part includes a new method to calculate the eigenvalues that can be adapted to two-dimensional grating problems of arbitrary shape. The new method integrates the eigenvalue problem by making use of a two-point trapezoidal formula, and satisfies the boundary condition between different materials exactly. It is energy conserving, and the rate of convergence depends on the approximation order. The eigenvalues show a monotonic convergence that allows for extrapolation.

The second methodical emphasis is placed on variants of the implementation of the boundary value problem that connects the grating to the incoming and outgoing plane waves. This algorithm describes the propagation of the incident energy to the semiconductor layer and the substrate by solving a non-recursive and numerically stable system of linear equations. A novel variant reduces the bandwidth of the corresponding matrix by a third.

The third part of the thesis concerns calculations using the improved methods. First, the improved calculations are verified by showing the energy conservation of the modal method as well as the well-behavedness of the condition number of the calculation. Next, numerical results for the new methods are compared to results from the literature for analytic modal methods, and a comparison with existing software is made. Thereafter, the interface plasmons occurring for H polarization are investigated. In the last part of the thesis, calculations are made for the material-specific absorption of light in metallic gratings covered by semiconductors, with a special interest in the absorption in the semiconductor. Here, the spectra for rectangular, sinusoidal, and asymmetric gratings are calculated, and the absorption improvement is investigated through an analysis of the modes involved.

Zheng Dang, Pascal Fua, Yinlin Hu, Mathieu Salzmann, Wei Wang

Eigendecomposition (ED) is widely used in deep networks. However, the backpropagation of its results tends to be numerically unstable, whether using ED directly or approximating it with the Power Iteration method, particularly when dealing with large matrices. While this can be mitigated by partitioning the data into small and arbitrary groups, doing so has no theoretical basis and makes it impossible to exploit the power of ED to the full. In this paper, we introduce a numerically stable and differentiable approach to leveraging eigenvectors in deep networks. It can handle large matrices without requiring them to be split. We demonstrate the improved robustness of our approach over standard ED and PI for ZCA whitening, an alternative to batch normalization, and for PCA denoising, which we introduce as a new normalization strategy for deep networks, aiming to further denoise the network's features.
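ZCA whitening itself rests on exactly this eigendecomposition of the data covariance. A minimal NumPy sketch, not the authors' differentiable implementation; `eps` is a small hypothetical regularizer:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten rows of X (samples x features); illustrative sketch."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)        # sample covariance
    eigvals, Q = np.linalg.eigh(cov)           # eigendecomposition of covariance
    # ZCA whitening matrix: Q diag(1/sqrt(lambda + eps)) Q^T
    W = Q @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ Q.T
    return Xc @ W

# Correlated toy data: white noise mixed by a fixed, well-conditioned matrix.
rng = np.random.default_rng(0)
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
Xw = zca_whiten(rng.normal(size=(500, 3)) @ M)

# After whitening, the sample covariance is (approximately) the identity.
print(np.round(np.cov(Xw, rowvar=False), 3))
```

Unlike PCA whitening, the trailing multiplication by Qᵀ rotates the data back, which is what keeps the whitened features close to the original ones.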

Zheng Dang, Pascal Fua, Yinlin Hu, Mathieu Salzmann, Wei Wang

Eigendecomposition of symmetric matrices is at the heart of many computer vision algorithms. However, the derivatives of the eigenvectors tend to be numerically unstable, whether using the SVD to compute them analytically or using the Power Iteration (PI) method to approximate them. This instability arises in the presence of eigenvalues that are close to each other. This makes integrating eigendecomposition into deep networks difficult and often results in poor convergence, particularly when dealing with large matrices. While this can be mitigated by partitioning the data into small arbitrary groups, doing so has no theoretical basis and makes it impossible to exploit the full power of eigendecomposition. In previous work, we mitigated this using SVD during the forward pass and PI to compute the gradients during the backward pass. However, the iterative deflation procedure required to compute multiple eigenvectors using PI tends to accumulate errors and yield inaccurate gradients. Here, we show that the Taylor expansion of the SVD gradient is theoretically equivalent to the gradient obtained using PI without relying in practice on an iterative process and thus yields more accurate gradients. We demonstrate the benefits of this increased accuracy for image classification and style transfer.

2021
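For reference, plain Power Iteration, the method whose gradients are discussed above, can be sketched in a few lines of NumPy (an illustrative version, not the paper's differentiable variant):

```python
import numpy as np

def power_iteration(A, num_iters=100):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])  # arbitrary unit start vector
    for _ in range(num_iters):
        v = A @ v                 # amplify the dominant eigendirection
        v /= np.linalg.norm(v)    # renormalize to avoid overflow
    lam = v @ A @ v               # Rayleigh quotient estimates the eigenvalue
    return lam, v

# Hypothetical symmetric test matrix; dominant eigenvalue is (5 + sqrt(5))/2.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(round(lam, 6))  # 3.618034
```

Computing eigenvectors beyond the dominant one requires deflating A after each converged pair, which is the error-accumulating step the abstract above refers to.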