
# Sparse matrix

Summary

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-valued elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is sometimes referred to as the sparsity of the matrix.
Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix.
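The spring-and-balls picture can be made concrete with a small sketch (using SciPy's `csr_matrix`; the choice of library and the tridiagonal example are illustrative assumptions, not part of the text above):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A tridiagonal stiffness matrix (each ball coupled only to its neighbours)
# is sparse: only about 3n of the n*n entries are non-zero.
n = 6
dense = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))

sparse = csr_matrix(dense)               # store only the non-zero entries
sparsity = 1.0 - sparse.nnz / (n * n)    # fraction of zero-valued elements

print(sparse.nnz)            # 16 non-zero entries out of 36
print(round(sparsity, 3))    # 0.556
```

Connecting every ball to every other ball would instead fill in all off-diagonal entries, giving a dense matrix with sparsity close to zero.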




Related publications (100)


While wired infrastructure constitutes the backbone of most wireless networks, wireless systems best fit the dynamic and rapidly evolving requirements of today's communication systems because of their ease of deployment and mobility, not to mention the high cost of building a wired infrastructure. This has led to increased interest in so-called wireless ad hoc networks, formed of a group of users, known as nodes, capable of communicating with each other through a shared wireless channel. Needless to say, these nodes must use the shared wireless medium in the most efficient fashion, which is not an easy task given the absence of a wired backbone. This requires a profound understanding of the wireless medium in order to establish a decentralized cooperation scheme, if needed, that best utilizes the resources available in the wireless channel. A significant part of this thesis focuses on the properties of the shared wireless channel; we are interested in studying the spatial diversity and beamforming capabilities of large wireless networks, which are crucial in analyzing the throughput of ad hoc networks. In this thesis, we mainly focus on the problem of broadcasting information in the most efficient manner in a large two-dimensional ad hoc wireless network at low SNR and under line-of-sight propagation. A new communication scheme, which we call multi-stage back-and-forth beamforming, is proposed, where source nodes first broadcast their data to the entire network, despite the lack of sufficient available power. The signal's power is then reinforced via successive back-and-forth beamforming transmissions between different groups of nodes in the network, so that all nodes are able to decode the transmitted information at the end. This scheme is shown to asymptotically achieve the broadcast capacity of the network, which is expressed in terms of the largest singular value of the matrix of fading coefficients between the nodes in the network.
A detailed mathematical analysis is then presented to evaluate the asymptotic behavior of this largest singular value. We further characterize the maximum achievable broadcast rate under different sparsity regimes. Our result shows that this rate decreases with the sparsity of the network. This is to be contrasted with the number of degrees of freedom available in the network, which has previously been shown to increase with the sparsity of the network. In this context, we further characterize the degrees-of-freedom versus beamforming-gain tradeoff, which reveals that high beamforming gains can only be obtained at the expense of reduced spatial degrees of freedom. Another important factor that impacts the throughput of wireless networks is the transmit/receive capability of the transceiver at the nodes. Traditionally, wireless radios are half-duplex. However, building on self-interference cancellation techniques, full-duplex radios have emerged as a viable paradigm in recent years. In the last part of this thesis, we ask the fundamental question: how much can full-duplex help? Intuitively, one may expect that full-duplex radios can at most double the capacity of wireless networks, since they enable nodes to transmit and receive at the same time. However, we show that the capacity gain can indeed be larger than a factor of 2; in particular, we construct a specific instance of a wireless network where the full-duplex capacity is triple the half-duplex capacity.
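As a purely numerical illustration of the quantity the abstract centers on (the fading model below is a generic i.i.d. assumption, not the thesis's line-of-sight model), the largest singular value of a fading matrix can be computed directly:

```python
import numpy as np

# Hypothetical illustration: the broadcast capacity result above is expressed
# through the largest singular value of the n x n matrix H of fading
# coefficients between the nodes of the network.
rng = np.random.default_rng(0)
n = 100
H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)

# np.linalg.svd returns singular values in descending order.
sigma_max = np.linalg.svd(H, compute_uv=False)[0]
print(sigma_max)
```

For i.i.d. unit-variance entries the largest singular value concentrates around 2*sqrt(n); the thesis studies how this asymptotic behavior changes under line-of-sight propagation and different network sparsity regimes.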

Related courses (53)

COM-406: Foundations of Data Science

We discuss a set of topics that are important for the understanding of modern data science but that are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques from probability, information theory, and signal processing.

MATH-111(en): Linear algebra (english)

The purpose of the course is to introduce the basic notions of linear algebra and its applications.

EE-556: Mathematics of data: from theory to computation

This course provides an overview of key advances in continuous optimization and statistical analysis for machine learning. We review recent learning formulations and models as well as their guarantees, describe scalable solution techniques and algorithms, and illustrate the trade-offs involved.

Related concepts (31)

Matrix (mathematics)

In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

Eigenvalues and eigenvectors

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it.

Numerical analysis

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).


Related lectures (113)

"I choose this restaurant because they have vegan sandwiches" could be a typical explanation we would expect from a human. However, current Reinforcement Learning (RL) techniques are not able to provide such explanations, when trained on raw pixels. RL algorithms for state-of-the-art benchmark environments are based on neural networks, which lack interpretability, because of the very factor that makes them so versatile – they have many parameters and intermediate representations. Enforcing safety guarantees is important when deploying RL agents in the real world, and guarantees require interpretability of the agent. Humans use short explanations that capture only the essential parts and often contain few causes to explain an effect. In our thesis, we address the problem of making RL agents understandable by humans. In addition to the safety concerns, the quest to mimic human-like reasoning is of general scientific interest, as it sheds light on the easy problem of consciousness. The problem of providing interpretable and simple causal explanations of agent's behavior is connected to the problem of learning good state representations. If we lack such a representation, any reasoning algorithm's outputs would be useless for interpretability, since even the "referents" of the "thoughts" of such a system would be obscure to us. One way to define simplicity of causal explanations via the sparsity of the Causal Model that describes the environment: the causal graph has the fewest edges connecting causes to their effects. For example, a model for choosing the restaurant that only depends on the cause "vegan" is simpler and more interpretable than a model that looks at each pixel of a photo of the menu of a restaurant, and possibly relies as well on spurious correlations, such the style of the menu. 
In this thesis, we propose a framework, "CauseOccam", for model-based Reinforcement Learning in which the model is regularized for simplicity in terms of the sparsity of the causal graph it corresponds to. The framework contains a learned mapping from observations to latent features, and a model predicting the latent features at the next time-step given those from the current time-step. The latent features are regularized through the sparsity of the model, in contrast to a more traditional regularization on the features themselves or a hand-crafted interpretability loss. To achieve sparsity, we use discrete Bernoulli variables with gradient estimation, and to find the best parameters, we use a primal-dual constrained formulation to achieve a target model quality. The novelty of this work lies in jointly learning a sparse causal graph and a representation that takes pixels as input in RL environments. We test this framework on benchmark environments with non-trivial high-dimensional dynamics and show that it can uncover the causal graph with the fewest edges in the latent space. We conclude by describing the implications of our work for developing priors that enforce interpretability.
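The Bernoulli-gated sparsity mechanism can be sketched roughly as follows (names, shapes, and the gating scheme are assumptions for illustration, not the thesis code; in training, a gradient estimator such as the straight-through trick would let the sampled mask be learned):

```python
import numpy as np

# Hypothetical sketch: every potential edge (i, j) of the latent causal graph
# carries a learnable probability of being kept. A sampled Bernoulli mask
# gates the dense dynamics model, and the expected number of edges serves as
# the differentiable sparsity penalty.
rng = np.random.default_rng(0)
d = 5                                     # number of latent features
logits = rng.standard_normal((d, d))      # learnable per-edge parameters
p = 1.0 / (1.0 + np.exp(-logits))         # edge-keep probabilities (sigmoid)

mask = (rng.random((d, d)) < p).astype(float)  # sampled Bernoulli causal graph
expected_edges = p.sum()                       # sparsity penalty term

W_dense = rng.standard_normal((d, d))     # dense one-step dynamics weights
W_gated = W_dense * mask                  # only retained edges affect prediction
```

Driving `expected_edges` down while holding prediction quality at a target level (the primal-dual constraint mentioned above) is what pushes the learned graph toward the fewest edges.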

2021

A fundamental problem in signal processing is to design computationally efficient algorithms to filter signals. In many applications, the signals to filter lie on a sphere: meaningful examples of data of this kind are weather data on the Earth, or images of the sky. It is then important to design filtering algorithms that are computationally efficient and capable of exploiting the rotational symmetry of the problem. In these applications, given a continuous signal $f: \mathbb S^2 \rightarrow \mathbb R$ on the 2-sphere $\mathbb S^2 \subset \mathbb R^3$, we can only know the vector of its sampled values $\mathbf f \in \mathbb R^n$, $(\mathbf f)_i = f(\mathbf x_i)$, on a finite set of points $\mathcal P = \{\mathbf x_i\}_{i=0}^{n-1} \subset \mathbb S^2$ where our sensors are located. Perraudin et al. in \cite{DeepSphere} construct a sparse graph $G$ on the vertex set $\mathcal P$ and then use a polynomial of the corresponding graph Laplacian matrix $\mathbf L \in \mathbb R^{n\times n}$ to perform a computationally efficient, $\mathcal O(n)$, filtering of the sampled signal $\mathbf f$. In order to study how well this algorithm respects the symmetry of the problem, i.e., how equivariant it is to the rotation group SO(3), it is important to guarantee that the spectrum of $\mathbf L$ and the spectrum of the Laplace-Beltrami operator $\Delta_{\mathbb S^2}$ are in some sense ``close''. We study the spectral properties of this graph Laplacian matrix in the special case of \cite{DeepSphere} where the sampling $\mathcal P$ is the so-called HEALPix sampling (an acronym for Hierarchical Equal Area isoLatitude Pixelization), and we show a way to build a graph $G'$ such that the corresponding graph Laplacian matrix $\mathbf L'$ has better spectral properties than the one presented in \cite{DeepSphere}. We also investigate other methods of building the matrix $\mathbf L$ that are better suited to non-uniform sampling measures.
In particular, we study the Finite Element Method (FEM) approximation of the Laplace-Beltrami operator on the sphere, and how FEM filtering relates to graph filtering, showing the importance of non-symmetric discrete Laplacians when it comes to non-uniform sampling measures. We finish by showing how the graph Laplacian $\mathbf L'$ proposed in this work improves the performance of DeepSphere in a well-known classification task using different sampling schemes of the sphere, and by comparing the different discrete Laplacians introduced in this work.
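The basic object of that construction, a graph Laplacian built from points sampled on the sphere, can be sketched as follows (the Gaussian-kernel graph, the kernel width, and the random sampling are generic assumptions for illustration, not the HEALPix construction of the thesis):

```python
import numpy as np

# Hypothetical sketch: build a Gaussian-weighted graph on n points of the
# 2-sphere and form the combinatorial graph Laplacian L = D - W, whose
# spectrum is meant to approximate that of the Laplace-Beltrami operator
# as the sampling gets denser.
rng = np.random.default_rng(0)
n = 50
x = rng.standard_normal((n, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)        # points on the unit sphere

d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
t = 0.1                                              # kernel width (assumption)
W = np.exp(-d2 / (4.0 * t))                          # Gaussian edge weights
np.fill_diagonal(W, 0.0)                             # no self-loops

L = np.diag(W.sum(axis=1)) - W                       # combinatorial Laplacian
eigvals = np.linalg.eigvalsh(L)                      # real spectrum: L = L^T
# The constant vector is in the kernel of L, so the smallest eigenvalue is 0.
```

A symmetric Laplacian like this one has a real, non-negative spectrum; part of the point of the abstract above is that for non-uniform sampling measures, non-symmetric discrete Laplacians can be the better-behaved choice.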

2019