
# Design matrix

Summary

In statistics and in particular in regression analysis, a design matrix, also known as model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model. It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.
The design matrix contains data on the independent variables (also called explanatory variables) in statistical models that attempt to explain observed data on a response variable (often called a dependent variable) in terms of the explanatory variables. The theory relating to such models makes substantial use of matrix manipulations involving the design matrix: see, for example, linear regression.
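To make the definition concrete, here is a minimal NumPy sketch (the data values are hypothetical, not from this page). It builds a design matrix with an intercept column and a continuous variable, and another with 0/1 indicator columns encoding group membership, as in a one-way ANOVA:

```python
import numpy as np

# Continuous predictor with an intercept column (hypothetical data):
x = np.array([1.5, 2.0, 3.2, 4.1])
X_cont = np.column_stack([np.ones_like(x), x])       # shape (4, 2)

# Indicator (dummy) coding for group membership, as in one-way ANOVA.
# Each group becomes a 0/1 column; each row has exactly one 1.
groups = np.array(["A", "B", "A", "C"])
levels = np.unique(groups)                           # ['A', 'B', 'C']
X_anova = (groups[:, None] == levels).astype(float)  # shape (4, 3)

print(X_cont)
print(X_anova)
```

Each row of either matrix corresponds to one observed object; each column corresponds to one explanatory variable (or one group indicator).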

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related concepts (10)

Linear regression

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).

Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label') and one or more independent variables.

Covariance matrix

In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
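The three concepts above meet in ordinary least squares: both the fit and the covariance matrix of the coefficient estimates are expressed through the design matrix X, via β̂ = (XᵀX)⁻¹Xᵀy and Cov(β̂) = σ²(XᵀX)⁻¹. A minimal sketch on simulated data (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)   # true intercept 2.0, slope 0.5

X = np.column_stack([np.ones(n), x])        # design matrix: intercept + x
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Estimated covariance matrix of the coefficients: sigma^2 * (X^T X)^{-1},
# with sigma^2 estimated from the residuals.
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov_beta = sigma2 * np.linalg.inv(X.T @ X)

print(beta)       # roughly [2.0, 0.5]
print(cov_beta)   # 2x2, diagonal gives the squared standard errors
```

The diagonal entries of `cov_beta` are the estimated variances of the intercept and slope estimates, whose square roots are the standard errors reported by regression software.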

Related courses (14)

ENV-548: Sensor orientation

Determination of spatial orientation (i.e. position, velocity, attitude) via integration of inertial sensors with satellite positioning. Prerequisite for many applications related to remote sensing, environmental monitoring, mobile mapping, robotics, space exploration, smart-phone navigation, etc.

MATH-341: Linear models

Regression modelling is a fundamental tool of statistics, because it describes how the law of a random variable of interest may depend on other variables. This course aims to familiarize students with linear models and some of their extensions, which lie at the basis of more general regression models.

MATH-449: Biostatistics

This course covers statistical methods that are widely used in medicine and biology. A key topic is the analysis of longitudinal data: that is, methods to evaluate exposures, effects and outcomes that are functions of time. While motivated by real-life problems, some of the material will be abstract.

Related publications (2)

During the last twenty years, random matrix theory (RMT) has produced numerous results that allow a better understanding of large random matrices. These advances have enabled interesting applications in the domain of communication. Although this theory could contribute to many other domains, such as brain imaging or genetic research, it has rarely been applied there. The main barrier to the adoption of RMT may be the lack of concrete statistical results from probabilistic random matrix theory. Indeed, direct generalisation of classical multivariate theory to high-dimensional settings is often difficult, and the proposed procedures often place strong hypotheses on the data matrix, such as normality or overly restrictive independence conditions.
This thesis proposes a statistical procedure for testing the equality of two independent estimated covariance matrices when the number of potentially dependent data vectors is large and proportional to the size of the vectors, i.e., to the number of observed variables. Although the existing theory builds a very good intuition of the behaviour of these matrices, it does not provide enough results to construct a test that is satisfactory in terms of both power and robustness. Hence, inspired by spike models, we define the residual spikes and prove many theorems describing the behaviour of statistics built from eigenvectors and eigenvalues in very general cases, notably the two central theorems of this thesis: the Invariant Angle Theorem and the Invariant Dot Product Theorem.
Using numerous generalisations of the theory, this thesis finally describes the behaviour of a statistic under a null hypothesis. This statistic allows the user to test the equality of two populations, as well as other null hypotheses such as the independence of two sets of variables. Finally, the robustness of the procedure is demonstrated for different classes of models, and criteria for evaluating robustness are proposed.
The major contribution of this thesis is therefore a methodology that is both easy to apply and has good properties. In addition, a large number of theoretical results are proved that could easily be used to build further applications.
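The abstract's point that classical intuition breaks down when the dimension is proportional to the sample size can be illustrated with a toy simulation (this is not the thesis procedure, just a standard Marchenko–Pastur-style demonstration with hypothetical parameters). Even when the true covariance is the identity, the eigenvalues of a sample covariance matrix spread over a wide interval rather than concentrating near 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 200               # p proportional to n: the high-dimensional regime

X = rng.normal(size=(n, p))   # rows are i.i.d. N(0, I_p) vectors
S = X.T @ X / n               # sample covariance; population covariance is I_p

# The eigenvalues spread over roughly [(1 - sqrt(p/n))^2, (1 + sqrt(p/n))^2]
# (the Marchenko-Pastur support), here about [0.086, 2.91] instead of 1.
ev = np.linalg.eigvalsh(S)
print(ev.min(), ev.max())
```

Comparing two such sample covariance matrices eigenvalue-by-eigenvalue is therefore uninformative on its own, which is why dedicated high-dimensional test statistics such as those developed in the thesis are needed.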

Benjamin Bejar Haro, Marta Martinez-Camara, Martin Vetterli

Linear inverse problems are ubiquitous. Often the measurements do not follow a Gaussian distribution. Additionally, a model matrix with a large condition number can complicate the problem further by making it ill-posed. In this case, the performance of popular estimators may deteriorate significantly. We have developed a new estimator that is nearly optimal in the presence of Gaussian errors while also being robust against outliers. Furthermore, it obtains meaningful estimates when the problem is ill-posed through the inclusion of l1 and l2 regularizations. The computation of our estimate involves minimizing a non-convex objective function, so we are not guaranteed to find the global minimum in a reasonable amount of time. We therefore propose two algorithms that converge to a good local minimum in a reasonable (and adjustable) amount of time, as an approximation of the global minimum. We also analyze how the introduction of the regularization term affects the statistical properties of our estimator. We confirm high robustness against outliers and asymptotic efficiency for Gaussian distributions by deriving measures of robustness such as the influence function, sensitivity curve, bias, asymptotic variance, and mean square error. We verify the theoretical results using numerical experiments and show that the proposed estimator outperforms recently proposed methods, especially for increasing amounts of outlier contamination. Python code for all of the algorithms is available online in the spirit of reproducible research.
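The general idea of combining a robust loss with regularization can be sketched generically (this is not the authors' estimator: the Huber loss, the l2-only penalty, plain gradient descent, and all data values below are simplifying assumptions made for illustration):

```python
import numpy as np

def huber_grad(r, delta=1.345):
    # Derivative of the Huber loss: linear (quadratic loss) near zero,
    # constant magnitude (absolute loss) in the tails, which caps the
    # influence of gross outliers on the fit.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_ridge(X, y, lam=0.1, lr=0.01, iters=5000):
    # Gradient descent on (1/n) * [sum_i huber(y_i - x_i @ b) + lam * ||b||^2].
    n = len(y)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ b
        grad = -X.T @ huber_grad(r) + 2.0 * lam * b
        b -= lr * grad / n
    return b

# Hypothetical contaminated data: a linear model plus a few gross outliers.
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)    # true coefficients [1.0, 2.0]
y[:5] += 30.0                                # outlier contamination
X = np.column_stack([np.ones(n), x])

b_robust = robust_ridge(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(b_robust, b_ols)                       # robust fit stays roughly near [1, 2]
```

Because the Huber gradient saturates, the five contaminated observations pull the robust fit far less than they pull ordinary least squares; the l2 term additionally stabilizes the solution when `X` is ill-conditioned.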

2017