
# Simple linear regression

## Summary

In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable (conventionally, the x and y coordinates in a Cartesian coordinate system) and finds a linear function (a non-vertical straight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable.
The adjective simple refers to the fact that the outcome variable is related to a single predictor.
It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (the vertical distance between the data point and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. Other regression methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals).
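As a concrete illustration (not part of the original summary), the OLS fit of a simple linear regression has a closed form: the slope is b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and the intercept is a = ȳ − b·x̄. A minimal NumPy sketch, with invented data points:

```python
import numpy as np

def fit_simple_ols(x, y):
    """Fit y ≈ a + b*x by ordinary least squares (closed form)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Slope: covariance of x and y divided by variance of x.
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    # Intercept: the fitted line passes through the point of means.
    a = y.mean() - b * x.mean()
    return a, b

# Points lying exactly on y = 1 + 2x recover the line:
a, b = fit_simple_ols([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0
```

With noisy data the same formulas return the line minimizing the sum of squared vertical residuals.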




## Related concepts (21)

### Linear regression

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables).

### Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables.

### Ordinary least squares

In statistics, ordinary least squares (OLS) is a type of linear least squares method for choosing the unknown parameters in a linear regression model (with fixed level-one effects of a linear function of a set of explanatory variables) by the principle of least squares: minimizing the sum of squared differences between the observed values of the dependent variable and those predicted by the linear function.
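To illustrate the general multi-predictor case, here is a sketch using NumPy's `np.linalg.lstsq`, which solves the least-squares problem directly; the design matrix and coefficients below are invented for the example (noiseless, so OLS recovers them exactly):

```python
import numpy as np

# Design matrix: an intercept column followed by two predictors.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [1, 1, 1],
              [1, 2, 3]], dtype=float)
beta_true = np.array([0.5, 2.0, -1.0])
y = X @ beta_true  # responses generated without noise

# OLS estimate: the beta minimizing ||X beta - y||^2.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # ≈ [0.5, 2.0, -1.0]
```

Simple linear regression is the special case where `X` has only the intercept column and a single predictor.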


## Related courses (48)

### DH-406: Machine learning for DH

This course aims to introduce the basic principles of machine learning in the context of the digital humanities. We will cover both supervised and unsupervised learning techniques, and study and implement methods to analyze diverse data types, such as images, music and social network data.

### CS-233(a): Introduction to machine learning (BA3)

Machine learning and data analysis are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analyzed and practically implemented.

### FIN-403: Econometrics

The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.

## Related publications (18)

In this paper we focus on the application of global stochastic optimization methods to extremum estimators. We propose a general stochastic method, the master method, which includes several stochastic optimization algorithms as particular cases. The proposed method is sufficiently general to include the Solis-Wets method, the improving hit-and-run algorithm, and a stochastic version of the zigzag algorithm. A matrix formulation of the master method is presented, and some specific results are given for the stochastic zigzag algorithm. Convergence of the proposed method is established under a mild set of conditions, and a simple regression model is used to illustrate the method. (C) 2011 Elsevier B.V. All rights reserved.
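As an illustration only (this is not the paper's master method), an accept-if-better random search in the spirit of the Solis-Wets method can be sketched on a simple regression objective; the toy data, seed, and step-size schedule below are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from y = 1 + 2x plus small noise; the objective is the OLS criterion.
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + 0.01 * rng.standard_normal(50)

def sse(theta):
    """Sum of squared residuals for intercept/slope pair theta = (a, b)."""
    a, b = theta
    return np.sum((y - a - b * x) ** 2)

# Random search: perturb the current point, keep the move only if it improves.
theta = np.zeros(2)
step = 1.0
for _ in range(5000):
    cand = theta + step * rng.standard_normal(2)
    if sse(cand) < sse(theta):
        theta = cand
    else:
        step *= 0.999  # shrink the search radius slowly when moves fail
print(theta)  # typically close to [1.0, 2.0]
```

For this convex objective the closed-form OLS solution is of course preferable; stochastic search becomes interesting for the non-smooth or multimodal extremum-estimator objectives the paper targets.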

### Ricardo Andres Chavarriaga Lozano

Background: One of the current challenges in brain-machine interfacing is to characterize and decode upper limb kinematics from brain signals, e.g. to control a prosthetic device. Recent research work states that it is possible to do so based on low frequency EEG components. However, the validity of these results is still a matter of discussion. In this paper, we assess the feasibility of decoding upper limb kinematics from EEG signals in center-out reaching tasks during passive and active movements. Methods: The decoding of arm movement was performed using a multidimensional linear regression. Passive movements were analyzed using the same methodology to study the influence of proprioceptive sensory feedback in the decoding. Finally, we evaluated the possible advantages of classifying reaching targets, instead of continuous trajectories. Results: The results showed that arm movement decoding was significantly above chance levels. The results also indicated that EEG slow cortical potentials carry significant information to decode active center-out movements. The classification of reached targets allowed obtaining the same conclusions with a very high accuracy. Additionally, the low decoding performance obtained from passive movements suggests that discriminant modulations of low-frequency neural activity are mainly related to the execution of movement while proprioceptive feedback is not sufficient to decode upper limb kinematics. Conclusions: This paper contributes to the assessment of feasibility of using linear regression methods to decode upper limb kinematics from EEG signals. From our findings, it can be concluded that low frequency bands concentrate most of the information extracted from upper limb kinematics decoding and that decoding performance of active movements is above chance levels and mainly related to the activation of cortical motor areas. 
We also show that the classification of reached targets from decoding approaches may be a more suitable real-time methodology than a direct decoding of hand position.
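The multidimensional linear regression used for trajectory decoding can be sketched as fitting one weight matrix that maps brain-signal features to several kinematic outputs at once; the feature dimensions, sample counts, and noise level below are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: 200 time samples, 16 EEG features, 3 kinematic outputs.
n, d, k = 200, 16, 3
X = rng.standard_normal((n, d))                      # EEG feature matrix
W_true = rng.standard_normal((d, k))                 # unknown mapping
Y = X @ W_true + 0.1 * rng.standard_normal((n, k))   # e.g. hand x/y/z over time

# One least-squares fit handles all outputs simultaneously.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Decoding quality: correlation between decoded and true trajectory per axis.
Y_hat = X @ W_hat
for j in range(k):
    r = np.corrcoef(Y[:, j], Y_hat[:, j])[0, 1]
    print(f"axis {j}: r = {r:.2f}")  # high r here because the noise is small
```

In a real decoding study, `X` would hold band-passed EEG samples (possibly with time lags) and the fit would be evaluated on held-out trials rather than on the training data as in this sketch.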

We introduce the Multiplicative Update Selector and Estimator (MUSE) algorithm for sparse approximation in under-determined linear regression problems. Given ƒ = Φα* + μ, the MUSE provably and efficiently finds a k-sparse vector α̂ such that ∥Φα̂ − ƒ∥∞ ≤ ∥μ∥∞ + O(1/√k), for any k-sparse vector α*, any measurement matrix Φ, and any noise vector μ. We cast the sparse approximation problem as a zero-sum game over a properly chosen new space; this reformulation provides salient computational advantages in recovery. When the measurement matrix Φ provides stable embedding to sparse vectors (the so-called restricted isometry property in compressive sensing), the MUSE also features guarantees on ∥α* − α̂∥2. Simulation results demonstrate the scalability and performance of the MUSE in solving sparse approximation problems based on the Dantzig Selector.