
Least squares

Summary

The method of least squares is a standard approach in regression analysis for approximating the solution of an overdetermined system (a set of equations with more equations than unknowns). It minimizes the sum of the squared residuals, a residual being the difference between an observed value and the fitted value provided by the model.
The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the x variable), simple regression and least-squares methods run into trouble; in such cases, the methodology for fitting errors-in-variables models may be considered instead.
Least squares problems fall into two categories, linear (or ordinary) least squares and nonlinear least squares, depending on whether the residuals are linear in all unknowns. The linear least-squares problem arises in statistical regression analysis and has a closed-form solution.
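As a concrete illustration (not part of the original summary), an overdetermined linear system can be solved in the least-squares sense with NumPy's `np.linalg.lstsq`; the data points below are made up:

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns.
# Fit a line y = a*x + b through four (hypothetical) data points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# Design matrix: one column for the slope, one for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# lstsq minimizes ||A @ coeffs - y||^2 over coeffs.
coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
a, b = coeffs

# Residuals: observed values minus fitted values.
r = y - A @ coeffs
ssr = np.sum(r**2)  # the quantity least squares minimizes
```

The four equations cannot all hold exactly, so `lstsq` returns the coefficients that make the sum of squared residuals as small as possible.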

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related courses (79)

CS-233(a): Introduction to machine learning (BA3)

Machine learning and data analysis are becoming increasingly central in many sciences and applications. In this course, fundamental principles and methods of machine learning will be introduced, analyzed and practically implemented.

DH-406: Machine learning for DH

This course aims to introduce the basic principles of machine learning in the context of the digital humanities. We will cover both supervised and unsupervised learning techniques, and study and implement methods to analyze diverse data types, such as images, music and social network data.

EE-512: Applied biomedical signal processing

The goal of this course is twofold: (1) to introduce physiological basis, signal acquisition solutions (sensors) and state-of-the-art signal processing techniques, and (2) to propose concrete examples of applications for vital sign monitoring and diagnosis purposes.


Related concepts (74)

Statistics

Statistics (from German: Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.

Regression analysis

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label') and one or more independent variables (often called 'predictors', 'covariates', or 'features').

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}$, where $\mu$ is the mean and $\sigma$ the standard deviation.

Related publications (100)


This thesis focuses on the development of novel multiresolution image approximations. Specifically, we present two kinds of generalization of multiresolution techniques: image reduction for arbitrary scales, and nonlinear approximations using other metrics than the standard Euclidean one. Traditional multiresolution decompositions are restricted to dyadic scales. As the first contribution of this thesis, we develop a method that goes beyond this restriction and that is well suited to arbitrary scale-change computations. The key component is a new and numerically exact algorithm for computing inner products between a continuously defined signal and B-splines of any order and of arbitrary sizes. The technique can also be applied to non-uniform to uniform grid conversion, which is another approximation problem where our method excels. The main applications are resampling and signal reconstruction. Although simple to implement, least-squares approximations lead to artifacts that could be reduced if nonlinear methods were used instead. The second contribution of the thesis is the development of nonlinear spline pyramids that are optimal for lp-norms. First, we introduce a Banach-space formulation of the problem and show that the solution is well defined. Second, we compute the lp-approximation with an iterative optimization algorithm based on digital filtering. We conclude that l1-approximations reduce the artifacts that are inherent to least-squares methods, in particular edge blurring and ringing. In addition, we observe that the error of l1-approximations is sparser. Finally, we derive an exact formula for the asymptotic Lp-error; this result justifies using the least-squares approximation as the initial solution for the iterative optimization algorithm when the degree of the spline is even; otherwise, one has to include an appropriate correction term.
The theoretical background of the thesis includes the modeling of images in a continuous/discrete formalism and takes advantage of the approximation theory of linear shift-invariant operators. We chose B-splines as basis functions because of their attractive properties. We also propose a new graphical formalism that links B-splines, finite differences, differential operators, and arbitrary scale changes.
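The thesis computes its lp-approximations with a filtering-based iterative algorithm. As a generic, hedged illustration of the l1-versus-l2 behaviour it describes (l1 fits are far less sensitive to occasional large errors), here is a standard iteratively reweighted least squares (IRLS) sketch on synthetic data; this is not the thesis's algorithm:

```python
import numpy as np

def l1_fit(A, y, iters=50, eps=1e-8):
    """Approximate argmin_c ||A c - y||_1 by iteratively reweighted
    least squares (IRLS). Generic sketch, not the filtering-based
    algorithm from the thesis."""
    c = np.linalg.lstsq(A, y, rcond=None)[0]  # l2 solution as initial guess
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ c), eps)  # weights ~ 1/|residual|
        # Weighted normal equations: A^T W A c = A^T W y
        c = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return c

# Synthetic line data with one large error ("outlier"): it pulls
# the l2 fit noticeably but barely moves the l1 fit.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
y[10] += 5.0
A = np.column_stack([x, np.ones_like(x)])
c2 = np.linalg.lstsq(A, y, rcond=None)[0]  # least-squares fit
c1 = l1_fit(A, y)                          # l1 fit stays near (2, 1)
```

Each IRLS pass solves a weighted least-squares problem whose weights downweight large residuals, which is why the single corrupted sample barely influences `c1`.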

This thesis focuses on the numerical analysis of partial differential equations (PDEs) with an emphasis on first and second-order fully nonlinear PDEs. The main goal is the design of numerical methods to solve a variety of equations such as orthogonal maps, the prescribed Jacobian equation and inequality, the elliptic and parabolic Monge-Ampère equations.
For orthogonal maps, we develop an \emph{operator-splitting/finite element} approach for the numerical solution of the Dirichlet problem. This approach is built on the variational principle, the introduction of an associated flow problem, and a time-stepping splitting algorithm. Moreover, we propose an extension of this method with an \emph{anisotropic mesh adaptation algorithm}. This extension allows us to track singularities of the solution's gradient more accurately. Various numerical experiments demonstrate the accuracy and robustness of the proposed method for both fixed and adaptively refined meshes.
For the prescribed Jacobian equation and the three-dimensional Monge-Ampère equation, we consider a \emph{least-squares/relaxation finite element method} for the numerical solution of the Dirichlet problems. We then introduce a relaxation algorithm that splits the least-squares problem, which stems from a reformulation of the original equations, into local nonlinear and variational problems. We develop dedicated solvers for the algebraic problems based on Newton's method, and we solve the differential problems using a mixed low-order finite element method. Overall, the least-squares approach exhibits appropriate convergence orders in $L^2(\Omega)$ and $H^1(\Omega)$ error norms for various numerical tests.
We also design a \emph{second-order time integration method} for the approximation of a parabolic two-dimensional Monge-Ampère equation. The space discretization of this method is based on low-order finite elements, and the time discretization is achieved with an implicit Crank-Nicolson-type scheme.
We verify the efficiency of the proposed method on time-dependent and stationary problems. The results of numerical experiments show that the method achieves nearly optimal orders for the $L^2(\Omega)$ and $H^1(\Omega)$ error norms when smooth solutions are approximated.
Finally, we present an adaptive mesh refinement algorithm for the elliptic Monge-Ampère equation based on a residual error estimate. The robustness of the proposed algorithm is verified using various test cases and two different solvers inspired by the two previously proposed numerical methods.
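As a model illustration of the Crank-Nicolson time stepping mentioned above, applied here to the linear 1D heat equation rather than the (much harder) parabolic Monge-Ampère equation, a minimal sketch:

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0,1) with u = 0 at the boundary:
#   (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n,
# where L is the standard second-difference Laplacian.
# Model problem only; the thesis applies the scheme to Monge-Ampère.
n = 50                      # interior grid points
h = 1.0 / (n + 1)
dt = 1e-3
x = np.linspace(h, 1 - h, n)

# Tridiagonal Laplacian with Dirichlet boundary conditions.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

I = np.eye(n)
A = I - 0.5 * dt * L        # implicit half of the update
B = I + 0.5 * dt * L        # explicit half of the update

u = np.sin(np.pi * x)       # initial condition with a known exact decay
for _ in range(100):        # advance to t = 0.1
    u = np.linalg.solve(A, B @ u)

# Exact solution: u(x, t) = exp(-pi^2 t) sin(pi x)
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

The scheme averages the explicit and implicit Euler updates, which is what makes it second-order accurate in time and unconditionally stable for this linear problem.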

Alexandre Caboussat, Dimitrios Gourzoulidis

We consider a least-squares/relaxation finite element method for the numerical solution of the prescribed Jacobian equation. We look for its solution via a least-squares approach. We introduce a relaxation algorithm that decouples this least-squares problem into a sequence of local nonlinear problems and variational linear problems. We develop dedicated solvers for the algebraic problems based on Newton's method, and we solve the differential problems using mixed low-order finite elements. Various numerical experiments demonstrate the accuracy, efficiency, and robustness of the proposed method, compared, for instance, to augmented Lagrangian approaches.
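Both works above solve their local nonlinear algebraic problems with Newton's method. The actual solvers are problem-specific; a minimal generic sketch on a toy 2x2 system looks like this:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Newton's method for F(x) = 0 with Jacobian J.
    Generic sketch of the kind of solver used for the local
    algebraic problems; not the paper's actual solver."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)  # Newton update step
    return x

# Toy nonlinear system: x^2 + y^2 = 5, x*y = 2 (one root is (2, 1)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 5.0, v[0] * v[1] - 2.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1], v[0]]])
root = newton(F, J, [3.0, 0.5])
```

Near a root with a nonsingular Jacobian, each iteration roughly squares the error, which is why a handful of Newton steps suffices inside each relaxation sweep.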