Framework for explicit drift correction in multivariate calibration models

Abstract

Latent-variable calibrations using principal component regression and partial least-squares regression are often compromised by drift such as systematic disturbances and offsets. This paper presents a two-step framework that facilitates the evaluation and comparison of explicit drift-correction methods. In the first step, the drift subspace is estimated using different types of correction data in a master/slave setting. The correction data are measured for the slave with drift and computed for the master with no drift. In the second step, the original calibration data are corrected for the estimated drift subspace using shrinkage or orthogonal projection. The two cases of no correction and drift correction by orthogonal projection can be seen as special cases of shrinkage. The two-step framework is illustrated with four different experimental data sets. The first three examples study drift correction on one instrument (temperature effects, spectral differences between samples obtained from different plants, instrumental drift), while the fourth example studies calibration transfer between two instruments.
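
To make the two steps concrete, here is a minimal NumPy sketch, assuming the drift subspace is spanned by the leading right singular vectors of a matrix of correction measurements (slave minus master); the names `D`, `X_cal`, `V`, and `alpha` are illustrative, not taken from the paper.

```python
# Hedged sketch of the two-step framework; variable names are assumptions.
import numpy as np

def estimate_drift_subspace(D, n_drift):
    """Step 1: orthonormal basis of the drift subspace in variable space.
    D holds correction measurements: slave spectra minus master spectra."""
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[:n_drift].T                     # shape (n_variables, n_drift)

def shrink_drift(X, V, alpha=1.0):
    """Step 2: shrink the component of X lying in span(V).
    alpha = 0 -> no correction; alpha = 1 -> orthogonal projection."""
    return X - alpha * (X @ V) @ V.T

# Usage (with your own correction matrix D and calibration data X_cal):
# V = estimate_drift_subspace(D, n_drift=2)
# X_corrected = shrink_drift(X_cal, V, alpha=1.0)  # then fit PCR/PLS
```

Setting `alpha = 0` recovers the uncorrected data and `alpha = 1` gives orthogonal projection, matching the two special cases of shrinkage noted above.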

Related MOOCs (15)

Algebra (part 1)

A French-language MOOC on linear algebra, open to everyone, taught rigorously and requiring no prerequisites.

Algebra (part 2)

A French-language MOOC on linear algebra, open to everyone, taught rigorously and requiring no prerequisites.

Related concepts (33)

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.
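
As a concrete illustration, the sketch below fits a multiple linear regression on synthetic data with NumPy; the data and coefficients are invented for the example.

```python
# Multiple linear regression on synthetic data (values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                         # 3 explanatory variables
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Prepend an intercept column and solve the least squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # intercept followed by the three slope estimates
```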

Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the X and Y data are projected to new spaces, the PLS family of methods are known as bilinear factor models.
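
A brief sketch using scikit-learn's PLSRegression on synthetic data; the choice of two components is arbitrary here and would normally be set by cross-validation.

```python
# PLS regression on synthetic data; n_components=2 is an arbitrary choice.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                    # spectra-like predictors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=50)

pls = PLSRegression(n_components=2).fit(X, y)
print(pls.predict(X[:5]).ravel())                # predictions for 5 samples
print(pls.x_scores_.shape)                       # latent scores: (50, 2)
```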

Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods. The three main formulations are ordinary least squares (OLS), weighted least squares, and generalized least squares; OLS is the most common estimator.
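
The two numerical routes can be compared directly. The sketch below solves the same synthetic OLS problem via the normal equations and via NumPy's SVD-based lstsq; the normal equations square the condition number, so the factorization route is generally preferred numerically.

```python
# OLS two ways: normal equations vs. an orthogonal-decomposition solver.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))
b = A @ np.array([2.0, -1.0, 0.0, 3.0]) + rng.normal(scale=0.05, size=30)

beta_normal = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations
beta_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based solver
print(np.allclose(beta_normal, beta_lstsq))          # True (well-conditioned)
```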

Related publications (32)

Michele Ceriotti, Edgar Albert Engel, Benjamin Aaron Helfrecht, Sergei Kliavinek

Selecting the most relevant features and samples out of a large set of candidates is a task that occurs very often in the context of automated data analysis, where it improves the computational performance and often the transferability of a model. Here we ...
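
One standard greedy scheme for this kind of selection is farthest point sampling; the sketch below is a generic illustration, not necessarily the method developed in the cited work.

```python
# Farthest point sampling: greedily pick mutually distant rows of X.
import numpy as np

def farthest_point_sampling(X, n_select, first=0):
    """Each new pick maximizes its minimum distance to earlier picks."""
    selected = [first]
    dmin = np.linalg.norm(X - X[first], axis=1)
    for _ in range(n_select - 1):
        nxt = int(np.argmax(dmin))
        selected.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(X - X[nxt], axis=1))
    return selected

X = np.random.default_rng(0).normal(size=(100, 3))
print(farthest_point_sampling(X, 5))   # indices of 5 diverse samples
```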

Nicolas Henri Bernard Flammarion, Aditya Vardhan Varre

We consider stochastic approximation for the least squares regression problem in the non-strongly convex setting. We present the first practical algorithm that achieves the optimal prediction error rates in terms of dependence on the noise of the problem, ...

2022
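
For context, a common stochastic-approximation baseline for least squares is constant-step SGD with Polyak-Ruppert averaging; the sketch below illustrates that baseline and is not the algorithm of the cited paper.

```python
# Averaged SGD for streaming least squares (illustrative baseline only).
import numpy as np

rng = np.random.default_rng(2)
d = 5
w_star = rng.normal(size=d)                  # unknown target weights
w, w_avg, step = np.zeros(d), np.zeros(d), 0.05
for t in range(1, 10_001):
    x = rng.normal(size=d)                   # one streaming observation
    y = x @ w_star + rng.normal(scale=0.1)   # noisy linear response
    w -= step * (x @ w - y) * x              # gradient of 0.5*(x.w - y)^2
    w_avg += (w - w_avg) / t                 # Polyak-Ruppert average
print(np.linalg.norm(w_avg - w_star))        # small error after averaging
```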

We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width ...

2023