
Heteroskedasticity-consistent standard errors

The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors (or simply robust standard errors), or as Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors), in recognition of the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.

In regression and time-series modelling, basic forms of models rely on the assumption that the errors or disturbances $u_i$ have the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to have heteroskedasticity, and this behaviour is reflected in the residuals estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow valid inference from a model that does contain heteroskedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have since been produced for cross-sectional data, time-series data, and GARCH estimation.

Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients; in most situations, the problem should be found and fixed. Other types of standard error adjustments, such as clustered standard errors or HAC (heteroskedasticity- and autocorrelation-consistent) standard errors, may be considered as extensions of HC standard errors.

Heteroskedasticity-consistent standard errors were introduced by Friedhelm Eicker and popularized in econometrics by Halbert White. Consider the linear regression model for the scalar $y_i$:

$$y_i = x_i^\top \beta + u_i,$$

where $x_i$ is a $k \times 1$ column vector of explanatory variables (features), $\beta$ is a $k \times 1$ column vector of parameters to be estimated, and $u_i$ is the residual error. The ordinary least squares (OLS) estimator is

$$\hat{\beta}_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y,$$

where $y$ is an $n \times 1$ vector of the observations $y_i$, and $X$ denotes the $n \times k$ design matrix whose rows are the stacked vectors $x_i^\top$ observed in the data.
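To make the sandwich construction concrete, the following is a minimal sketch in NumPy of computing the OLS estimate together with classical and White (HC0/HC1) robust standard errors. The function name, variable names, and the simulated dataset are illustrative assumptions rather than anything specified on this page.

```python
import numpy as np

def ols_with_robust_se(X, y):
    """OLS estimate with classical and HC1 (White) robust standard errors.

    X : (n, k) design matrix (include a column of ones for an intercept)
    y : (n,) response vector
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y          # OLS: (X'X)^{-1} X'y
    resid = y - X @ beta

    # Classical SEs assume homoskedasticity: Var(beta) = s^2 (X'X)^{-1}
    s2 = resid @ resid / (n - k)
    classical_se = np.sqrt(np.diag(s2 * XtX_inv))

    # White's sandwich estimator:
    # Var(beta) = (X'X)^{-1} [sum_i e_i^2 x_i x_i'] (X'X)^{-1}
    meat = X.T @ (X * resid[:, None] ** 2)
    hc0 = XtX_inv @ meat @ XtX_inv
    hc1 = hc0 * n / (n - k)           # HC1: small-sample degrees-of-freedom correction
    hc1_se = np.sqrt(np.diag(hc1))
    return beta, classical_se, hc1_se

# Simulated heteroskedastic data: error variance grows with the regressor
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=500)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)   # Var(u_i) depends on x_i
X = np.column_stack([np.ones_like(x), x])

beta, se_classic, se_hc1 = ols_with_robust_se(X, y)
print("coef:", beta)
print("classical SE:", se_classic)
print("HC1 robust SE:", se_hc1)
```

Because the simulated error variance grows with the regressor, the HC1 standard error on the slope will typically differ noticeably from the classical one, which is exactly the situation the robust correction is designed to handle.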

Related courses (10)
FIN-403: Econometrics
The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.
PHYS-231: Data analysis for Physics
This course presents the basics of data analysis and of learning from data, error estimation, and stochasticity in physics. The concepts will be introduced theoretically...
ENG-267: Estimation methods
Students handle observations affected by uncertainty in a rigorous manner. They master the main methods of measurement adjustment and parameter estimation. They apply...
Related lectures (37)
Heteroskedasticity and Autocorrelation
Explores heteroskedasticity and autocorrelation in econometrics, covering their implications, applications, testing methods, and consequences for hypothesis testing.
Heteroskedasticity: Ch. 4a
Explores heteroskedasticity in econometrics, discussing its impact on standard errors, alternative estimators, testing methods, and implications for hypothesis testing.
Quantum Chemistry Fundamentals
Covers the fundamentals of quantum chemistry, including valence bond theory and molecular orbitals.
Related concepts (5)
Linear regression
In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables (also known as dependent and independent variables). The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.
Homoscedasticity and heteroscedasticity
In statistics, a sequence (or a vector) of random variables is homoscedastic (/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings homoskedasticity and heteroskedasticity are also frequently used.
Generalized least squares
In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in the regression model. In such cases, ordinary least squares and weighted least squares can be statistically inefficient or give misleading inferences. GLS was first described by Alexander Aitken in 1935. In standard linear regression models one observes data on n statistical units. A minimal sketch of the GLS estimator appears after this list.
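As a hedged illustration of the GLS formula $\hat{\beta}_{\mathrm{GLS}} = (X^\top \Omega^{-1} X)^{-1} X^\top \Omega^{-1} y$, the sketch below assumes the error covariance matrix $\Omega$ is known; the function name gls, the variable names, and the diagonal choice of omega are illustrative assumptions.

```python
import numpy as np

def gls(X, y, omega):
    """Generalized least squares with a known error covariance matrix omega.

    Implements beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y.
    """
    omega_inv = np.linalg.inv(omega)
    xtoix = X.T @ omega_inv @ X
    beta = np.linalg.solve(xtoix, X.T @ omega_inv @ y)
    cov_beta = np.linalg.inv(xtoix)  # Var(beta_hat) under the assumed omega
    return beta, cov_beta

# Example: heteroskedastic but uncorrelated errors, so omega is diagonal
# (this special case reduces GLS to weighted least squares).
rng = np.random.default_rng(1)
x = rng.uniform(1, 5, size=200)
X = np.column_stack([np.ones_like(x), x])
variances = x ** 2                       # assumed-known error variances
y = 3.0 - 1.5 * x + rng.normal(scale=np.sqrt(variances))
beta, cov = gls(X, y, np.diag(variances))
print("GLS coef:", beta)
```

When the errors are uncorrelated, as here, weighting each observation by the inverse of its error variance is what restores the efficiency that ordinary least squares loses under heteroskedasticity.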
