
Autoregressive–moving-average model

In the statistical analysis of time series, autoregressive–moving-average (ARMA) models provide a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the autoregression (AR) and the second for the moving average (MA). The general ARMA model was described in the 1951 thesis of Peter Whittle, Hypothesis testing in time series analysis, and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins. Given a time series of data X_t, the ARMA model is a tool for understanding and, perhaps, predicting future values in this series. The AR part involves regressing the variable on its own lagged (i.e., past) values. The MA part involves modeling the error term as a linear combination of error terms occurring contemporaneously and at various times in the past. The model is usually referred to as the ARMA(p, q) model, where p is the order of the AR part and q is the order of the MA part (as defined below). ARMA models can be estimated by using the Box–Jenkins method.

Autoregressive model

The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written as

X_t = c + φ_1 X_{t−1} + … + φ_p X_{t−p} + ε_t,

where φ_1, …, φ_p are parameters, c is a constant, and the random variable ε_t is white noise, usually a sequence of independent and identically distributed (i.i.d.) normal random variables. In order for the model to remain stationary, the roots of its characteristic polynomial 1 − φ_1 z − … − φ_p z^p must lie outside the unit circle. For example, processes in the AR(1) model with |φ_1| ≥ 1 are not stationary because the root of 1 − φ_1 z lies within or on the unit circle.

Moving-average model

The notation MA(q) refers to the moving-average model of order q:

X_t = μ + ε_t + θ_1 ε_{t−1} + … + θ_q ε_{t−q},

where θ_1, …, θ_q are the parameters of the model, μ is the expectation of X_t (often assumed to equal 0), and ε_t, ε_{t−1}, … are again i.i.d. white noise error terms that are commonly normal random variables.

ARMA model

The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving-average terms:

X_t = c + ε_t + φ_1 X_{t−1} + … + φ_p X_{t−p} + θ_1 ε_{t−1} + … + θ_q ε_{t−q}.
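The recurrences above translate directly into code. The following sketch (the helper names `simulate_arma` and `ar_is_stationary` are hypothetical, not from the source) simulates an ARMA(p, q) path and checks the AR stationarity condition by locating the roots of the characteristic polynomial, assuming only NumPy is available:

```python
import numpy as np

def simulate_arma(phi, theta, n, c=0.0, seed=0):
    """Simulate X_t = c + sum_i phi_i X_{t-i} + eps_t + sum_j theta_j eps_{t-j}."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    m = max(p, q)                      # burn-in so all lags are defined
    eps = rng.standard_normal(n + m)
    x = np.zeros(n + m)
    for t in range(m, n + m):
        x[t] = (c
                + sum(phi[i] * x[t - 1 - i] for i in range(p))
                + eps[t]
                + sum(theta[j] * eps[t - 1 - j] for j in range(q)))
    return x[m:]

def ar_is_stationary(phi):
    """True iff all roots of 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    poly = np.concatenate(([1.0], -np.asarray(phi)))  # coefficients in increasing powers of z
    roots = np.roots(poly[::-1])                      # np.roots expects decreasing powers
    return bool(np.all(np.abs(roots) > 1.0))
```

For instance, `ar_is_stationary([0.5])` is true while `ar_is_stationary([1.2])` is false, matching the AR(1) example above.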
This model contains the AR(p) and MA(q) models as special cases. In his 1951 thesis, Whittle treated the general ARMA model using mathematical analysis (Laurent series and Fourier analysis) and statistical inference.
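The Box–Jenkins identification step mentioned above relies on the sample autocorrelation function (ACF): for a pure MA(q) process the ACF cuts off after lag q. A minimal sketch (the helper `sample_acf` is a hypothetical illustration, assuming NumPy):

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelations r_1..r_nlags, as used in Box-Jenkins identification."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

# MA(1) with theta = 0.8: theoretical ACF is theta/(1 + theta^2) at lag 1, zero beyond.
rng = np.random.default_rng(1)
eps = rng.standard_normal(5001)
x = eps[1:] + 0.8 * eps[:-1]
r = sample_acf(x, 3)
# r[0] should be near 0.8 / (1 + 0.64) ≈ 0.488; r[1] and r[2] should be near 0.
```

A sharp drop after lag q in the sample ACF is the classical diagnostic for choosing the MA order; the partial ACF plays the analogous role for the AR order.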

Related courses (18)
MATH-342: Time series
A first course in statistical time series analysis and applications.
FIN-403: Econometrics
The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.
COM-300: Stochastic models in communication
The objective of this course is mastery of the tools of stochastic processes useful to an engineer working in the fields of communication systems, data science, and …
Related lectures (65)
Time Series Models: Autoregressive Processes
Explores time series models, emphasizing autoregressive processes, including white noise, AR(1), and MA(1), among others.
Parametric Signal Models: Matlab Practice
Covers parametric signal models and practical Matlab applications for Markov chains and AutoRegressive processes.
Binary Choice Models and Time Series Analysis
Explores binary choice models like probit and logit, as well as univariate time series analysis with ARIMA models for forecasting economic variables.
Related publications (123)

On distributional autoregression and iterated transportation

Victor Panaretos, Laya Ghodrati

We consider the problem of defining and fitting models of autoregressive time series of probability distributions on a compact interval of ℝ. An order-1 autoregressive model in this context is to be understood as a Markov chain, where ...
Hoboken, 2024

Directional sensitivity of cortical neurons towards TMS induced electric fields

Werner Alfons Hilda Van Geit, Aurélien Tristan Jaquier

We derived computationally efficient average response models of different types of cortical neurons, which are subject to external electric fields from Transcranial Magnetic Stimulation. We used 24 reconstructions of pyramidal cells (PC) from layer 2/3, 24 ...
2023

Comparison of Three Imputation Methods for Groundwater Level Timeseries

Andrea Rinaldo

This study compares three imputation methods applied to the field observations of hydraulic head in subsurface hydrology. Hydrogeological studies that analyze the timeseries of groundwater elevations often face issues with missing data that may mislead bot ...
MDPI, 2023
Related concepts (15)
Moving-average model
In time series analysis, the moving-average model (MA model), also known as the moving-average process, is a common approach for modeling univariate time series. The moving-average model specifies that the output variable depends linearly on the current and various past values of a stochastic (white noise) error term. Together with the autoregressive (AR) model, the moving-average model is a special case and key component of the more general ARMA and ARIMA models of time series, which have a more complicated stochastic structure.
Autoregressive model
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model takes the form of a stochastic difference equation (or recurrence relation), which should not be confused with a differential equation.
Autoregressive integrated moving average
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. To better comprehend the data or to forecast upcoming series points, both of these models are fitted to time series data. ARIMA models are applied in some cases where data show evidence of non-stationarity in the sense of the mean (but not variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function.
