ARFIMA (autoregressive fractionally integrated moving average) extends ARIMA by allowing the differencing parameter \(d\) to be fractional, which helps model persistent “long memory” dynamics in time series.
The model
An ARFIMA\((p,d,q)\) process is commonly written as:
\[ \Phi(L)\,(1-L)^d\,y_t = \Theta(L)\,\varepsilon_t, \]
where:
- \(L\) is the lag operator (\(Ly_t = y_{t-1}\)),
- \(\Phi(L)\) and \(\Theta(L)\) are the autoregressive and moving-average lag polynomials,
- \((1-L)^d\) is the fractional differencing operator,
- \(\varepsilon_t\) is a (typically) mean-zero innovation.
What fractional differencing does
When \(d\) is an integer, differencing removes unit-root type nonstationarity by subtracting lagged values. When \(d\) is fractional, the operator \((1-L)^d\) can be expanded into an infinite weighted sum of lags, producing dependence that decays slowly.
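That expansion follows from the binomial series \((1-L)^d = \sum_{k \ge 0} (-1)^k \binom{d}{k} L^k\), whose coefficients obey a simple recursion. A minimal sketch in Python (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def frac_diff_weights(d, n_weights):
    """Coefficients pi_k of (1-L)^d = sum_k pi_k L^k.

    Uses the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k,
    which follows from the binomial series expansion.
    """
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# Integer d recovers ordinary differencing: (1-L)^1 = 1 - L,
# so the weights terminate after the first lag.
print(frac_diff_weights(1.0, 4))

# Fractional d gives an infinite, slowly decaying tail of lag weights.
print(frac_diff_weights(0.4, 8))
```

Note that for fractional \(d\) the weights never hit zero; in practice the expansion is truncated at some large lag when filtering a finite series.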
A common rule of thumb:
- \(d = 0\): short memory (an ARMA-type process).
- \(0 < d < 0.5\): stationary but long memory (autocorrelations decay slowly).
- \(d \ge 0.5\): the process is no longer covariance stationary (for \(0.5 \le d < 1\) it is nonstationary but still mean-reverting).
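The slow decay in the \(0 < d < 0.5\) case can be made concrete: for an ARFIMA\((0,d,0)\) process the autocorrelation function has the closed form \(\rho_k = \frac{\Gamma(1-d)\,\Gamma(k+d)}{\Gamma(d)\,\Gamma(k+1-d)}\), which decays hyperbolically, roughly like \(k^{2d-1}\). A small sketch comparing it with the geometric decay of an AR(1) that has the same lag-one autocorrelation (Python with NumPy; illustrative only):

```python
import numpy as np

def arfima_acf(d, max_lag):
    """ACF of ARFIMA(0, d, 0), computed via the ratio recursion
    rho_0 = 1, rho_k = rho_{k-1} * (k - 1 + d) / (k - d)."""
    rho = np.empty(max_lag + 1)
    rho[0] = 1.0
    for k in range(1, max_lag + 1):
        rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)
    return rho

d = 0.3
rho = arfima_acf(d, 50)
phi = rho[1]                   # AR(1) with the same lag-1 autocorrelation
ar1 = phi ** np.arange(51)     # geometric (short-memory) decay

# At lag 50 the long-memory ACF is still sizeable,
# while the AR(1) ACF is essentially zero.
print(rho[50], ar1[50])
```

This gap between hyperbolic and geometric decay is exactly what the "autocorrelations decay too slowly" diagnostic picks up in practice.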
When economists use ARFIMA
ARFIMA models are used when empirical autocorrelations decay too slowly for standard short-memory models, for example in inflation persistence and other macroeconomic series, or in measures of financial-market volatility.
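In applied work, \(d\) is often estimated semiparametrically before (or alongside) the ARMA terms; a classic choice is the Geweke–Porter-Hudak (GPH) log-periodogram regression. A rough sketch, assuming NumPy, using a simulated ARFIMA\((0,d,0)\) series built from a truncated MA(\(\infty\)) representation (function names, truncation length, and bandwidth are illustrative choices, not a reference implementation):

```python
import numpy as np

def simulate_arfima_0d0(d, n, trunc=1000, seed=0):
    """Approximate ARFIMA(0, d, 0) via the truncated MA(inf) representation
    y_t = sum_k psi_k * eps_{t-k}, with psi_0 = 1, psi_k = psi_{k-1}*(k-1+d)/k."""
    rng = np.random.default_rng(seed)
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode="valid")[:n]

def gph_estimate(y, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of d.

    Regress log I(w_j) on log(4 sin^2(w_j / 2)) over the first m
    Fourier frequencies; the slope of that regression is -d."""
    n = len(y)
    m = int(np.sqrt(n)) if m is None else m
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(y - y.mean())
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    x = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(x, np.log(periodogram), 1)[0]
    return -slope

y = simulate_arfima_0d0(d=0.3, n=5000)
print(gph_estimate(y))   # should land in the neighborhood of 0.3
```

With \(\hat{d}\) in hand, one common workflow fractionally differences the series by \(\hat{d}\) and then fits a standard ARMA model to the residual short-memory dynamics.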
Related Terms
- ARMA Model
- ARIMA Model
- Autocorrelation Function
- Dickey–Fuller Test
- Unit Root Process
- Covariance Stationary Process