Augmented Dynamic Adaptive Model

Ivan Svetunkov

2020-12-08

This vignette briefly explains how to use the adam() function and the related auto.adam() in the smooth package. It does not aim to cover all aspects of the functions, but focuses on the main ones.

ADAM stands for "Augmented Dynamic Adaptive Model". It is a model that underlies ETS, ARIMA and regression, connecting them in a unified framework. ADAM is formulated as a Single Source of Error state space model, which is explained in detail in a separate online textbook.

The main philosophy of the adam() function is to be agnostic of the provided data. This means that it will work with ts, msts, zoo, xts, data.frame, numeric and other classes of data. Seasonality is specified via a separate lags parameter, so you are not obliged to transform the existing data into a specific class and can use it as is. If you provide a matrix, a data.frame, a data.table or any other multivariate structure, the function will use the first column for the response variable and the others for the explanatory ones. One thing that is currently assumed in the function is that the data is measured at a regular frequency; if this is not the case, you will need to introduce missing values manually.

In order to run the experiments in this vignette, we need to load the following packages:

require(greybox)
require(smooth)
require(Mcomp)

ADAM ETS

First and foremost, ADAM implements the ETS model, although in a more flexible way than in (Hyndman et al. 2008): it supports different distributions for the error term, which are regulated via the distribution parameter. By default, the additive error model relies on the Normal distribution, while the multiplicative error one assumes the Inverse Gaussian. If you want to reproduce the classical ETS, you need to specify distribution="dnorm". Here is an example of ADAM ETS(MMM) with the Normal distribution on the N2568 series from the M3 competition (if you provide an Mcomp object, adam() will automatically set the train and test sets, the forecast horizon and even the needed lags):

testModel <- adam(M3[[2568]], "MMM", lags=c(1,12), distribution="dnorm")
summary(testModel)
#> Model estimated using adam() function: ETS(MMM)
#> Response variable: M3..2568..
#> Distribution used in the estimation: Normal
#> Loss function type: likelihood; Loss function value: 868.7509
#> Coefficients:
#>              Estimate Std. Error Lower 2.5% Upper 97.5%
#> alpha          0.1429     0.0557     0.0323      0.2530
#> beta           0.0121     0.0132     0.0000      0.0382
#> gamma          0.0100     0.0507     0.0000      0.1102
#> level       4407.1174   109.7941  4189.2622   4624.2858
#> trend          1.0064     0.0019     1.0026      1.0101
#> seasonal_1     1.1843     0.0217     1.1574      1.2402
#> seasonal_2     0.8172     0.0148     0.7903      0.8731
#> seasonal_3     0.8267     0.0149     0.7997      0.8826
#> seasonal_4     1.5608     0.0283     1.5338      1.6167
#> seasonal_5     0.7445     0.0136     0.7176      0.8004
#> seasonal_6     1.2706     0.0229     1.2436      1.3265
#> seasonal_7     0.8930     0.0161     0.8661      0.9489
#> seasonal_8     0.9137     0.0166     0.8868      0.9696
#> seasonal_9     1.2313     0.0234     1.2044      1.2872
#> seasonal_10    0.8835     0.0168     0.8565      0.9393
#> seasonal_11    0.8383     0.0159     0.8114      0.8942
#> 
#> Sample size: 116
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 99
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1771.502 1777.747 1818.313 1833.156
plot(forecast(testModel,h=18,interval="parametric"))

You might notice that the summary contains more than what is reported by other smooth functions. It also produces standard errors for the estimated parameters, based on the Fisher Information calculation. Note that this is computationally expensive, so if you have a model with more than 30 variables, the calculation of standard errors might take a long time. The default print() method, in contrast, produces a shorter summary of the model, without the standard errors (similar to what es() does):

testModel
#> Time elapsed: 0.18 seconds
#> Model estimated using adam() function: ETS(MMM)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 868.7509
#> Persistence vector g:
#>  alpha   beta  gamma 
#> 0.1429 0.0121 0.0100 
#> 
#> Sample size: 116
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 99
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1771.502 1777.747 1818.313 1833.156 
#> 
#> Forecast errors:
#> ME: 645.912; MAE: 817.203; RMSE: 1043.544
#> sCE: 159.713%; sMAE: 11.226%; sMSE: 2.055%
#> MASE: 0.333; RMSSE: 0.329; rMAE: 0.361; rRMSE: 0.344

Also note that the prediction intervals in the case of multiplicative error models are approximate. It is advisable to use simulations instead (which is slower, but more accurate):

plot(forecast(testModel,h=18,interval="simulated"))

If you want to do residual diagnostics, then it is recommended to use the plot() function, something like this (you can select which of the plots to produce):

par(mfcol=c(3,4))
plot(testModel,which=c(1:11))
par(mfcol=c(1,1))
plot(testModel,which=12)

By default, ADAM estimates models via maximising the likelihood function. But there is also the loss parameter, which allows selecting from a list of already implemented loss functions (see the documentation for adam() for the full list) or using a function written by the user. Here is how to do the latter on the example of another M3 series:

lossFunction <- function(actual, fitted, B){
  # Sum of absolute errors raised to the third power
  return(sum(abs(actual-fitted)^3))
}
testModel <- adam(M3[[1234]], "AAN", silent=FALSE, loss=lossFunction)
testModel
#> Time elapsed: 0.02 seconds
#> Model estimated using adam() function: ETS(AAN)
#> Distribution assumed in the model: Normal
#> Loss function type: custom; Loss function value: 23993619
#> Persistence vector g:
#>  alpha   beta 
#> 0.6348 0.2466 
#> 
#> Sample size: 45
#> Number of estimated parameters: 4
#> Number of degrees of freedom: 41
#> Information criteria are unavailable for the chosen loss & distribution.
#> 
#> Forecast errors:
#> ME: -347.014; MAE: 347.014; RMSE: 395.482
#> sCE: -34.097%; sMAE: 4.262%; sMSE: 0.236%
#> MASE: 4.801; RMSSE: 4.417; rMAE: 3.943; rRMSE: 3.568

Note that the function needs to have the parameters actual, fitted and B, which correspond to the vector of actual values, the vector of fitted values on each iteration and the vector of the optimised parameters respectively.

The loss and distribution parameters are independent, so in the example above we have assumed that the error term follows the Normal distribution, but we have estimated its parameters using a non-conventional loss, because we can. Some of the distributions assume an additional parameter, which can either be estimated or provided by the user. These include Asymmetric Laplace (distribution="dalaplace") with alpha, Generalised Normal and Log Generalised Normal (distribution=c("dgnorm","dlgnorm")) with beta, and Student's t (distribution="dt") with nu:

testModel <- adam(M3[[1234]], "MMN", silent=FALSE, distribution="dgnorm", beta=3)
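Similarly, here is a brief hedged sketch for the Asymmetric Laplace case, with its additional parameter fixed by the user rather than estimated (the value 0.5 is arbitrary and only used for illustration):

testModel <- adam(M3[[1234]], "AAN", silent=TRUE, distribution="dalaplace", alpha=0.5)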

The model selection in ADAM ETS relies on information criteria and works correctly only for loss="likelihood". There are several options for how to select the model; see them in the description of the function (?adam). The default one uses a branch-and-bound algorithm, similar to the one used in es(), but only considers additive trend models (the multiplicative trend ones are less stable and need more attention from the forecaster):

testModel <- adam(M3[[2568]], "ZXZ", lags=c(1,12), silent=FALSE)
#> Forming the pool of models based on... ANN , ANA , MNM , MAM , Estimation progress:    71 %86 %100 %... Done!
testModel
#> Time elapsed: 0.74 seconds
#> Model estimated using adam() function: ETS(MAM)
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 866.4522
#> Persistence vector g:
#>  alpha   beta  gamma 
#> 0.1096 0.0088 0.0000 
#> 
#> Sample size: 116
#> Number of estimated parameters: 17
#> Number of degrees of freedom: 99
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1766.904 1773.149 1813.715 1828.558 
#> 
#> Forecast errors:
#> ME: 635.957; MAE: 820.168; RMSE: 1044.133
#> sCE: 157.252%; sMAE: 11.267%; sMSE: 2.057%
#> MASE: 0.334; RMSSE: 0.33; rMAE: 0.362; rRMSE: 0.344

Note that the function produces point forecasts if h>0, but it won't generate prediction intervals. This is why you need to use the forecast() method (as shown in the first example in this vignette).
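For example, reusing the call from the first example of this vignette:

plot(forecast(testModel, h=18, interval="parametric"))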

Similarly to es(), the function supports combination of models, but it saves all the tested models in the output for potential reuse. Here is how it works:

testModel <- adam(M3[[2568]], "CXC", lags=c(1,12))
testForecast <- forecast(testModel,h=18,interval="semiparametric", level=c(0.9,0.95))
testForecast
#>          Point forecast Lower bound (5%) Lower bound (2.5%) Upper bound (95%)
#> Sep 1992      10886.130         9243.361           8966.660         12700.725
#> Oct 1992       7831.519         4292.762           3809.616         12391.888
#> Nov 1992       7437.760         4096.437           3635.120         11712.391
#> Dec 1992      10112.336         5856.079           5271.983         15567.949
#> Jan 1993      10478.405         6176.327           5580.797         15958.839
#> Feb 1993       7237.466         4226.174           3792.666         10978.934
#> Mar 1993       7354.970         4409.162           3979.430         10980.145
#> Apr 1993      14002.164         9150.384           8447.093         19980.513
#> May 1993       6647.976         4258.192           3892.750          9491.269
#> Jun 1993      11353.768         7881.474           7351.010         15478.587
#> Jul 1993       7973.697         5717.992           5359.163         10577.689
#> Aug 1993       8188.519         6298.729           5988.622         10319.498
#> Sep 1993      11052.078         9331.459           9043.073         12959.623
#> Oct 1993       7951.095         4342.503           3845.551         12576.735
#> Nov 1993       7567.014         4138.194           3661.816         11937.159
#> Dec 1993      10295.036         5938.505           5337.368         15860.975
#> Jan 1994      10653.875         6252.239           5639.939         16244.902
#> Feb 1994       7358.324         4257.516           3809.220         11201.412
#>          Upper bound (97.5%)
#> Sep 1992            13091.87
#> Oct 1992            13556.76
#> Nov 1992            12796.42
#> Dec 1992            16952.12
#> Jan 1993            17340.31
#> Feb 1993            11899.32
#> Mar 1993            11862.79
#> Apr 1993            21434.68
#> May 1993            10158.92
#> Jun 1993            16443.81
#> Jul 1993            11168.53
#> Aug 1993            10790.26
#> Sep 1993            13372.51
#> Oct 1993            13752.32
#> Nov 1993            13041.58
#> Dec 1993            17268.87
#> Jan 1994            17650.49
#> Feb 1994            12144.65
plot(testForecast)

Yes, we now support vectors for the levels, in case you want to produce several. In fact, we also support the side parameter for the prediction interval, so you can extract specific quantiles without a hassle:

forecast(testModel,h=18,interval="semiparametric", level=c(0.9,0.95,0.99), side="upper")
#>          Point forecast Upper bound (90%) Upper bound (95%) Upper bound (99%)
#> Sep 1992      10880.472         12257.295         12694.128          13554.02
#> Oct 1992       7836.843         11156.238         12399.946          15027.48
#> Nov 1992       7437.482         10552.958         11711.738          14152.77
#> Dec 1992      10120.871         14099.718         15580.822          18700.28
#> Jan 1993      10495.539         14502.427         15984.369          19097.21
#> Feb 1993       7238.434          9988.339         10980.548          13044.77
#> Mar 1993       7351.591         10021.508         10975.070          12950.42
#> Apr 1993      14003.358         18409.330         19982.218          23237.20
#> May 1993       6621.839          8729.270          9455.427          10937.37
#> Jun 1993      11371.633         14446.145         15502.513          17654.32
#> Jul 1993       7976.726          9930.462         10581.702          11891.71
#> Aug 1993       8177.270          9783.891         10305.729          11343.75
#> Sep 1993      11040.897         12486.089         12946.453          13854.20
#> Oct 1993       7962.470         11336.417         12594.089          15245.94
#> Nov 1993       7563.596         10750.698         11931.899          14416.88
#> Dec 1993      10284.532         14340.082         15844.983          19010.85
#> Jan 1994      10647.246         14729.539         16235.149          19394.44
#> Feb 1994       7356.377         10181.484         11198.437          13312.36

A brand new thing in the function is the possibility of using several frequencies (double / triple / quadruple / … seasonal models). Here is an example of what we can have in the case of half-hourly data:

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=FALSE, h=336, holdout=TRUE)
testModel
#> Time elapsed: 42.53 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 25478.85
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.9960 0.3651 0.0011 0.0040 
#> Damping parameter: 0.75
#> Sample size: 3696
#> Number of estimated parameters: 390
#> Number of degrees of freedom: 3306
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 51737.70 51829.97 54161.55 54540.58 
#> 
#> Forecast errors:
#> ME: 309.978; MAE: 829.421; RMSE: 1088.231
#> sCE: 351.993%; sMAE: 2.803%; sMSE: 0.135%
#> MASE: 1.276; RMSSE: 1.153; rMAE: 0.124; rRMSE: 0.133

Note that the more lags you have, the more initial seasonal components the function will need to estimate, which is a difficult task. The optimiser might not get close to the optimal value, so we can help it. First, we can give more time for the calculation, increasing the number of iterations via maxeval (the default value is 20 iterations per optimised parameter, so in the case of the previous model it is 389*20=7780):

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=FALSE, h=336, holdout=TRUE,
                  maxeval=10000)
testModel
#> Time elapsed: 27.16 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 25623.23
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.9959 0.3651 0.0011 0.0040 
#> Damping parameter: 0.7226
#> Sample size: 3696
#> Number of estimated parameters: 390
#> Number of degrees of freedom: 3306
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 52026.47 52118.75 54450.32 54829.35 
#> 
#> Forecast errors:
#> ME: 349.11; MAE: 849.097; RMSE: 1109.359
#> sCE: 396.429%; sMAE: 2.87%; sMSE: 0.141%
#> MASE: 1.306; RMSSE: 1.175; rMAE: 0.127; rRMSE: 0.135

This will take more time, but will typically lead to more refined parameters. You can control other parameters of the optimiser as well, such as algorithm, xtol_rel, print_level and others, which are explained in the documentation for the nloptr function from the nloptr package (run nloptr.print.options() for details). Second, we can give a different set of initial parameters to the optimiser; have a look at what the function saves:

testModel$B

and use this as a starting point (e.g. with a different algorithm):

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=FALSE, h=336, holdout=TRUE,
                  B=testModel$B)
testModel
#> Time elapsed: 41.02 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 25123.54
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.9964 0.9964 0.0034 0.0000 
#> Damping parameter: 0.6517
#> Sample size: 3696
#> Number of estimated parameters: 390
#> Number of degrees of freedom: 3306
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 51027.08 51119.36 53450.93 53829.97 
#> 
#> Forecast errors:
#> ME: -9.782; MAE: 755.618; RMSE: 1041.161
#> sCE: -11.108%; sMAE: 2.554%; sMSE: 0.124%
#> MASE: 1.162; RMSSE: 1.103; rMAE: 0.113; rRMSE: 0.127
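If needed, the optimiser options mentioned above can be passed in the same call. This is only a hedged sketch - the specific algorithm and tolerance are arbitrary and used for illustration (they are forwarded to nloptr, as described above):

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=TRUE, h=336, holdout=TRUE,
                  B=testModel$B, algorithm="NLOPT_LN_SBPLX", xtol_rel=1e-8, maxeval=5000)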

Finally, we can speed up the process by using a different initialisation of the state vector, such as backcasting:

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=FALSE, h=336, holdout=TRUE,
                  initial="b")

The result might be less accurate than in the case of optimisation, but it should be faster.

In addition, you can specify some parts of the initial state vector or some parts of the persistence vector; here is an example:

testModel <- adam(forecast::taylor, "MMdM", lags=c(1,48,336), silent=TRUE, h=336, holdout=TRUE,
                  initial=list(level=30000, trend=1), persistence=list(beta=0.1))
testModel
#> Time elapsed: 41.44 seconds
#> Model estimated using adam() function: ETS(MMdM)[48, 336]
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 25738.4
#> Persistence vector g:
#>  alpha   beta gamma1 gamma2 
#> 0.9670 0.1000 0.0001 0.0330 
#> Damping parameter: 0.7634
#> Sample size: 3696
#> Number of estimated parameters: 387
#> Number of degrees of freedom: 3309
#> Number of provided parameters: 3
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 52250.81 52341.59 54656.01 55028.91 
#> 
#> Forecast errors:
#> ME: 171.838; MAE: 764.081; RMSE: 1039.778
#> sCE: 195.129%; sMAE: 2.582%; sMSE: 0.123%
#> MASE: 1.175; RMSSE: 1.102; rMAE: 0.114; rRMSE: 0.127

The function also handles intermittent data (data with zeroes) and data with missing values. This is partially covered in the vignette on the oes() function. Here is a simple example:

testModel <- adam(rpois(120,0.5), "MNN", silent=FALSE, h=12, holdout=TRUE,
                  occurrence="odds-ratio")
testModel
#> Time elapsed: 0.04 seconds
#> Model estimated using adam() function: iETS(MNN)
#> Occurrence model type: Odds ratio
#> Distribution assumed in the model: Mixture of Bernoulli and Inverse Gaussian
#> Loss function type: likelihood; Loss function value: -24.4472
#> Persistence vector g:
#> alpha 
#>     0 
#> 
#> Sample size: 108
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 103
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 109.8981 110.1289 123.3087 114.4847 
#> 
#> Forecast errors:
#> Bias: -47.26%; sMSE: 17.771%; rRMSE: 0.909; sPIS: 1284.304%; sCE: -222.302%

Finally, adam() is faster than the es() function, because its code is more efficient and it uses a different optimisation algorithm with more finely tuned parameters by default. Let's compare:

adamModel <- adam(M3[[2568]], "CCC")
esModel <- es(M3[[2568]], "CCC")
"adam:"
#> [1] "adam:"
adamModel
#> Time elapsed: 1.97 seconds
#> Model estimated: ETS(CCC)
#> Loss function type: likelihood
#> 
#> Number of models combined: 30
#> Sample size: 116
#> Average number of estimated parameters: 22.496
#> Average number of degrees of freedom: 93.504
#> 
#> Forecast errors:
#> ME: 626.704; MAE: 810.672; RMSE: 1029.509
#> sCE: 154.964%; sMAE: 11.136%; sMSE: 2%
#> MASE: 0.33; RMSSE: 0.325; rMAE: 0.358; rRMSE: 0.339
"es():"
#> [1] "es():"
esModel
#> Time elapsed: 4.02 seconds
#> Model estimated: ETS(CCC)
#> Initial values were optimised.
#> 
#> Loss function type: MSE
#> Error standard deviation: 415.2322
#> Sample size: 116
#> Information criteria:
#> (combined values)
#>      AIC     AICc      BIC     BICc 
#> 1763.958 1769.656 1807.938 1820.633 
#> 
#> Forecast errors:
#> MPE: 2.6%; sCE: 83.7%; Bias: 46%; MAPE: 6.7%
#> MASE: 0.284; sMAE: 9.6%; sMSE: 1.3%; rMAE: 0.308; rRMSE: 0.276

ADAM ARIMA

As mentioned above, ADAM does not only contain ETS; it also contains the ARIMA model, which is regulated via the orders parameter. If you want a pure ARIMA, you need to switch off ETS, which is done via model="NNN":

testModel <- adam(M3[[1234]], "NNN", silent=FALSE, orders=c(0,2,2))
testModel
#> Time elapsed: 0.18 seconds
#> Model estimated using adam() function: ARIMA(0,2,2)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 253.4134
#> ARMA parameters of the model:
#> MA:
#> theta1[1] theta2[1] 
#>   -0.9930   -0.0784 
#> 
#> Sample size: 45
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 40
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 516.8269 518.3654 525.8602 528.7884 
#> 
#> Forecast errors:
#> ME: -360.822; MAE: 360.822; RMSE: 408.083
#> sCE: -35.454%; sMAE: 4.432%; sMSE: 0.251%
#> MASE: 4.992; RMSSE: 4.558; rMAE: 4.1; rRMSE: 3.682

Given that both models are implemented in the same framework, they can be compared using information criteria.
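For example, a minimal sketch of such a comparison (the object names here are arbitrary):

adamETS <- adam(M3[[1234]], "AAN")
adamARIMA <- adam(M3[[1234]], "NNN", orders=c(0,2,2))
c(ETS=AICc(adamETS), ARIMA=AICc(adamARIMA))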

The functionality of ADAM ARIMA is similar to that of the msarima() function in the smooth package, although there are several differences.

First, changing the distribution parameter allows switching between additive / multiplicative models. For example, distribution="dlnorm" will create an ARIMA equivalent to the one applied to the logarithms of the data:

testModel <- adam(M3[[2568]], "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dlnorm")
testModel
#> Time elapsed: 0.43 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,2)[12]
#> Distribution assumed in the model: Log Normal
#> Loss function type: likelihood; Loss function value: 921.5619
#> ARMA parameters of the model:
#> AR:
#>  phi1[1] phi1[12] 
#>   0.0926   0.1027 
#> MA:
#>  theta1[1]  theta2[1] theta1[12] theta2[12] 
#>    -0.9155     0.2220    -0.1598    -0.0181 
#> 
#> Sample size: 116
#> Number of estimated parameters: 33
#> Number of degrees of freedom: 83
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1909.124 1936.490 1999.992 2065.035 
#> 
#> Forecast errors:
#> ME: 359.234; MAE: 611.446; RMSE: 716.375
#> sCE: 88.827%; sMAE: 8.399%; sMSE: 0.968%
#> MASE: 0.249; RMSSE: 0.226; rMAE: 0.27; rRMSE: 0.236

Second, it does not have an intercept. If you want to have one, you can do this by reintroducing the ETS component and imposing some restrictions:

testModel <- adam(M3[[2568]], "ANN", silent=FALSE, lags=c(1,12), persistence=0,
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dnorm")
testModel
#> Time elapsed: 0.38 seconds
#> Model estimated using adam() function: ETS(ANN)+SARIMA(1,1,2)[1](1,1,2)[12]
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 920.849
#> Persistence vector g:
#> alpha 
#>     0 
#> 
#> ARMA parameters of the model:
#> AR:
#>  phi1[1] phi1[12] 
#>   0.1597   0.0800 
#> MA:
#>  theta1[1]  theta2[1] theta1[12] theta2[12] 
#>    -0.9542     0.0936    -0.1334     0.0385 
#> 
#> Sample size: 116
#> Number of estimated parameters: 34
#> Number of degrees of freedom: 82
#> Number of provided parameters: 1
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1909.698 1939.081 2003.320 2073.157 
#> 
#> Forecast errors:
#> ME: 440.603; MAE: 667.885; RMSE: 787.571
#> sCE: 108.947%; sMAE: 9.175%; sMSE: 1.17%
#> MASE: 0.272; RMSSE: 0.249; rMAE: 0.295; rRMSE: 0.259

This way we get the global level, which acts as an intercept. The drift is not supported in the model either.

Third, you can specify the parameters of ARIMA via the arma parameter in the following manner:

testModel <- adam(M3[[2568]], "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)), distribution="dnorm",
                  arma=list(ar=c(0.1,0.1), ma=c(-0.96, 0.03, -0.12, 0.03)))
testModel
#> Time elapsed: 0.42 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,2)[12]
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 920.0852
#> ARMA parameters of the model:
#> AR:
#>  phi1[1] phi1[12] 
#>      0.1      0.1 
#> MA:
#>  theta1[1]  theta2[1] theta1[12] theta2[12] 
#>      -0.96       0.03      -0.12       0.03 
#> 
#> Sample size: 116
#> Number of estimated parameters: 27
#> Number of degrees of freedom: 89
#> Number of provided parameters: 6
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1894.170 1911.352 1968.517 2009.355 
#> 
#> Forecast errors:
#> ME: 435.692; MAE: 661.24; RMSE: 779.401
#> sCE: 107.733%; sMAE: 9.084%; sMSE: 1.146%
#> MASE: 0.269; RMSSE: 0.246; rMAE: 0.292; rRMSE: 0.257

Finally, the initial values of the states can also be provided, although getting the correct ones might be a challenging task (you also need to know how many of them to provide; checking testModel$initial might help):

testModel <- adam(M3[[2568]], "NNN", silent=FALSE, lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,0)), distribution="dnorm",
                  initial=list(arima=M3[[2568]]$x[1:24]))
testModel
#> Time elapsed: 0.3 seconds
#> Model estimated using adam() function: SARIMA(1,1,2)[1](1,1,0)[12]
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 919.7511
#> ARMA parameters of the model:
#> AR:
#>  phi1[1] phi1[12] 
#>   0.0817   0.0966 
#> MA:
#> theta1[1] theta2[1] 
#>   -0.9937    0.0823 
#> 
#> Sample size: 116
#> Number of estimated parameters: 31
#> Number of degrees of freedom: 85
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1901.502 1925.121 1986.863 2043.001 
#> 
#> Forecast errors:
#> ME: 428.915; MAE: 641.698; RMSE: 760.638
#> sCE: 106.057%; sMAE: 8.815%; sMSE: 1.092%
#> MASE: 0.261; RMSSE: 0.24; rMAE: 0.283; rRMSE: 0.251

If you work with the ADAM ARIMA model, then there is no such thing as "usual" bounds for the parameters, so the function will use bounds="admissible", checking the AR / MA polynomials in order to make sure that the model is stationary and invertible (aka stable).
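This is done automatically for the pure ARIMA, but the same behaviour can be requested explicitly if needed; a brief sketch:

testModel <- adam(M3[[1234]], "NNN", orders=c(0,2,2), bounds="admissible")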

Similarly to ETS, you can use different distributions and losses for the estimation. Note that the order selection for ARIMA is done in the auto.adam() function, not in adam()!

Finally, ARIMA is typically slower than ETS, mainly because maxeval is set by default to be at least 1000 in this case. But this is inevitable due to the increased complexity of the model - otherwise it won't be estimated properly. If you want to speed things up, use initial="backcasting" and reduce the number of iterations.
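A hedged sketch of such a speed-up on the series used above (the value of maxeval is arbitrary and only used for illustration):

testModel <- adam(M3[[2568]], "NNN", lags=c(1,12),
                  orders=list(ar=c(1,1),i=c(1,1),ma=c(2,2)),
                  initial="backcasting", maxeval=500)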

ADAM ETSX / ARIMAX / ETSX+ARIMA

Another important feature of ADAM is the introduction of explanatory variables. Unlike es(), adam() expects a matrix for data and can work with a formula. If the latter is not provided, then it will use all explanatory variables. Here is a brief example:

BJData <- cbind(BJsales,BJsales.lead)
testModel <- adam(BJData, "AAN", h=18, silent=FALSE)

If you work with data.frame or similar structures, then you can use them directly: ADAM will extract the response variable, either assuming that it is in the first column or based on the provided formula (if you specify one via the formula parameter). Here is an example, where we create a matrix with lags and leads of an explanatory variable:

BJData <- cbind(as.data.frame(BJsales),as.data.frame(xregExpander(BJsales.lead,c(-7:7))))
colnames(BJData)[1] <- "y"
testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, formula=y~xLag1+xLag2+xLag3)
testModel
#> Time elapsed: 0.06 seconds
#> Model estimated using adam() function: ETSX(ANN)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 206.7573
#> Persistence vector g (excluding xreg):
#>  alpha 
#> 0.9993 
#> 
#> Sample size: 132
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 126
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 425.5146 426.1866 442.8114 444.4520 
#> 
#> Forecast errors:
#> ME: 0.644; MAE: 1.418; RMSE: 1.827
#> sCE: 5.13%; sMAE: 0.628%; sMSE: 0.007%
#> MASE: 1.163; RMSSE: 1.169; rMAE: 0.633; rRMSE: 0.728

Similarly to es(), there is support for variables selection, but via the regressors parameter instead of xregDo, which will then use the stepwise() function from the greybox package on the residuals of the model:

testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, regressors="select")

The same functionality is supported with ARIMA, so you can have, for example, ARIMAX(0,1,1), which is equivalent to ETSX(A,N,N):

testModel <- adam(BJData, "NNN", h=18, silent=FALSE, holdout=TRUE, regressors="select", orders=c(0,1,1))

The two models might differ because they have different initialisations in the optimiser. It is possible to make them identical if the number of iterations is increased and the initial parameters are the same. Here is an example of what happens when the two models have exactly the same parameters:

BJData <- BJData[,c("y",names(testModel$initial$xreg))];
testModel <- adam(BJData, "NNN", h=18, silent=TRUE, holdout=TRUE, orders=c(0,1,1),
                  initial=testModel$initial, arma=testModel$arma)
testModel
#> Time elapsed: 0 seconds
#> Model estimated using adam() function: ARIMAX(0,1,1)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 71.0402
#> ARMA parameters of the model:
#> MA:
#> theta1[1] 
#>    0.2447 
#> 
#> Sample size: 132
#> Number of estimated parameters: 1
#> Number of degrees of freedom: 131
#> Number of provided parameters: 7
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 144.0804 144.1112 146.9632 147.0384 
#> 
#> Forecast errors:
#> ME: 0.543; MAE: 0.579; RMSE: 0.753
#> sCE: 4.325%; sMAE: 0.256%; sMSE: 0.001%
#> MASE: 0.475; RMSSE: 0.482; rMAE: 0.259; rRMSE: 0.3
names(testModel$initial)[1] <- "level"
testModel2 <- adam(BJData, "ANN", h=18, silent=TRUE, holdout=TRUE,
                   initial=testModel$initial, persistence=testModel$arma$ma+1)
testModel2
#> Time elapsed: 0 seconds
#> Model estimated using adam() function: ETSX(ANN)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 1e+300
#> Persistence vector g (excluding xreg):
#>  alpha 
#> 1.2447 
#> 
#> Sample size: 132
#> Number of estimated parameters: 1
#> Number of degrees of freedom: 131
#> Number of provided parameters: 7
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 144.0804 144.1112 146.9632 147.0384 
#> 
#> Forecast errors:
#> ME: 0.543; MAE: 0.579; RMSE: 0.753
#> sCE: 4.325%; sMAE: 0.256%; sMSE: 0.001%
#> MASE: 0.475; RMSSE: 0.482; rMAE: 0.259; rRMSE: 0.3

Another feature of ADAM is time-varying parameters in the SSOE framework, which can be switched on via regressors="adapt":

testModel <- adam(BJData, "ANN", h=18, silent=FALSE, holdout=TRUE, regressors="adapt")
testModel$persistence
#>        alpha       delta1       delta2       delta3       delta4       delta5 
#> 7.658422e-01 3.055245e-04 7.349038e-06 1.069980e-01 1.115520e-01 3.778273e-02

Note that the default number of iterations might not be sufficient to get close to the optimum of the function, so setting maxeval to something bigger might help. If you want to explore why the optimisation stopped, you can provide the print_level=41 parameter to the function, and it will print out the report from the optimiser. In the end, the default parameters are tuned to give a reasonable solution, but given the complexity of the model, they do not guarantee the best one all the time.
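A brief sketch combining both options (the values here are arbitrary and only used for illustration):

testModel <- adam(BJData, "ANN", h=18, silent=TRUE, holdout=TRUE, regressors="adapt",
                  maxeval=10000, print_level=41)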

Finally, you can produce a mixture of ETS, ARIMA and regression by using the respective parameters, like this:

testModel <- adam(BJData, "AAN", h=18, silent=FALSE, holdout=TRUE, orders=c(1,0,1))
summary(testModel)
#> Model estimated using adam() function: ETSX(AAN)+ARIMA(1,0,1)
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Loss function type: likelihood; Loss function value: 97.2232
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> alpha         0.9995     0.1509     0.7008      1.0000
#> beta          0.0000     0.0153     0.0000      0.0303
#> phi1[1]       0.3262     0.1018     0.1247      0.5275
#> theta1[1]    -0.1739     0.1669    -0.3362      0.1559
#> level        32.1939    13.5465     5.3706     58.9680
#> trend         0.0055     0.0581    -0.1096      0.1203
#> ARIMAState1   0.0103     1.6070    -3.1718      3.1865
#> xLag3         5.0720     0.2038     4.6685      5.4748
#> xLag7         1.5185     0.2714     0.9810      2.0550
#> xLag4         4.4649     0.3578     3.7565      5.1720
#> xLag6         2.6176     0.3969     1.8316      3.4021
#> xLag5         3.2385     0.3853     2.4756      3.9999
#> 
#> Sample size: 132
#> Number of estimated parameters: 13
#> Number of degrees of freedom: 119
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 220.4464 223.5312 257.9228 265.4539

This might be handy when you explore high frequency data, want to add calendar events, apply ETS and add AR/MA errors to it.

Auto ADAM

While the original adam() function allows selecting ETS components and explanatory variables, it does not allow selecting the most suitable distribution and / or ARIMA components. This is what the auto.adam() function is for.

In order to select the most appropriate distribution, you need to provide a vector of the distributions that you want to check:

testModel <- auto.adam(M3[[1234]], "XXX", silent=FALSE,
                       distribution=c("dnorm","dlaplace","ds"))
#> Evaluating models with different distributions... dnorm , dlaplace , ds , Done!
testModel
#> Time elapsed: 0.28 seconds
#> Model estimated using adam() function: ETS(AAN)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 255.2972
#> Persistence vector g:
#>  alpha   beta 
#> 0.6828 0.2276 
#> 
#> Sample size: 45
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 40
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 520.5943 522.1328 529.6277 532.5559 
#> 
#> Forecast errors:
#> ME: -348.216; MAE: 348.216; RMSE: 396.392
#> sCE: -34.215%; sMAE: 4.277%; sMSE: 0.237%
#> MASE: 4.818; RMSSE: 4.427; rMAE: 3.957; rRMSE: 3.576

This process can also be done in parallel, either on the automatically selected number of cores (e.g. parallel=TRUE) or on the number specified by the user (e.g. parallel=4):

testModel <- auto.adam(M3[[1234]], "ZZZ", silent=FALSE, parallel=TRUE)

If you want to add ARIMA or regression components, you can do it in exactly the same way as for the adam() function. Here is an example of ETS+ARIMA:

testModel <- auto.adam(M3[[1234]], "AAN", orders=list(ar=2,i=2,ma=2), silent=TRUE,
                       distribution=c("dnorm","dlaplace","ds","dgnorm"))
testModel
#> Time elapsed: 0.43 seconds
#> Model estimated using adam() function: ETS(AAN)+ARIMA(2,2,2)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 255.3152
#> Persistence vector g:
#> alpha  beta 
#> 2e-04 0e+00 
#> 
#> ARMA parameters of the model:
#> AR:
#> phi1[1] phi2[1] 
#> -0.6085  0.1040 
#> MA:
#> theta1[1] theta2[1] 
#>   -0.1176   -0.9393 
#> 
#> Sample size: 45
#> Number of estimated parameters: 13
#> Number of degrees of freedom: 32
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 536.6304 548.3724 560.1170 582.4658 
#> 
#> Forecast errors:
#> ME: -312.957; MAE: 312.957; RMSE: 359.974
#> sCE: -30.75%; sMAE: 3.844%; sMSE: 0.195%
#> MASE: 4.33; RMSSE: 4.02; rMAE: 3.556; rRMSE: 3.248

However, this way the function will just use ARIMA(2,2,2) and fit it together with ETS. If you want it to select the most appropriate ARIMA orders up to the provided maximum (e.g. up to AR(2), I(2) and MA(2)), you need to add select=TRUE to the list in orders:

testModel <- auto.adam(M3[[1234]], "XXN", orders=list(ar=2,i=2,ma=2,select=TRUE),
                       distribution="default", silent=FALSE)
#> Evaluating models with different distributions... default ,  Selecting ARIMA orders...    5 %38 %71 %  100 %. The best ARIMA is selected.
#> Done!
testModel
#> Time elapsed: 0.11 seconds
#> Model estimated using adam() function: ETS(AAN)
#> Distribution assumed in the model: Normal
#> Loss function type: likelihood; Loss function value: 255.2972
#> Persistence vector g:
#>  alpha   beta 
#> 0.6828 0.2276 
#> 
#> Sample size: 45
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 40
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 520.5943 522.1328 529.6277 532.5559 
#> 
#> Forecast errors:
#> ME: -348.216; MAE: 348.216; RMSE: 396.392
#> sCE: -34.215%; sMAE: 4.277%; sMSE: 0.237%
#> MASE: 4.818; RMSSE: 4.427; rMAE: 3.957; rRMSE: 3.576

Knowing how to work with adam(), you can use similar principles when dealing with auto.adam(). Just keep in mind that providing persistence, phi, initial, arma and B won't work, because this contradicts the idea of model selection.

Finally, there is also a mechanism of automatic outlier detection, which extracts residuals from the best model, flags observations that lie outside the in-sample prediction interval of the width defined by level, and then refits auto.adam() with dummy variables for the outliers. Here is how it works:

testModel <- auto.adam(Mcomp::M3[[2568]], "PPP", silent=FALSE, outliers="use",
                       distribution="default")
#> Evaluating models with different distributions... default , 
#> Dealing with outliers...
testModel
#> Time elapsed: 1.01 seconds
#> Model estimated using adam() function: ETSX(MMdM)
#> Distribution assumed in the model: Inverse Gaussian
#> Loss function type: likelihood; Loss function value: 854.3258
#> Persistence vector g (excluding xreg):
#>  alpha   beta  gamma 
#> 0.0233 0.0233 0.0196 
#> Damping parameter: 0.9529
#> Sample size: 116
#> Number of estimated parameters: 22
#> Number of degrees of freedom: 94
#> Information criteria:
#>      AIC     AICc      BIC     BICc 
#> 1752.652 1763.533 1813.231 1839.094 
#> 
#> Forecast errors:
#> ME: 752.6; MAE: 867.837; RMSE: 1109.625
#> sCE: 186.094%; sMAE: 11.922%; sMSE: 2.323%
#> MASE: 0.353; RMSSE: 0.35; rMAE: 0.383; rRMSE: 0.365

If you specify outliers="select", the function will create leads and lags of order 1 of the outliers and then select the most appropriate ones via the regressors parameter of adam.
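A brief hedged sketch of that option on the same series as above:

testModel <- auto.adam(Mcomp::M3[[2568]], "PPP", silent=TRUE, outliers="select",
                       distribution="default")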

If you want to know more about ADAM, you are welcome to visit the online textbook (this is a work in progress at the moment).

Hyndman, Rob J, Anne B Koehler, J Keith Ord, and Ralph D Snyder. 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer Berlin Heidelberg.