14  When things go wrong

The estimation in the previous chapter went through without any problems, but we are not always so lucky. Since we are maximising a non-linear function, many things can go wrong, and the problems are often hard to diagnose because the code fails with an incomprehensible error message. There can be many plausible explanations: there might be errors in the data, the algorithms might be buggy, or your own code might have mistakes.

These problems can arise because of the trade-off between safe code and speed. If we want to ensure that nothing goes wrong, we need many checks and robust algorithms, which can be very slow. Consequently, we opt for algorithms that are fast and usually work, but sometimes fail.

14.1 Libraries

library(reshape2)                         # data reshaping (melt/dcast)
source("common/functions.r",chdir=TRUE)   # provides ProcessRawData() among others
library(rugarch)                          # GARCH estimation

14.2 Data

The data we use are returns on the S&P500 index, and for convenience we put them into the variable y.

data=ProcessRawData()
y=data$sp500$y   # S&P500 returns
y=y-mean(y)      # de-mean the whole sample

14.2.1 Define rugarch specs

# GARCH(1,1) with normal innovations
spec.g = ugarchspec(
 variance.model = list(garchOrder = c(1,1)),
 mean.model = list(armaOrder = c(0,0), include.mean = FALSE)
)
# GARCH(1,1) with Student-t innovations (the tGARCH model)
spec.t = ugarchspec(
 variance.model = list(garchOrder = c(1,1)),
 mean.model = list(armaOrder = c(0,0), include.mean = FALSE),
 distribution.model = "std"
)

14.3 Failure

When I tried to estimate the tGARCH model over a particular 2,000-day estimation window, y[720:2719], the estimation failed, and other estimation windows failed as well. This happens on the computer I am using now, but the estimation might succeed on a different computer, especially one with a different version of R or of the key libraries.

Here is what we do:

Res = ugarchfit(spec = spec.t, data = y[720:2719])


When we run this, we get the following error message:

Error in robustvcv(fun = f, pars = ipars[estidx, 1], nlag = nlag, hess = fit$hessian, : object 'B' not found
Traceback:

1. ugarchfit(spec = spec.t, data = data)
2. ugarchfit(spec = spec.t, data = data)
3. .sgarchfit(spec = spec, data = data, out.sample = out.sample, 
 . solver = solver, solver.control = solver.control, fit.control = fit.control, 
 . numderiv.control = default.numd)
4. .makefitmodel(garchmodel = "sGARCH", f = .sgarchLLH, T = T, m = m, 
 . timer = timer, convergence = convergence, message = sol$message, 
 . hess = hess, arglist = arglist, numderiv.control = numderiv.control)
5. robustvcv(fun = f, pars = ipars[estidx, 1], nlag = nlag, hess = fit$hessian, 
 . n = T, arglist = arglist)

14.3.1 Why does it fail?

This error message might seem incomprehensible, but it contains some clues. The failure happens in something called robustvcv, which is not something we need for our purpose, a one-day-ahead volatility forecast; it is only required to get the confidence bounds of the parameters. In particular, vcv refers to the variance-covariance matrix and robust to calculating it in a way that is not overly sensitive to distributional assumptions.

While I can only guess, the error has to do with some overflow of numerical values. Some returns are very small, while others are quite large, and when we square them, the differences are amplified. The covariance matrix is obtained from the inverse Hessian, the matrix of second derivatives of the likelihood function at its maximum. It is tricky to estimate this Hessian, especially if we want it done quickly.

It comes down to limited numerical precision. Most calculations are done with what is known as double precision floating point, or float64, a number that uses 64 bits of computer memory. A floating point number looks like \(0.0234=2.34\times 10^{-2}\), written in code as 2.34e-2.

The smallest positive (normalised) number is \(2^{-1022}\) and the largest is just under \(2^{1024}\). There are two parts to a floating point number: the exponent (or power) and the mantissa, the precision bits of the number in front of the power. Fifty-two bits represent the mantissa.
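We can inspect these limits directly in R through the built-in .Machine variable:

.Machine$double.xmin  # smallest positive normalised double, about 2.2e-308
.Machine$double.xmax  # largest double, about 1.8e308
.Machine$double.eps   # spacing just above 1, i.e. 2^-52, about 2.2e-16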

This means that if we do calculations with numbers that are very different in size, we can end up with a very incorrect answer. For example, if we sum up many numbers of very different magnitudes, we get a different answer depending on whether we start adding from the smallest or from the largest. Also, as we will see in Chapter 18 on backtests, we can get a different answer depending on whether we take the log of a product or the sum of the logs.
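A small illustration of both effects, with made-up numbers:

# Order of summation matters: adding 1e-16 to 1 is lost every time,
# but summing the small numbers first lets them accumulate
x = c(1, rep(1e-16, 1e6))
print(Reduce(`+`, x), digits = 17)       # largest first: exactly 1
print(Reduce(`+`, rev(x)), digits = 17)  # smallest first: about 1.0000000001

# Log of a product versus sum of logs: the product underflows to zero
z = rep(1e-5, 1000)
log(prod(z))  # -Inf, since prod(z) = 1e-5000 underflows to 0
sum(log(z))   # about -11513, the correct answer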

I think this is what happened here: some sort of numerical precision problem. There are some ways to solve it, some easy, others a bit more complicated.

14.3.1.1 solver = "hybrid" does not help

We could try

Res = ugarchfit(spec = spec.t, data = y[720:2719], solver = "hybrid")

But that also fails: the hybrid solver simply tries the other solvers in turn, and the underlying problem is the scaling of the data, not the choice of solver.

14.4 Re-scaling might work

The first thing to try is rescaling the data.

14.4.1 De-meaning

When we imported the data, we removed the mean from all the observations. If, however, we de-mean the estimation window, the calculation succeeds.

data=y[720:2719]
data=data-mean(data)  # de-mean the estimation window
Res = ugarchfit(spec = spec.t, data = data)
coef(Res)
       omega       alpha1        beta1        shape 
1.363094e-06 1.014802e-01 8.970168e-01 5.428948e+00 

14.4.2 Normalising variance

Normalising the variance to one also works, but then we need to re-scale \(\hat{\omega}\) back, since the unconditional variance \[ \sigma^2 = \frac{\omega}{1-\alpha-\beta} \] scales with the variance of the data. So if we estimate on data/sd(data), we recover the original estimate by computing \[ \hat{\omega}\times \mathrm{var}(y) \]

data=y[720:2719]
scale=1/sd(data)
data=data*scale
Res = ugarchfit(spec = spec.t, data = data)
coef(Res)
      omega      alpha1       beta1       shape 
0.006938824 0.102216377 0.896542511 5.366219367 
omega=coef(Res)[1] / scale^2  # undo the scaling: equivalent to multiplying by var(data)
omega
       omega 
1.366334e-06 

14.4.3 Do both

Usually, it is best to do both: de-mean the estimation window and normalise its variance.
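Putting the two steps together, this simply combines the code from the previous two subsections:

data = y[720:2719]
data = data - mean(data)        # de-mean the estimation window
scale = 1/sd(data)
data = data * scale             # normalise the variance to one
Res = ugarchfit(spec = spec.t, data = data)
omega = coef(Res)[1] / scale^2  # re-scale omega back

As an aside, rugarch may also be able to do the scaling internally; if the installed version supports it, passing fit.control = list(scale = 1) to ugarchfit should achieve much the same thing.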

14.4.4 Why did re-scaling work?

De-meaning and normalising work because they make most of the numbers be around one, so when we square them, the difference between the big and the small numbers does not become very large, and we avoid the numerical overflow we discussed above.
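A tiny made-up illustration of how squaring widens the spread between observations:

r = c(2e-6, 0.05)    # a very small and a fairly large return
max(r)/min(r)        # spread of 25,000
max(r^2)/min(r^2)    # after squaring, the spread is 625,000,000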

14.5 Detecting failures — try()

Suppose the rescaling doesn’t work and the estimation still fails. If we are doing a single estimation, it is easy to spot the problem and try something else. But what if we are backtesting and want to generate a large number of one-day risk forecasts? Then it becomes annoying to have the code fail.

Here, an R function called try() can be particularly useful. try() is designed to catch errors: you try a calculation, and you either get an answer back or an object telling you the calculation failed.

Suppose you want to take the logarithm of x with log(x). Then x needs to be a positive number, and if you instead pass a string, the code crashes.

log("string")
Error in log("string"): non-numeric argument to mathematical function

But what if you do a try?

res1=try(log(1))
res2=try(log("string"),silent = TRUE)

Both ran without error, but the second, of course, failed. We can see that by

res1
[1] 0
class(res1)
[1] "numeric"
res2
[1] "Error in log(\"string\") : non-numeric argument to mathematical function\n"
attr(,"class")
[1] "try-error"
attr(,"condition")
<simpleError in log("string"): non-numeric argument to mathematical function>
class(res2)
[1] "try-error"

And detect it by

class(res2)=="try-error"
[1] TRUE

So after running try() we check the output, and if its class is "try-error", we know the calculation failed and we should deal with that.
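In a backtest, that pattern might look like the following sketch, where the window length WE, the storage vector vol, and the use of ugarchforecast() for the one-day-ahead forecast are illustrative choices rather than code from earlier chapters:

# Sketch: one-day-ahead volatility forecasts that survive failed estimations
WE = 2000                 # estimation window length (illustrative)
vol = rep(NA, length(y))  # storage for the forecasts
for (t in (WE + 1):length(y)) {
  window = y[(t - WE):(t - 1)]
  window = window - mean(window)  # de-mean, as in Section 14.4
  fit = try(ugarchfit(spec = spec.t, data = window), silent = TRUE)
  if (class(fit)[1] != "try-error") {
    fc = ugarchforecast(fit, n.ahead = 1)
    vol[t] = as.numeric(sigma(fc))  # one-day-ahead volatility forecast
  }                                 # on failure, vol[t] stays NA
}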

14.6 Exercise

Apart from rugarch, which we used in this chapter, there are two popular alternatives in R for estimating GARCH models: fGarch and tseries. Fit a GARCH(1,1) model to the S&P500 return series (y) using all three packages. Are the results identical? If not, what can account for the difference? Discuss.

# Answer
library(fGarch)
library(tseries)

# GARCH(1,1), zero mean, fitted with each of the three packages
spec1 = ugarchspec(
 variance.model = list(garchOrder = c(1,1)),
 mean.model = list(armaOrder = c(0,0), include.mean = FALSE))
ru.fit = ugarchfit(spec = spec1, data = y)
fg.fit = garchFit(y ~ garch(1,1), data = y, include.mean = FALSE)
ts.fit = garch(y, order = c(1,1))

# Collect the coefficients and log-likelihoods in one table
rst = rbind(c(as.numeric(coef(ru.fit)), likelihood(ru.fit)),
 c(as.numeric(coef(fg.fit)), as.numeric(-fg.fit@fit$llh)),
 c(as.numeric(ts.fit$coef), as.numeric(logLik(ts.fit))))
colnames(rst)=c("Omega", "Alpha", "Beta", "Log Likelihood")
rownames(rst)=c("rugarch", "fGarch", "tseries")
rst

# rugarch and fGarch give almost identical results. The log-likelihood of the model estimated by tseries is lower. Possible explanations are:

# Different solvers. rugarch uses solnp by default, but it also provides several alternatives, such as nlminb, lbfgs, etc. fGarch uses nlminb by default, while tseries uses a quasi-Newton optimiser.
# Variance initialisation. The most common choice is the unconditional variance, but the tseries documentation does not clearly state which value it uses.
# Other factors, such as the initial parameter values, the number of iterations, etc.

# Previous studies show that the performance of tseries is inferior to the other two packages. For more discussion on this topic, see "On The Accuracy of GARCH Estimation in R Packages" by Hill and McCullough (2019).