In my previous article, *GARCH(p,q) Model and Exit Strategy for Intraday Algorithmic Traders*, we described the essentials of the GARCH(p,q) model and provided an exemplary implementation in Matlab. In general, we apply a GARCH model in order to estimate the volatility one time step ahead, where:

$$

\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2

$$ based on the most recent updates of $r$ and $\sigma$, where $r_{t-1} = \ln({P_{t-1}}/{P_{t-2}})$ and $P$ corresponds to the asset price. For any financial time-series $\{r_j\}$, the estimation of the $(\omega,\alpha,\beta)$ parameters can be conducted using the maximum likelihood method. The latter is an iterative process that searches for the parameter set maximising the sum:

$$

\sum_{i=3}^{N} \left[ -\ln(\sigma_i^2) - \frac{r_i^2}{\sigma_i^2} \right]

$$ where $N$ denotes the length of the return series $\{r_j\}$ ($j=2,…,N$) under study.
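For completeness, this objective is the Gaussian log-likelihood of zero-mean, conditionally normal returns with the constant terms and the overall factor of $\tfrac{1}{2}$ dropped, since neither affects the location of the maximum:

$$

\ln L = \sum_{i=3}^{N} \left[ -\tfrac{1}{2}\ln(2\pi) - \tfrac{1}{2}\ln(\sigma_i^2) - \frac{r_i^2}{2\sigma_i^2} \right]

$$

so maximising the simplified sum above is equivalent to maximising $\ln L$ itself. The sum starts at $i=3$ because the variance recursion needs to be seeded with an initial $\sigma^2$ estimate.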

Let’s assume we have a test array of input data, $\{r_j\}$, stored in the Python variable *r*, and let’s write a function, *GARCH11_logL*, to be used in the optimisation process. It takes two input parameters: *param*, a 3-element array with trial values corresponding to $(\omega,\alpha,\beta)$, and *r*, the return series itself.

```python
# GARCH(1,1) Model in Python
# uses maximum likelihood method to estimate (omega,alpha,beta)
# (c) 2014 QuantAtRisk, by Pawel Lachowicz; tested with Python 3.5 only

import numpy as np
from scipy import optimize
import statistics as st

r = np.array([0.945532630498276, 0.614772790142383, 0.834417758890680,
              0.862344782601800, 0.555858715401929, 0.641058419842652,
              0.720118656981704, 0.643948007732270, 0.138790608092353,
              0.279264178231250, 0.993836948076485, 0.531967023876420,
              0.964455754192395, 0.873171802181126, 0.937828816793698])

def GARCH11_logL(param, r):
    omega, alpha, beta = param
    n = len(r)
    s = np.ones(n)*0.01
    s[2] = st.variance(r[0:3])
    for i in range(3, n):
        s[i] = omega + alpha*r[i-1]**2 + beta*(s[i-1])  # GARCH(1,1) model
    logL = -((-np.log(s) - r**2/s).sum())
    return logL
```

At this point it is important to note that in the return statement of *GARCH11_logL* we multiply the sum by $-1$ in order to find the maximal value of the expression. Why? Because *optimize.fmin* from SciPy’s *optimize* module performs minimisation, so minimising the negative of the log-likelihood is equivalent to maximising the log-likelihood itself. Therefore, we seek the best estimates as follows:

```python
o = optimize.fmin(GARCH11_logL, np.array([.1, .1, .1]), args=(r,), full_output=1)
```
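The *full_output=1* flag makes *optimize.fmin* return a tuple (best parameters, minimal objective value, number of iterations, number of function evaluations, warning flag) rather than just the parameter vector, which is why *o[0]* is used to extract the estimates. A minimal illustration of unpacking that tuple on a toy quadratic (the quadratic here is just an example, not part of the GARCH code):

```python
import numpy as np
from scipy import optimize

# fmin with full_output=1 returns (xopt, fopt, iter, funcalls, warnflag);
# demonstrated on a simple quadratic whose minimum sits at x = 2
o = optimize.fmin(lambda x: (x - 2.0)**2, 0.0, full_output=1, disp=0)
xopt, fopt, niter, funcalls, warnflag = o
print(xopt)      # approximately [2.]
print(warnflag)  # 0 on successful convergence
```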

and we display the results,

```python
R = np.abs(o[0])
print()
print("omega = %.6f\nbeta = %.6f\nalpha = %.6f\n" % (R[0], R[2], R[1]))
```

which, for the array *r* as given in the code, returns the following results:

```
Optimization terminated successfully.
         Current function value: 14.705098
         Iterations: 88
         Function evaluations: 162

omega = 0.788244
beta = 0.498230
alpha = 0.033886
```
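From these estimates one can also derive the long-run (unconditional) variance, $V_L = \omega/(1-\alpha-\beta)$, the same quantity the Matlab code in the next section computes. A short sketch using the Python values above:

```python
import numpy as np

# Python estimates of (omega, alpha, beta) reported above
omega, alpha, beta = 0.788244, 0.033886, 0.498230

gamma = 1.0 - alpha - beta   # complement of the persistence alpha + beta
VL = omega / gamma           # long-run (unconditional) variance
volL = np.sqrt(VL)           # long-run volatility
print(round(VL, 4), round(volL, 4))  # 1.6847 1.298
```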

**Python vs. Matlab Solution**

Programming requires caution. It is always good practice to test the outcome of one algorithm against alternative solutions. Let’s run the GARCH(1,1) model estimation for the same input array and compare the Python and Matlab results:

```matlab
% GARCH(1,1) Model in Matlab 2013a
% (c) 2014 QuantAtRisk, by Pawel Lachowicz

clear all; close all; clc;

r = [0.945532630498276, ...
     0.614772790142383, ...
     0.834417758890680, ...
     0.862344782601800, ...
     0.555858715401929, ...
     0.641058419842652, ...
     0.720118656981704, ...
     0.643948007732270, ...
     0.138790608092353, ...
     0.279264178231250, ...
     0.993836948076485, ...
     0.531967023876420, ...
     0.964455754192395, ...
     0.873171802181126, ...
     0.937828816793698]';

% GARCH(p,q) parameter estimation
model = garch(1,1)                        % define model
[fit,VarCov,LogL,Par] = estimate(model,r);
% extract model parameters
parC = Par.X(1);                          % omega
parG = Par.X(2);                          % beta  (GARCH)
parA = Par.X(3);                          % alpha (ARCH)
% estimate unconditional volatility
gamma = 1 - parA - parG;
VL = parC/gamma;
volL = sqrt(VL);
% redefine model with estimated parameters
model = garch('Constant',parC,'GARCH',parG,'ARCH',parA)
```

which returns:

```
model =

    GARCH(1,1) Conditional Variance Model:
    --------------------------------------
    Distribution: Name = 'Gaussian'
               P: 1
               Q: 1
        Constant: NaN
           GARCH: {NaN} at Lags [1]
            ARCH: {NaN} at Lags [1]

____________________________________________________________
   Diagnostic Information

Number of variables: 3

Functions
Objective:  @(X)OBJ.nLogLikeGaussian(X,V,E,Lags,1,maxPQ,T,nan,trapValue)
Gradient:   finite-differencing
Hessian:    finite-differencing (or Quasi-Newton)

Constraints
Nonlinear constraints:                      do not exist

Number of linear inequality constraints:    1
Number of linear equality constraints:      0
Number of lower bound constraints:          3
Number of upper bound constraints:          3

Algorithm selected
   sequential quadratic programming
____________________________________________________________
   End diagnostic information

                                                         Norm of      First-order
 Iter F-count           f(x)  Feasibility   Steplength      step       optimality
    0       4   1.748188e+01    0.000e+00                               5.758e+01
    1      27   1.723863e+01    0.000e+00    1.140e-03    6.565e-02     1.477e+01
    2      31   1.688626e+01    0.000e+00    1.000e+00    9.996e-01     1.510e+00
    3      35   1.688234e+01    0.000e+00    1.000e+00    4.099e-02     1.402e+00
    4      39   1.686305e+01    0.000e+00    1.000e+00    1.440e-01     8.889e-01
    5      44   1.685246e+01    0.000e+00    7.000e-01    2.379e-01     5.088e-01
    6      48   1.684889e+01    0.000e+00    1.000e+00    9.620e-02     1.379e-01
    7      52   1.684835e+01    0.000e+00    1.000e+00    2.651e-02     2.257e-02
    8      56   1.684832e+01    0.000e+00    1.000e+00    8.389e-03     7.046e-02
    9      60   1.684831e+01    0.000e+00    1.000e+00    1.953e-03     7.457e-02
   10      64   1.684825e+01    0.000e+00    1.000e+00    7.888e-03     7.738e-02
   11      68   1.684794e+01    0.000e+00    1.000e+00    3.692e-02     7.324e-02
   12      72   1.684765e+01    0.000e+00    1.000e+00    1.615e-01     5.862e-02
   13      76   1.684745e+01    0.000e+00    1.000e+00    7.609e-02     8.429e-03
   14      80   1.684740e+01    0.000e+00    1.000e+00    2.368e-02     4.072e-03
   15      84   1.684739e+01    0.000e+00    1.000e+00    1.103e-02     3.142e-03
   16      88   1.684739e+01    0.000e+00    1.000e+00    1.183e-03     2.716e-04
   17      92   1.684739e+01    0.000e+00    1.000e+00    9.913e-05     1.378e-04
   18      96   1.684739e+01    0.000e+00    1.000e+00    6.270e-05     2.146e-06
   19      97   1.684739e+01    0.000e+00    7.000e-01    4.327e-07     2.146e-06

Local minimum possible. Constraints satisfied.

fmincon stopped because the size of the current step is less than
the default value of the step size tolerance and constraints are
satisfied to within the selected value of the constraint tolerance.

<stopping criteria details>

   GARCH(1,1) Conditional Variance Model:
   ----------------------------------------
   Conditional Probability Distribution: Gaussian

                                  Standard          t
     Parameter       Value          Error       Statistic
    -----------   -----------   ------------   -----------
     Constant       0.278061      26.3774       0.0105417
     GARCH{1}       0.457286      49.4915       0.0092397
     ARCH{1}       0.0328433      1.65576       0.0198358

model =

    GARCH(1,1) Conditional Variance Model:
    --------------------------------------
    Distribution: Name = 'Gaussian'
               P: 1
               Q: 1
        Constant: 0.278061
           GARCH: {0.457286} at Lags [1]
            ARCH: {0.0328433} at Lags [1]
```

i.e.

$$

(\omega,\beta,\alpha)_{\rm Matlab} = (0.278061,0.457286,0.0328433) \ .

$$ This differs slightly from the Python solution, which was

$$

(\omega,\beta,\alpha)_{\rm Py} =(0.788244,0.498230,0.033886) \ .

$$ At this stage it is difficult to assess which solution is “better”. The two algorithms and the methodologies they apply may simply differ, as is usually the case. Having that in mind, further extensive tests are required, for example, of the dependence of the Python solution on the trial $(\omega,\alpha,\beta)$ values passed to *optimize.fmin*.
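One such test can be sketched right away: rerun the estimation from a few different starting points and with an alternative optimiser. A minimal sketch, re-using the data and the log-likelihood function from the Python code above (the starting points and the bound values are arbitrary picks for illustration, and the sample variance here is computed with NumPy instead of the *statistics* module):

```python
import numpy as np
from scipy import optimize

r = np.array([0.945532630498276, 0.614772790142383, 0.834417758890680,
              0.862344782601800, 0.555858715401929, 0.641058419842652,
              0.720118656981704, 0.643948007732270, 0.138790608092353,
              0.279264178231250, 0.993836948076485, 0.531967023876420,
              0.964455754192395, 0.873171802181126, 0.937828816793698])

def GARCH11_logL(param, r):
    # same objective as in the article: negative log-likelihood of GARCH(1,1)
    omega, alpha, beta = param
    n = len(r)
    s = np.ones(n) * 0.01
    s[2] = np.var(r[0:3], ddof=1)   # seed the recursion with a sample variance
    for i in range(3, n):
        s[i] = omega + alpha * r[i-1]**2 + beta * s[i-1]
    return -((-np.log(s) - r**2 / s).sum())

# (a) sensitivity to the trial values: rerun fmin from several starting points
for x0 in ([.1, .1, .1], [.5, .2, .2], [.8, .05, .5]):
    xopt = optimize.fmin(GARCH11_logL, np.array(x0), args=(r,), disp=0)
    print(x0, "->", np.round(np.abs(xopt), 6))

# (b) an alternative optimiser as a cross-check: L-BFGS-B with positivity
#     bounds, which also keeps the conditional variances strictly positive
res = optimize.minimize(GARCH11_logL, x0=[.1, .1, .1], args=(r,),
                        method='L-BFGS-B',
                        bounds=[(1e-6, None), (1e-6, 1), (1e-6, 1)])
print(np.round(res.x, 6))
```

If the reported triples disagree noticeably across starting points or optimisers, the likelihood surface is flat or multi-modal in some directions, which would go a long way towards explaining the Python-versus-Matlab discrepancy above.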