Quantitative Analysis, Risk Management, Modelling, Algo Trading, and Big Data Analysis

Pre-Processing of Asset Price Series for Portfolio Optimization

Portfolio Optimization is a significant component of Matlab's Financial Toolbox. It provides us with a ready-to-use solution for finding the optimal weights of the assets we consider for trading, derived from their historical performance. From a practical point of view, we can include it in our algorithmic trading strategy and backtest its applicability under different initial conditions. This is the subject of my upcoming post. However, before we can enjoy the view from the peak, we need to climb the mountain first.

In Matlab, the portfolio is created as a dedicated object of the same name. It doesn't read the raw stock data. We need to feed that beast. Two major ingredients satisfy the input: a vector of the expected asset returns and a covariance matrix. Matlab helps us to estimate these moments, but first we need to deliver the asset data in a digestible form.
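
Just to give a flavour of what the beast eventually eats, below is a minimal sketch (not the code of the upcoming post) of how those two ingredients are typically passed to the Portfolio object, assuming the Financial Toolbox API and the return matrix R and the AssetList cell array that we construct later in this post:

% A minimal sketch, assuming the Financial Toolbox Portfolio API; R is the
%  (N-1)xM matrix of returns and AssetList the cell array of M tickers
p = Portfolio('AssetList',AssetList);   % create the portfolio object
p = estimateAssetMoments(p,R);          % estimate mean returns and covariance from R
p = setDefaultConstraints(p);           % fully invested, long-only weights
w = estimateFrontier(p,10);             % e.g. 10 optimal portfolios along the frontier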

In this post we will see how one can quickly download stock data from the Internet based on our own stock selection and pre-process it for solving the portfolio optimization problem in Matlab.

Initial Setup for Portfolio Object

Let's say that at any point of time you have your own list of stocks you wish to buy. For simplicity let's also assume that the list contains stocks traded on NYSE or NASDAQ. Since you have been a great fan of this game, now you are almost ready to buy what you jotted down on your ShoppingList.lst. Here is an example of 10 tech stocks:

AAPL   AOL   BIDU   GOOG   HPQ   IBM   INTC   MSFT   NVDA   TXN

They will constitute your portfolio of stocks. The problem of portfolio optimization requires a look back in time at the returns obtained in trading by each stock. Based on them, the Return Proxy and Risk Proxy can be found.

The return matrix $R$ of dimensions $(N-1)\times M$, where $N$ stands for the number of historical prices (e.g. daily, monthly, etc.) and $M$ for the number of stocks in our portfolio, is required by Matlab as an input. We will see how it works in the next post. For now let's focus solely on the creation of this matrix.
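
As a quick sketch of what we are after, if P were an $N\times M$ matrix of close prices (rows ordered in time), the return matrix follows in one line; the numbers below are made up purely for illustration:

% Sketch: building the (N-1)xM return matrix from an NxM price matrix P
P = [100 50; 101 49; 103 51];        % hypothetical prices: N=3 days, M=2 stocks
R = P(2:end,:)./P(1:end-1,:) - 1;    % simple daily returns, size (N-1)xM
% equivalently, with the Financial Toolbox: R = tick2ret(P)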

In the article Create a Portfolio of Stocks based on Google Finance Data fed by Quandl I discussed Quandl.com as an attractive data provider for US stocks. Here, we will follow this solution making use of Quandl resources to pull out the stock price series for our shopping list. Ultimately, we aim at building a function, here: QuandlForPortfolio, that does the job for us:

% Pre-Processing of Asset Price Series for Portfolio Optimization in Matlab
%  (c) 2013, QuantAtRisk.com, by Pawel Lachowicz
 
clear all; close all; clc;
 
% Input Parameters
n=1*365;
tickers='ShoppingList.lst';
qcodes='QuandlStockCodeListUS.xlsx';
 
[X,Y,R,AssetList] = QuandlForPortfolio(n,tickers,qcodes);

We call this function with three input parameters. The first one, $n$, denotes the number of calendar days from today (counting backwards) for which we wish to retrieve the stock data. Usually, 365 days will correspond to about 250$-$252 trading days. The second parameter is a path/file name to our list of stocks (to be taken into account in the portfolio optimisation process), while the last input defines the path/file name to the file storing the stocks' tickers and associated Quandl Price Codes (see here for more details).

Feeding the Beast

The QuandlForPortfolio Matlab function is an extended version of the previously discussed solution. It contains an important correcting procedure for the data fetched from the Quandl servers. First, let's have a closer look at the function itself:

% Function assists in fetching Google Finance data from the Quandl.com
%  server for a given list of tickers of stocks traded on NYSE or
%  NASDAQ. Data are retrieved for last 'n' days with daily sampling.
%
% INPUT
%   n       : number of calendar days from 'today' (e.g. 365 would
%             correspond to about 252 business days)
%   tickers : a path/file name of a text file listing tickers
%   qcodes  : a path/file name of Excel workbook (.xlsx) containing a list
%              of tickers and Quandl Price Codes in the format of
%              [Ticker,Stock Name,Price Code,Ratios Code,In Market?]
% OUTPUT
%   X0        : [Nx1] column vector with days
%   Y0        : [NxM] matrix with Close Prices for M stocks
%   R0        : [(N-1)xM] matrix of Returns
%   AssetList : a list of tickers (cell array)
%
% (c) 2013, QuantAtRisk.com, by Pawel Lachowicz
 
function [X0,Y0,R0,AssetList0] = QuandlForPortfolio(n,tickers,qcodes)
    fileID = fopen(tickers);
    tmp = textscan(fileID, '%s');
    fclose(fileID);
    AssetList=tmp{1};  % a list as a cell array
 
    % Read in the list of tickers and internal Quandl codes
    %
    [~,text,~] = xlsread(qcodes);
    quandlc=text(:,1); % again, as a list in a cell array
    quandlcode=text(:,3); % corresponding Quandl's Price Code
 
    date1=datestr(today-n,'yyyy-mm-dd'); % from
    date2=datestr(today,'yyyy-mm-dd');   % to
 
    % Fetch the data from Quandl.com
    %
    QData={};
    for i=1:length(AssetList)
        for j=1:length(quandlc)
            if(strcmp(AssetList{i},quandlc{j}))
                fprintf('%4.0f %s\n',i,quandlc{j});
                fts=0;
                [fts,headers]=Quandl.get(quandlcode{j},'type','fints', ...
                   'authcode','x',...
                   'start_date',date1,'end_date',date2,'collapse','daily');
                QData{i}=fts;
            end
        end
    end
 
    % Post-Processing of Fetched Data
    %
    % create a list of days across all tickers
    TMP=[];
    for i=1:length(QData)
        tmp=fts2mat(QData{i},1);
        tmp=tmp(:,1);
        TMP=[TMP; tmp];
    end
    ut=unique(TMP);
    % use that list to find these days that are not present
    %  among all data sets
    TMP=[];
    for i=1:length(QData)
        tmp=fts2mat(QData{i},1);
        tmp=tmp(:,1);
        TMP=[TMP; setdiff(ut,tmp)];
    end
    ut=unique(TMP);
    % finally, extract Close Prices from FTS object and store them
    %  in Y0 matrix, plus corresponding days in X0
    X0=[];
    Y0=[]; 
    for i=1:length(QData)
        tmp=fts2mat(QData{i},1);
        cp=[];
        for j=1:size(tmp,1)
            [r,~,~]=find(ut==tmp(j,1));
            if(isempty(r))
                cp=[cp; tmp(j,5)]; % column 5 corresponds to Close Price
                if(i<2)
                    % create a time column vector listing days
                    % common among all data sets
                    X0=[X0; tmp(j,1)];
                end
            end
        end
        Y0=[Y0 cp];
    end
    % transform Close Prices into Returns, R(i)=cp(i)/cp(i-1)-1
    R0=tick2ret(Y0);
    AssetList0=AssetList';
end

The main bottleneck comes from the fact that Matlab's portfolio object demands an equal number of historical returns ($N-1$) in the matrix $R$ for all $M$ assets. We design the function in such a way that it sets a common timeframe for all stocks on our shopping list. Of course, we assume that all stocks have been traded in the markets for about the last $n$ days (a rough estimate).

Now, the timeframe of the last $n$ days should be understood as a first approximation. We fetch the data from Quandl (numeric date, Open, High, Low, Close, Volume) and save them in the cell array QData (lines #37-49), for each stock separately, as FTS objects (Financial Time-Series objects; see the Financial Toolbox). However, it may happen that not every fetched stock contains the same number of data points. That is why we need to investigate for which days and for which stocks the data are missing. We achieve that by scanning each FTS object and creating a unique list of all days for which we have data (lines #54-60).

Next, we loop again over the same data sets but now we compare that list with a list of all dates for each stock individually (lines #63-69), capturing (line #67) those dates that are missing. Their complete list is stored as a vector in line #69. Eventually, given that, we are able to compile the full data set (e.g. Close Prices; here line #80) for all stocks in our portfolio ensuring that we will include only those dates for which we have prices across all $M$ assets (lines #70-91).
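
If you prefer Matlab's set functions, the same common-date logic can be sketched more compactly; the snippet below is only an equivalent way of thinking about lines #54-91 (not a drop-in replacement I have benchmarked), assuming QData holds the fetched FTS objects:

% Sketch: intersect the date vectors of all fetched series to get common days
common=fts2mat(QData{1},1); common=common(:,1);
for i=2:length(QData)
    tmp=fts2mat(QData{i},1);
    common=intersect(common,tmp(:,1));  % keep only days present in every series
end
X0=common; Y0=[];
for i=1:length(QData)
    tmp=fts2mat(QData{i},1);
    [~,idx]=ismember(common,tmp(:,1));  % locate the common days in this series
    Y0=[Y0 tmp(idx,5)];                 % column 5 holds the Close Price
end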

Beast Unleashed

We test our data pre-processing simply by running the block of code listed above, which engages the QuandlForPortfolio function, and we check the results in Matlab's command window as follows:

>> whos X Y R AssetList
  Name             Size            Bytes  Class     Attributes
 
  AssetList        1x10             1192  cell                
  R              250x10            20000  double              
  X              251x1              2008  double              
  Y              251x10            20080  double

which confirms that the dimensions are as expected.

At this stage, the aforementioned function can be used in two ways. First, we are interested in the portfolio optimisation and we look back at the last $n$ calendar days counting from the most current one (today). The second usage is handy too. We consider the stocks on our shopping list and fetch their last, say, $n=7\times365$ days of data. If all stocks were traded over the past 7 years, we should be able to collect a rich data set. If not, the function will adjust the beginning and end dates to meet the initial time constraints required for the construction of the $R$ matrix. In the latter case, we can use the 7-year data sample for direct backtesting of algo models utilising Portfolio Optimization.
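
For example, the second usage boils down to a single call with a longer look-back window (a sketch, reusing the input parameters defined earlier):

% Sketch: fetch roughly the last 7 years of data for the same shopping list
n=7*365;   % calendar days
[X,Y,R,AssetList]=QuandlForPortfolio(n,tickers,qcodes);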

Stay tuned as we will rock this land in the next post!

Any Questions?

Share them across QuantCove.com – the official Forum of QuantAtRisk.

Anxiety Detection Model for Stock Traders based on Principal Component Analysis

Everybody would agree on one thing: nervousness among traders may lead to massive sell-offs of stocks, an avalanche of falling prices, and huge losses. We have witnessed this sort of behaviour many times. Anxiety is the feeling of uncertainty, a human inborn instinct that triggers a self-defending mechanism against high risks. The apex of anxiety is fear and panic. When fear spreads among the markets, everything goes south. It is a dream goal for all traders to capture that nervousness and fear just by looking at or studying the market behaviour, trading patterns, ask-bid spread, or flow of orders. The major problem is that we know very well how important a factor human behaviour is in this game, yet it is not observed directly. It has been puzzling me for a few years, ever since I finished reading Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich by Jason Zweig sitting on the beach of one of the Gili Islands, Indonesia, in December of 2009. A perfect spot far away from the trading charts.

So, is there a way to disentangle the emotional part involved in trading from all other factors (e.g. the application of technical analysis, bad news consequences, IPOs, etc.) which are somehow easier to deduce? In this post I will try to make a quantitative attempt towards solving this problem. Although the solution will not have the final and closed form, my goal is to deliver an inspiration for quants and traders interested in the subject by putting a simple idea into practice: the application of Principal Component Analysis.

1. Principal Component Analysis (PCA)

Called by many one of the most valuable results of applied linear algebra, Principal Component Analysis delivers a simple, non-parametric method of extracting relevant information from often confusing data sets. Real-world data usually hold some relationships among their variables and, as a good approximation, in the first instance we may suspect them to be of linear (or close to linear) form. And linearity is one of the stringent but powerful assumptions standing behind PCA.

Imagine we observe the daily change of prices of $m$ stocks (being a part of your portfolio or a specific market index) over the last $n$ days. We collect the data in $\boldsymbol{X}$, an $m\times n$ matrix. Each of its $n$ column vectors lies in an $m$-dimensional vector space spanned by an orthonormal basis, therefore each is a linear combination of this set of unit-length basis vectors: $ \boldsymbol{BX} = \boldsymbol{X}$ where the basis $\boldsymbol{B}$ is the identity matrix $\boldsymbol{I}$. Within the PCA approach we ask a simple question: is there another basis, a linear combination of the original basis, that better represents our data set? In other words, we look for a transformation matrix $\boldsymbol{P}$ acting on $\boldsymbol{X}$ in order to deliver its re-representation:
$$
\boldsymbol{PX} = \boldsymbol{Y} \ .
$$ The rows of $\boldsymbol{P}$ become a set of new basis vectors for expressing the columns of $\boldsymbol{X}$. This change of basis makes the row vectors of $\boldsymbol{P}$ in this transformation the principal components of $\boldsymbol{X}$. But how to find a good $\boldsymbol{P}$?

Consider for a moment what we can do with a set of $m$ observables spanned over $n$ days. It is not a mystery that many stocks over different periods of time co-vary, i.e. their price movements are closely correlated and follow the same direction. The statistical way to measure the mutual relationship among the $m$ vectors (correlation) is the calculation of a covariance matrix. For our data set $\boldsymbol{X}$:
$$
\boldsymbol{X}_{m\times n} =
\left[
\begin{array}{cccc}
\boldsymbol{x_1} \\
\boldsymbol{x_2} \\
… \\
\boldsymbol{x_m}
\end{array}
\right]
=
\left[
\begin{array}{cccc}
x_{1,1} & x_{1,2} & … & x_{1,n} \\
x_{2,1} & x_{2,2} & … & x_{2,n} \\
… & … & … & … \\
x_{m,1} & x_{m,2} & … & x_{m,n}
\end{array}
\right]
$$
the covariance matrix takes the following form:
$$
cov(\boldsymbol{X}) \equiv \frac{1}{n-1} \boldsymbol{X}\boldsymbol{X}^{T}
$$ where we multiply $\boldsymbol{X}$ (with the mean subtracted from each row) by its transpose, and the factor $(n-1)^{-1}$ keeps the variance estimate unbiased. The diagonal elements of $cov(\boldsymbol{X})$ are the variances corresponding to each row of $\boldsymbol{X}$, whereas the off-diagonal terms represent the covariances between different rows (prices of the stocks). Please note that the above multiplication assures us that $cov(\boldsymbol{X})$ is a square symmetric $m\times m$ matrix.
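
For those who like to see it in numbers, the short sketch below (with an arbitrary random matrix standing in for the price changes) verifies that the definition above agrees with Matlab's built-in cov once the row means have been subtracted:

% Sketch: covariance of m stocks observed over n days (rows = stocks)
m=4; n=250;
X=randn(m,n);                        % random stand-in for daily price changes
Xc=X-repmat(mean(X,2),[1 n]);        % subtract the mean of each row
C=(Xc*Xc')/(n-1);                    % covariance as defined above, m x m
max(max(abs(C-cov(Xc'))))            % ~1e-16; cov() expects observations in rows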

All right, but what does it have in common with our PCA method? PCA looks for a way to optimise the matrix of $cov(\boldsymbol{X})$ by a reduction of redundancy. Sounds a bit enigmatic? I bet! Well, all we need to understand is that PCA wants to ‘force’ all off-diagonal elements of the covariance matrix to be zero (in the best possible way). The guys in the Department of Statistics will tell you the same as: removing redundancy diagonalises $cov(\boldsymbol{X})$. But how, how?!

Let’s come back to our previous notation of $\boldsymbol{PX}=\boldsymbol{Y}$. $\boldsymbol{P}$ transforms $\boldsymbol{X}$ into $\boldsymbol{Y}$. We also marked that:
$$
\boldsymbol{P} = [\boldsymbol{p_1},\boldsymbol{p_2},…,\boldsymbol{p_m}]
$$ was the new basis we were looking for. PCA assumes that all basis vectors $\boldsymbol{p_k}$ are orthonormal, i.e. $\boldsymbol{p_i}\boldsymbol{p_j}^T=\delta_{ij}$, and that the directions with the largest variances are the most principal. So, PCA first selects a normalised direction in $m$-dimensional space along which the variance in $\boldsymbol{X}$ is maximised. That is the first principal component, $\boldsymbol{p_1}$. In the next step, PCA looks for another direction along which the variance is maximised. However, because of the orthonormality condition, it searches only among directions perpendicular to all previously found ones. In consequence, we obtain an orthonormal matrix $\boldsymbol{P}$. Good stuff, but still sounds complicated?

The goal of PCA is to find a $\boldsymbol{P}$ such that for $\boldsymbol{Y}=\boldsymbol{PX}$ the covariance matrix $cov(\boldsymbol{Y})=(n-1)^{-1}\boldsymbol{YY}^T$ is diagonalised.

We can evolve the notation of the covariance matrix as follows:
$$
(n-1)cov(\boldsymbol{Y}) = \boldsymbol{YY}^T = \boldsymbol{(PX)(PX)}^T = \boldsymbol{PXX}^T\boldsymbol{P}^T = \boldsymbol{P}(\boldsymbol{XX}^T)\boldsymbol{P}^T = \boldsymbol{PAP}^T
$$ where we made a quick substitution of $\boldsymbol{A}=\boldsymbol{XX}^T$. It is easy to prove that $\boldsymbol{A}$ is symmetric. It takes a longer while to find a proof for the following two theorems: (1) a matrix is symmetric if and only if it is orthogonally diagonalisable; (2) a symmetric matrix is diagonalised by a matrix of its orthonormal eigenvectors. Just check your favourite algebra textbook. The second theorem provides us with a right to denote:
$$
\boldsymbol{A} = \boldsymbol{EDE}^T
$$ where $\boldsymbol{D}$ is a diagonal matrix and $\boldsymbol{E}$ is a matrix of eigenvectors of $\boldsymbol{A}$. That brings us to the end of the rainbow.

We select the matrix $\boldsymbol{P}$ to be such that each row $\boldsymbol{p_i}$ is an eigenvector of $\boldsymbol{XX}^T$, therefore
$$
\boldsymbol{P} = \boldsymbol{E}^T .
$$

Given that, we see that $\boldsymbol{E}=\boldsymbol{P}^T$, thus we find $\boldsymbol{A}=\boldsymbol{EDE}^T = \boldsymbol{P}^T\boldsymbol{DP}$, which leads us to a magnificent relationship between $\boldsymbol{P}$ and the covariance matrix:
$$
(n-1)cov(\boldsymbol{Y}) = \boldsymbol{PAP}^T = \boldsymbol{P}(\boldsymbol{P}^T\boldsymbol{DP})\boldsymbol{P}^T
= (\boldsymbol{PP}^T)\boldsymbol{D}(\boldsymbol{PP}^T) =
(\boldsymbol{PP}^{-1})\boldsymbol{D}(\boldsymbol{PP}^{-1})
$$ or
$$
cov(\boldsymbol{Y}) = \frac{1}{n-1}\boldsymbol{D},
$$ i.e. the choice of $\boldsymbol{P}$ diagonalises $cov(\boldsymbol{Y})$, where we also silently used the matrix algebra theorem saying that the inverse of an orthogonal matrix is its transpose ($\boldsymbol{P}^{-1}=\boldsymbol{P}^T$). Fascinating, right?! Let's now see how one can use all that complicated machinery in the quest for human emotions among the endless rivers of market numbers bombarding our senses every day.
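
A few lines of Matlab are enough to convince ourselves: taking the eigenvectors of cov(X) as the rows of P does indeed diagonalise cov(Y). This is only a toy check on random data:

% Sketch: P built from the eigenvectors of cov(X) diagonalises cov(Y)
m=5; n=500;
X=randn(m,n); X=X-repmat(mean(X,2),[1 n]);   % mean-subtracted rows
C=(X*X')/(n-1);
[E,D]=eig(C);              % columns of E are orthonormal eigenvectors of C
P=E';                      % the new basis: rows of P
Y=P*X;                     % re-represented data
CY=(Y*Y')/(n-1);           % should equal D up to numerical noise
max(max(abs(CY-D)))        % ~1e-15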

2. Covariances of NASDAQ, Eigenvalues of Anxiety


We will try to build a simple quantitative model for detection of the nervousness in the trading markets using PCA.

By simplicity I mean the following model assumption: no matter what the data conceal, the 1st Principal Component (1-PC) of the PCA solution captures the complicated relationships among a subset of stocks triggered by a latent factor which we attribute to a common behaviour of traders (human and pre-programmed algos). It is a pretty reasonable assumption, much stronger than, for instance, the influence of Saturn's gravity on the annual silver price fluctuations. Since PCA does not tell us what its 1-PC means in reality, it is our job to seek meaningful explanations. Therefore, a human factor fits the frame as a trial value very well.

Let's consider the NASDAQ-100 index. It is composed of 100 technology stocks. The most current list can be found here: nasdaq100.lst, downloadable as a text file. As usual, we will perform all calculations in the Matlab environment. Let's start with data collection and pre-processing:

% Anxiety Detection Model for Stock Traders
%  making use of the Principal Component Analysis (PCA)
%  and utilising publicly available Yahoo! stock data
%
% (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
 
clear all; close all; clc;
 
 
% Reading a list of NASDAQ-100 components
nasdaq100=(dataread('file',['nasdaq100.lst'], '%s', 'delimiter', '\n'))';
 
% Time period we are interested in
d1=datenum('Jan 2 1998');
d2=datenum('Oct 11 2013');
 
% Check and download the stock data for a requested time period
stocks={};
for i=1:length(nasdaq100)
    try
        % Fetch the Yahoo! adjusted daily close prices between selected
        % days [d1;d2]
        tmp = fetch(yahoo,nasdaq100{i},'Adj Close',d1,d2,'d');
        stocks{i}=tmp;
        disp(i);
    catch err
        % no full history available for requested time period
    end
end

where, first, we check whether, for a given list of NASDAQ-100 components, the full data history (adjusted close prices) is available via the Yahoo! server (please refer to my previous post, Yahoo! Stock Data in Matlab and a Model for Dividend Backtesting, for more information on the connectivity).

The cell array stocks becomes populated with two-dimensional matrices: the time-series corresponding to stock prices (time, price). Since the Yahoo! database does not contain a full history for all stocks of our interest, we may expect different time spans. For the purpose of demonstrating the PCA method, we apply additional screening of the downloaded data, i.e. we require the data to span the period defined by the $d1$ and $d2$ variables and, additionally, to have the same (maximal available) number of data points (observations, trials). We achieve that by:

% Additional screening
d=[]; 
j=1; 
data={};
for i=1:length(nasdaq100)
    d=[d; i min(stocks{i}(:,1)) max(stocks{i}(:,1)) size(stocks{i},1)];
end
for i=1:length(nasdaq100)
    if(d(i,2)==d1) && (d(i,3)==d2) && (d(i,4)==max(d(:,4)))
        data{j}=sortrows(stocks{i},1);
        fprintf('%3i %1s\n',i,nasdaq100{i})
        j=j+1;
    end
end
m=length(data);

The temporary matrix $d$ holds the index of the stock as read in from the nasdaq100.lst file, the first and last day numbers of the available data, and the total number of data points in the time-series, respectively:

>> d
d =
      1      729757      735518        3970
      2      729757      735518        3964
      3      729757      735518        3964
      4      729757      735518        3969
     ..          ..          ..          ..
     99      729757      735518        3970
    100      729757      735518        3970

Our screening method saves the data of $m=21$ selected stocks into the data cell array, corresponding to the following companies from our list:

  1 AAPL
  7 ALTR
  9 AMAT
 10 AMGN
 20 CERN
 21 CHKP
 25 COST
 26 CSCO
 30 DELL
 39 FAST
 51 INTC
 64 MSFT
 65 MU
 67 MYL
 74 PCAR
 82 SIAL
 84 SNDK
 88 SYMC
 96 WFM
 99 XRAY
100 YHOO

Okay, some people say that seeing is believing. All right. Let's see how it works. Recall that we demanded our stock data to span the period between 'Jan 2 1998' and 'Oct 11 2013'. We found 21 stocks meeting those criteria. Now, let's assume we pick a random date, say, Jul 2 2007, and extract for all 21 stocks their price history over the last 90 calendar days. We save their prices (skipping the time columns) into the $Z$ matrix as follows:

t=datenum('Jul 2 2007');
Z=[];
for i=1:m
    [r,c,v]=find((data{i}(:,1)<=t) & (data{i}(:,1)>t-90));
    Z=[Z data{i}(r,2)];
end

and we plot them all together:

plot(Z)
xlim([1 length(Z)]);
ylabel('Stock price (US$)');
xlabel('T-90d');

anxiety-fig01
It's easy to deduce that the top line corresponds to Apple, Inc. (AAPL) adjusted close prices.

One data-processing step not mentioned earlier is that we need to transform our time-series into a comparable form. We do it by subtracting the average value and dividing each series by its standard deviation. Why? Simply so that they can be mutually compared on an equal footing. We call that step the normalisation or standardisation of the time-series under investigation:

[N,M]=size(Z);
X=(Z-repmat(mean(Z),[N 1]))./repmat(std(Z),[N 1]);

This represents the matrix $\boldsymbol{X}$ that I discussed in the theoretical part of this post. Note that the dimensions are reversed in Matlab. Therefore, the normalised time-series,

% Display normalized stock prices
plot(X)
xlim([1 length(Z)]);
ylabel('(Stock price-Mean)/StdDev');
xlabel('T-90d');

look like:
anxiety-fig02
For a given matrix of $\boldsymbol{X}$, its covariance matrix,

% Calculate the covariance matrix, cov(X)
CovX=cov(X);
imagesc(CovX);

for the data spanning 90 calendar days back from Jul 2 2007, looks like:
anxiety-fig03
where the colour coding goes from the maximal values (most reddish) down to the minimal values (most blueish). The diagonal of the covariance matrix simply tells us that for normalised time-series the variances are equal to 1, as expected.

Going one step further, based on the given covariance matrix, we look for the matrix $\boldsymbol{P}$ whose columns are the corresponding eigenvectors:

% Find P
[P,~]=eigs(CovX,5);
imagesc(P);
set(gca,'xticklabel',{1,2,3,4,5},'xtick',[1 2 3 4 5]);
xlabel('Principal Component')
ylabel('Stock');
set(gca,'yticklabel',{'AAPL', 'ALTR', 'AMAT', 'AMGN', 'CERN', ...
 'CHKP', 'COST', 'CSCO', 'DELL', 'FAST', 'INTC', 'MSFT', 'MU', ...
 'MYL', 'PCAR', 'SIAL', 'SNDK', 'SYMC', 'WFM', 'XRAY', 'YHOO'}, ...
 'ytick',[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21]);

which results in $\boldsymbol{P}$ displayed as:
anxiety-fig04
where we computed the PCA for five principal components in order to illustrate the process. Since the colour coding is the same as in the previous figure, a visual inspection of the 1-PC indicates negative values for at least 16 out of 21 eigenvalues. That simply means that over the last 90 days the global dynamics of those stocks were directed south, in favour of traders holding short positions in those stocks.

It is important to note at this very moment that the 1-PC does not represent the 'price momentum' itself. That would be too easy. It represents the latent variable responsible for a common behaviour in the stock dynamics, whatever it is. Based on our model assumption (see above) we suspect it may indicate a human factor latent in the trading.

3. Game of Nerves

The last figure communicates an additional message. There is a remarkable coherence of the eigenvalues for the 1-PC and pretty random patterns for the remaining four principal components. One may check that, in the case of our data sample, this feature is maintained over many years. That allows us to limit our interest to the 1-PC only.
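
One quantitative way of backing up that choice is to look at how much of the total variance the largest eigenvalue carries relative to the rest; a short sketch, reusing the CovX matrix computed above for the 90-day window:

% Sketch: fraction of the total variance carried by each principal component
[V,D]=eig(CovX);
lambda=sort(diag(D),'descend');   % eigenvalues of the covariance matrix
explained=lambda/sum(lambda);     % explained variance ratios
explained(1:5)'                   % the 1-PC typically dominates the rest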

It's getting exciting, isn't it? Let's come back to our main code. Having now a pretty good grasp of the algebra of PCA at work, we may limit our investigation of the 1-PC to any time period of our interest, below spanned between the dates defined by the $t1$ and $t2$ variables:

% Select time period of your interest
t1=datenum('July 1 2006');
t2=datenum('July 1 2010');
 
results=[];
for t=t1:t2
    tmp=[];
    A=[]; V=[];
    for i=1:m
        [r,c,v]=find((data{i}(:,1)<=t) & (data{i}(:,1)>t-60));
        A=[A data{i}(r,2)];
    end
    [N,M]=size(A);
    X=(A-repmat(mean(A),[N 1]))./repmat(std(A),[N 1]);
    CovX=cov(X);
    [V,D]=eigs(CovX,1);
    % Find all negative eigenvalues of the 1st Principal Component
    [r,c,v]=find(V(:,1)<0);
    % Extract them into a new vector
    neg1PC=V(r,1);
    % Calculate a percentage of negative eigenvalues relative
    % to all values available
    ratio=length(neg1PC)/m;
    % Build a new time-series of 'ratio' change over required
    % time period (spanned between t1 and t2)
    results=[results; t ratio];    
end

We build our anxiety detection model based on the change in the number of negative eigenvalues of the 1st Principal Component (relative to their total number, here equal to 21). As a result, we generate a new time-series tracing this variable over the $[t1;t2]$ time period. We plot the results in one figure, contrasted with the NASDAQ-100 Index, in the following way:

% Fetch NASDAQ-100 Index from Yahoo! data-server
nasdaq = fetch(yahoo,'^ndx','Adj Close',t1,t2,'d');
% Plot it
subplot(2,1,1)
plot(nasdaq(:,1),nasdaq(:,2),'color',[0.6 0.6 0.6]);
ylabel('NASDAQ-100 Index');
% Add a plot corresponding to a new time-series we've generated
subplot(2,1,2)
plot(results(:,1),results(:,2),'color',[0.6 0.6 0.6])
% add overplot 30d moving average based on the same data
hold on; plot(results(:,1),moving(results(:,2),30),'b')

leading us to:
anxiety-fig05
I use a 30-day moving average (the solid blue line) in order to smooth the results (moving.m). Please note that in line #56 I also replaced the earlier value of 90 days with 60 days. Somehow, it is more reasonable to examine the market dynamics with PCA over the past two months than over longer periods (but it's a matter of taste and needs).

Eventually, we construct the core model’s element, namely, we detect nervousness among traders when the percentage of negative eigenvalues of the 1st Principal Component increases over (at least) five consecutive days:

% Model Core
x1=results(:,1);
y1=moving(results(:,2),30);
tmp=[];
% Find moments of time where the percentage of negative 1-PC
% eigenvalues increases over time (minimal requirement of
% five consecutive days)
for i=6:length(x1)
    if(y1(i)>y1(i-1))&&(y1(i-1)>y1(i-2))&&(y1(i-2)>y1(i-3))&& ...
      (y1(i-3)>y1(i-4))&&(y1(i-4)>y1(i-5))
        tmp=[tmp; x1(i)];
    end
end
% When found
z=[];
for i=1:length(tmp)
    for j=1:length(nasdaq)
        if(tmp(i)==nasdaq(j,1))
            z=[z; nasdaq(j,1) nasdaq(j,2)];
        end
    end
end
subplot(2,1,1); 
hold on; plot(z(:,1),z(:,2),'r.','markersize',7);

The results of the model we over-plot with red markers on top of the NASDAQ-100 Index:
anxiety-fig06
Our simple model takes us into a completely new territory of the unexplored space of latent variables. Firstly, it does not predict the future. That still (unfortunately) remains unknown. However, what it delivers is a fresh look at the past dynamics in the market. Secondly, it is easy to read from the plot that the results cluster into three subgroups.

The first subgroup corresponds to actions in stock trading with further negative consequences (see the events of 2007-2009 and the avalanche of prices); here the negative dynamics of the preceding 60 calendar days continued. The second subgroup covers those periods of time when anxiety led to negative dynamics among stock traders but, due to other factors (e.g. financial, global, political, etc.), the stocks surged, dragging the Index up. The third subgroup (less frequent) corresponds to instances of relatively flat changes of the Index, revealing a typical pattern of psychological hesitation about the trading direction.

No matter how we might interpret the results, the human factor in trading is evident. Hopefully, the PCA approach captures it. If not, all we are left with is our best friend: a trader’s intuition.

Acknowledgments

An article dedicated to Dr. Dariusz Grech of the Physics and Astronomy Department of the University of Wroclaw, Poland, for his superbly important and mind-blowing lectures on linear algebra in the 1998/99 academic year.

Yahoo! Stock Data in Matlab and a Model for Dividend Backtesting


Within the evolution of Mathworks' MATLAB programming environment, finally, in the most recent version labelled 2013a, we received a long-awaited command-line facility for pulling stock data directly from the Yahoo! servers. What does that mean for quants and algo traders? Honestly, a lot. Now, simply by writing a few commands, we can have nearly everything we want. However, please keep in mind that Yahoo! data are free, therefore their precision does not always match the quality of data downloaded from, e.g., Bloomberg resources. Anyway, just for pure backtesting of your models, this step is a big leap in dealing with daily stock data. As usual, we have the possibility of getting the open, high, low, close, and adjusted close prices of stocks, supplemented with the traded volume and the dates plus values of dividends.

In this post I present a short example of how one can retrieve the data of SPY (tracking the performance of the S&P 500 index) using Yahoo! data in the new Matlab 2013a, and I show simple code for testing which buying-holding-and-selling time period for SPY (or any other dividend-paying stock) makes a profit every time.

The beauty of the new Yahoo! feature in Matlab 2013a has been fully described in the official article Request data from Yahoo! data servers, where you can find all the details required to build the code into your Matlab programs.

Model for Dividends

It is a well-known opinion (based on many years of market observations) that one may expect a drop in the stock price within a short timeframe (e.g. a few days) after the day when the stock's dividends have been announced. And probably every quant, sooner or later, is tempted to verify that hypothesis. It's your homework. However, today, let's look at a slightly differently defined problem based on the ever-working reversed rule: what goes down must go up. Let's consider the exchange-traded fund SPDR S&P 500 ETF Trust, labelled on the NYSE as SPY.

First, let’s pull out the Yahoo! data of adjusted Close prices of SPY from Jan 1, 2009 up to Aug 27, 2013

% Yahoo! Stock Data in Matlab and a Model for Dividend Backtesting
% (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
 
close all; clear all; clc;
 
date_from=datenum('Jan 1 2009');
date_to=datenum('Aug 27 2013');
 
stock='SPY';
 
adjClose = fetch(yahoo,stock,'adj close',date_from,date_to);
div = fetch(yahoo,stock,date_from,date_to,'v')
returns=(adjClose(2:end,2)./adjClose(1:end-1,2)-1);
 
% plot adjusted Close price of SPY and mark days when dividends
% have been announced
plot(adjClose(:,1),adjClose(:,2),'color',[0.6 0.6 0.6])
hold on;
plot(div(:,1),min(adjClose(:,2))+10,'ob');
ylabel('SPY (US$)');
xlabel('Jan 1 2009 to Aug 27 2013');

and visualize them:

spy-1

Having the data ready for backtesting, let's look for the most profitable buying-holding-and-selling period for SPY, assuming that we buy SPY one day after the dividends have been announced (at the market price) and hold for $dt$ days (here tested between 1 and 40 trading days).

% find the most profitable period of holding SPY (long position)
neg=[];
for dt=1:40
 
buy=[]; sell=[];
for i=1:size(div,1)
    % find the dates when the dividends have been announced
    [r,c,v]=find(adjClose(:,1)==div(i,1));
    % mark the corresponding SPY price with blue circle marker
    hold on; plot(adjClose(r,1),adjClose(r,2),'ob');
    % assume you buy long SPY next day at the market price (close price)
    buy=[buy; adjClose(r-1,1) adjClose(r-1,2)];
    % assume you sell SPY in 'dt' days after you bought SPY at the market
    % price (close price)
    sell=[sell; adjClose(r-1-dt,1) adjClose(r-1-dt,2)];
end
 
% calculate profit-and-loss of each trade (excluding transaction costs)
PnL=sell(:,2)./buy(:,2)-1;
% summarize the results
neg=[neg; dt sum(PnL<0) sum(PnL<0)/length(PnL)];
 
end

If we now sort the results according to the percentage of negative returns (column 3 of neg matrix), we will be able to get:

>> sortrows(neg,3)
 
ans =
   18.0000    2.0000    0.1111
   17.0000    3.0000    0.1667
   19.0000    3.0000    0.1667
   24.0000    3.0000    0.1667
    9.0000    4.0000    0.2222
   14.0000    4.0000    0.2222
   20.0000    4.0000    0.2222
   21.0000    4.0000    0.2222
   23.0000    4.0000    0.2222
   25.0000    4.0000    0.2222
   28.0000    4.0000    0.2222
   29.0000    4.0000    0.2222
   13.0000    5.0000    0.2778
   15.0000    5.0000    0.2778
   16.0000    5.0000    0.2778
   22.0000    5.0000    0.2778
   27.0000    5.0000    0.2778
   30.0000    5.0000    0.2778
   31.0000    5.0000    0.2778
   33.0000    5.0000    0.2778
   34.0000    5.0000    0.2778
   35.0000    5.0000    0.2778
   36.0000    5.0000    0.2778
    6.0000    6.0000    0.3333
    8.0000    6.0000    0.3333
   10.0000    6.0000    0.3333
   11.0000    6.0000    0.3333
   12.0000    6.0000    0.3333
   26.0000    6.0000    0.3333
   32.0000    6.0000    0.3333
   37.0000    6.0000    0.3333
   38.0000    6.0000    0.3333
   39.0000    6.0000    0.3333
   40.0000    6.0000    0.3333
    5.0000    7.0000    0.3889
    7.0000    7.0000    0.3889
    1.0000    9.0000    0.5000
    2.0000    9.0000    0.5000
    3.0000    9.0000    0.5000
    4.0000    9.0000    0.5000

which simply indicates that the optimal period for holding the long position in SPY equals 18 days. We can mark all trades (18-day holding period) on the chart:

spy-2

where the trade open and close prices (according to our model described above) have been marked in the plot by black and red circle markers, respectively. Only 2 out of 18 trades (the PnL matrix) turned out to be negative, with losses of 2.63% and 4.26%. The complete distribution of profits and losses from all trades can be obtained in the following way:

figure(2);
hist(PnL*100,length(PnL))
ylabel('Number of trades')
xlabel('Return (%)')

returning

spy-3

Let’s make some money!

The above Matlab code delivers a simple application of the newest built-in connectivity with the Yahoo! server and the ability to download the stock data of our interest. We have tested the optimal holding period for SPY since the beginning of 2009 till now (a global uptrend). The same code can easily be used and/or modified to verify any period and any stock for which dividends have been released in the past. This fairly simple approach, though not too frequent in trading, provides us with an extra idea of how we can beat the market, assuming that the future is going to remain more or less the same as the past. So, let's make some money!

Modern Time Analysis of Black Swans


I decided to take the data analysis of Black Swan and Extreme Loss Modeling to the next level and examine the time distribution of extreme losses across the entire S&P 500 universe of traded stocks. Previously, we were interested, first, in finding the maximum loss among all trading days for a given stock in a specified time interval (the stock's life-time on the trading floor) and, secondly, in plotting the distribution of those extreme losses (found to be well fitted with the Gumbel distribution). Investigating when all those losses occurred in time can provide us with an extra vantage point, and if we are lucky researchers, we may discover some new information on the population of Black Swans.

An excellent approach to data analysis is through time analysis. It is one of many, many data analytic techniques available, but it is closest to my heart and I would like to provide you with a sophisticated taste of easy-to-digest mathematics applied to financial time-series analysis. So what is data analysis about in our case? Can we code it, or be tempted by ready-to-use dedicated data analytic software? Both ways are accessible, but let's do it in a more educational way, taking a computer programming class in Matlab.

By time analysis we understand the behaviour of a given quantity in time, e.g. the price of an asset traded on the exchange market. It is pretty straightforward that our eye looks for patterns and tries to classify them somehow. If we observe a repeating structure, sequence, or shape, that may help us to code its recognition within our trading platforms. The quest for the most fundamental pattern of hidden volatility in the data has an obvious target: a sine wave. Fairly painless to understand, covered by maths teachers in high schools, an 'up and down' approach, good moods versus bad moods, bear markets followed by bull markets. Sinusoidal behaviour is everywhere around us. Therefore it is so important to know how to find it!

In quantitative finance and risk modeling, periodicity or cyclicality constitutes an attractive and simple model of volatility. To figure out what spectrum of characteristic frequencies a given data set conceals, we need a proper tool. In time-series analysis this tool is known as the periodogram. However, before starting to use it properly, it is essential to understand the theoretical background.

1. The Method of Period Detection

1.1. General Formulations and Model Orthogonality

In general, we learn from experiments by fitting data, $x$, with a model, $x_{\|}$. The data contain $n$ measurements, and the model $n_{\|}$ free parameters. The consistency of the data with the model is measured by a function, $\Theta$, called a statistic. A given model, $x_{\|}$, using a given statistic (e.g., $\chi^2$), yields its particular value, $\Theta_1$. Various methods used in the analysis of time series differ both in their choice of the model and the statistic, hence they are difficult to compare directly. To enable such a comparison and to determine the significance of results, $\Theta$ is converted into the false alarm probability, $P_1$. This is done by considering a hypothetical situation, $H_1$, in which $x$ is pure white noise. Then each pair $(x_{\|},\Theta)$ corresponds to a certain cumulative probability distribution of $\Theta$, namely $P(n_{\|},n;\Theta)$, with $P_1$ being the tail probability that under the hypothesis $H_1$ the experiment yields $\Theta>\Theta_1$, i.e., $P_1(\Theta>\Theta_1) = 1-P(n_{\|},n;\Theta_1)$.

Up to here, we have just outlined the classical Neyman-Pearson procedure of statistics. The specific method for analysis of time series used here differs from those commonly encountered in astronomy only in the choices of $x_{\|}$ and $\Theta$. Then, our accounting for variance, correlation and multiple frequencies in calculating $P$ is dictated by the laws of statistics. The probabilities derived by us from the data are the false alarm probabilities. However, we also call them below just probabilities or significance levels.

We note then that Fourier harmonics are not orthogonal in terms of the scalar product with weights at unevenly distributed observations. Certain statistical procedures employing classical probability distributions hold for orthogonal models only and fail in other cases. To avoid that, a popular variant of the power spectrum, the Lomb-Scargle periodogram (Lomb 1976; Scargle 1982; hereafter LS), $P_{\rm LS}(\nu)$, relies on a special choice of phase such that the sine and cosine functions become orthogonal:
$$
\begin{equation}
P_{\rm LS}(\nu) = A_{\rm LS} \left|\hat{x}_{\rm LS}(\nu) \right|^2 .
\end{equation}
$$ The square of the Fourier amplitude, $\hat{x}(\nu)$, takes the form:
$$
\begin{eqnarray}
\left|\hat{x}_{\rm LS}(\nu)\right|^2 & = \left[ \sum_{k=1}^{n} (x_k-\bar{x})
\cos (2\pi\nu (t_k-\tau))
\right]^2 +
\nonumber \\
& \left[ \sum_{k=1}^{n} (x_k-\bar{x}) \sin (2\pi\nu
(t_k-\tau))
\right]^2 .
\end{eqnarray}
$$ The phase $\tau$ is defined as:
$$
\begin{equation}
\tan(4\pi\nu\tau) = \frac{\sum_{k=1}^{n} \sin(4\pi\nu t_k)}
{\sum_{k=1}^{n} \cos(4\pi\nu t_k)}.
\end{equation}
$$ where, as usual, we consider a time-series $\{x_i\}\ (i=1,…,n)$; $\bar{x}$ denotes the subtracted mean value, and the discrete Fourier transform takes our signal from the time to the frequency domain. Originally, the normalization of the LS periodogram was proposed as $A_{\rm LS}=1/(2\sigma^2)$ in order to normalise to the level of white noise, but different variants are available as well.

In practice, some simplifications of the full version of the LS periodogram are applied when we are interested in understanding the power density spectrum of the $|\hat{x}(\nu)|^2$ distribution. The following Matlab function allows us to obtain a modified LS solution for a time-series $[t,y]$:

% Function calculates the power density spectrum (periodogram) for
%  a time-series [t,y] based on discrete Fourier transform. The
%  series is expected to be evenly distributed, but gaps are allowed.
%
% (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
%
% Input:
%           t       : time vector   [1xn]
%           y       : time-series   [1xn]
%           dt      : average time-series sampling time
% Output:
%           freq    : frequency vector        [1xm]
%           pds     : power spectral density  [1xm]
 
function [freq,pds]=qarPDS(t,y,dt);
    n=length(t);                % number of data points
    X=fft(y,n);
    Pyy=X.*conj(X);
    freq=[1:n/2]*(1/n/dt);      % in physical units
    pds=(Pyy(2:(floor(n/2)+1)));
    pds=pds';                   % pds
end

We will use it later on in the time analysis of our S&P 500 data of extreme losses across the whole universe of traded stocks. The function computes the periodogram for any time-series, so don't worry too much about the initial data units.
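
Before applying it to the market data, it is worth a quick sanity check on a synthetic signal; the sketch below feeds qarPDS an evenly sampled sine of a known period and confirms where the peak lands:

% Sketch: sanity check of qarPDS on a pure sine of period 128 days
dt=1; t=(1:1024)*dt;                    % daily sampling, n=1024
y=sin(2*pi*t/128)+0.1*randn(size(t));   % sine plus a little noise
[freq,pds]=qarPDS(t,y,dt);
[~,imax]=max(pds);
1/freq(imax)                            % should return 128 (days)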

Now, back to first base. We may extend the approach used within the LS method by employing Szego orthogonal trigonometric polynomials as model functions. A series of $n_{\|}=2N+1$ polynomials corresponds to the orthogonal combinations of the $N$ lowest Fourier harmonics (Schwarzenberg-Czerny 1996). Orthogonal series are optimal from the statistical point of view because, by virtue of the Fisher lemma (Fisz 1963; Schwarzenberg-Czerny 1998), they guarantee the minimum variance of the fit residuals for a given model complexity (given by $n_{\|}$). Szego polynomials are also convenient in computations since the least-squares solution may be obtained using recurrence and orthogonal projections, resulting in high computational efficiency, with the number of steps $\propto N$ instead of $N^3$ for $N$ harmonics.

1.2. Variance, the AoV statistics, and model complexity


The LS method employs the sine as a model, and the quadratic norm, $$\Theta_{\chi^2}=\|x-x_{\|}\|^2 , $$ as the statistic. The corresponding probability distribution is $\chi^2$ with 2 degrees of freedom. Prior to use of the $\chi^2$ distribution, $\Theta_{\chi^2}$ has to be divided by the signal variance, $V$. However, $V$ is usually not known and has to be estimated from the data themselves. Then $\Theta_{\chi^2}$ and the variance estimate are not independent, nor does their ratio follow the $\chi^2$ distribution, an effect which has to be accounted for. A simple way to do it is to apply the Fisher Analysis of Variance (AoV) statistic, $$\Theta\equiv (n-n_{\|}) \|x_{\|}\|^2/ (n_{\|}\|x - x_{\|}\|^2) .$$ Hence we call our method, involving the Szego polynomial model and the AoV statistic, the multi-harmonic analysis of variance or mhAoV periodogram (Schwarzenberg-Czerny 1996). The probability distribution is then the Fisher-Snedecor distribution, $F$, rather than $\chi^2$, and $P_1= 1-F(n_{\|},n_{\perp};\Theta)$ where $n_{\perp}=n-n_{\|}$. For everything else fixed, replacing $\chi^2$ with $F$ for $n=100$ yields an increase of $P_1(\chi^2)=0.001$ to $P_1(F)=0.01$. Thus, accounting for the unknown variance renders the mhAoV detection less significant, but more trustworthy. In this work, $n$ is usually larger, for which $P_1(F)/P_1(\chi^2)$ reduces to a factor of a few.
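
In code, the AoV statistic is a one-liner; a minimal sketch, where x stands for the (mean-subtracted) data, xfit for the values of the fitted model, and npar for the number of its free parameters, all of them hypothetical stand-ins here:

% Sketch: the Fisher AoV statistic for a toy sinusoidal fit
t=(1:100)';
x=sin(2*pi*t/25)+0.3*randn(100,1);   % toy data
x=x-mean(x);
xfit=sin(2*pi*t/25);                 % stand-in for the fitted model values
npar=3; n=length(x);                 % npar = 2N+1 with N=1 harmonic
theta=(n-npar)*sum(xfit.^2)/(npar*sum((x-xfit).^2))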

Apart from the choice of the statistic, our method for $N=1$ differs from the LS one in the average flux being subtracted in the latter (thus yielding $n_\|=2$), whereas a constant term is fitted in the former (which can often be of significant advantage, see Foster 1995). If the periodic modulation in the data differs significantly from a sinusoid (e.g., due to dips, eclipses, etc.), then our $N>1$ models account for that more complex shape and perform considerably better than the LS one.

1.3. Multiple trials

Probability can be assigned to a period found in data according to one of two statistical hypotheses. Namely, (i) one knows in advance the trial frequency, $\nu_0$ (from other data), and would like to check whether it is also present in a given data set or (ii) one searches a whole range, $\Delta\nu$, of $N_{\rm eff}$ frequencies and finds the frequency, $\nu$, corresponding to the most significant modulation. The two cases correspond to the probabilities $P_1$ and $P_{N_{\rm eff}}$ to win in a lottery after 1 and $N_{\rm eff}$ trials, respectively, i.e., they represent the false alarm probabilities in single and multiple experiments, respectively. They are related by
$$ P_{N_{\rm eff}}= 1-(1-P_1)^{N_{\rm eff}} .$$ Note that the hypothesis (ii) and the probability $P_{N_{\rm eff}}$ must always be employed in order to claim any new frequency in the object under study. The hypothesis (i) is rarely used. However, since $P_1\lt P_{N_{\rm eff}}$, it is the more sensitive one. For this reason, we advocate its use in situations where the modulation frequency is already known, and we aim at checking for its manifestation in the same object but in a new band, new data set, etc. We stress that we do not use the hypothesis (i) to claim any new frequency.

An obstacle hampering use of the (ii) hypothesis is that no analytical method is known to calculate $N_{\rm eff}$. The number $N_{\rm eff}$ corresponds to independent trials, whereas values of periodograms at many frequencies are correlated because of the finite width of the peaks, $\delta\nu$, and because of aliasing. As no analytical method is known to determine $N_{\rm eff}$, Monte Carlo simulations have been used (e.g., Paltani 2004). Here, we use a simple conservative estimate, $N_{\rm eff}= \min(\Delta\nu/\delta\nu, N_{\rm calc},n)$, where $N_{\rm calc}$ is the number of the values at which the periodogram is calculated. The estimate is conservative in the sense that it corresponds to the upper limit on $P_{N_{\rm eff}}$, and thus the minimum significance of detection. This effect applies to all methods of period search (Horne & Baliunas 1986). In general, it may reduce significance of a new frequency detection for large $N_{\rm eff}$ as $P_{N_{\rm eff}}\gg P_1$. In practice, it underscores the role of any prior knowledge, in a way similar to the Bayesian statistics: with any prior knowledge of the given frequency we are able to use the hypothesis (i) to claim the detection with large significance (small $P_1$).

1.4. Correlation length

The $P_1$, and other common probability distributions used to set the detection criteria, are derived under the assumption of the noise being statistically
independent. Often this is not the case, as seen, e.g., in light curves of cataclysmic variables (CVs). The correlated noise, often termed red noise, obeys a different probability distribution than the standard $P_1$, and hence may have a profound effect. For example, noise with a Gaussian autocorrelation function (ACF) correlated over a time interval, $\delta t$, yields a power spectrum with a Gaussian shape centered at $\nu=0$ and width $\delta\nu=1/\delta t$. It may be demonstrated that the net effect of the correlation on $P_1$ in the analysis of low-frequency processes is to decimate the number of independent observations by a factor $n_{\rm corr}$, the average number of observations in the correlation interval $\delta t$ (Schwarzenberg-Czerny 1991). Effectively, one should use $n_{\perp}/n_{\rm corr}$ and $\Theta/n_{\rm corr}$ instead of $n_{\perp}$ and $\Theta$ in calculating $P_1$. This result holds generally, for both least squares and maximum likelihood analyses of time series.

For independent observations, $m=2$ consecutive residuals have the same sign on average (e.g., Fisz 1963). Thus, counting the average length, $m$, of series of residuals of the same sign provides an estimate of the number of consecutive observations being correlated, $n_{\rm corr}$. Note that $m=n/l$ where $l$ is the number of such series (both positive and negative). For correlated observations, the average length of series with the same sign is $m=2n_{\rm corr}$, which allows us to calculate $n_{\rm corr}$.
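
In practice that estimate takes only a few lines; a sketch, with a random vector standing in for the actual fit residuals:

% Sketch: estimate n_corr from the average length of same-sign residual runs
res=randn(500,1);                 % stand-in for the fit residuals
s=sign(res); s(s==0)=1;
l=sum(abs(diff(s))>0)+1;          % number of same-sign series (runs)
m=length(res)/l;                  % average run length; ~2 for independent noise
ncorr=m/2                         % m = 2*n_corr  =>  n_corr = m/2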

Let $\Theta$ denote the Fisher-Snedecor statistics from the mhAoV periodogram (i.e. from Fourier series fit) computed for $n_{\|}=2N+1$ parameters, $n$ observations and $n_{\perp}=n-n_{\|}$ degrees of freedom. To account for $n_{\rm corr}$, we calculate $P_1$ as follows,
\begin{equation}
P_1=1- F\left(n_{\|},\frac{n_{\perp}}{n_{\rm
corr}};\frac{\Theta}{n_{\rm corr}}\right)=
I_{z}\left(\frac{n_{\perp}}{2n_{\rm
corr}},\frac{n_{\|}}{2}\right)\label{P1},
\end{equation}
where $ z= n_{\perp}/(n_{\perp}+n_{\|}\Theta)$ and $I_z(a,b)$ is the incomplete (regularized) beta function (Abramowitz & Stegun 1971), see Schwarzenberg-Czerny 1998 and references therein. In the popular Mathematica (Wolfram 1996) that function is called BetaRegularized. In Matlab, the following function does the calculations for us:

% Function computes the mhAoV periodogram peak significance
% Usage: [P1,Pneff]=pneff(n,nh,ncorr,neff,theta)
 
function [P1,Pneff]=pneff(n,nh,ncorr,neff,theta);
    npar=2.0*nh+1;
    nper=n-npar;
    z=nper/(nper+npar*theta);
    a=nper/(2.0*ncorr);
    b=npar/2.0;
    P1=betainc(z,a,b);  
    Pneff=1.0-(1.0-P1)^neff;
end

In the following section we will apply both approaches, the modified Lomb-Scargle (LS) and the multi-harmonic AoV periodograms, to financial data, and we will discuss the potential consequences coming from the time analysis of the largest daily losses for stocks traded publicly within the S&P 500 universe. So buckle up, as we are ready for take-off!

2. The Annual Migration Routes of Black Swans

A theory can be both beautiful and exhausting. So let's do some work to capture the beauty. Our goal is to re-analyse the data on extreme losses extracted previously in the Black Swan and Extreme Loss Modeling article. First, we extract the value of the maximum loss for each stock and store them in a Matlab matrix as follows:

% Modern Time Analysis of Black Swans among 
%  Traded Stocks in S&P 500
%
% (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
 
 
clear all; close all; clc;
tic;
 
%% DATA READING AND VISUALIZATION
 
% read a list of stock names
StockNames=dataread('file',['sp500u.lst'],'%s','delimiter', '\n');
K=length(StockNames); % the number of stocks in the universe
% path to data files
path=['./SP500_1984_2011'];
 
data=[];
fprintf('data reading and preprocessing..\n');
for si=1:K
    % --stock name
    stock=StockNames{si};
    fprintf('%4.0f  %7s\n',si,stock);
    % --load data
    n=[path,'/',stock,'.dat'];
    % check for NULL and change it to NaN (using the 'sed' command
    % in the Unix/Linux/MacOS environment)
    cmd=['sed -i ''s/NULL/NaN/g''',' ',n]; [status,result]=system(cmd);
    % construct FTS object for daily data
    FTS=ascii2fts(n,1,2);
    % fill any missing values denoted by NaNs
    FTS=fillts(FTS);
    % extract the close price of the stock
    cp=fts2mat(FTS.CLOSE,0);
    dd=fts2mat(FTS.CLOSE,1);
    % extract Matlab matrix containing value of maximum
    % loss per stock and corresponding day
    rtmp=cp(2:end)./cp(1:end-1)-1; % daily returns
    dtmp=dd(2:end,1); % time vector
    tmp{si}=[dtmp rtmp]; 
    [tmp1,tmp2]=min(tmp{si}(:,2)); % maximum loss 
    data=[data; dtmp(tmp2) tmp1];  % [time of maximum loss, loss value]
end
 
data=sortrows(data,1); % sort data according to time of loss occurrence

where, again, the required data files can be downloaded here as sp500u.zip (23.8 MB) and sp500u.lst, respectively.

The visualization of the collected data provides us with a new perspective on the time distribution of the maximum losses (Black Swan events) across the S&P 500 universe as traded between 3-Jan-1984 and 8-Mar-2011:

plot(data(:,1),data(:,2)*100,'.-','color',[0.7 0.7 0.7])
xlim([min(data(:,1)) max(data(:,1)) ]);
xlabel('Time (days)');
ylabel('R_{\rm min} (%)');

07042013_Fig1

All individual stock maximum losses have been depicted with dot markers. As we found within the Gumbel distribution analysis, the expected value was -22.6%, with a heavy tail extending to nearly complete losses of $\sim$98%.

Changing the way we look at our data, we allow ourselves to connect the dots and think of the data as a new time-series $x_i\ (i=1,…,n=954)$. From this standpoint we can continue our analysis in various directions. Let's have a look at one case in particular: annual average maximum losses as a function of time. Why? Such an approach has been suggested as interesting by McNeil, Frey, and Embrechts in their book Quantitative Risk Management, section 7.1.4, making use of the block maxima method in order to find return levels for stress losses. We turn this idea into practice by rebinning our time-series $\{x_i\}$ with a new time step of 252 (trading) days, utilizing the code published in my past post on Rebinning of Financial Time-Series, as follows:

[rx,ry]=rebin(data(:,1),data(:,2),[],252);
hold on; plot(rx,ry*100,'or');

and allowing ourselves a bit of data profanity with a gentle interpolation between the points:

xi=rx(1):0.01:rx(end);
rdatai=interp1(rx,ry,xi,'pchip');
rdatai=[xi; rdatai]';
hold on; plot(rdatai(:,1),rdatai(:,2)*100,'r-');

resulting in:

07042013_Fig2

Next, based on the non-interpolated data, we compute the Fourier power spectrum (a modified Lomb-Scargle periodogram) as follows:

% Periodogram
[freq,pds]=qarPDS(rx,ry,252);
 
figure(2);
plot(freq,pds);
xlabel('Frequency [1/d]');
ylabel('Power Spectrum');

which returns:

07042013_Fig3

It is obvious that the periodogram is calculated on a fixed frequency grid with a frequency step of $\Delta\nu = 1/T = 0.000104$ [1/d]. The peak of highest power corresponds to a sine modulation detected in the time-series whose period equals $1/0.001462$, or 684 days. The maximal allowed frequency is the Nyquist frequency of $1/(2\Delta t)$, or 0.00198 [1/d]. Honestly, the plot is terrible. To improve its quality, it is common in spectral analysis of time-series to apply over-sampling in frequency, i.e. to adopt a frequency grid with a step of $\Delta\nu = 1/(kT)$, where $k$ denotes the (integer) over-sampling factor. Why do we need the over-sampling? One of the reasons is: to determine the value of the periodicity as accurately as possible.
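As a quick illustration, such an over-sampled frequency grid can be constructed in a few lines of Matlab; a minimal sketch, assuming rx and ry hold the rebinned time and loss vectors from above (the grid could then be passed to a periodogram routine of your choice):

% over-sampled frequency grid (a sketch)
k=5;                    % over-sampling factor
T=rx(end)-rx(1);        % total time span [d]
dt=252;                 % sampling step of the rebinned series [d]
dnu=1/(k*T);            % over-sampled frequency step
nuNyq=1/(2*dt);         % Nyquist frequency [1/d]
freqOS=dnu:dnu:nuNyq;   % frequency grid for the periodogram evaluation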

Let's see how the mhAoV periodogram copes with this task in practice. The source code for mhAoV can be downloaded directly from Alex's webpage (available in Fortran 95 and Python), though I still make use of our old version of the executable, run directly in a Unix/Linux environment: aov. Let's first store the rebinned data (with a 252 d step) in an external file, rdata.dat:

rdata=[rx ry];
fn=['./rdata.dat'];
fid=fopen(fn,'wt');
fprintf(fid,'%f %f\n',rdata');
fclose(fid);

and, next, let’s compute aov periodogram:

./aov -f=0.000104,0.002 -nh=1 -fos=5 rdata.dat 
 
mhAov periodogram, by Alex Schwarzenberg-Czerny
ver. 27.9.2006 updated by Pawel Lachowicz
 
datname=rdata.dat trfname=rdata.trf maxname=rdata.max nobs=38
method=ORTAOV nh=1 nbf=20 fos=5.00
frstart=0.0001040 frend=0.0019809 frstep=0.0000107 nfr=176
  frequency    period     theta    quality
  0.0014747 678.1132747    5.53729   0.743
  0.0013152 760.3542866    1.39906   0.146
  0.0016301 613.4435376    1.37416   0.138
  0.0003351 2984.5922602   1.30742   0.116
  0.0001733 5771.4262041   1.22450   0.088
  0.0011538 866.7094426    1.12090   0.050

i.e. employing the model which contains a single sinusoid (nh=1) and adopting over-sampling in frequency with $k=5$ (fos). It turns out that the highest value of the mhAoV statistic, $\Theta=5.537$, corresponds to a period of 678 days.

Fitting the annual data with the model defined as:
$$
f(t) = c + A\sin(2\pi t/P - g\pi)
$$ we find for $P_1=678$ d the estimates of the amplitude $A_1=0.12$ and phase $g_1=0.79$, and the best fit we over-plot as follows:

07042013_Fig4
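For completeness, such a fit can be obtained in Matlab in a few lines; a minimal sketch using fminsearch with the period fixed at $P=678$ d (the variable names below are illustrative and not taken from the original code):

% least-squares fit of f(t) = c + A*sin(2*pi*t/P - g*pi) for a fixed period P (sketch)
P=678;                                      % period [d] suggested by mhAoV
f=@(p,t) p(1)+p(2)*sin(2*pi*t/P-p(3)*pi);   % model with p=[c A g]
sse=@(p) sum((ry-f(p,rx)).^2);              % sum of squared errors
p0=[mean(ry) 0.1 0.5];                      % crude initial guess
pbest=fminsearch(sse,p0);                   % estimates of c, A, and g
hold on; plot(rx,f(pbest,rx)*100,'b-');     % over-plot the fit (in percent)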

This model is not perfect but gives us a good sense of the concealed periodic pattern following a sinusoidal model in about 60% of the time between 1984 and 2011. This is an interesting result, though the computation of:

>> [P1,Pneff]=pneff(38,1,1.3,7.1,5.53729)
 
P1 =
    0.0138
Pneff =
    0.0941

indicates only about 9% significance for this periodicity. This can be understood as a poor fit of the model to the complicated and variable shape of annual changes in maximal losses across different traded stocks.

Some improvement of the model could be achieved by including a variation of the amplitude as a function of time, i.e. $A_1(t)$. In practice this can be extracted from wavelet analysis via computation of the continuous wavelet transform. If this subject is of interest to you, check a draft of this approach in the paper of Czerny et alii (2010) which I co-authored.

3. Conclusions

Was Joseph Fourier a birdwatcher? We don't know. But his approach to time-series analysis allowed us to check whether any periodic patterns in the annual migration (occurrences) of Black Swan events exist. At a very low significance we found a cyclical trend repeating every 678 days. Will that allow us to forecast the future and the next density of massive losses incurred by individual stocks? Well, now, equipped with the power tools of the modern approach to time analysis, we can always come back and verify our hypotheses.

References

The theory on the methods of period detection is based on the publication of Lachowicz et alii (2006). For deeper references to the source readings mentioned in the text, check the reference section inside the aforementioned publication.

GARCH(p,q) Model and Exit Strategy for Intraday Algorithmic Traders

Forecasting the future has always been part of an untamed skill humans wish to possess. In the enjoyable Hollywood production Next, Nicolas Cage, playing the character of Frank Cadillac, has the ability to see the future just up to a few minutes ahead. This allows him to take almost immediate actions to avoid risks. Now, just imagine for a moment that you are an (algorithmic) intraday trader. What would you offer for a glimpse of knowing what is going to happen within the following couple of minutes? What sort of risk would you undertake? Is it really possible to deduce the next move on your trading chessboard? Most probably the best answer to this question is: it is partially possible. Why only partially? Well, even with the help of mathematics and statistics, God obviously didn't want us to know the future, putting his fingerprint into our equations and calling it a random variable. Smart, isn't it? Therefore, our goal is to work harder in order to guess what is going to happen next!


In this post I will briefly describe one of the most popular methods of forecasting future volatility in financial time-series, employing a GARCH model. Next, I will make use of 5-min intraday stock data of close prices to show how to infer a possible stock value in the next 5 minutes using current levels of volatility in intraday trading. Ultimately, I will discuss an exit strategy from a trade based on a forecasted worst-case scenario (the stock price is forecasted to exceed the assumed stop-loss level). But first, let's warm up with some cute equations we cannot live without.

Inferring Volatility

Capturing and digesting volatility is somewhat like an art that does not attempt to represent external, recognizable reality but seeks to achieve its effect using shapes, forms, colors, and textures. The basic idea we want to describe here is the volatility $\sigma_t$ of a random variable (rv), e.g. an asset price, on day $t$ as estimated at the end of the previous day $t-1$. How to do it in the easiest way? It's simple. First let's assume that the logarithmic rate of change of the asset price between two time steps is:
$$
r_{t-1} = \ln\frac{P_{t-1}}{P_{t-2}}
$$ which corresponds to the return expressed in percent as $R_{t-1}=100[\exp(r_{t-1})-1]$; we will be using this transformation throughout the rest of the text. This notation leaves us with a window of opportunity to denote $r_t$ as an innovation to the rate of return, under the condition that we are able, somehow, to deduce, infer, and forecast the future asset price $P_t$.

Using the classical definition of a sample variance, we are allowed to write it down as:
$$
\sigma_t^2 = \frac{1}{m-1} \sum_{i=1}^{m} (r_{t-i}-\langle r \rangle)^2
$$ which is our forecast of the variance rate in the next time step $t$ based on the past $m$ data points, and $\langle r \rangle=m^{-1}\sum_{i=1}^{m} r_{t-i}$ is a sample mean. Now, if we examine a return series sampled every day, or every hour, or every minute, it is worth noticing that $\langle r \rangle$ is very small compared with the standard deviation of the changes. This observation pushes us a bit further in rewriting the estimation of $\sigma_t$ as:
$$
\sigma_t^2 = \frac{1}{m} \sum_{i=1}^{m} r_{t-i}^2
$$ where $m-1$ has been replaced with $m$ by adding an extra degree of freedom (equivalent to a maximum likelihood estimate). What is superbly important about this formula is the fact that it gives an equal weight of $m^{-1}$ to every squared return $r_{t-i}^2$. But in practice, we may wish to associate individual weights $\alpha_i$ as follows:
$$
\sigma_t^2 = \sum_{i=1}^{m} \alpha_i r_{t-i}^2
$$ where $\sum_{i=1}^{m} \alpha_i = 1$, which replaces the factor of $m^{-1}$ in the previous formula. If you think for a second about the idea of the $\alpha$'s, it is pretty straightforward to understand that every observation of $r_{t-i}$ contributes to the overall value of $\sigma_t^2$. In particular, if we select $\alpha_i<\alpha_j$ for $i>j$, every observation further in the past from the most current time of $t-1$ contributes less and less.
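For instance, a simple (and purely illustrative) choice satisfying that requirement is a set of exponentially decaying weights; a minimal sketch in Matlab, assuming r is a column vector of returns ordered in time:

% weighted variance estimate with exponentially decaying weights (illustrative sketch)
m=50;                      % look-back window
lambda=0.94;               % decay factor
w=lambda.^(0:m-1)';        % raw weights: the newest observation gets the largest weight
w=w/sum(w);                % normalize so that the sum of alpha_i equals 1
rpast=r(end:-1:end-m+1);   % r_{t-1}, r_{t-2}, ..., r_{t-m}
sig2=sum(w.*rpast.^2);     % sigma_t^2 = sum_i alpha_i r_{t-i}^2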

In 1982 R. Engle proposed a tiny extension of discussed formula, finalized in the form of AutoRegressive Conditional Heteroscedasticity ARCH($m$) model:
$$
\sigma_t^2 = \omega + \sum_{i=1}^{m} \alpha_i r_{t-i}^2
$$ where $\omega$ is the weighted long-run variance taking its position with a weight of $\gamma$, such that $\omega=\gamma V$ and now $\gamma+\sum_{i=1}^{m} \alpha_i = 1$. What the ARCH model allows is the estimation of future volatility, $\sigma_t$, taking into account only the past $m$ weighted squared rates of return $\alpha_i r_{t-i}^2$ and the additional parameter $\omega$. In practice, we aim at finding the weights $\alpha_i$ and $\gamma$ using the maximum likelihood method for a given return series $\{r_i\}$. This approach, in general, requires approximately $m>3$ in order to describe $\sigma_t^2$ efficiently. So, the question emerges: can we do much better? And the answer is: of course.

Four years later, in 1986, a new player entered the ring. His name was Mr T (Bollerslev) and he literally crushed Engle in the second round with an innovation of the Generalized AutoRegressive Conditional Heteroscedasticity GARCH($p,q$) model:
$$
\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i r_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2
$$ which derives its $\sigma_t^2$ based on $p$ past observations of $r^2$ and the $q$ most recent estimates of the variance rate. The inferred return is then equal to $r_t=\sigma_t\epsilon_t$, where $\epsilon_t\sim N(0,1)$, which leaves us with a rather pale and wry face as we know what that truly means in practice! A simplification meeting wide applause in the financial world delivers the solution of the GARCH(1,1) model:
$$
\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2
$$ which derives its value based solely on the most recent update of $r$ and $\sigma$. If we think for a short while, GARCH(1,1) should provide us with a good taste of forecasted volatility when the past couple of returns were similar; however, its weakness emerges in moments of sudden jumps (shocks) in price, which cause overestimated volatility predictions. Well, no model is perfect.

Similarly as in the case of the ARCH model, for GARCH(1,1) we may use the maximum likelihood method to find the best estimates of the $\alpha$ and $\beta$ parameters, leading us to a long-run volatility of $[\omega/(1-\alpha-\beta)]^{1/2}$. This is usually achieved in an iterative process by looking for the parameter set that maximizes the sum:
$$
\sum_{i=3}^{N} \left[ -\ln(\sigma_i^2) - \frac{r_i^2}{\sigma_i^2} \right]
$$ where $N$ denotes the length of the return series $\{r_j\}$ ($j=2,…,N$) available to us. There are special dedicated algorithms for doing that and, as we will see later on, we will make use of one of them in Matlab.
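To make the recipe a bit more tangible, the GARCH(1,1) log-likelihood can also be maximized directly, e.g. with fminsearch; the following is a bare-bones sketch (no positivity or stationarity constraints, purely illustrative), to be saved as garch11fit.m:

% garch11fit.m -- crude maximum-likelihood estimation of GARCH(1,1) (illustrative sketch)
function par=garch11fit(r)
    % par = [omega alpha beta] maximizing the Gaussian log-likelihood
    r=r(:);
    nll=@(p) garch11negLL(p,r);
    par=fminsearch(nll,[1e-6 0.05 0.90]);   % crude starting point
end
 
function nll=garch11negLL(p,r)
    omega=p(1); alpha=p(2); beta=p(3);
    N=length(r);
    sig2=zeros(N,1);
    sig2(2)=var(r);                         % seed the variance recursion
    for i=3:N
        sig2(i)=omega+alpha*r(i-1)^2+beta*sig2(i-1);
    end
    % negative of sum_{i=3}^{N} [ -ln(sigma_i^2) - r_i^2/sigma_i^2 ]
    nll=sum(log(sig2(3:N))+r(3:N).^2./sig2(3:N));
end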

For the remaining discussion on the verification procedure of the GARCH model as a tool to explain volatility in return time-series, its pros and cons, and comparisons of GARCH to other ARCH derivatives, I refer you to the immortal and infamous quant's bible of John Hull and the more in-depth textbook by the financial time-series role model Ruey Tsay.

Predicting the Unpredictable


The concept of predicting the next move in an asset price based on the GARCH model appears to be thrilling and exciting. The only worry we may have, as already recognized above, is the fact that the forecasted return value is $r_t=\sigma_t\epsilon_t$, with $\epsilon_t$ being an rv drawn from a normal distribution $N(0,1)$. That implies $r_t$ is an rv such that $r_t\sim N(0,\sigma_t)$. We are allowed to extend this model further to the attractive form of:
$$
r_t = \mu + \sigma_t\epsilon_t \ \ \ \sim N(\mu,\sigma_t)
$$ where by $\mu$ we will understand a simple mean over the past $k$ data points:
$$
\mu = k^{-1} \sum_{i=1}^{k} r_{t-i} \ .
$$ Since we rather expect $\mu$ to be very small, its inclusion provides us with an offset in the modeling of the $r_t$ value. Okay, so what does it mean for us? A huge uncertainty in guessing where we are going to end up in the next time step. The greater the value of $\sigma_t$ inferred from the GARCH model, the wider the spread in possible values that $r_t$ will take. God's fingerprint. End of hopes. Well, not exactly.

Let's look at the bright side of our $N(\mu,\sigma_t)$ distribution. Remember, never give in too quickly! Look for opportunities! Always! So, the distribution has two tails. We know very well what is concealed in its left tail. It's the devil's playground of the worst losses! Can a bad thing be a good thing? Yes! But how? It's simple. We can always compute, for example, the 5% quantile $Q$, which corresponds to finding the value of the rv $r_t$ for which there is a 5% or smaller chance that the actual value of $r_t$ will be smaller than or equal to $Q$ (a sort of equivalent of finding VaR). Having this opportunity, we may wish to design a test statistic as follows:
$$
d = Q + r'
$$ where $r'$ represents the cumulative return of an open trading position. If you are an intraday (or algorithmic high-frequency) trader and you have a fixed stop-loss limit $s$ set for every trade you enter, a simple verification of the logical condition:
$$
d < s \ ? $$ provides you immediately with an attractive rule for your decision to close the position in the next time step, based on the GARCH forecast derived at time $t-1$. All right, let's cut to the chase and see how that works in practice!
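Put as code, the whole exit rule boils down to a few lines; a minimal sketch with hypothetical variable names ($\mu$ and $\sigma_t$ being the forecasted mean and volatility of the log-return, cumRet the running return of the open trade in percent):

% GARCH-based exit rule (sketch)
Q=100*(exp(norminv(0.05,mu,sig_t))-1);   % 5% quantile of the forecasted return [%]
d=cumRet+Q;                              % test statistic
s=-0.5;                                  % fixed stop-loss per trade [%]
closeNow=(d<s);                          % true: forecasted worst case breaches the stop-loss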

5-min Trading with GARCH Exit Strategy

In order to illustrate the whole theory of the GARCH approach and dancing at the edge of uncertainty of the future, we analyze intraday 5-min stock data of Toyota Motor Corporation, traded at the Tokyo Stock Exchange with the ticker TYO:7203. The data file TYO7203m5.dat contains a historical record of trading in the following form:

TYO7203
DATES      TIMES OPEN HIGH LOW  CLOSE VOLUME
07/8/2010  11:00 3135 3140 3130 3130  50900
07/8/2010  11:05 3130 3135 3130 3130  16100
07/8/2010  11:10 3130 3145 3130 3140  163700
...
01/13/2011 16:50 3535 3540 3535 3535  148000
01/13/2011 16:55 3540 3545 3535 3535  1010900

i.e. it is spanned between 8-Jul-2010 11:00 and 13-Jan-2011 16:55. Using Matlab language,

% Forecasting Volatility, Returns and VaR for Intraday 
%   and High-Frequency Traders
%
% (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
 
clear all; close all; clc;
 
%% ---DATA READING AND PREPROCESSING
 
fname=['TYO7203m5.dat'];
% construct FTS object for 5-min data
fts=ascii2fts(fname,'T',1,2,[1 2]);
% sampling
dt=5; % minutes
 
% extract close prices
cp=fts2mat(fts.CLOSE,1);
 
% plot 5-min close prices for entire data set
figure(1);
plot(cp(:,1)-cp(1,1),cp(:,2),'Color',[0.6 0.6 0.6]);
xlabel('Days since 08-Jul-2010 11:00');
ylabel('TYO:7203 Stock Price (JPY)');

let's plot the 5-min close prices for the whole available data set:

Figure1

Now, a moment of concentration. Let’s imagine you are trading this stock based on 5-min close price data points. It is late morning of 3-Dec-2010, you just finished your second cup of cappuccino and a ham-and-cheese sandwich when at 10:55 your trading model sends a signal to open a new long position in TYO:7203 at 11:00.

% ---DATES SPECIFICATION
 
% data begin on/at
date1='08-Jul-2010';
time1='11:00';
% last available data point (used for GARCH model feeding)
dateG='03-Dec-2010';
timeG='12:35';
% a corresponding data index for that time point (idxGARCH)
[idxGARCH,~,~]=find(cp(:,1)==datenum([dateG,' ' ,timeG]));
% enter (trade) long position on 03-Dec-2010 at 11:00am
dateT='03-Dec-2010';
timeT='11:00';
% a corresponding data index for that time point (idx)
[idx,~,~]=find(cp(:,1)==datenum([dateT,' ',timeT]));

You buy $X$ shares at $P_{\rm{open}}=3315$ JPY per share at 11:00 and you wait, observing the asset price every 5 min. The price first goes up, then starts to fall. You look at your watch; it is already 12:35. Following the trading data processing, which your Matlab does for you every 5 minutes,

 % ---DATA PREPROCESSING (2)
 
% extract FTS object spanned between the beginning of data
%  and last available data point
ftsGARCH=fetch(fts,date1,time1,dateG,timeG,1,'d');
cpG=fts2mat(ftsGARCH.CLOSE,1); % extract close prices (vector)
retG=log(cpG(2:end,2)./cpG(1:end-1,2)); % extract log return (vector)
 
figure(1); 
hold on; 
plot(cp(1:idxGARCH,1)-cp(1,1),cp(1:idxGARCH,2),'b') % a blue line in Figure 2
 
figure(2);
% plot close prices
x=1:5:size(cpG,1)*5;
x=x-x(idx+1); % 
plot(x,cpG(:,2),'-ok');
xlim([-20 110]);
ylim([3285 3330]);
ylabel('TYO:7203 Stock Price (JPY)'); 
xlabel('Minutes since 03-Dec-2010 11:00');
% add markers to the plot
hold on; %
plot(-dt,cpG(idx,2),'o','MarkerEdgeColor','g');
hold on; 
plot(0,cpG(idx+1,2),'o','MarkerFaceColor','k','MarkerEdgeColor','k');
% mark -0.5% stoploss line
xs=0:5:110;
stoploss=(1-0.005)*3315; % (JPY)
ys=stoploss*ones(1,length(xs));
plot(xs,ys,':r');

you re-plot the charts, here for the data extracted at 3-Dec-2010 12:35,

TYO:7203

where the blue line marks the currently available price history of the stock (please ignore the grey line as it refers to future prices which we of course don't know at 3-Dec-2010 12:35; it is shown here only to better illustrate the data feeding process in time, used next in the GARCH modeling), and

Fig3

which presents the price history with the time axis adjusted to start at 0 when the trade was entered (black filled marker), as suggested by the model 5 min earlier (green marker). Assuming that in our trading we have a fixed stop-loss level of -0.5% for every single trade, the red dotted line in the figure marks the corresponding stock price of the suggested exit action. As of 12:35, the actual trading price of TYO:7203 is still above the stop-loss. Great! Well, are you sure?

As we have been trading the stock since 11:00, every 5 min we recalculate the GARCH model based on the total data available, updated by each new innovation in the price series. Therefore, as of 12:35, our GARCH modeling in Matlab,

% ---ANALYSIS
 
% GARCH(p,q) parameter estimation
model = garch(3,3) % define model
[fit,VarCov,LogL,Par] = estimate(model,retG)
% extract model parameters
parC=Par.X(1) % omega
parG=Par.X(2) % beta (GARCH)
parA=Par.X(3) % alpha (ARCH)
% estimate unconditional volatility
gamma=1-parA-parG
VL=parC/gamma;
volL=sqrt(VL)
% redefine model with estimatated parameters
model=garch('Constant',parC,'GARCH',parG,'ARCH',parA)
 
% infer 5-min variance based on GARCH model
sig2=infer(model,retG); % vector

runs,

model = 
    GARCH(3,3) Conditional Variance Model:
    --------------------------------------  
    Distribution: Name = 'Gaussian'
               P: 3
               Q: 3
        Constant: NaN
           GARCH: {NaN NaN NaN} at Lags [1 2 3]
            ARCH: {NaN NaN NaN} at Lags [1 2 3]
____________________________________________________________
   Diagnostic Information
 
Number of variables: 7
 
Functions 
Objective:                            @(X)OBJ.nLogLikeGaussian(X,V,E,Lags,...)
Gradient:                             finite-differencing
Hessian:                              finite-differencing (or Quasi-Newton)
 
Constraints
Nonlinear constraints:                do not exist
 
Number of linear inequality constraints:    1
Number of linear equality constraints:      0
Number of lower bound constraints:          7
Number of upper bound constraints:          7
 
Algorithm selected
   sequential quadratic programming
____________________________________________________________
   End diagnostic information
                                                          Norm of First-order
 Iter F-count            f(x) Feasibility  Steplength        step  optimality
    0       8   -2.545544e+04   0.000e+00                           1.392e+09
    1      60   -2.545544e+04   0.000e+00   8.812e-09   8.813e-07   1.000e+02
 
Local minimum found that satisfies the constraints.
 
Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the selected value of the function tolerance,
and constraints are satisfied to within the selected value of the constraint tolerance.

and returns the best estimates of GARCH model’s parameters:

   GARCH(1,1) Conditional Variance Model:
    ----------------------------------------
    Conditional Probability Distribution: Gaussian
 
                                  Standard          t     
     Parameter       Value          Error       Statistic 
    -----------   -----------   ------------   -----------
     Constant    2.39895e-07   6.87508e-08        3.48934
     GARCH{1}            0.9     0.0224136        40.1542
      ARCH{1}           0.05     0.0037863        13.2055
 
fit = 
    GARCH(1,1) Conditional Variance Model:
    --------------------------------------  
    Distribution: Name = 'Gaussian'
               P: 1
               Q: 1
        Constant: 2.39895e-07
           GARCH: {0.9} at Lags [1]
            ARCH: {0.05} at Lags [1]

defining the GARCH(1,1) model of TYO:7203 on 3-Dec-2010 12:35 as follows:
$$
\sigma_t^2 = 2.39895\times 10^{-7} + 0.05r_{t-1}^2 + 0.9\sigma_{t-1}^2 \ \ .
$$ where the long-run volatility equals $\sqrt{V}=0.0022$ ($\sim$0.22%) at $\gamma=1-\alpha-\beta=0.05$. Please note that, due to some computational mind tricks, we allowed ourselves to define the raw template of the GARCH model as GARCH(3,3), which has been adjusted by Matlab's solver down to the resulting GARCH(1,1).
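As a quick sanity check, that number follows directly from the fitted parameters:
$$
\sqrt{\frac{\omega}{1-\alpha-\beta}} = \sqrt{\frac{2.39895\times 10^{-7}}{1-0.05-0.9}} \approx 0.0022 \ .
$$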

Based on the model and the data available till $t-1=$ 12:35, we get a forecast of the $\sigma_t$ value at $t=$ 12:40 to be

% forecast variance 1-step (5-min) ahead, sigma^2_t
sig2_t=forecast(model,1,'Y0',retG,'V0',sig2); % scalar
sig_t=sqrt(sig2_t) % scalar

$$
\sigma_t=0.002 \ (0.2\%) \ .
$$
Plotting the return series for our trade, at 12:35 we have:

% update a plot of 5-min returns
figure(3);
x=1:length(retG); x=x-idx; x=x*5;
plot(x,100*(exp(retG)-1),'ko-');  
xlim([-20 110]);
ylim([-0.8 0.8]);
xlabel('Minutes since 03-Dec-2010 11:00');
ylabel('R_t [%]');

Fig4

Estimating the mean value of the return series by taking into account the last 12 data points (60 min),

% estimate mean return over passing hour (dt*12=60 min)
mu=mean(retG(end-12:end))

we get $\mu=-2.324\times 10^{-4}$ (in log values), which allows us to define the model for the return (in percent) at $t=$ 12:40 to be:
$$
R_t = \mu + \sigma_t\epsilon_t = -0.02324 + 0.2\epsilon \ , \ \ \ \epsilon\sim N(0,1)\ .
$$ The beauty of God's fingerprint denoted by $\epsilon$ we can understand better by running a simple Monte-Carlo simulation and drawing 1000 rvs $r_t=\mu+\sigma_t\epsilon$, which illustrate 1000 possible values of the stock return as predicted by the GARCH model!

% simulate 1000 rvs ~ N(0,sig_t)
r_t=sig_t*randn(1000,1);

One of them could be the one realised at 12:40, but we don't know which one! However, as discussed earlier, we find the 5% quantile $Q$ of the forecasted $N(\mu,\sigma_t)$ distribution and mark this value with a red filled circle in the plot.

tmp=[(length(retG)-idx)*5+5*ones(length(r_t),1) 100*(exp(r_t)-1)];
hold on; plot(tmp(:,1),tmp(:,2),'x');
 
Q=norminv(0.05,mu,sig_t); 
Q=100*(exp(Q)-1);
 
hold on; plot((length(retG)-idx)*5+5,Q,'or','MarkerFaceColor','r');

Including the 1000 possible realisations of $r_t$ (blue markers), the updated Figure 4 now looks like:

Fig5

where we find $Q=-0.3447\%$. The current P&L for the open long position in TYO:7203,

% current P/L for the trade
P_open=3315; % JPY (3-Dec-2010 11:00)
cumRT=100*(cpG(end,2)/P_open-1) %  = r'

returns $r'=-0.3017\%$. Employing the proposed test statistic $d$,

% test statistics (percent)
d=cumRT+Q

we get $d=r'+Q = -0.6464\%$ which, for the first time since opening the position at 11:00, exceeds our stop-loss of $s=-0.5\%$. Therefore, based on the GARCH(1,1) forecast and the logical condition $d\lt s$ being true, our risk-management engine sends out at 12:35 an order to close the position at 12:40. The forecasted closing price,

% forecasted stock price under test (in JPY)
f=P_open*(1+d/100)

is estimated to be equal to 3293.6 JPY, i.e. a -0.6464% loss.

At 12:40 the order is executed at 3305 JPY per share, i.e. with a realized loss of -0.3017%.

The very last question you ask yourself is: using the above GARCH Exit Strategy, was luck in my favor or not? In other words, did the algorithm take the right decision? We can find that out only by letting time flow. As you come back from a super-quick lunch break, you sit down and look at the most recent chart of TYO:7203 at 12:55,

Fig6

and you feel relief, as the price went further down, making the algo's decision the right one.

For reference, the red filled marker denotes the price at which we closed our long position at 12:40 with a loss of -0.3017%, while the open red circle marks the forecasted price under the $d$-statistic as derived at 12:35, added for the completeness of the grand picture and the idea standing behind this post.

Black Swan and Extreme Loss Modeling

When I read the book of Nassim Nicholas Taleb, The Black Swan, my mind was captured by the beauty of extremely rare events and, concurrently, devastated by the message the book sent: the non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities). I rushed to the local library to find out what had been written on the subject. Surprisingly, I discovered the book of Embrechts, Klüppelberg & Mikosch on Modelling Extremal Events for Insurance and Finance, which appeared to me very inaccessible, loaded with heavy mathematical theorems and proofs, and offering a negligible number of practical examples. I left it on the shelf to gather dust for a longer while, until last week, when a fresh drive to decompose the problem came back to me again.

In this article I will try to take you on a short but efficient journey through a part of classical extreme value theory, namely, fluctuations of maxima, and fill out the story with easy-to-follow procedures on how one may run simulations of the occurrences of extremely rare losses in financial return series. Having this experience, I will briefly discuss how the resulting model can be blended into future stock returns.

1. The Theory of Fluctuations of Maxima

Let's imagine we have rich historical data (a time-series) of returns for a specific financial asset or portfolio of assets. A good and easy example is the daily rate of return, $R_i$, for a stock traded e.g. at the NASDAQ Stock Market,
$$
R_t = \frac{P_t}{P_{t-1}} - 1 \ ,
$$ where $P_t$ and $P_{t-1}$ denote the stock price on day $t$ and $t-1$, respectively. The longer the time coverage, the more valuable information can be extracted. Given the time-series of daily stock returns, $\{R_i\}\ (i=1,…,N)$, we can create a histogram, i.e. the distribution of returns. By a rare event or, more precisely here, a rare loss we will refer to the returns placed in the far left tail of the distribution. As an assumption we also agree that $R_1,R_2,…$ is a sequence of iid non-degenerate rvs (random variables) with a common distribution function (df) $F$. We define the fluctuations of the sample maxima as:
$$
M_1 = R_1, \ \ \ M_n = \max(R_1,…,R_n) \ \mbox{for}\ \ n\ge 2 \ .
$$ That simply says that for any time-series $\{R_i\}$, there is one maximum corresponding to the rv (random variable) with the most extreme value. Since the main line of this post is the investigation of maximum losses in return time-series, we are entitled to think about negative values (losses) in terms of maxima (and therefore conduct the theoretical analysis) thanks to the identity:
$$
\min(R_1,…,R_n) = -\max(-R_1,…,-R_n) \ .
$$ The distribution function of maximum $M_n$ is given as:
$$
P(M_n\le x) = P(R_1\le x, …, R_n\le x) = P(R_1\le x)\cdots P(R_n\le x) = F^n(x)
$$ for $x\in\Re$ and $n\in\mbox{N}$.
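As a quick numerical illustration of how strongly that exponent acts, take a return level $x$ which any single day exceeds only 1% of the time, i.e. $F(x)=0.99$, and a year of $n=250$ trading days:
$$
P(M_n\le x) = 0.99^{250} \approx 0.08 \ ,
$$ i.e. there is roughly a 92% chance that the yearly maximum exceeds that level.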

What extreme value theory first 'investigates' are the limit laws for the maxima $M_n$. An important question emerges here: is there, somewhere out there, any distribution which satisfies for all $n\ge 2$ the identity in law
$$
\max(R_1,…,R_n) = c_nR + d_n
$$ for appropriate constants $c_n>0$ and $d_n\in\Re$, or, simply speaking, which classes of distributions $F$ are closed under maxima? The theory next defines the max-stable distribution: a random variable $R$ is called max-stable if it satisfies the aforegoing relation for iid $R_1,…,R_n$. If we assume that $\{R_i\}$ is a sequence of iid max-stable rvs, then:
$$
R = c_n^{-1}(M_n-d_n)
$$ and one can say that every max-stable distribution is a limit distribution for maxima of iid rvs. That brings us to the fundamental Fisher-Tippett theorem, which says that if there exist constants $c_n>0$ and $d_n\in\Re$ such that:
$$
c_n^{-1}(M_n-d_n) \rightarrow H, \ \ n\rightarrow\infty\ ,
$$ then $H$ must be of the type of one of the three so-called standard extreme value distributions, namely: Fréchet, Weibull, and Gumbel. In this post we will only be considering the Gumbel distribution $G$ with the corresponding probability density function (pdf) $g$ given as:
$$
G(z;\ a,b) = e^{-e^{-z}} \ \ \mbox{for}\ \ z=\frac{x-a}{b}, \ x\in\Re
$$ and
$$
g(z;\ a,b) = b^{-1} e^{-z}e^{-e^{-z}} \ .
$$ where $a$ and $b$ are the location parameter and scale parameter, respectively. Having defined the extreme value distribution and being now equipped with a better understanding of the theory, we are ready for a test drive over the daily roads of profits and losses in the trading markets. This is the moment which separates men from boys.
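A classic textbook example (not part of our data analysis) shows how the Gumbel limit arises in practice: for iid unit-exponential rvs one may take $c_n=1$ and $d_n=\ln n$, since
$$
P(M_n-\ln n\le x) = \left(1-\frac{e^{-x}}{n}\right)^n \rightarrow e^{-e^{-x}}, \ \ \ n\rightarrow\infty \ ,
$$ which is exactly the standard Gumbel distribution.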

2. Gumbel Extreme Value Distribution for S&P500 Universe


As usual, we start with an entrée. Our goal is to find the empirical distribution of maxima (i.e. maximal daily losses) for all stocks belonging to the S&P500 universe between 3-Jan-1984 and 8-Mar-2011. There were $K=954$ stocks traded within this period and their data can be downloaded here as a sp500u.zip file (23.8 MB). The full list of stocks' names is provided in the sp500u.lst file. Therefore, performing the data processing in Matlab, first we need to compute a vector storing the daily returns for each stock, and next find the corresponding minimal value $M_n$, where $n$ stands for the length of each return vector:

% Black Swan and Extreme Loss Modeling
%  using Gumbel distribution and S&P500 universe
%
% (c) 2013 QuantAtRisk, by Pawel Lachowicz
 
 
clear all; close all; clc;
tic;
 
%% DATA READING AND PREPROCESSING
 
% read a list of stock names
StockNames=dataread('file',['sp500u.lst'],'%s','delimiter', '\n');
K=length(StockNames); % the number of stocks in the universe
% path to data files
path=['./SP500u'];
 
fprintf('data reading and preprocessing..\n');
for si=1:K
    % --stock name
    stock=StockNames{si};
    fprintf('%4.0f  %7s\n',si,stock);
    % --load data
    n=[path,'/',stock,'.dat'];
    % check for NULL and change to NaN (using 'sed' command
    % in Unix/Linux/MacOS environment)
    cmd=['sed -i ''s/NULL/NaN/g''',' ',n]; [status,result]=system(cmd);
    % construct FTS object for daily data
    FTS=ascii2fts(n,1,2);
    % fill any missing values denoted by NaNs
    FTS=fillts(FTS);
    % extract the close price of the stock
    cp=fts2mat(FTS.CLOSE,0);
    % calculate a vector with daily stock returns and store it in
    % the cell array
    R{si}=cp(2:end)./cp(1:end-1)-1;
end
 
%% ANALYSIS
 
% find the minimum daily return value for each stock
Rmin=[];
for si=1:K
    Mn=min(R{si},[],1);
    Rmin=[Rmin; Mn];
end

Having that ready, we fit the data with the Gumbel function which (as we believe) would describe the distribution of maximal losses in the S&P500 universe best:

% fit the empirical distribution with Gumbel distribution and 
% estimate the location, a, and scale, b, parameter
[par,parci]=evfit(Rmin);
a=par(1); 
b=par(2);
 
% plot the distribution
x=-1:0.01:1;
hist(Rmin,-1:0.01:0);
h=findobj(gca,'Type','patch');
set(h,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.6 0.6 0.6]);
h=findobj(gca,'Type','box');
set(h,'Color','k');
 
% add a plot of Gumbel pdf 
pdf1=evpdf(x,a,b);
y=0.01*length(Rmin)*pdf1;
line(x,y,'color','r'); box on;
xlabel('R_{min}');
ylabel(['f(R_{min}|a,b)']);
text(-1,140,['a = ',num2str(a,3)]);
text(-1,130,['b = ',num2str(b,3)]);
xlim([-1 0]);

The maximum likelihood estimates of the parameters $a$ and $b$, and the corresponding 95% confidence intervals, can be found as follows:

>> [par,parci]=evfit(Rmin)
 
par =
   -0.2265    0.1135
 
parci =
   -0.2340    0.1076
   -0.2190    0.1197

That brings us to a visual representation of our analysis:

gumbel

This is a very important result, communicating that the expected value of the extreme daily loss is equal to about -22.6%. However, the left tail of the fitted Gumbel distribution extends far, up to nearly -98%, although the probability of the occurrence of such a massive daily loss is rather low.

On the other hand, the expected value of -22.6% is surprisingly close to the trading down-movements in the markets on Oct 19, 1987 known as Black Monday when Dow Jones Industrial Average (DJIA) dropped by 508 points to 1738.74, i.e. by 22.61%!

3. Blending Extreme Loss Model with Daily Returns of a Stock

Probably you wonder how we can include the results coming from the Gumbel modeling in the prediction of rare losses in the future daily returns of a particular stock. This can be done pretty straightforwardly by combining the best fitted model (pdf) for extreme losses with the stock's own pdf. To do it properly we need to employ the concept of mixture distributions. Michael B. Miller in his book Mathematics and Statistics for Financial Risk Management provides us with a clear idea of this procedure. In our case, the mixture density function $f(x)$ can be denoted as:
$$
f(x) = w_1 g(x) + (1-w_1) n(x)
$$ where $g(x)$ is the Gumbel pdf, $n(x)$ represents the fitted stock pdf, and $w_1$ marks the weight (influence) of $g(x)$ in the resulting overall pdf.

In order to illustrate this process, let’s select one stock from our S&P500 universe, say Apple Inc. (NASDAQ: AAPL), and fit its daily returns with a normal distribution:

% AAPL daily returns (3-Jan-1984 to 11-Mar-2011)
rs=R{18};
figure(2);
hist(rs,50);
h=findobj(gca,'Type','patch');
set(h,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.6 0.6 0.6]);
h=findobj(gca,'Type','box');
set(h,'Color','k');
% fit the normal distribution and plot the fit
[muhat,sigmahat]=normfit(rs)
x =-1:0.01:1;
pdf2=normpdf(x,muhat,sigmahat);
y=0.01*length(rs)*pdf2;
hold on; line(x,y,'color','r');
xlim([-0.2 0.2]); ylim([0 2100]);
xlabel('R');

aapl

where the red line represents the fit with a mean of $\mu=0.0012$ and a standard deviation $\sigma=0.0308$.

We can obtain the mixture distribution $f(x)$ executing a few more lines of code:

% Mixture Distribution Plot
figure(3);
w1=0.001; % enter your favorite value, e.g. 0.001
w2=1-w1;
pdfmix=w1*(pdf1*0.01)+w2*(pdf2*0.01);  % note: sum(pdfmix)=1 as expected
x=-1:0.01:1;
plot(x,pdfmix);
xlim([-0.6 0.6]);

It is important to note that our modeling is based on the $w_1$ parameter. It can be intuitively understood as follows. Let's say that we choose $w_1=0.01$. That would mean that the Gumbel pdf contributes 1% to the overall pdf. In the following section we will see that if a random variable is drawn from the distribution given by $f(x)$, $w_1=0.01$ simply means (not exactly, but to a sufficient approximation) that there is a 99% chance of drawing this variable from $n(x)$ and only a 1% chance of drawing it from $g(x)$. The dependence of $f(x)$ on $w_1$ is illustrated in the next figure:

w1

It is well visible that a selection of $w_1>0.01$ would be a significant contributor to the left tail, making it fat. This is not what is observed in the empirical distribution of daily returns for AAPL (and in general for the majority of stocks), therefore we rather expect $w_1$ to be much, much smaller than 1%.

4. Drawing Random Variables from Mixture Distribution

A short break between entrée and main course we fill with a sip of red wine. Having a discrete form of $f(x)$, we would like to be able to draw a random variable from this distribution. Again, this is easy too. Following a general recipe, for instance the one given in Chapter 12.2.2 of Philippe Jorion's book Value at Risk: The New Benchmark for Managing Financial Risk, we wish to use the concept of the inverse transform method. In the first step we use the output (a random variable) coming from a pseudo-random generator drawing its rvs from the uniform distribution $U(x)$. This rv is always between 0 and 1, and in the last step it is projected onto the cumulative distribution of our interest, $F(x)$, which in our case corresponds to the cumulative distribution of the $f(x)$ pdf. Finally, we read out the corresponding value on the x-axis: an rv drawn from the $f(x)$ pdf. Philippe illustrates that procedure more intuitively:

Drawing a Random Variable Process
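When $F(x)$ and its inverse are known analytically, the recipe is literally a one-liner; a minimal, purely illustrative sketch in Matlab for a unit exponential distribution:

% inverse transform sampling with an analytical F(x) (illustrative)
u=rand(10000,1);   % uniform rvs in (0,1)
x=-log(1-u);       % x = F^{-1}(u) for F(x)=1-exp(-x), i.e. unit exponential rvs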

This method works smoothly when we know the analytical form of $F(x)$. However, if this is not on the menu, we need to use a couple of technical tricks. First, we calculate $F(x)$ based on $f(x)$. Next, we set a very fine grid for the $x$ domain, and we perform interpolation between the given data points of $F(x)$.

% find cumulative pdf, F(x)
figure(4);
s=0;
x=-1;
F=[];
for i=1:length(pdfmix);
    s=s+pdfmix(i);
    F=[F; x s];
    x=x+0.01;
end
plot(F(:,1),F(:,2),'k')
xlim([-1 1]); ylim([-0.1 1.1]);
% perform interpolation of cumulative pdf using very fine grid
xi=(-1:0.0000001:1);
yi=interp1(F(:,1),F(:,2),xi,'linear'); % use linear interpolation method
hold on; plot(xi,yi);

The second sort of difficulty lies in finding a good match between the rv drawn from the uniform distribution and the approximated value of our $F(x)$. That is why a very fine grid is required, supplemented with some matching techniques. The following code that I wrote deals with this problem pretty efficiently:

% draw a random variable from f(x) pdf: xi(row)
tF2=round((round(100000*yi')/100000)*100000);
RV=[];
for k=1:(252*40)
    notok=false;
    while(~notok)
        tU=round((round(100000*rand)/100000)*100000);
        [r,c,v]=find(tF2==tU);
        if(~isempty(r))
            notok=true;
        end
    end
    if(length(r)>1)
        rv=round(2+(length(r)-2)*rand);
        row=r(rv); 
    else
        row=r(1);
    end
    % therefore, xi(row) is a number represting a rv
    % drawn from f(x) pdf; we store 252*40 of those
    % new rvs in the following matrix:
    RV=[RV; xi(row) yi(row)];
end
% mark all corresponding rvs on the cumulative pdf
hold on; plot(RV(:,1),RV(:,2),'rx');
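As a side note, since $f(x)$ is an explicit two-component mixture, the same rvs could also be drawn more directly by component selection, which makes the interpretation of $w_1$ from Section 3 tangible; a minimal sketch using evrnd and normrnd from the Statistics Toolbox (variable names as defined earlier in this post):

% alternative: draw from f(x)=w1*g(x)+(1-w1)*n(x) by component selection (sketch)
Nrv=252*40;                          % number of simulated daily returns
fromG=rand(Nrv,1)<w1;                % true with probability w1
rv2=normrnd(muhat,sigmahat,Nrv,1);   % start with draws from n(x)
rv2(fromG)=evrnd(a,b,sum(fromG),1);  % replace the selected ones with draws from g(x)

Both routes should reproduce the same distribution; the inverse-transform version above has the advantage of working for any tabulated pdf, not only for an explicit mixture.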

Finally, as the main course, we get and verify the distribution of a large number of new rvs drawn from the $f(x)$ pdf. It is crucial to check whether our generating algorithm provides us with a uniform coverage across the entire $F(x)$ plot,

Fx

where, in order to get more reliable (statistically) results, we generate 10080 rvs which correspond to the simulated 1-day stock returns for 252 trading days times 40 years.

5. Black Swan Detection

A -22% collapse in the markets on Oct 19, 1987 served as the day when the name of the Black Swan event took its birth, or at least was reinforced, in the financial community. Are black swans extremely rare? It depends. If you live, for example, in Perth, Western Australia, you can see a lot of them wandering around. So what defines an extremely rare loss in the sense of a financial event? Let's assume by definition that by a Black Swan event we will understand a daily loss of 20% or more. If so, using the procedure described in this post, we are tempted to pass from the main course to dessert.

Our modeling concentrates on finding the most proper contribution of $w_1g(x)$ to the resulting $f(x)$ pdf. As an outcome of a few runs of Monte Carlo simulations with different values of $w_1$, we find that for $w_1=[0.0010,0.0005,0.0001]$ we detect in the simulations respectively 9, 5, and 2 events (rvs) displaying a one-day loss of 20% or more.
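The counting itself is a one-liner on the simulated sample; a minimal sketch, assuming RV holds the rvs generated by the code above:

% count simulated Black Swan events (one-day loss of 20% or more)
nBS=sum(RV(:,1)<=-0.20)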

Therefore, the simulated daily returns for AAPL, assuming $w_1=0.0001$, generate two Black Swan events in the distribution, i.e. one event per 5040 trading days, or one per 20 years:

Black Swans in future AAPL returns

That result agrees quite well with what has been observed so far, i.e. including Black Monday in 1987 and the Flash Crash in intraday trading on May 6, 2010 for some of the stocks.

Acknowledgements

I am grateful to Peter Urbani from New Zealand for directing my attention towards Gumbel distribution for modeling very rare events.
