Quantitative Analysis, Risk Management, Modelling, Algo Trading, and Big Data Analysis

Upcoming… 2nd Edition of Python for Quants. On Sale in Feb/Mar 2017.

A brand new 2nd Edition of my Python for Quants. Volume I. book is coming out in Feb/Mar 2017! Powered by great feedback from those who purchased the 1st Edition, and supplemented and extended with new content, I’m pleased to deliver the friendliest introduction to Python for Finance for everyone!

What’s new in the 2nd Edition:

  1. New Features of Standard Library of Python 3.6
  2. Strings and Text Manipulation
  3. I/O and Files
  4. Advanced NumPy for Quants: this time we make it more complete incl. linear and non-linear algebra for finance; accelerated computations on GPU; advanced matrix manipulations; essential calculus;
  5. Statistics: more practical applications for financial data analysis powered by SciPy and statsmodels libraries;
  6. pandas for Time-Series: the best introduction to the most powerful library for data handling and processing with advanced application in financial time-series analysis;
  7. Plotting: a smart use of matplotlib, seaborn, and cufflinks for computational aspects of Python and pandas.


Interested in purchasing a book?
Enjoy 20% Off Before the Official Premiere!$^{*}$

Pre-Order Today!



$^{*}$ Prepay today to secure 20% off the regular price of USD 49.00 after the Official Premiere. Pay USD 9.90 now and only USD 29.30 later; your total book price will be USD 39.20. Your Pre-Order Deposit will be saved under your name and entitles you to the offered discount (the deposit is non-refundable). If you wish to upgrade from the 1st Edition, you will be entitled to a 45% discount (your personal data will be cross-verified with our records).


Non-Linear Cross-Bicorrelations between the Oil Prices and Stock Fundamentals

When we talk about correlations in finance, by default, we assume linear relationships between two time-series “co-moving”. In other words, if one time-series changes its values over a given time period, we look for a tight correlation reflected in the other time-series. If found, we say they are correlated. But wait a minute! Is the game always about looking for an immediate feedback between two time-series? Is “correlated” always a synonym for a direct, i.e. linear, response?

Think for a second about the oil prices. If they rise within a quarter, the effect is not the same for every company. Certain firms can be tightly linked to oil/petroleum prices, which affect their product value, distribution, sales, etc., while for other companies, e.g. online services, a change in the crude oil price has a negligible effect. Is it so?

As a data scientist, quantitative (business) analyst, or investment researcher, you need to know that mathematics and statistics deliver ample tools you may use in order to explore how two (or more) financial assets can be somehow related, correlated, or connected. The wording does not need to be precise to reflect the goal of the investigation: correlation, in general.

Correlation can be linear or non-linear. Non-linearity of correlation is somewhat counterintuitive. We are used to thinking in a linear way; that is why it is so hard to readjust our thinking to a non-linear domain. The changes of the oil prices might have a non-negligible effect on the airlines, causing air-ticket prices to rise or fall due to a recalculated oil/petroleum surcharge. The effect may not be immediate. A rise of the oil prices today can be “detected” with a time delay of days, weeks, or even months, i.e. non-linearly correlated.

In this post we analyse both linear and non-linear correlation methods that have been applied so far (and some that have not) in the research on oil price tectonics across the markets. In particular, we first discuss one-factor and multiple linear regression models, and then turn toward non-linearity that can be captured by the cross-correlation and cross-bicorrelation methods. By developing an extensive but easy-to-follow Python code, we analyse a Quandl.com database storing over 620 financial factors per stock (stock fundamentals). Finally, we show how both non-linear mathematical tools can be applied in order to detect non-linear correlations for different financial assets.

1. Linear Correlations

1.1. Oil Prices versus Stock Markets

Correlations between increasing/decreasing oil prices and stock markets have been a subject of investigation for a number of years. The main interest was initially in the search for linear correlations between raw price/return time-series of stocks and oil benchmarks (e.g. WTI Crude Oil). The justification behind such an approach to data analysis seemed to be driven by common sense: an impact of rising oil prices should affect companies dependent on petroleum-based products or services.

The usual presumption states that a decline in the oil price is good news for the economy, especially for net oil importers, e.g. the USA or China. An increase in oil prices usually raises input costs for most businesses and forces consumers to spend more money on gasoline, thereby reducing the corporate earnings of other businesses. The opposite should hold when the oil prices fall.

The oil prices are determined by supply and demand for petroleum-based products. During economic expansion, prices might rise as a result of increased consumption. They might fall as a result of increased production. That pattern has been observed on many occasions so far.

A quantitative approach to measuring the strength of the linear response of the stock market to the oil price can be demonstrated using plain linear regression. If we agree that the S&P 500 Index is a proxy for the US stock market, we should be able to capture the relationship. In the following example, we compare WTI Crude Oil prices with the S&P 500 Index over the 2-year period between Nov 2014 and Nov 2016. First we plot both time-series, and next we apply a one-factor linear regression model,
$$
y(t) = \beta_0 + \beta_1 x(t) + \epsilon(t)
$$ in order to fit the data best:

import numpy as np
import pandas as pd
from scipy import stats
from matplotlib import pyplot as plt
import pandas_datareader.data as web
 
%matplotlib inline
 
grey = 0.7, 0.7, 0.7
 
import warnings
warnings.filterwarnings('ignore')
 
# downloading S&P 500 Index from Yahoo! Finance
sp500 = web.DataReader("^GSPC", data_source='yahoo',
                  start='2014-11-01', end='2016-11-01')['Adj Close']
 
# WPI Crude Oil price-series
# src: https://fred.stlouisfed.org/series/DCOILWTICO/downloaddata
dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d')
wpi = pd.read_csv("DCOILWTICO.csv", parse_dates=['DATE'], date_parser=dateparse)
wpi = wpi[wpi.VALUE != "."]
wpi.VALUE = wpi.VALUE.astype(float)
wpi.index = wpi.DATE
wpi.drop('DATE', axis=1, inplace=True) 
 
# combine both time-series
data = pd.concat([wpi, sp500], axis=1).dropna()  # and remove NaNs, if any
data.columns = ["WPI", "S&P500"]
print(data.head())
              WPI       S&P500
2014-11-03  78.77  2017.810059
2014-11-04  77.15  2012.099976
2014-11-05  78.71  2023.569946
2014-11-06  77.87  2031.209961
2014-11-07  78.71  2031.920044
plt.figure(figsize=(8,8))
# plot time-series
ax1 = plt.subplot(2,1,1)
plt.plot(data.WPI, 'k', label="WPI Crude Oil")
plt.legend(loc=3)
plt.ylabel("WPI Crude Oil [U$/barrel]")
 
# fit one-factor linear model
slope, intercept, r_value, p_value, std_err = stats.linregress(data.WPI,
                                                               data["S&P500"])
 
# best fit: model
xline = np.linspace(np.min(data.WPI), np.max(data.WPI), 100)
line = slope*xline + intercept
 
# plot scatter plot and linear model
ax2 = ax1.twinx()
plt.plot(data["S&P500"], 'm', label="S&P500")
plt.ylabel("S&P 500 Index [U$]")
plt.legend(loc="best")
plt.subplot(2,1,2)
plt.plot(data.WPI, data["S&P500"],'.', color=grey)
plt.plot(xline, line,'--')
plt.xlabel("WPI Crude Oil [U$/barrel]")
plt.ylabel("S&P 500 Index [U$]")
plt.title("R$^2$ = %.2f" % r2_value, fontsize=11)

Our Python code generates:
[Figure: WTI Crude Oil and S&P 500 Index time-series (top) and their scatter plot with the fitted linear model (bottom)]
where we can see that the WTI Crude Oil price vs. S&P 500 linear correlation is weak, with $R^2 = 0.36$ only. This is a static picture.

As one might suspect, correlation is time-dependent. Therefore, it is wiser to check a rolling linear correlation for a given data window size. Below, we assume its length to be 50 days:

rollcorr = pd.rolling_corr(wpi, sp500, 50).dropna()
 
plt.figure(figsize=(10,3))
plt.plot(rollcorr)
plt.grid()
plt.ylabel("Linear Correlation (50 day rolling window)", fontsize=8)

[Figure: 50-day rolling linear correlation between WTI Crude Oil and the S&P 500 Index]
In 2008 Andrea Pescatori measured changes in the S&P 500 and oil prices in the same way as demonstrated above. He noted that the variables occasionally moved in the same direction at the same time, but even then, the relationship remained weak. He concluded that there was no correlation at the 95% confidence level.

The lack of correlation might be attributed to factors like wages, interest rates, industrial metals, plastics, or computer technology that can offset changes in energy costs. On the other hand, corporations might have become increasingly sophisticated at reading the futures markets and much better able to anticipate shifts in factor prices (e.g. a company should be able to switch production processes to compensate for added fuel costs).

No matter what we analyse, either oil prices vs. market indexes or oil prices vs. individual stock trading records, the linear correlation method is only able to measure a 1-to-1 response of $y$ to $x$. The lower the correlation coefficient, the less valid the linear model is as a descriptor of true events and mutual relationships under study.
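As a toy illustration of that limitation (a synthetic sketch of mine, not part of the oil data analysis), consider a response series that simply follows its driver with a 20-day delay; the same-day linear correlation is close to zero, while the correlation computed after re-aligning the series for the lag is nearly perfect:

import numpy as np
import pandas as pd
 
np.random.seed(2)
x = pd.Series(np.random.randn(500))             # driver series (daily changes)
y = x.shift(20) + 0.2*np.random.randn(500)      # response lagged by 20 days, plus noise
 
print("same-day correlation:     %.2f" % x.corr(y))             # close to 0
print("lag-adjusted correlation: %.2f" % x.corr(y.shift(-20)))  # close to 1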

1.2. Multiple Linear Regression Model for Oil Price Changes

In February 2016 Ben S. Bernanke found that a positive correlation of stocks and oil might arise because both are responding to an underlying shift in global demand. Using a simple multiple linear regression model, he made an attempt at explaining the daily changes of the oil prices and concluded that in over 90% they could be driven by changes of commodity prices (copper), the US dollar (spot price), and the 10-yr Treasury interest rate, and in over 95% by extending the model with an inclusion of daily changes in VIX:
$$
\Delta p_{oil, t} = \beta_1 \Delta p_{copp, t} + \beta_2 \Delta p_{10yr, t} + \beta_3 \Delta p_{USD, t} + \beta_4 \Delta p_{VIX, t} + \beta_0
$$ The premise is that commodity prices, the long-term interest rate, and the USD are likely to respond to investors’ perceptions of global and US demand for oil. Additionally, the presence of VIX is in line with the following idea: if investors retreat from commodities and/or stocks during periods of high uncertainty/risk aversion, then shocks to volatility may be another reason for the observed tendency of stocks and oil prices to move together.

The model, inspired by the work of James Hamilton (2014), aims at measuring the effect of demand shifts on the oil market. For the sake of elegance, we can code it in Python as follows. First, we download all necessary price-series to files and process them within pandas DataFrames (more on the quantitative aspects of pandas for quants you will find in my upcoming book Python for Quants. Volume II.).

import numpy as np
import pandas as pd
from scipy import stats
import datetime
from datetime import datetime
from matplotlib import pyplot as plt
import statsmodels.api as sm
 
%matplotlib inline
 
grey = 0.7, 0.7, 0.7
 
import warnings
warnings.filterwarnings('ignore')
 
# Download WPI Crude Oil prices
# src: https://fred.stlouisfed.org/series/DCOILWTICO/downloaddata
dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d')
wpi = pd.read_csv("DCOILWTICO.csv", parse_dates=['DATE'], date_parser=dateparse)
wpi = wpi[wpi.VALUE != "."]
wpi.VALUE = wpi.VALUE.astype(float)
wpi.index = wpi.DATE
wpi.drop('DATE', axis=1, inplace=True)
 
# Copper Futures 
# src: http://www.investing.com/commodities/copper-historical-data
cop = pd.read_csv("COPPER.csv", index_col=["Date"])
cop.index = pd.to_datetime(pd.Series(cop.index), format='%b %d, %Y')
 
# Nominal interest rate on 10-year Treasury bonds
# src: https://fred.stlouisfed.org/series/DGS10/
tb = pd.read_csv('DGS10.csv')
tb = tb[tb.DGS10 != "."]
tb.DGS10 = tb.DGS10.astype(float)
tb['DATE'] =  pd.to_datetime(tb['DATE'], format='%Y-%m-%d')
tb.index = tb.DATE
tb.drop('DATE', axis=1, inplace=True)
 
# Trade Weighted U.S. Dollar Index
# src: https://fred.stlouisfed.org/series/DTWEXM
usd = pd.read_csv('DTWEXM.csv')
usd = usd[usd.DTWEXM != "."]
usd.DTWEXM = usd.DTWEXM.astype(float)
usd['DATE'] =  pd.to_datetime(usd['DATE'], format='%Y-%m-%d')
usd.index = usd.DATE
usd.drop('DATE', axis=1, inplace=True)
 
# CBOE Volatility Index: VIX© (VIXCLS)
# src: https://fred.stlouisfed.org/series/VIXCLS/downloaddata
vix = pd.read_csv('VIXCLS.csv')
vix = vix[vix.VALUE != "."]
vix.VALUE = vix.VALUE.astype(float)
vix['DATE'] =  pd.to_datetime(vix['DATE'], format='%Y-%m-%d')
vix.index = vix.DATE
vix.drop('DATE', axis=1, inplace=True)

A plain use of the pd.read_csv function does not always work the way we expect, therefore some extra steps need to be implemented in order to convert the .csv file content into a DataFrame, e.g. a correct parsing of dates, conversion of prices from string to float, and replacement of the index with the data included in the ‘DATE’ column followed by its removal.
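Since the same cleaning pattern repeats for every FRED series above, it can be wrapped in a small helper function; a minimal sketch (the name load_fred_csv and its value_col argument are mine, not part of the original listing):

import pandas as pd
 
def load_fred_csv(filename, value_col="VALUE"):
    # read a FRED-style csv: drop missing '.' rows, cast quotes to float,
    # and index the frame by its DATE column
    df = pd.read_csv(filename)
    df = df[df[value_col] != "."]                             # drop missing observations
    df[value_col] = df[value_col].astype(float)               # string -> float
    df.index = pd.to_datetime(df["DATE"], format="%Y-%m-%d")  # calendar-date index
    return df.drop("DATE", axis=1)
 
# usage, equivalent to the manual steps above:
# wpi = load_fred_csv("DCOILWTICO.csv")
# tb  = load_fred_csv("DGS10.csv", value_col="DGS10")
# usd = load_fred_csv("DTWEXM.csv", value_col="DTWEXM")
# vix = load_fred_csv("VIXCLS.csv")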

As a side note, in order to download the copper data, we need to manually mark the whole table on the website, paste it into Excel or Mac’s Numbers, trim it if required, and save it down to a .csv file. A problem might appear because the dates are given in the “Nov 26, 2016” format. For this case, the following trick in Python saves a lot of time:

from datetime import datetime
expr = 'Nov 26, 2016'
datetime.strptime(expr, '%b %d, %Y')  # returns datetime.datetime(2016, 11, 26, 0, 0)

which justifies the use of the %b format descriptor (see Python’s datetime documentation for more on date formats and parsing).

pandas DataFrames are a fast and convenient way to concatenate multiple time-series with their indexes set to calendar dates. The following step ensures that all five price-series have data points on the same dates (if given), and if any data point (for any time-series) is missing (NaN), the whole row is removed from the resulting DataFrame df thanks to the .dropna() call:

df = pd.concat([wpi, cop, tb, usd, vix], join='outer', axis=1).dropna()
df.columns = ["WPI", "Copp", "TrB", "USD", "VIX"]
 
print(df.head(10))
              WPI   Copp   TrB      USD    VIX
2007-11-15  93.37  3.082  4.17  72.7522  28.06
2007-11-16  94.81  3.156  4.15  72.5811  25.49
2007-11-19  95.75  2.981  4.07  72.7241  26.01
2007-11-20  99.16  3.026  4.06  72.4344  24.88
2007-11-21  98.57  2.887  4.00  72.3345  26.84
2007-11-23  98.24  2.985  4.01  72.2719  25.61
2007-11-26  97.66  3.019  3.83  72.1312  28.91
2007-11-27  94.39  2.958  3.95  72.5061  26.28
2007-11-28  90.71  3.001  4.03  72.7000  24.11
2007-11-29  90.98  3.063  3.94  72.6812  23.97

Given all the price-series and our model in mind, we are interested in running the regression on the daily changes in the natural logarithm of the crude oil price, copper price, 10-yr interest rate, USD spot price, and VIX. The use of the log is a common practice in finance and economics. In contrast to the daily percent change, we derive the log returns in Python in the following way:

np.random.seed(7)
df = pd.DataFrame(100 + np.random.randn(100).cumsum(), columns=['price'])
df['pct_change'] = df.price.pct_change()
df['log_ret'] = np.log(df.price) - np.log(df.price.shift(1))
 
print(df.head())
        price  pct_change   log_ret
0  101.690526         NaN       NaN
1  101.224588   -0.004582 -0.004592
2  101.257408    0.000324  0.000324
3  101.664925    0.004025  0.004016
4  100.876002   -0.007760 -0.007790

Having that, we continue coding our main model:

headers = df.columns
 
dlog = pd.DataFrame()  # log returns DataFrame
 
for j in range(df.shape[1]):
    price = df.iloc[:, j]
    dl = np.log(price) - np.log(price.shift(1))
    dlog[j] = dl
 
dlog.dropna(inplace=True)
dlog.columns = headers
print(dlog.head())
                 WPI      Copp       TrB       USD       VIX
2007-11-16  0.015305  0.023727 -0.004808 -0.002355 -0.096059
2007-11-19  0.009866 -0.057047 -0.019465  0.001968  0.020195
2007-11-20  0.034994  0.014983 -0.002460 -0.003992 -0.044417
2007-11-21 -0.005968 -0.047024 -0.014889 -0.001380  0.075829
2007-11-23 -0.003353  0.033382  0.002497 -0.000866 -0.046910

and run the multiple linear regression model on the in-sample data, here selected as the 2-year period between Nov 2012 and Nov 2014:

dlog0 = dlog.copy()
 
# in-sample data selection
dlog = dlog[(dlog.index > "2012-11-01") & (dlog.index < "2014-11-01")]
 
# define input data
y = dlog.WPI.values
x = [dlog.Copp.values, dlog.TrB.values, dlog.USD.values, dlog.VIX.values ]
 
# Multiple Linear Regression using 'statsmodels' library
def reg_m(y, x):
    ones = np.ones(len(x[0]))
    X = sm.add_constant(np.column_stack((x[0], ones)))
    for ele in x[1:]:
        X = sm.add_constant(np.column_stack((ele, X)))
    results = sm.OLS(y, X).fit()
    return results
 
# display a summary of the regression
print(reg_m(y, x).summary())
                           OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.105
Model:                            OLS   Adj. R-squared:                  0.098
Method:                 Least Squares   F-statistic:                     14.49
Date:                Tue, 29 Nov 2016   Prob (F-statistic):           3.37e-11
Time:                        11:38:44   Log-Likelihood:                 1507.8
No. Observations:                 499   AIC:                            -3006.
Df Residuals:                     494   BIC:                            -2985.
Df Model:                           4                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1            -0.0303      0.008     -3.800      0.000        -0.046    -0.015
x2            -0.3888      0.178     -2.180      0.030        -0.739    -0.038
x3             0.0365      0.032      1.146      0.252        -0.026     0.099
x4             0.2261      0.052      4.357      0.000         0.124     0.328
const      -3.394e-05      0.001     -0.064      0.949        -0.001     0.001
==============================================================================
Omnibus:                       35.051   Durbin-Watson:                   2.153
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               75.984
Skew:                          -0.392   Prob(JB):                     3.16e-17
Kurtosis:                       4.743   Cond. No.                         338.
==============================================================================
 
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is 
    correctly specified.

A complete model is given by the following parameters:

par = reg_m(y, x).params
print(par)
[ -3.02699345e-02  -3.88784262e-01   3.64964420e-02   2.26134337e-01
  -3.39418282e-05]

or more formally as:
$$
\Delta p_{oil, t} = 0.2261 \Delta p_{copp, t} + 0.0365 \Delta p_{10yr, t} -0.3888 \Delta p_{USD, t} -0.0303 \Delta p_{VIX, t}
$$ The adjusted $R^2$ is near zero, and the p-value of $\beta_2$ (the 10-yr rate coefficient) is greater than 5%, i.e. that term is not statistically significant.
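Rather than reading significance off the printed summary, the coefficient p-values can also be inspected programmatically from the results object returned by reg_m; a short sketch, assuming the in-sample y and x defined above are still in scope (the label list reflects the column order produced by reg_m):

results = reg_m(y, x)
labels = ["VIX", "USD", "TrB", "Copp", "const"]  # column order built by reg_m
for name, coef, pval in zip(labels, results.params, results.pvalues):
    verdict = "significant" if pval < 0.05 else "not significant"
    print("%5s  coef = %10.4f   p-value = %.3f   (%s at 5%%)" % (name, coef, pval, verdict))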

In order to use our model for “future” oil price prediction based on price movements in the commodity (copper), long-term interest rate, USD spot price, and the VIX index, we generate an out-of-sample data set and construct the estimate in the following way:

# out-of-sample log return Dataframe
dlog_oos = dlog0[dlog0.index >= '2014-11-01']
 
# what is the last WPI Crude Oil price in in-sample
last = df[df.index == "2014-10-31"].values[0][0]  # 80.53 USD/barrel
 
oilreg = pd.DataFrame(columns=["Date", "OilPrice"])  # regressed oil prices
par = reg_m(y, x).params  # parameters from regression
 
for i in range(dlog_oos.shape[0]):
    x1 = dlog_oos.iloc[i]["VIX"]
    x2 = dlog_oos.iloc[i]["USD"]
    x3 = dlog_oos.iloc[i]["TrB"]
    x4 = dlog_oos.iloc[i]["Copp"]
    w = par[0]*x1 + par[1]*x2 + par[2]*x3 + par[3]*x4 + par[4]
    if(i==0):
        oilprice = np.exp(w) * last
    else:
        oilprice = np.exp(w) * oilprice
    oilreg.loc[len(oilreg)] = [dlog_oos.index[i], oilprice]
 
oilreg.index = oilreg.Date
oilreg.drop('Date', axis=1, inplace=True)

where the oilreg DataFrame stores the dates and regressed oil prices. Visualisation of the results

plt.figure(figsize=(10,5))
plt.plot(df.WPI, color=grey, label="WPI Crude Oil")
plt.plot(oilreg, "r", label="Predicted Oil Price")
plt.xlim(['2012-01-01', '2016-11-01'])  # in-sample & out-of-sample
plt.ylim([20, 120])
plt.ylabel("USD/barrel")
plt.legend(loc=3)
plt.grid()

delivers
[Figure: WTI Crude Oil price vs. the out-of-sample predicted oil price, 2012-2016]
Bernanke’s multiple regression model points at the following idea: when stock traders respond to a change in oil prices, they do so not necessarily because the oil movement is consequential in itself. Our finding (for different data sets than Bernanke’s) points at a lack of strong dependency on VIX, and thus on traders’ activity. Again, as in the case of the one-factor regression model discussed in Section 1.1, the stability of the underlying correlations with all four considered factors remains statistically variable in time.

2. Non-Linear Correlations

So far we have understood how fragile the quantification and interpretation of the linear correlation between two time-series can be. We agreed on its time-dependent character and on how the data sample size may influence the derived results. Linear correlation is a tool that captures a 1-to-1 feedback between two data sets. What if the response is lagged? We need to look for other tools good enough for this challenge. In this Section we will analyse two of them.

2.1. Cross-Correlation and Cross-Bicorrelation

If a change of one quantity has a noticeable effect on another one, this can be observed with or without a time delay. In time-series analysis we talk about one time-series being lagged (or led) behind (before) the other one. Since an existing lag (lead) is not observed directly (as a linear feedback), it introduces a non-linear relationship between the two time-series.

A degree of non-linear correlation can be measured by the application of two independent methods, namely cross-correlation and cross-bicorrelation, designed to examine the data dependence between any two (three) values led/lagged by $r$ ($s$) periods. By a period one should understand the sampling frequency of the underlying time-series, for example a quarter, i.e. a 3-month period.

Let $x(t) \equiv \{ x_t \} = \{ x_1, x_2, …, x_N \}$ for $t=1,…,N$ represent, e.g., the WTI Crude Oil price-series, and $\{ y_t \}$ the time-series of a selected quantity. For two time-series, $x(t)$ and $y(t)$, the cross-correlation is defined as:
$$
C_{xy}(r) = (N-r)^{-1} \sum_{i=1}^{N-r} x(t_i) y(t_i + r)
$$ whereas the cross-bicorrelation is usually assumed as:
$$
C_{xyy}(r,s) = (N-m)^{-1} \sum_{i=1}^{N-m} x(t_i) y(t_i + r) y(t_i + s)
$$ where $m=\max(r,s)$ (Brooks & Hinich 1999). Both formulae measure the inner product between the individual values of two time-series. In case of cross-correlation $C_{xy}(r)$ it is given by $\langle x(t_i) y(t_i + r) \rangle$ where the value of $y(t)$ is lagged by $r$ periods. The summation, similarly as in the concept of the Fourier transform, picks up the maximum power for a unique set of lags $r$ or $(r,s)$ for $C_{xy}(r)$ and $C_{xyy}(r,s)$, respectively. The use of cross-correlation method should return the greatest value of $C_{xy}(r)$ among all examined $r$’s. Here, $1 \le r < N$ in a first approximation. It reveals a direct non-linear correlation between $x(t_i)$ and $y(t_i + r)$ $\forall i$.

The concept of cross-bicorrelation goes one step further and examines the relationship between $x(t_i)$ and both $y(t_i + r)$ and $y(t_i + s)$, e.g. how the current oil price at time $t_i$, $x(t_i)$, affects $y$ lagged by $r$ and $s$ independent time periods. Note that cross-bicorrelation also allows for the $C_{xxy}(r,s)$ form, i.e. if we need to look for a relationship between the current and lagged (by $r$) values of the $x$ time-series and the $y$ time-series lagged by $s$, as sketched below.
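The $C_{xxy}(r,s)$ variant is not used further in this post, but a direct transcription into Python, following the same convention as the Cxy and Cxyy functions implemented in Section 2.3 below, might look like this (a sketch only):

import numpy as np
 
# cross-bicorrelation of the C_xxy(r,s) form: current x, x lagged by r, y lagged by s
def Cxxy(x, y, r, s, N):
    z = 0
    m = np.max([r, s])
    for i in range(0, N-m-1):
        z += x[i] * x[i+r] * y[i+s]
    z /= (N-m)
    return z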

2.2. Statistical Tests for Cross-Correlation and Cross-Bicorrelation

The application of both tools to small data samples ($N < 50$) does not guarantee statistically significant non-linear correlations every single time the computations are done. The need for statistical testing of the results returned by both cross-correlation and cross-bicorrelation methods has been addressed by Hinich (1996) and Brooks & Hinich (1999). For two time-series with weak stationarity, the null hypothesis $H_0$ of the test is that the two series are independent pure white noise processes, against the alternative hypothesis $H_1$ that some cross-covariances, $E[x(t_i) y(t_i + r)]$, or cross-bicovariances, $E[x(t_i) y(t_i + r) y(t_i + s)]$, are nonzero. The test statistics, respectively for cross-correlation and cross-bicorrelation, are given by:
$$
H_{xy}(N) = \sum_{r=1}^{L} (N-L) C_{xy}^2(r) = (N-L) \sum_{r=1}^{L} C_{xy}^2(r)
$$ and
$$
H_{xyy}(N) = {\sum_{s=-L}^{L}}' \sum_{r=1}^{L} (N-m)\, C_{xyy}^2(r,s) \ \ \ \mbox{(the primed sum excludes } s = -1, 0, 1 \mbox{)}
$$ where it has been shown by Hinich (1996) that:
$$
H_{xy}(N) \sim \chi^2_L
$$ and
$$
H_{xyy}(N) \sim \chi^2_{(L-1)(L/2)}
$$ where $\sim$ means asymptotically distributed with the corresponding number of degrees of freedom for $N \rightarrow \infty$, and $m = \max(r, s)$ as previously. For both the $H_{xy}(N)$ and $H_{xyy}(N)$ statistics, the number of lags $L$ is specified as $L = N^b$ where $0 < b < 0.5$. Based on the results of Monte-Carlo simulations, Hinich & Patterson (1995) recommended the use of $b=0.4$ in order to maximise the power of the test while ensuring a valid approximation to the asymptotic theory. Further down, we compute the $H_{xy}(N)$ and $H_{xyy}(N)$ statistics for every cross-correlation and cross-bicorrelation and regard the result as significant at the 90% confidence level ($\alpha = 0.1$) if: $$ \mbox{p-value} < \alpha $$ where $$ \mbox{p-value} = 1 - \mbox{Pr} \left[ \chi^2_{\mbox{dof}} < H_{x(y)y} | H_1 \right] \ . $$ Given that, any resultant values of cross-correlations and cross-bicorrelations can be assumed statistically significant if the p-value $< \alpha$. A recommended number of lags to be investigated is $L = N^{0.4}$ where $N$ is the length of the $x(t)$ and $y(t)$ time-series. Employing floor rounding for $L$, we obtain the following relationship between $N$ and $L$ for sample sizes of $N \le 50$:

L = []
for N in range(1, 51):
    L.append(np.floor(N**0.4))
 
fig, ax = plt.subplots(figsize=(8,3))
plt.plot(np.arange(1, 51), L, '.')
plt.xlabel("N")
plt.ylabel("L")
ax.margins(x=0.025, y=0.05) # add extra padding
ax.tick_params(axis='y',which='minor',left='off') # without minor yticks
plt.grid(b=True, which='minor', color='k', linestyle=':', alpha=0.3)
plt.minorticks_on()

[Figure: number of lags L = floor(N^0.4) as a function of the sample size N]

2.3. Cross-Correlation and Cross-Bicorrelation in Python

Let’s translate cross-correlation into Python and run a simple test on some sample time-series:

# cross-correlation
def Cxy(x, y, r, N):
    z = 0
    for i in range(0, N-r-1):
        z += x[i] * y[i+r]
    z /= (N-r)
    return z
 
# test statistic for cross-correlation
def Hxy(x, y, L, N):
    z = 0
    for r in range(1, L+1):
        z += Cxy(x, y, r, N)**2
    z *= (N - L)   
    return z

where we defined two functions deriving $C_{xy}(r)$ and $H_{xy}(N)$, respectively. Now, let’s consider some simplified time-series of $N=16$, both normalised to be $\sim N(0,1)$:

# sample data
y = np.array([0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0])
x = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
 
# normalise such that x, y ~ N(0,1)
x = (x - np.mean(x))/np.std(x, ddof=1)
y = (y - np.mean(y))/np.std(y, ddof=1)

As one can see, we expect a strong cross-correlation at a lag of $r = 2$ (the strongest), with $r = 1$ detectable too. We verify the overall statistical significance of the cross-correlations for the given time-series in the following way:

N = len(x)
L = int(np.floor(N**0.4))
 
for r in range(1, L+1):
    print("r =%2g\t Cxy(r=%g) = %.4f" % (r, r, Cxy(x, y, r)))
 
dof = L    
pvalue = 1 - scipy.stats.chi2.cdf(Hxy(x, y, L, N), dof)
alpha = 0.1
 
print("Hxy(N) = %.4f" % Hxy(x, y, L, N))
print("p-value = %.4f" % pvalue)
print("H0 rejected in favour of H1 at %g%% c.l.: %s" % 
      (100*(1-alpha), pvalue<alpha))

confirming both our expectations and the significance of the detected cross-correlations at the 90% confidence level:

r = 1	 Cxy(r=1) = 0.0578
r = 2	 Cxy(r=2) = 0.8358
r = 3	 Cxy(r=3) = -0.3200
Hxy(N) = 10.4553
p-value = 0.0151
H0 rejected in favour of H1 at 90% c.l.: True

For cross-bicorrelation we have:

# cross-bicorrelation
def Cxyy(x, y, r, s, N):
    z = 0
    m = np.max([r, s])
    for i in range(0, N-m-1):
        z += x[i] * y[i+r] * y[i+s]
    z /= (N-m)
    return z
 
# test statistic for cross-bicorrelation
def Hxyy(x, y, L, N):
    z = 0
    for s in range(2, L+1):
        for r in range(1, L+1):
            m = np.max([r, s])
            z += (N-m) * Cxyy(x, y, r, s, N)**2
    return z    
 
 
# sample data
y = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0])
x = np.array([0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0])
 
# normalise such that x, y ~ N(0,1)
x = (x - np.mean(x))/np.std(x, ddof=1)
y = (y - np.mean(y))/np.std(y, ddof=1)
 
N = len(x)  # len(x) should be equal to len(y)
L = int(np.floor(N**0.4))
 
for r in range(0, L+1):
    for s in range(r+1, L+1):
        print("r =%2g, s =%2g\t Cxyy(r=%g, s=%g) = %.5f" % 
             (r, s, r, s, Cxyy(x, y, r, s, N)))
 
dof = (L-1)*(L/2)
pvalue = 1 - scipy.stats.chi2.cdf(Hxy(x, y, L, N), dof)
alpha = 0.1
 
print("Hxyy(N) = %.4f" % Hxyy(x, y, L, N))
print("p-value = %.4f" % pvalue)
print("H0 rejected in favour of H1 at %g%% c.l.: %s" % 
      (100*(1-alpha), pvalue<alpha))

where for the new sample data we expect to detect the two strongest cross-bicorrelations for the lag pairs $(r,s)$ equal to $(2,3)$ and $(1,2)$, respectively. Running the code we indeed obtain:

r = 0, s = 1	 Cxyy(r=0, s=1) = -0.18168
r = 0, s = 2	 Cxyy(r=0, s=2) = -0.23209
r = 0, s = 3	 Cxyy(r=0, s=3) = 0.05375
r = 1, s = 2	 Cxyy(r=1, s=2) = 0.14724
r = 1, s = 3	 Cxyy(r=1, s=3) = -0.22576
r = 2, s = 3	 Cxyy(r=2, s=3) = 0.18276
Hxyy(N) = 5.3177
p-value = 0.0833
H0 rejected in favour of H1 at 90% c.l.: True

With these two examples we showed both the cross-correlation and cross-bicorrelation methods in action. We also pointed at the important dependency between the length of the time-series and the expected number of lags to be examined: the longer the time-series, the higher the number of degrees of freedom.

Given that state of play, there exist two independent and appealing ways to apply cross-correlation and cross-bicorrelation in finance: (1) the first one regards the examination of time-series with a small number of data points (e.g. $N \le 50$); (2) alternatively, if we deal with long ($N \ge 1000$) time-series, the common practice is to divide them into $k$ chunks (non-overlapping windows) of length $N_i$ of at least $50$ to $100$ points, where $i=1,…,k$, and to compute the cross-correlation and cross-bicorrelation metrics for every data window separately, as sketched below. In consequence, within this approach, we gain an opportunity to report on time-period dependent data segments with statistically significant (or not) non-linear correlations! (see e.g. Coronado et al. 2015).
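A minimal sketch of the second (chunking) approach, reusing the Cxy and Hxy functions defined in Section 2.3 above (the helper name rolling_Hxy_pvalues is mine):

import numpy as np
import scipy.stats
 
def rolling_Hxy_pvalues(x, y, window=50):
    # split two long, aligned series into non-overlapping windows and return
    # the p-value of the cross-correlation test statistic for each window
    pvalues = []
    for start in range(0, len(x) - window + 1, window):
        xw = np.asarray(x[start:start+window], dtype=float)
        yw = np.asarray(y[start:start+window], dtype=float)
        # normalise each window to ~N(0,1)
        xw = (xw - np.mean(xw)) / np.std(xw, ddof=1)
        yw = (yw - np.mean(yw)) / np.std(yw, ddof=1)
        N = window
        L = int(np.floor(N**0.4))
        pvalues.append(1 - scipy.stats.chi2.cdf(Hxy(xw, yw, L, N), L))
    return pvalues

Windows with a p-value below $\alpha$ would then flag the time periods in which a non-linear correlation is present.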

In the Section that follows, we solely focus on the application of the first approach. Rewarding in many aspects if you’re open-minded.

2.4. Detecting Non-Linearity between Oil Prices and Stock Fundamentals

Since looking for a direct correlation between crude oil prices and stocks can be regarded as the first approach to the examination of mutual relationships, it’s not sexy; not any more. With cross-correlation and cross-bicorrelation methods you can step up and break the boredom.

The stock fundamentals constitute a broad space of parameters that truly deserve to be cross-correlated with the oil prices. It is much more intuitive that any (if any) effect of rising or falling crude oil prices may affect different companies with various time lags. Moreover, for some businesses we might detect a non-linear relationship existing only between oil and, e.g., the company’s revenues, whereas for others, changing oil prices might affect more than one factor. If the pattern is indeed present, then with the application of the cross-correlation and cross-bicorrelation methods you gain a brand new tool for data analysis. Let’s have a look at a practical example.

As previously, we will need both the WTI Crude Oil price-series (see Section 1.1) and access to a large database containing companies’ fundamentals over a number of quarters or years. For the latter, let me use a demo version of Quandl.com‘s Zacks Fundamentals Collection C (download the Zacks Fundamentals Expanded ZACKS-FE.csv file). It contains 852 rows of data for 30 selected companies traded at US stock markets (NYSE, NASDAQ, etc.). This sample collection covers data back to 2011, i.e. ca. 22 to 23 quarterly reported financials per stock. There are over 620 fundamentals to select from; their full list can be found in the ZFC Definitions csv file. Let’s put it all into a new Python listing:

# Non-Linear Cross-Bicorrelations between the Oil Prices and Stock Fundamentals
# (c) 2016 QuantAtRisk.com, by Pawel Lachowicz
 
import numpy as np
import pandas as pd
import scipy.stats
import datetime
from matplotlib import pyplot as plt
 
%matplotlib inline
 
grey = 0.7, 0.7, 0.7
 
import warnings
warnings.filterwarnings('ignore')
 
 
# WTI Crude Oil price-series
# src: https://fred.stlouisfed.org/series/DCOILWTICO/downloaddata
dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d')
wpi = pd.read_csv("DCOILWTICO.csv", parse_dates=['DATE'], date_parser=dateparse)
wpi = wpi[wpi.VALUE != "."]
wpi.VALUE = wpi.VALUE.astype(float)
 
# Zacks Fundamentals Collection C (Fundamentals Expanded)
# src: https://www.quandl.com/databases/ZFC
df = pd.read_csv('ZACKS-FE.csv')
 
col = ['Ticker', 'per_end_date', 'per_code', 'Revenue', "EBIT", 
          "GrossProfit", "CAPEX"]
data = pd.DataFrame(columns=col)  # create an empty DataFrame
 
tickers = ["XOM", "WMT", "BA", "AAPL", "JNJ"]
 
for i in range(df.shape[0]):
    ticker = df.iloc[i]["ticker"]
    pc = df.iloc[i]["per_code"]
    # extract only reported fundamentals for quarters "QR0 and all "QR-"
    # in total 22 to 23 rows of data per stock
    if((pc[0:3] == "QR-") | (pc[0:3] == "QR0")) and (ticker in tickers):
        data.loc[len(data)] = [df.iloc[i]["ticker"],
                               df.iloc[i]["per_end_date"],
                               df.iloc[i]["per_code"],
                               df.iloc[i]["tot_revnu"],
                               df.iloc[i]["ebit"],
                               df.iloc[i]["gross_profit"],
                               df.iloc[i]["cap_expense"]] 
 
print(data.head(25))

where we created a new DataFrame, data, storing the Total Revenue, EBIT, Gross Profit (revenue from operations less associated costs), and CAPEX (capital expenditure) for a sample of 5 stocks (XOM, WMT, BA, AAPL, and JNJ) out of the 30 available in that demo collection. Our goal here is just to show a path for a possible analysis of all those numbers, keeping in mind an encouragement to build a bigger tool for stock screening based on the detection of non-linear correlations amongst selected fundamentals and oil prices (a sketch of such a screener closes this post).

The first 25 rows of the data DataFrame are:

   Ticker per_end_date per_code  Revenue     EBIT  GrossProfit    CAPEX
0    AAPL   2011-03-31    QR-22  24667.0   7874.0      10218.0  -1838.0
1    AAPL   2011-06-30    QR-21  28571.0   9379.0      11922.0  -2615.0
2    AAPL   2011-09-30    QR-20  28270.0   8710.0      11380.0  -4260.0
3    AAPL   2011-12-31    QR-19  46333.0  17340.0      20703.0  -1321.0
4    AAPL   2012-03-31    QR-18  39186.0  15384.0      18564.0  -2778.0
5    AAPL   2012-06-30    QR-17  35023.0  11573.0      14994.0  -4834.0
6    AAPL   2012-09-30    QR-16  35966.0  10944.0      14401.0  -8295.0
7    AAPL   2012-12-31    QR-15  54512.0  17210.0      21060.0  -2317.0
8    AAPL   2013-03-31    QR-14  43603.0  12558.0      16349.0  -4325.0
9    AAPL   2013-06-30    QR-13  35323.0   9201.0      13024.0  -6210.0
10   AAPL   2013-09-30    QR-12  37472.0  10030.0      13871.0  -8165.0
11   AAPL   2013-12-31    QR-11  57594.0  17463.0      21846.0  -1985.0
12   AAPL   2014-03-31    QR-10  45646.0  13593.0      17947.0  -3367.0
13   AAPL   2014-06-30     QR-9  37432.0  10282.0      14735.0  -5745.0
14   AAPL   2014-09-30     QR-8  42123.0  11165.0      16009.0  -9571.0
15   AAPL   2014-12-31     QR-7  74599.0  24246.0      29741.0  -3217.0
16   AAPL   2015-03-31     QR-6  58010.0  18278.0      23656.0  -5586.0
17   AAPL   2015-06-30     QR-5  49605.0  14083.0      19681.0  -7629.0
18   AAPL   2015-09-30     QR-4  51501.0  14623.0      20548.0 -11247.0
19   AAPL   2015-12-31     QR-3  75872.0  24171.0      30423.0  -3612.0
20   AAPL   2016-03-31     QR-2  50557.0  13987.0      19921.0  -5948.0
21   AAPL   2016-06-30     QR-1  42358.0  10105.0      16106.0  -8757.0
22   AAPL   2016-09-30      QR0  46852.0  11761.0      17813.0 -12734.0
23     BA   2011-03-31    QR-22  14910.0   1033.0       2894.0   -417.0
24     BA   2011-06-30    QR-21  16543.0   1563.0       3372.0   -762.0

An exemplary visualisation of both data sets for a selected company (here: Exxon Mobil Corporation, XOM, engaged in the exploration and production of crude oil and natural gas, the manufacturing of petroleum products, and the transportation and sale of crude oil, natural gas, and petroleum products) we obtain with:

fig, ax1 = plt.subplots(figsize=(10,5))
plt.plot(wpi.DATE, wpi.VALUE, color=grey, label="WPI Crude Oil (daily)")
plt.grid(True)
plt.legend(loc=1)
plt.ylabel("USD per barrel")
 
selection = ["XOM"]
 
ax2 = ax1.twinx()
plt.ylabel("1e6 USD")
for ticker in selection:
    tmp = data[data.Ticker == ticker]
    lab = ticker
    tmp['per_end_date'] =  pd.to_datetime(tmp['per_end_date'], 
                                                        format='%Y-%m-%d')
    plt.plot(tmp.per_end_date, tmp.Revenue, '.-', label=lab + ": Revenue")
    plt.plot(tmp.per_end_date, tmp.EBIT, '.-g', label=lab + ": EBIT")
    plt.plot(tmp.per_end_date, tmp.GrossProfit, '.-m', label=lab + ": Gross Profit")
    plt.plot(tmp.per_end_date, tmp.CAPEX, '.-r', label=lab + ": CAPEX")
    plt.legend(loc=3)

[Figure: WTI Crude Oil daily prices vs. XOM quarterly Revenue, EBIT, Gross Profit, and CAPEX]
Every company has different periods of time (per_end_date) at which it reports its financials. That is why we need to make sure that, in the process of comparing selected fundamentals with oil prices, the oil price is picked up around the date defined by per_end_date. Let me use a $\pm 5$ day average around those points:

# cross-correlation
def Cxy(x, y, r, N):
    z = 0
    for i in range(0, N-r-1):
        z += x[i] * y[i+r]
    z /= (N-r)
    return z
 
# test statistic for cross-correlation
def Hxy(x, y, L, N):
    z = 0
    for r in range(1, L+1):
        z += Cxy(x, y, r, N)**2
    z *= (N - L)   
    return z
 
# cross-bicorrelation
def Cxyy(x, y, r, s, N):
    z = 0
    m = np.max([r, s])
    for i in range(0, N-m-1):
        z += x[i] * y[i+r] * y[i+s]
    z /= (N-m)
    return z
 
def Hxyy(x, y, L, N):
    z = 0
    for s in range(2, L+1):
        for r in range(1, L+1):
            m = np.max([r, s])
            z += (N-m) * Cxyy(x, y, r, s, N)**2
    return z
 
 
fig, ax1 = plt.subplots(figsize=(10,5))
plt.plot(wpi.DATE, wpi.VALUE, color=grey, label="WPI Crude Oil (daily)")
plt.grid(True)
plt.legend(loc=1)
plt.ylabel("USD per barrel")
 
selection = ["XOM"]
 
ax2 = ax1.twinx()
plt.ylabel("1e6 USD")
for ticker in selection:
    tmp = data[data.Ticker == ticker]
    lab = ticker
    tmp['per_end_date'] =  pd.to_datetime(tmp['per_end_date'], 
                                               format='%Y-%m-%d')
    plt.plot(tmp.per_end_date, tmp.Revenue, '.-', label=lab + ": Revenue")
    plt.plot(tmp.per_end_date, tmp.EBIT, '.-g', label=lab + ": EBIT")
    plt.plot(tmp.per_end_date, tmp.GrossProfit, '.-m', label=lab + ": Gross Profit")
    plt.plot(tmp.per_end_date, tmp.CAPEX, '.-r', label=lab + ": CAPEX")
    plt.legend(loc=3)
 
    col = ['per_end_date', 'avgWPI']
    wpi2 = pd.DataFrame(columns=col)  # create an empty DataFrame
 
    for i in range(tmp.shape[0]):
        date = tmp.iloc[i]["per_end_date"]
        date0 = date
        date1 = date + datetime.timedelta(days=-5)
        date2 = date + datetime.timedelta(days=+5)
        wpiavg = wpi[(wpi.DATE >= date1) & (wpi.DATE <= date2)]
        avg = np.mean(wpiavg.VALUE)
        wpi2.loc[len(wpi2)] = [date0, avg]
 
    # plot quarterly averaged oil price-series
    plt.sca(ax1)  # set ax1 axis for plotting
    plt.plot(wpi2.per_end_date, wpi2.avgWPI, '.-k')

[Figure: as above, with the quarterly (±5 day averaged) WTI Crude Oil prices overplotted as a black line]
where the black line denotes the quarterly sampled WTI Crude Oil prices, now stored in a new DataFrame, wpi2.

Let’s leave only XOM in the selection list variable. Once the loop above completes, the DataFrame tmp stores XOM’s selected fundamentals data. In what follows we will use both DataFrames (tmp and wpi2) to look for non-linear correlations, the final goal of this investigation:

    # select some factors
    fundamentals = ["Revenue", "EBIT", "GrossProfit", "CAPEX"]
 
    # compute for them cross-correlations and cross-bicorrelations
    # and display final results
    #
    for f in fundamentals:
        print("%s: %s\n" % (ticker, f))
        # input data
        x = wpi2.avgWPI.values
        y = tmp[f].values
 
        # normalised time-series
        x = (x - np.mean(x))/np.std(x, ddof=1)
        y = (y - np.mean(y))/np.std(y, ddof=1)
 
        N = len(x)  # len(x) should be equal to len(y)
        L = int(np.floor(N**0.4))
 
        print("Cross-Correlation")
        for r in range(1, L+1):
            print("  r =%2g\t Cxy(r=%g) = %.4f" % 
                  (r, r, Cxy(x, y, r, N)))
 
        dof = L
        pvalue = 1 - scipy.stats.chi2.cdf(Hxy(x, y, L, N), dof)
        alpha = 0.1
 
        print("  Hxy(N) = %.4f" % Hxy(x, y, L, N))
        print("  p-value = %.4f" % pvalue)
        print("  H0 rejected in favour of H1 at %g%% c.l.: %s" % 
                                      (100*(1-alpha), pvalue<alpha))
 
        print("Cross-Bicorrelation")
        for r in range(0, L+1):
            for s in range(r+1, L+1):
                print("  r =%2g, s =%2g\t Cxyy(r=%g, s=%g) = %.5f" % 
                     (r, s, r, s, Cxyy(x, y, r, s, N)))
 
        dof = (L-1)*(L/2)
        pvalue = 1 - scipy.stats.chi2.cdf(Hxy(x, y, L, N), dof)
 
        print("  Hxyy(N) = %.4f" % Hxyy(x, y, L, N))
        print("  p-value = %.4f" % pvalue)
        print("  H0 rejected in favour of H1 at %g%% c.l.: %s\n" % 
                                       (100*(1-alpha), pvalue<alpha))

Our Python code generates for XOM the following output:

XOM: Revenue
 
Cross-Correlation
  r = 1	 Cxy(r=1) = 0.7869
  r = 2	 Cxy(r=2) = 0.6239
  r = 3	 Cxy(r=3) = 0.4545
  Hxy(N) = 24.3006
  p-value = 0.0000
  H0 rejected in favour of H1 at 90% c.l.: True
Cross-Bicorrelation
  r = 0, s = 1	 Cxyy(r=0, s=1) = -0.44830
  r = 0, s = 2	 Cxyy(r=0, s=2) = -0.30186
  r = 0, s = 3	 Cxyy(r=0, s=3) = -0.19315
  r = 1, s = 2	 Cxyy(r=1, s=2) = -0.36678
  r = 1, s = 3	 Cxyy(r=1, s=3) = -0.22161
  r = 2, s = 3	 Cxyy(r=2, s=3) = -0.24446
  Hxyy(N) = 10.3489
  p-value = 0.0000
  H0 rejected in favour of H1 at 90% c.l.: True
 
XOM: EBIT
 
Cross-Correlation
  r = 1	 Cxy(r=1) = 0.7262
  r = 2	 Cxy(r=2) = 0.5881
  r = 3	 Cxy(r=3) = 0.3991
  Hxy(N) = 20.6504
  p-value = 0.0001
  H0 rejected in favour of H1 at 90% c.l.: True
Cross-Bicorrelation
  r = 0, s = 1	 Cxyy(r=0, s=1) = -0.39049
  r = 0, s = 2	 Cxyy(r=0, s=2) = -0.25963
  r = 0, s = 3	 Cxyy(r=0, s=3) = -0.17480
  r = 1, s = 2	 Cxyy(r=1, s=2) = -0.28698
  r = 1, s = 3	 Cxyy(r=1, s=3) = -0.18030
  r = 2, s = 3	 Cxyy(r=2, s=3) = -0.21284
  Hxyy(N) = 6.8759
  p-value = 0.0001
  H0 rejected in favour of H1 at 90% c.l.: True
 
XOM: GrossProfit
 
Cross-Correlation
  r = 1	 Cxy(r=1) = 0.7296
  r = 2	 Cxy(r=2) = 0.5893
  r = 3	 Cxy(r=3) = 0.3946
  Hxy(N) = 20.7065
  p-value = 0.0001
  H0 rejected in favour of H1 at 90% c.l.: True
Cross-Bicorrelation
  r = 0, s = 1	 Cxyy(r=0, s=1) = -0.38552
  r = 0, s = 2	 Cxyy(r=0, s=2) = -0.24803
  r = 0, s = 3	 Cxyy(r=0, s=3) = -0.17301
  r = 1, s = 2	 Cxyy(r=1, s=2) = -0.28103
  r = 1, s = 3	 Cxyy(r=1, s=3) = -0.17393
  r = 2, s = 3	 Cxyy(r=2, s=3) = -0.19780
  Hxyy(N) = 6.2095
  p-value = 0.0001
  H0 rejected in favour of H1 at 90% c.l.: True
 
XOM: CAPEX
 
Cross-Correlation
  r = 1	 Cxy(r=1) = -0.2693
  r = 2	 Cxy(r=2) = -0.3024
  r = 3	 Cxy(r=3) = -0.2333
  Hxy(N) = 4.3682
  p-value = 0.2244
  H0 rejected in favour of H1 at 90% c.l.: False
Cross-Bicorrelation
  r = 0, s = 1	 Cxyy(r=0, s=1) = 0.00844
  r = 0, s = 2	 Cxyy(r=0, s=2) = -0.10889
  r = 0, s = 3	 Cxyy(r=0, s=3) = -0.14787
  r = 1, s = 2	 Cxyy(r=1, s=2) = -0.10655
  r = 1, s = 3	 Cxyy(r=1, s=3) = -0.16628
  r = 2, s = 3	 Cxyy(r=2, s=3) = -0.08072
  Hxyy(N) = 6.4376
  p-value = 0.2244
  H0 rejected in favour of H1 at 90% c.l.: False

A quick analysis points at the detection of significant non-linear correlations (at the 90% confidence level) between WTI Crude Oil and Revenue, EBIT, and GrossProfit for lag $r=1$ (based on cross-correlations) and for lags $(r=0, s=1)$ (based on cross-bicorrelations). There is no significant non-linear relationship between XOM’s CAPEX and the oil prices.

Interestingly, rerunning the code for Boeing Company (NYSE ticker: BA) reveals no significant cross-(bi)correlations between the same factors and oil. For example, Boeing’s revenue improves year-over-year,

[Figure: BA quarterly fundamentals vs. WTI Crude Oil prices]

but it appears to have no significant non-linear link to the oil prices whatsoever:

BA: Revenue
 
Cross-Correlation
  r = 1	 Cxy(r=1) = -0.3811
  r = 2	 Cxy(r=2) = -0.3216
  r = 3	 Cxy(r=3) = -0.1494
  Hxy(N) = 5.4200
  p-value = 0.1435
  H0 rejected in favour of H1 at 90% c.l.: False
Cross-Bicorrelation
  r = 0, s = 1	 Cxyy(r=0, s=1) = 0.11202
  r = 0, s = 2	 Cxyy(r=0, s=2) = 0.03588
  r = 0, s = 3	 Cxyy(r=0, s=3) = -0.08083
  r = 1, s = 2	 Cxyy(r=1, s=2) = 0.01526
  r = 1, s = 3	 Cxyy(r=1, s=3) = 0.00616
  r = 2, s = 3	 Cxyy(r=2, s=3) = -0.07252
  Hxyy(N) = 0.3232
  p-value = 0.1435
  H0 rejected in favour of H1 at 90% c.l.: False

The most interesting aspect of the cross-correlation and cross-bicorrelation methods applied in oil vs. stock fundamentals research regards new opportunities to discover correlations for stocks that so far have been completely ignored (or not suspected) of having anything in common with oil price movements in the markets. Dare to go down this path. I hope you will find new assets to invest in. The information is there. Now you are equipped with new tools. Use them wisely. Retire early.
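As a possible next step, the screener sketched below is only an illustration of the idea (it assumes the data and wpi DataFrames and the Cxy and Hxy functions from the listings above are already in memory): loop over all available tickers and flag those whose Revenue shows a statistically significant non-linear link to the quarterly-averaged oil price.

import datetime
import numpy as np
import pandas as pd
import scipy.stats
 
alpha = 0.1
for ticker in data.Ticker.unique():
    tmp = data[data.Ticker == ticker].copy()
    tmp['per_end_date'] = pd.to_datetime(tmp['per_end_date'], format='%Y-%m-%d')
    # +/- 5 day average of the oil price around each reporting date
    avg = []
    for date in tmp.per_end_date:
        mask = (wpi.DATE >= date - datetime.timedelta(days=5)) & \
               (wpi.DATE <= date + datetime.timedelta(days=5))
        avg.append(wpi.VALUE[mask].mean())
    x = np.asarray(avg)
    y = tmp.Revenue.values.astype(float)
    # normalise both series to ~N(0,1)
    x = (x - np.mean(x)) / np.std(x, ddof=1)
    y = (y - np.mean(y)) / np.std(y, ddof=1)
    N = len(x)
    L = int(np.floor(N**0.4))
    pvalue = 1 - scipy.stats.chi2.cdf(Hxy(x, y, L, N), L)
    if pvalue < alpha:
        print("%s: significant oil-Revenue cross-correlation (p-value = %.3f)" % (ticker, pvalue))

Extending the tickers list in the data-building loop above, and looping here over more fundamentals, turns this sketch into the screening tool mentioned earlier.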

REFERENCES
    Bernanke B. S., 2016, The relationship between stocks and oil prices
    Brooks C., Hinich M. J., 1999, Cross-correlations and cross-bicorrelations in Sterling
          exchange rates (pdf)
    Coronado et al., 2015, A study of co-movements between USA and Latin American stock
          markets: a cross-bicorrelations perspective (pdf)
    Hamilton J., 2014, Oil prices as an indicator of global economic conditions
    Hinich M. J., 1996, Testing for dependence in the input to a linear time series model,
          Journal of Nonparametric Statistics 6, 205–221
    Hinich M. J., Patterson D. M., 1995, Detecting epochs of transient dependence in white
          noise, Mimeo, University of Texas at Austin

DOWNLOADS
    COPPER.csv, DCOILWTICO.csv, DGS10.csv, DTWEXM.csv, VIXCLS.csv, ZACKS-FE.csv

Probability of Black Swan Events at NYSE

Featured by a Russian website Financial One (Apr 26, 2016)

The prediction of extreme rare events (EREs) in the financial markets remains one of the toughest problems. Firstly, because of the very limited knowledge we have of their distribution and underlying correlations across the markets. Literally, we walk in the dark, hoping it won’t happen today, not to the stocks we hold in our portfolios. But is that darkness really so dark?

In this post we will use textbook knowledge of classical statistics in order to analyze the universe of 2500+ stocks traded at the New York Stock Exchange (NYSE) that experienced the most devastating daily loss in their history (1 data point per stock). Based on that, we will try to shed new light on the most mind-boggling questions: when will the next ERE happen, how long do we need to wait for it, and if it hits, what will its magnitude “of devastation” be?

1. Extreme Rare Events of NYSE

An ERE meeting certain criteria has its own name in the financial world: a black swan. We will define it in the next Section. Historically speaking, at some point people were convinced that only white swans existed, until someone noticed a population of black swans in Western Australia. The impossible became possible.

We observe the same in the stock markets. Black Monday on Oct 19, 1987 is no doubt the greatest historical event, when the markets suffered 20%+ losses. The event was so rare that no risk analyst could predict it and/or protect against it. History repeated itself in 2000 when the Internet bubble caused many IT stocks to lose a lot over a period of a few weeks. The events of 9/11 in New York forced the closure of the US stock exchange for 6 days, and on Sep 17, as a consequence of fear, about 25 stocks at NYSE lost more than 15% in one day. The uncertainty cannot be avoided but it can be minimized. Is it so? The financial crisis of 2007-2008 triggered a significant number of black swans to fly again and fly high…

Did we learn enough from that? Do we protect ourselves more efficiently against black swans after the 2009 market recovery? Will you be surprised if I tell you “no”?!

Let’s conduct a data analysis of the most extreme rare events ever spotted at NYSE among all its stocks. To do that, we will use the Python language and the Yahoo! Finance database. The data quality is not required to be superbly accurate, and we will download only adjusted-close prices with the corresponding dates. A list of companies (inter alia their tickers) traded at NYSE you can extract from a .xls file available for download at http://www.nasdaq.com/screening/company-list.aspx. It contains 3226 tickers, while, as we will see below, Yahoo! Finance recognizes only 2716 of them. Since 2716 is a huge data set meeting a lot of requirements for statistical significance, I’m more than happy to go along with it.
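A minimal sketch of turning that company list into the nyseall.lst file used below (the file name companylist.xls and the Symbol column are my assumptions; adjust them to whatever the screener actually exports):

import pandas as pd
 
# hypothetical file/column names -- adjust to the downloaded screener file
companies = pd.read_excel('companylist.xls')
tickers = sorted(companies['Symbol'].astype(str).str.strip().unique())
 
with open('nyseall.lst', 'w') as f:
    f.write('\n'.join(tickers) + '\n')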

In the process of the stock data download, we look for information on the most extreme daily loss per stock and collect them all in the pandas DataFrame dfblack. It will contain a Date, the magnitude of the ERE (Return), and Delta0. The latter we calculate on the spot as the number of business days between some initial point in time (here, 1900-01-01) and the day of the ERE occurrence. The processing time may take up to 45 min depending on your Internet speed. Be patient, it will be worth waiting that long. Using HDF5 we save the dfblack DataFrame on the disk.

# Probability of Black Swan Events at NYSE
# (c) 2016 by Pawel Lachowicz, QuantAtRisk.com
 
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pandas_datareader.data as web
import datetime as dt
from scipy.stats import gumbel_l, t, norm, expon, poisson
from math import factorial
 
# Constants
c = (.7, .7, .7)  # grey color
 
col = ['Date', 'Return', 'Delta0']
dfblack = pd.DataFrame(columns=col)  # create an empty DataFrame
 
i = n = 0
for lstline in open("nyseall.lst",'r').readlines():
    ticker=lstline[:-1]
    i += 1
    print(i, ticker)
    try:
        ok = True
        data = web.DataReader(str(ticker), data_source='yahoo', 
                 start='1900-01-01', end='2016-04-14')
    except:
        ok = False
    if(ok):
        n += 1
        data['Return'] = data['Adj Close' ] / data['Adj Close'].shift(1) - 1
        data = data[1:]  # skip first row
 
        df = data['Return'].copy()
        dfs = df.sort_values()
        start = dt.date(1900, 1, 1)
        end = dt.date(dfs.index[0].year, dfs.index[0].month, dfs.index[0].day)
        delta0 = np.busday_count(start,end)
        tmin = end
        rmin = dfs[0]
        dfblack.loc[len(dfblack)]=[tmin, rmin, delta0]  # add new row to dfblack
 
    else:
        print(' not found')
 
dfblack.set_index(dfblack.Date, inplace=True)  # set index by Date
dfblack.sort_index(inplace=True)  # sort DataFrame by Date
del dfblack['Date']  # remove Date column
 
print("No. of stocks with valid data: %g" % n)
 
# saving to file
store = pd.HDFStore('dfblack.h5')
store['dfblack'] = dfblack
store.close()
 
# reading from file
store = pd.HDFStore('dfblack.h5')
dfblack = pd.read_hdf('dfblack.h5', 'dfblack')
store.close()
 
dfblack0 = dfblack.copy()  # a backup copy

where nyseall.lst is a plain text file listing all NYSE tickers:

DDD
MMM
WBAI
WUBA
AHC
...

The first instinct of a savvy risk analyst would be to plot the distribution of the magnitudes of EREs as extracted for the NYSE universe. We achieve that by:

plt.figure(num=1, figsize=(9, 5))
plt.hist(dfblack.Return, bins=50, color='grey')  # a histogram
plt.xlabel('Daily loss', fontsize = 12)
plt.xticks(fontsize = 12)
 
plt.savefig('fig01.png', format='png')

which reveals:
[Figure 1: distribution of the most extreme daily losses of 2716 NYSE stocks]
Peaked around -18% with a long left tail of losses reaching nearly -98%. Welcome to the devil’s playground! But that’s just the beginning. The information on the dates corresponding to the occurrence of those EREs allows us to build a 2D picture:

# Black Monday data sample
df_bm = dfblack[dfblack.index == dt.date(1987, 10, 19)]  
# Reopening of NYSE after 9/11 data sample
df_wtc = dfblack[dfblack.index == dt.date(2001, 9, 17)]  
# Financial Crisis of 2008 data sample
df_cre = dfblack[(dfblack.index >= dt.date(2008, 9, 19)) & 
           (dfblack.index <= dt.date(2009, 3, 6))]
# Drawdown of 2015/2016 data sample
df_16 = dfblack[(dfblack.index >= dt.date(2015, 12, 1)) & 
           (dfblack.index <= dt.date(2016, 2, 11))]
 
plt.figure(figsize=(13, 6))
plt.plot(dfblack.Return*100, '.k')
plt.plot(df_bm.Return*100, '.r')
plt.plot(df_wtc.Return*100, '.b')
plt.plot(df_cre.Return*100, '.m')
plt.plot(df_16.Return*100, '.y')
plt.xlabel('Time Interval [%s to %s]' % (dfblack.index[0], dfblack.index[-1]), 
               fontsize=12)
plt.ylabel('Magnitude [%]', fontsize=12)
plt.title('Time Distribution of the Most Extreme Daily Losses of %g NYSE '
          'Stocks (1 point per stock)' % (len(dfblack)), fontsize=13)
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.xlim([dfblack.index[0], dt.date(2016, 4, 14)])
 
plt.savefig('fig02.png', format='png')

delivering:
[Figure 2: time distribution of the most extreme daily losses of 2716 NYSE stocks (1 point per stock)]
Keeping in mind that every stock (out of 2716) is represented by only 1 data point, the figure reveals a dramatic state of play: a significantly lower number of EREs before 1995 and a progressing increase after that, with a surprisingly huge number in the past 5 years! As discussed in my previous post, Detecting Human Fear in Electronic Trading, the surge in EREs could be explained by the domination of algorithmic trading over manual trades. It may also suggest that the algorithmic risk management for many stocks is either not as good as we want it to be or very sensitive to the global stock buying/selling pressure across the markets (the butterfly effect). On the other hand, the number of new stocks being introduced to NYSE may cause them “to fight for life”, and those most vulnerable, short-lived, of unestablished position, reputation, financial history, or least liquidity are more likely to be subject to an ERE occurrence. Regardless of the number one cause, there are more and more “sudden drops in altitude” now than ever. I find it alarming.

For clarity, in the above figure, the historical events of Black Monday 1987-10-19 (89 points), the reopening of NYSE after 9/11 (25 points), the Financial Crisis of 2007-2008 (858 points), and the drawdown of 2015/2016 (256 points!) have been marked in red, blue, purple, and dark yellow, respectively.

2. Black Swans of NYSE

The initial distribution of Extreme Rare Events of NYSE (Figure 1) provides us with an opportunity to define a black swan event. There is a statistical theory, Extreme Value Theory (EVT), delivering an appealing explanation of the behaviour of rare events. We have discussed it in detail in the Extreme VaR for Portfolio Managers and Black Swan and Extreme Loss Modeling articles. Thanks to that, we gain the knowledge and tools needed to describe what we observe. As previously, in this post we will only be considering the Gumbel distribution $G$ with the corresponding probability density function (pdf) $g$ given as:
$$
G(z;\ a,b) = e^{-e^{-z}} \ \ \mbox{for}\ \ z=\frac{x-a}{b}, \ x\in\Re
$$ and
$$
g(z;\ a,b) = b^{-1} e^{-z}e^{-e^{-z}} \ ,
$$ where $a$ and $b$ are the location parameter and scale parameter, respectively. You can convince yourself that fitting the $g(z;\ a,b)$ function truly works in practice:

# fitting
locG, scaleG = gumbel_l.fit(dfblack.Return)  # location, scale parameters
 
dx = 0.0001
x = [dx*i for i in range(-int(1/dx), int(1/dx)+1)]
x2 = x.copy()
 
plt.figure(num=3, figsize=(9, 4))
plt.hist(dfblack.Return, bins=50, normed=True, color=c, 
             label="NYSE Data of 2716 EREs")
pdf1 = gumbel_l.pdf(x2, loc=locG, scale=scaleG)
y = pdf1.copy()
plt.plot(x2, y, 'r', linewidth=2, label="Gumbel PDF fit")
plt.xlim([-1, 0])
plt.xlabel('Daily loss [%]', fontsize = 12)
plt.xticks(fontsize = 12)
plt.legend(loc=2)
plt.savefig('fig03.png', format='png')

fig03
By deriving:

print(locG, scaleG, locG-scaleG, locG+scaleG)
 
# Pr(Loss < locG-scaleG)
Pr_bs = gumbel_l.cdf(locG-scaleG, loc=locG, scale=scaleG)
print(Pr_bs)

we see that

-0.151912998711 0.0931852781844 -0.245098276896 -0.058727720527
0.307799372445

i.e. the Gumbel pdf’s location parameter (peak) is at -15.2% with a scale of 9.3%. By analogy with the Normal distribution, we define two brackets, namely:
$$
a-b \ \ \mbox{and} \ \ a+b
$$ to be -24.5% and -5.9%, respectively. Given that, we define the black swan event as a daily loss of:
$$
L < a-b $$ magnitude. Based on the NYSE data sample we find that 30.8% of EREs (i.e. losses greater than 24.5% in magnitude) meet that criterion. We re-plot their time-magnitude history as follows:

# skip data after 2015-04-10 (last 252 business days)
dfb = dfblack0[dfblack0.index < dt.date(2015, 4, 10)]
# re-fit Gumbel (to remain politically correct in-sample ;)
locG, scaleG = gumbel_l.fit(dfb.Return)
print(locG, locG-scaleG)  # -0.16493276305 -0.257085820659
 
# extract black swans from EREs data set
dfb = dfb[dfb.Return < locG-scaleG]  
dfb0 = dfb.copy()
 
plt.figure(figsize=(13, 6))
plt.plot(dfb.Return*100, '.r')
plt.xlabel('Time Interval [%s to %s]' % (dfb.index[0], dfb.index[-1]), 
               fontsize=14)
plt.ylabel('Black Swan Magnitude [%]', fontsize=14)
plt.title('Time Distribution of Black Swans of %g NYSE Stocks (L < %.1f%%)' % 
              (len(dfb), (locG-scaleG)*100), fontsize=14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
plt.ylim([-100, 0])
plt.xlim([dfb.index[0], dt.date(2016, 4, 14)])
plt.savefig('fig04.png', format='png')

fig04
where we deliberately skipped the last 252+ data points to allow for some out-of-sample backtesting later on. That reduces the number of black swans from 807 down to 627 (in-sample).

3. Modelling the Probability of an Event Based on Observed Occurrences

Let’s be honest. It’s nearly impossible to predict in advance the exact day when the next black swan will appear, which stocks it will hit, or whether it will be isolated or trigger several stocks to fall on the same day. Therefore, what follows from now on you should consider as common sense in the quantitative modelling of black swans, backed by our knowledge of statistics. And yes, despite the opinions of Taleb (2010) in his book The Black Swan: The Impact of the Highly Improbable, we might derive some risk uncertainties and become more aware of the impact if the worst strikes.

The classical guideline can be found in Vose’s book Quantitative Risk Analysis: A Guide to Monte Carlo Simulation Modelling (1996). David delivers not only solid explanations of many statistical distributions but also excellent hints on their practical usage for risk analysts. Since I discovered this book in Singapore in 2010, it has been my favourite reference manual of all time.

The modelling of the probability of an event based on observed occurrences may follow two distinct processes, analogous to the difference between discrete and continuous distributions: (1) an event that may occur only among a set of specific discrete opportunities, and (2) an event that may occur over a continuum of opportunity.

Considering black swan events, we are not limited to a finite number of instances; therefore we will use continuous exposure probability modelling. A continuous exposure process is characterized by the mean interval between events (MIBE) $\beta$. The underlying assumption is that the uncertainty displays the properties of a Poisson process, i.e. the probability of an event is independent of however many events have occurred in the past or how recently. And this is our starting point towards a better understanding of NYSE black swans.
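Before moving on, a quick numerical sanity check of that memorylessness assumption may be instructive. The sketch below (with a purely hypothetical mean interval of 17 days, not yet estimated from our data) verifies that $\mbox{Pr}(T > s+t\ |\ T > s) \approx \mbox{Pr}(T > t)$ for exponentially distributed waiting times:

import numpy as np
from scipy.stats import expon
 
beta = 17.0                        # hypothetical mean interval between events [days]
T = expon.rvs(scale=beta, size=1000000)
s, t = 10.0, 5.0
lhs = np.mean(T[T > s] > s + t)    # P(T > s+t | T > s)
rhs = np.mean(T > t)               # P(T > t)
print(lhs, rhs)                    # both close to exp(-t/beta) ~ 0.745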

3.1. Mean Interval Between Black Swan Events (MIBE)

The MIBE is the average interval between $n$ observed occurrences of a black swan event. Its true value can be estimated from the observed occurrences using the central limit theorem:
$$
\mbox{MIBE}\beta = \mbox{Normal} \left( \bar{t}, \frac{\sigma}{\sqrt{n-1}} \right)
$$ where $\bar{t}$ is the average of the $n-1$ observed intervals $t_i$ between the $n$ observed contiguous black swan events and $\sigma$ is the standard deviation of the $t_i$ intervals. The larger the value of $n$, the greater our confidence in knowing its true value.

First, we estimate $\bar{t}$ from the data. As an extra check, using NumPy’s histogram function we confirm that the plain mean is the same as the expected value, $E(\Delta t)$, derived from the histogram,
$$
E(\Delta t) = \sum_{i} p_i x_i \ ,
$$ i.e. $\bar{t} = E(\Delta t)$ where there is $p_i$ probability of observing $x_i$. In Python we execute that check as follows:

deltat = np.diff(dfb.Delta0)  # time differences between points
 
print('tbar = %.2f [days]' % np.mean(deltat))
print('min, max = %.2f, %.2f [days]' % (np.min(deltat), np.max(deltat)))
print()
 
# Confirm the mean using histogram data (mean = the expected value)
y, x = np.histogram(deltat, bins=np.arange(np.ceil(np.max(deltat)+2)),
                    density=True)
hmean = np.sum(x[:-1]*y)
print('E(dt) = %.2f [days]' % hmean)
 
# plot the histogram of time differences between black swan events
plt.figure(figsize=(9, 5))
plt.bar(x[:-1], y, width=1, color=c)
plt.xlim([0, 20])
plt.title('Time Differences between Black Swan Events', 
              fontsize=12)
plt.xlabel('Days', fontsize=12)
plt.savefig('fig05.png', format='png')

what returns

tbar = 17.38 [days]
min, max = 0.00, 950.00 [days]
 
E(dt) = 17.38 [days]

What is worth noticing is the fact that the largest recorded separation among historical black swans at NYSE was 950 days, i.e. 2.6 years. By plotting the distribution of time differences, we get:
fig05
which can be read out more clearly if we type, e.g.:

# the first 5 data points from the histogram
print(y[0:5])
print(x[0:5])

returning

[ 0.30990415  0.12300319  0.08466454  0.04472843  0.03354633]
[ 0.  1.  2.  3.  4.]

which means that our data suggest a 31.0% probability of observing black swan events on the same day, 12.3% with a 1-day gap, 8.5% with a 2-day separation, and so on.

Since the MIBE $\beta$ is not given by a single number, the recipe for its estimation (see the formula above) only allows us to model the Mean Interval Between Black Swan Events, as follows. First, we calculate the corresponding standard deviation:

ne = len(dfb)                  # 627, the number of black swans in the data sample
tbar = np.mean(deltat)         # mean interval between events
sig = np.std(deltat, ddof=1)   # standard deviation of the intervals
 
print(tbar, sig)
print(tbar, sig/np.sqrt(ne-1))
15.8912 57.5940828519
15.8912 2.30192251209

and next

 
betaMIBE_sample = norm.rvs(loc=tbar, scale=sig/np.sqrt(ne-1), size=10000)
 
plt.figure(figsize=(9, 5))
plt.hist(betaMIBE_sample, bins=25, color=c)
plt.xlabel("MIBE beta [days]", fontsize=12)
plt.title("Mean Interval Between Black Swan Events", fontsize=12)
plt.savefig('fig06.png', format='png')

fig06

3.2. Time Till Next Black Swan Event

Once we have estimated the MIBE $\beta$, it is possible to use its value in order to estimate the time till the next black swan event. It obeys
$$
\mbox{TTNE} = \mbox{Expon}(\beta)
$$ i.e. the exponential distribution given by the probability density function of:
$$
f(x; \beta) = \lambda \exp(-\lambda x) = \frac{1}{\beta} \exp\left(-\frac{x}{\beta}\right) \ .
$$ In Python we model TTNE in the following way:

betaMIBE_one = norm.rvs(loc=tbar, scale=sig/np.sqrt(ne-1), size=1)
 
# The time till next event = Expon(beta)
ttne = expon.rvs(loc=0, scale=betaMIBE_one, size=100000)
 
y, x = np.histogram(ttne, bins=np.arange(int(np.max(ttne))), density=True)
 
plt.figure(figsize=(9, 5))
plt.bar(x[:-1], y, width=1, color=c, edgecolor=c)
plt.xlim([0, 120])
plt.title("Time Till Next Black Swan Event", fontsize=12)
plt.xlabel("Days", fontsize=12)
plt.savefig('fig07.png', format='png')

fig07
where by checking

print(y[0:5])
print(x[0:5])

we get

[0.05198052  0.0498905   0.04677047  0.04332043]
[1 2 3 4]

id est, there is a 5.20% probability that the next black swan event at NYSE will occur on the next day, 4.99% in 2 days, 4.68% in 3 days, etc.

3.3. The Expected Number of Black Swans per Business Day

The distribution of the number of black swan events to occur at NYSE per unit time (here, per business day) can be modelled using Poisson distribution:
$$
\mbox{Poisson}(\lambda) \ \ \mbox{where} \ \ \lambda = \frac{1}{\beta} \ .
$$ The Poisson distribution models the number of occurrences of an event in a time $T$ when the underlying events follow a Poisson process, i.e. the times between successive events are exponentially distributed. If $\beta$ is the mean time between events, as used by the Exponential distribution, then $\lambda = T/\beta$. We will use it in the “out-of-sample” backtesting in Section 4.

Let’s turn words into action by executing the following Python code:

# generate a set of Poisson rvs
nep = poisson.rvs(1/betaMIBE_one, size=100000)
 
y, x = np.histogram(nep, bins=np.arange(np.max(nep)+1), density=True)
 
plt.figure(figsize=(9, 5))
plt.title("The Expected Number of Black Swans per Business Day", fontsize=12)
plt.ylabel("N", fontsize=12)
plt.ylabel("Probability", fontsize=12)
plt.xticks(x)
plt.xlim([-0.1, 2.1])
plt.bar(x[:-1], y, width=0.05, color='r', edgecolor='r', align='center')
plt.savefig('fig08.png', format='png')

fig08
The derived probabilities:

print(y)
print(x[:-1])
[0.95465  0.04419  0.00116]
[0 1 2]

inform us that there is a 95.5% chance that no black swan event will occur on a given business day, a 4.4% chance that we will record only one, and a 0.12% chance that we may find 2 out of 2500+ NYSE stocks suffering a loss greater than 25.7% on the next business day (unit time).
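As a quick analytic cross-check of those numbers, one can evaluate the Poisson pmf directly (a sketch; the values will differ slightly from the simulation above because betaMIBE_one is a single random draw of $\beta$, whereas here we plug in the in-sample mean interval):

from scipy.stats import poisson
 
beta = 17.38          # the in-sample mean interval, tbar [days]
lam = 1/beta          # expected number of black swans per business day
for k in range(3):
    print("Pr(N=%d) = %.5f" % (k, poisson.pmf(k, lam)))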

3.4. Probability of the Occurrence of Several Events in an Interval

The Poisson$(1/\beta)$ distribution calculates the distribution of the number of events that will occur in a single unit interval. The probability of observing exactly $k$ black swan events in such an interval is given by:
$$
\mbox{Pr}[X = k] = \frac{\lambda^k}{k!} \exp(-\lambda) = \frac{1}{\beta^k k!} \exp\left(-\frac{1}{\beta}\right)
$$
with $\beta$ rescaled to reflect the period of exposure, e.g.:
$$
\beta_{yr} = \beta/252
$$ where 252 stands for a number of business days in a calendar year. Therefore, the number of black swan events in next business year can be modelled by:
$$
N = \mbox{Poisson}(\lambda) = \mbox{Poisson}(1/\beta_{yr}) =
\mbox{Poisson}(252/\mbox{Normal}(\bar{t},
\sigma/\sqrt{n-1}))
$$ what we can achieve in Python, coding:

inNdays = 252
 
N = poisson.rvs(inNdays/norm.rvs(loc=tbar, scale=sig/np.sqrt(ne-1)), size=100000)
exN = np.round(np.mean(N))
stdN = np.round(np.std(N, ddof=1))
 
y, x = np.histogram(N, bins=np.arange(int(np.max(N))), density=True)
 
_ = plt.figure(figsize=(9, 5))
_ = plt.title("The Expected Number of Black Swans in next %g Business Days" % 
              inNdays, fontsize=12)
_ = plt.ylabel("N", fontsize=12)
_ = plt.ylabel("Probability", fontsize=12)
_ = plt.bar(x[:-1], y, width=1, color=c)
plt.savefig('fig09.png', format='png')
 
# probability that it will occur <N> events in next 'inNdays' days
print('E(N) = %g' % exN)
print('stdN = %g' % stdN)
 
tmp = [(i,j) for i, j in zip(x,y) if i == exN]
print('Pr(X=%g) = %.4f' % (exN, tmp[0][1]))
 
tmp = [(i,j) for i, j in zip(x,y) if i == 0]
print('Pr(X=%g) = %.4f' % (0, tmp[0][1]))

Our Monte Carlo simulation may return, for instance:
fig9
where the calculated measures are:

E(N) = 16
stdN = 4
Pr(X=16) = 0.0992
Pr(X=0) = 0.0000

The interpretation is straightforward. On average, we expect to record $16\pm 4$ black swan events in the next 252 business days among the 2500+ stocks traded at NYSE as of Apr 10, 2015 COB. The probability of observing exactly 16 black swans is 9.92%. It is natural, based on our data analysis, that the resultant probability of the “extreme luck” of not recording any black swan at NYSE, Pr$(X=0)$, in the following trading year is zero. But, miracles happen!

The 9.92% is derived from a single run of the simulation. Using the analytical formula for Pr$(X=k)$ given above, we compute the uncertainty of
$$
\mbox{Pr}[X = (N=16)] = \frac{1}{(\beta_{yr})^{16} 16!} \exp\left(-\frac{1}{\beta_{yr}}\right) \ ,
$$

from scipy.special import factorial  # import needed for the formula below
 
Pr = []
for nsim in range(100000):
    betayr = norm.rvs(loc=tbar, scale=sig/np.sqrt(ne-1), size=1) / inNdays
    p = 1/(betayr**exN * factorial(exN)) * np.exp(-1/betayr)
    Pr.append(p)
 
print('Pr[X = E(N) = %g] = %.2f +- %.2f' % (exN, np.mean(Pr), 
         np.std(Pr, ddof=1)))

that yields

Pr[X = E(N) = 16] = 0.08 +- 0.02

and stays in agreement with our result.

4. Prediction and Out-of-Sample Backtesting

All right, time to verify the theoretical statistics in practice. Our goal is to analyze the sample of black swan data skipped so far but available to us. This procedure is usually known as “out-of-sample” backtesting, i.e. we know “the future” as it has already happened, but we pretend it is unknown.

In the first step, we look for a suitable time interval of exactly 252 business days “in the future”. It turns out to be:

print(np.busday_count(dt.date(2015, 4, 10) , dt.date(2016,3, 29)))  # 252

Next, we extract the out-of-sample (oos) DataFrame and illustrate it in a similar fashion as previously for the in-sample data:

oos = dfblack0[(dfblack0.index >= dt.date(2015, 4, 10)) &
               (dfblack0.index <= dt.date(2016, 3, 29))]
 
# extract black swans
oos = oos[oos.Return < locG-scaleG]
 
plt.figure(figsize=(13, 6))
plt.plot(oos.Return*100, '.b')
plt.xlabel('Time Interval [%s to %s]' % (dfblack.index[0], oos.index[-1]), 
                  fontsize=14)
plt.ylabel('Black Swan Magnitude [%]', fontsize=14)
plt.title('Time Distribution of Out-of-Sample Black Swans of %g NYSE Stocks '
          '(L < %.1f%%)' % (len(oos), (locG-scaleG)*100), fontsize=14)
plt.xticks(fontsize = 14)
plt.yticks(fontsize = 14)
plt.ylim([-100, 0])
plt.xlim([dfblack.index[0], oos.index[-1]])
plt.savefig('fig10.png', format='png')

fig10
Based on that, we compare the MIBE as predicted with what the next 252 business days reveal:

deltat_oos = np.diff(oos.Delta0)  # time differences between points
 
tbar_oos = np.mean(deltat_oos)
print('MIBE (predicted)   = %.2f +- %.2f [days]' % (tbar, sig/np.sqrt(ne-1)))
print('MIBE out-of-sample = %.2f         [days]' % tbar_oos)
print()
 
y, x = np.histogram(deltat_oos, bins=np.arange(np.ceil(np.max(deltat)+2)),
                    density=True)
 
print(y[0:5])  # [ 0.3814433   0.20618557  0.17525773  0.06185567  0.05154639]
print(x[0:5])  # [ 0.  1.  2.  3.  4.]
 
# Predicted probabilities were:
# [ 0.30990415  0.12300319  0.08466454  0.04472843  0.03354633]

The outcome is surprising, i.e.

MIBE (predicted)   = 17.38 +- 2.30 [days]
MIBE out-of-sample = 2.32          [days]

clearly indicating that the Apr 2015–Apr 2016 period was nearly 7.5 times more “active” than the “all-time” NYSE black swan data would predict. Have I already said “it’s alarming”?! Well, at least scary enough for all who trade on a daily basis holding portfolios rich in NYSE assets. The probability of having black swans on the same day increased from 31% to 38%. A new, pure definition of sophisticated gambling.
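To put a rough number on “clearly”, a back-of-the-envelope sketch using the figures printed above (it treats the predicted standard error as the only source of uncertainty):

z = (17.38 - 2.32) / 2.30
print("out-of-sample MIBE lies %.1f standard errors below the prediction" % z)  # ~6.5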

A verification of the time till the next black swan event requires a gentle modification of the oos DataFrame. Namely, we need to eliminate all data points sharing the same Delta0 value:

oos2 = oos.Delta0.drop_duplicates(inplace=False)
tdiff = np.diff(oos2)
 
y, x = np.histogram(tdiff, bins=np.arange(int(np.max(tdiff))), density=True)
 
_ = plt.figure(figsize=(9, 5))
_ = plt.bar(x[:-1], y, width=1, color=c, edgecolor=c)
_ = plt.xlim([1, 30])
_ = plt.title("Time Till Next Black Swan Event (Out-of-Sample)", fontsize=12)
_ = plt.xlabel("Days", fontsize=12)
_ = plt.ylabel("Probability", fontsize=12)
plt.savefig('fig11.png', format='png')
 
print(y[1:5])
print(x[1:5])

fig11
Again, from our modelling of the time till next event, TTNE $=$ Expon($\beta$), we expected roughly a 5% chance of waiting one, two, or three days for the next black swan event. The out-of-sample data suggest,

[0.33898305  0.28813559  0.10169492  0.08474576]
[1 2 3 4]

that we are 6.3 times more likely to record a black swan event at NYSE on the next business day. Terrifying.

The calculation of the expected number of black swans per business day in the out-of-sample data I leave with you as homework in Python programming. Our earlier prediction of those events in the next 252 business days returned 16$\pm$4. As we have already seen, in the considered time interval after Apr 10, 2015 we recorded 98 swans, which is $6.1^{+2.1}_{-1.2}$ times more than forecast based on the broad sample.

In the next post of this series, we will try to develop a methodology for a quantitative derivation of the probability of a black swan for an $N$-asset portfolio.

Reflection

Investors invest in stocks with the anticipation of earning a profit. As we have witnessed above, the birds have become a serious threat to flying high, with a significant increase in this investment air-traffic. Your plane (stock) can be grounded and your airline (portfolio) can suffer huge losses. Hoping is not a solution. Better ways of protecting against the unavoidable should be implemented while your money is in the game.

But what should you do when a black swan strikes? Denzel Washington once said: If you pray for rain, you gotta deal with the mud too.

How to Get a List of all NASDAQ Securities as a CSV file using Python?

This post will be short but very informative. You can learn a few good Unix/Linux tricks on the way. The goal is well defined in the title. So, what’s the quickest solution? We will make use of Python in the Unix-based environment. As you will see, for any text file, writing a single line of Unix commands is more than enough to deliver exactly what we need (a basic text file processing). If you try to do the same in Windows.. well, good luck!

In general, we need to get through the FTP gate of NASDAQ heaven. It is sufficient to log on as an anonymous user providing your password defined by your email. In fact, any fake email will do the job. Let’s begin coding in Python:

# How to Get a List of all NASDAQ Securities as a CSV file using Python?
# +tested in Python 3.5.0b2, Mac OS X 10.10.3
#
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
 
import os
 
os.system("curl --ftp-ssl anonymous:jupi@jupi.com "
          "ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt "
          "> nasdaq.lst")

Here we use the os module from Python’s Standard Library and the Unix command curl. The latter allows us to connect to the FTP server of the NASDAQ exchange, fetch the nasdaqlisted.txt file, usually stored in the SymbolDirectory directory, and download it directly to our current folder under the name of nasdaq.lst. During that process you will see the progress information displayed on the screen, e.g.:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   162  100   162    0     0    125      0  0:00:01  0:00:01 --:--:--   125
100  174k  100  174k    0     0  23409      0  0:00:07  0:00:07 --:--:-- 39237
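As a side note, if curl is unavailable (e.g. on Windows), the same file can be fetched with Python’s standard ftplib module; a minimal sketch using the same host and directory as in the curl call above:

from ftplib import FTP
 
ftp = FTP('ftp.nasdaqtrader.com')
ftp.login()                       # anonymous login
ftp.cwd('SymbolDirectory')
with open('nasdaq.lst', 'wb') as f:
    ftp.retrbinary('RETR nasdaqlisted.txt', f.write)
ftp.quit()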

Now, in order to inspect the content of the downloaded file we may run in Python an extra line of code, namely:

os.system("head -20 nasdaq.lst")
print()

which displays first 20 lines from the top:

<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
Symbol|Security Name|Market Category|Test Issue|Financial Status|Round Lot Size
AAIT|iShares MSCI All Country Asia Information Technology Index Fund|G|N|N|100
AAL|American Airlines Group, Inc. - Common Stock|Q|N|N|100
AAME|Atlantic American Corporation - Common Stock|G|N|D|100
AAOI|Applied Optoelectronics, Inc. - Common Stock|G|N|N|100
AAON|AAON, Inc. - Common Stock|Q|N|N|100
AAPC|Atlantic Alliance Partnership Corp. - Ordinary Shares|S|N|N|100
AAPL|Apple Inc. - Common Stock|Q|N|N|100
AAVL|Avalanche Biotechnologies, Inc. - Common Stock|G|N|N|100
AAWW|Atlas Air Worldwide Holdings - Common Stock|Q|N|N|100
AAXJ|iShares MSCI All Country Asia ex Japan Index Fund|G|N|N|100
ABAC|Aoxin Tianli Group, Inc. - Common Shares|S|N|N|100
ABAX|ABAXIS, Inc. - Common Stock|Q|N|N|100

As you can see, we are not interested in the first 8 lines of our file. Before cleaning that mess, let’s inspect the “happy ending” as well:

os.system("tail -5 nasdaq.lst")
print()

displaying

ZVZZT|NASDAQ TEST STOCK|G|Y|N|100
ZWZZT|NASDAQ TEST STOCK|S|Y|N|100
ZXYZ.A|Nasdaq Symbology Test Common Stock|Q|Y|N|100
ZXZZT|NASDAQ TEST STOCK|G|Y|N|100
File Creation Time: 0624201511:02|||||

Again, we notice that the last line does not make our housewarming party any merrier.

Given that information, we employ a heavy but smart one-liner making use of the immortal Unix commands cat and sed in a pipeline. The next call in our Python code does 3 miracles in one shot. Have a look:

os.system("tail -n +9 nasdaq.lst | cat | sed '$d' | sed 's/|/ /g' > "
          "nasdaq.lst2")

If you view the output file of nasdaq.lst2 you will see its content to be exactly as we wanted it to be, i.e.:

$ echo; head nasdaq.lst2; echo "..."; tail nasdaq.lst2
 
AAIT iShares MSCI All Country Asia Information Technology Index Fund G N N 100
AAL American Airlines Group, Inc. - Common Stock Q N N 100
AAME Atlantic American Corporation - Common Stock G N D 100
AAOI Applied Optoelectronics, Inc. - Common Stock G N N 100
AAON AAON, Inc. - Common Stock Q N N 100
AAPC Atlantic Alliance Partnership Corp. - Ordinary Shares S N N 100
AAPL Apple Inc. - Common Stock Q N N 100
AAVL Avalanche Biotechnologies, Inc. - Common Stock G N N 100
AAWW Atlas Air Worldwide Holdings - Common Stock Q N N 100
AAXJ iShares MSCI All Country Asia ex Japan Index Fund G N N 100
...
ZNGA Zynga Inc. - Class A Common Stock Q N N 100
ZNWAA Zion Oil & Gas Inc - Warrants G N N 100
ZSAN Zosano Pharma Corporation - Common Stock S N N 100
ZSPH ZS Pharma, Inc. - Common Stock G N N 100
ZU zulily, inc. - Class A Common Stock Q N N 100
ZUMZ Zumiez Inc. - Common Stock Q N N 100
ZVZZT NASDAQ TEST STOCK G Y N 100
ZWZZT NASDAQ TEST STOCK S Y N 100
ZXYZ.A Nasdaq Symbology Test Common Stock Q Y N 100
ZXZZT NASDAQ TEST STOCK G Y N 100

The command of

tail -n +9 nasdaq.lst

lists all lines of the file starting from line 9, i.e. skipping the first eight lines. Next we push that output through a pipe and list it as a whole using the cat command. In the next step the output is processed by the sed command which (a) first removes the last line; (b) then replaces all “|” tokens with spaces. Finally, the processed output is saved as the nasdaq.lst2 file. The power of Unix in a single line. After 15 years of using it I’m still smiling to myself doing that :)
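For completeness, a pure-Python equivalent of that one-liner (a sketch, assuming nasdaq.lst has already been downloaded as shown above) could look like this:

with open("nasdaq.lst") as fin:
    lines = fin.read().splitlines()
 
# skip the first 8 lines (HTML junk plus header) and the last line (file stamp),
# then replace the '|' separators with spaces
cleaned = [line.replace("|", " ") for line in lines[8:-1]]
 
with open("nasdaq.lst2", "w") as fout:
    fout.write("\n".join(cleaned) + "\n")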

All right. What is left? Getting a list of tickers and storing it into a CSV file. Piece of cake. Here we employ the Unix command of awk in the following way:

os.system("awk '{print $1}' nasdaq.lst2 > nasdaq.csv")
os.system("echo; head nasdaq.csv; echo '...'; tail nasdaq.csv")

which returns

AAIT
AAL
AAME
AAOI
AAON
AAPC
AAPL
AAVL
AAWW
AAXJ
...
ZNGA
ZNWAA
ZSAN
ZSPH
ZU
ZUMZ
ZVZZT
ZWZZT
ZXYZ.A
ZXZZT

i.e. an isolated list of NASDAQ tickers stored in nasdaq.csv file. From this point, you can read it into Python’s pandas DataFrame as follows:

import pandas as pd
data = pd.read_csv("nasdaq.csv", index_col=None, header=None)
data.columns=["Ticker"]
print(data)

displaying

      Ticker
0       AAIT
1        AAL
2       AAME
3       AAOI
4       AAON
5       AAPC
...
 
[3034 rows x 1 columns]

That’s it.

In the following post, I will make use of that list to fetch the stock trading data and analyse the distribution of extreme values–the gateway to prediction of extreme and heavy losses for every portfolio holder (part 2 out of 3). Stay tuned!

DOWNLOADS
   nasdaqtickers.py

RELATED POSTS
   How to Find a Company Name given a Stock Ticker Symbol utilising Quandl API
   Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders (1)

Hacking Google Finance in Real-Time for Algorithmic Traders. (2) Pre-Market Trading.

Featured in: Data Science Weekly Newsletter, Issue 76 (May 7, 2015)

It has been over a year since I posted Hacking Google Finance in Real-Time for Algorithmic Traders article. Surprisingly, it became the number one URL of QaR that Google has been displaying as a result to various queries and the number two most frequently read post. Thank You! It’s my pleasure to provide quality content covering interesting topics that I find potentially useful.


You may be surprised how fast Python solutions have moved forward, facilitating the life of quants and algo traders. For instance, yesterday I haphazardly found a piece of code that seems to work equally well compared to my first version and, in fact, is more flexible in the data content that can be retrieved. The idea stays the same as previously; however, our goal this time is to monitor the changes of stock prices provided by Google Finance in real-time before the market opens.

Constructing Pre-Market Price-Series

The pre-market trading session typically occurs between 8:00am and 9:30am EDT each trading day though for some stocks we often observe frequent movements much earlier, e.g. at 6:00am. Many investors and traders watch the pre-market trading activity to judge the strength and direction of the market in anticipation for the regular trading session. Pre-market trading activity generally has limited volume and liquidity, and therefore, large bid-ask spreads are common. Many retail brokers offer pre-market trading, but may limit the types of orders that can be used during the pre-market period$^1$.

In Google Finance the stock price in pre-market is usually displayed right beneath the ticker, for example:

AAPLpm

The price of the stock (here: AAPL) varies depending on interest, good/bad news, etc.

In Python we can fetch those changes (I adapt a piece of code found on the Web) in the following way:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
import urllib2  # works fine with Python 2.7.9 (not 3.4.+)
import json
import time
 
def fetchPreMarket(symbol, exchange):
    link = "http://finance.google.com/finance/info?client=ig&q="
    url = link+"%s:%s" % (exchange, symbol)
    u = urllib2.urlopen(url)
    content = u.read()
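    # Google prepends "// " to the JSON payload (see the sample below), hence content[3:]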
    data = json.loads(content[3:])
    info = data[0]
    t = str(info["elt"])    # time stamp
    l = float(info["l"])    # close price (previous trading day)
    p = float(info["el"])   # stock price in pre-market (after-hours)
    return (t,l,p)
 
 
p0 = 0
while True:
    t, l, p = fetchPreMarket("AAPL","NASDAQ")
    if(p!=p0):
        p0 = p
        print("%s\t%.2f\t%.2f\t%+.2f\t%+.2f%%" % (t, l, p, p-l,
                                                 (p/l-1)*100.))
    time.sleep(60)

In this code we target Google to get every 60 seconds an update of the pre-market price (line #14). What we retrieve is a JSON file of the form:

// [
{
"id": "22144"
,"t" : "AAPL"
,"e" : "NASDAQ"
,"l" : "125.80"
,"l_fix" : "125.80"
,"l_cur" : "125.80"
,"s": "1"
,"ltt":"4:02PM EDT"
,"lt" : "May 5, 4:02PM EDT"
,"lt_dts" : "2015-05-05T16:02:28Z"
,"c" : "-2.90"
,"c_fix" : "-2.90"
,"cp" : "-2.25"
,"cp_fix" : "-2.25"
,"ccol" : "chr"
,"pcls_fix" : "128.7"
,"el": "126.10"
,"el_fix": "126.10"
,"el_cur": "126.10"
,"elt" : "May 6, 6:35AM EDT"
,"ec" : "+0.30"
,"ec_fix" : "0.30"
,"ecp" : "0.24"
,"ecp_fix" : "0.24"
,"eccol" : "chg"
,"div" : "0.52"
,"yld" : "1.65"
,"eo" : ""
,"delay": ""
,"op" : "128.15"
,"hi" : "128.45"
,"lo" : "125.78"
,"vo" : "21,812.00"
,"avvo" : "46.81M"
,"hi52" : "134.54"
,"lo52" : "82.90"
,"mc" : "741.44B"
,"pe" : "15.55"
,"fwpe" : ""
,"beta" : "0.84"
,"eps" : "8.09"
,"shares" : "5.76B"
,"inst_own" : "62%"
,"name" : "Apple Inc."
,"type" : "Company"
}
]

You can download it yourself by executing the following query in a browser:

http://www.google.com/finance/info?infotype=infoquoteall&q=NASDAQ:AAPL

Some of that information is easy to decipher. For our task we only need: el (the asset price in pre-market or after-hours trading, a.k.a. extended-hours trading); elt (the corresponding time stamp); and l (the most recent last price). This is what our Python code does for us in lines #12-14. Nice and smooth.

When executed before 9.30am EDT (here for NASDAQ:AAPL), we may construct the pre-market price-series every time the price has been changed:

May 6, 6:35AM EDT	125.80	126.18	+0.38	+0.30%
May 6, 6:42AM EDT	125.80	126.21	+0.41	+0.33%
May 6, 6:45AM EDT	125.80	126.16	+0.36	+0.29%
May 6, 6:46AM EDT	125.80	126.18	+0.38	+0.30%
May 6, 6:49AM EDT	125.80	126.10	+0.30	+0.24%
May 6, 6:51AM EDT	125.80	126.20	+0.40	+0.32%
May 6, 6:57AM EDT	125.80	126.13	+0.33	+0.26%
May 6, 7:00AM EDT	125.80	126.20	+0.40	+0.32%
May 6, 7:01AM EDT	125.80	126.13	+0.33	+0.26%
May 6, 7:07AM EDT	125.80	126.18	+0.38	+0.30%
May 6, 7:09AM EDT	125.80	126.20	+0.40	+0.32%
May 6, 7:10AM EDT	125.80	126.19	+0.39	+0.31%
May 6, 7:10AM EDT	125.80	126.22	+0.42	+0.33%
May 6, 7:12AM EDT	125.80	126.20	+0.40	+0.32%
May 6, 7:22AM EDT	125.80	126.27	+0.47	+0.37%
May 6, 7:28AM EDT	125.80	126.24	+0.44	+0.35%
...
May 6, 9:02AM EDT	125.80	126.69	+0.89	+0.71%
May 6, 9:03AM EDT	125.80	126.71	+0.91	+0.72%
May 6, 9:04AM EDT	125.80	126.73	+0.93	+0.74%
May 6, 9:08AM EDT	125.80	126.67	+0.87	+0.69%
May 6, 9:09AM EDT	125.80	126.69	+0.89	+0.71%
May 6, 9:10AM EDT	125.80	126.68	+0.88	+0.70%
May 6, 9:13AM EDT	125.80	126.67	+0.87	+0.69%
May 6, 9:14AM EDT	125.80	126.72	+0.92	+0.73%
May 6, 9:16AM EDT	125.80	126.74	+0.94	+0.75%
May 6, 9:17AM EDT	125.80	126.72	+0.92	+0.73%
May 6, 9:18AM EDT	125.80	126.70	+0.90	+0.72%
May 6, 9:19AM EDT	125.80	126.73	+0.93	+0.74%
May 6, 9:20AM EDT	125.80	126.75	+0.95	+0.76%
May 6, 9:21AM EDT	125.80	126.74	+0.94	+0.75%
May 6, 9:21AM EDT	125.80	126.79	+0.99	+0.79% (*)
May 6, 9:23AM EDT	125.80	126.78	+0.98	+0.78%
May 6, 9:24AM EDT	125.80	126.71	+0.91	+0.72%
May 6, 9:25AM EDT	125.80	126.73	+0.93	+0.74%
May 6, 9:26AM EDT	125.80	126.75	+0.95	+0.76%
May 6, 9:27AM EDT	125.80	126.70	+0.90	+0.72%
May 6, 9:28AM EDT	125.80	126.75	+0.95	+0.76%
May 6, 9:29AM EDT	125.80	126.79	+0.99	+0.79%

Since the prices in pre-market tend to vary slowly, a 60-second time interval is sufficient to keep our eye on the stock. You can compare the live result retrieved using our Python code at 9:21am (*) with the screenshot above, which I took at the same time.
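One simple way to actually keep the constructed price-series in memory is to collect the fetched tuples in a pandas DataFrame. A sketch, not part of the original code (column names are my own choice), reusing the fetchPreMarket function defined above:

import pandas as pd
 
quotes = []                                  # (timestamp, last close, pre-market price)
t, l, p = fetchPreMarket("AAPL", "NASDAQ")   # one fetch; loop and sleep as above
quotes.append((t, l, p))
 
df = pd.DataFrame(quotes, columns=["Time", "Close", "PreMarket"])
df["Change%"] = (df.PreMarket/df.Close - 1)*100
print(df)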

A simple joy of Python in action. Enjoy!

HOMEWORK
     1. The code fails after 9.30am EST (NYC time). Modify it to catch this exception.
     2. Modify the code (or write a new function) that works after 9.30am EDT.
     3. It is possible to get $N\gt1$ queries for $N$ stocks by calling, for example:
            NASDAQ:AAPL,NYSE:JNJ,… in line #7 of the code. Modify the program
            to fetch pre-market time-series, $x_i(t)$ $(i=1,…,N)$, for $N$-asset portfolio.
            Given that, compute a fractional root-mean-square volatility, $\sigma_{x_i(t)}/\langle x_i(t) \rangle$,
            i.e. standard deviation divided by the mean, between 6am and 9.30am EDT
            for each asset and check can you use it as an indicator for stock price movement
            after 9.30am? Tip: the higher frms the more trading is expected in first 15 min
            of a new session at Wall Street.
     4. Modify the code to monitor after-hours trading till 4.30pm.

RELATED POSTS
    Hacking Google Finance in Real-Time for Algorithmic Traders

FURTHER READING
    Chenoweth, M., 2011, Downloading Google Intraday historical data with Python
    NetworkError.org, 2013, Google’s Undocumented Finance API

REFERENCES
    $^1$Pre-Market, Investopedia, http://www.investopedia.com/terms/p/premarket.asp

How to Find a Company Name given a Stock Ticker Symbol utilising Quandl API

Quandl.com offers an easy solution to that task. Their WIKI database contains 3339 major stock tickers and corresponding company names. They can be fetched via secwiki_tickers.csv file. For a plain file of portfolio.lst storing the list of your tickers, for example:

AAPL
IBM
JNJ
MSFT
TXN

you can scan the .csv file for the company name coding in Python:

# How to Find a Company Name given a Stock Ticker Symbol utilising Quandl API
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
 
import pandas as pd
 
df = pd.read_csv('secwiki_tickers.csv')
dp = pd.read_csv('portfolio.lst',names=['pTicker'])
 
pTickers = dp.pTicker.values  # converts the column into an array of tickers
 
tmpTickers = []
 
for i in range(len(pTickers)):
    test = df[df.Ticker==pTickers[i]]
    if not (test.empty):
        print("%-10s%s" % (pTickers[i], list(test.Name.values)[0]))

what returns:

AAPL      Apple Inc.
IBM       International Business Machines Corporation
JNJ       Johnson & Johnson
MSFT      Microsoft Corporation
TXN       Texas Instruments Inc.
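As a side note, the same lookup can be done without an explicit loop, e.g. (a sketch reusing the df and dp DataFrames defined above):

found = df[df.Ticker.isin(dp.pTicker)][['Ticker', 'Name']]
print(found.to_string(index=False))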

Please note that it is possible to combine more stocks from other Quandl resources in one file. For more information see Quandl.com’s documentation online.

I will dedicate a separate Chapter on hacking the financial websites in my next book of Python for Quants. Volume II. in mid-2015.

Fast Walsh–Hadamard Transform in Python


I felt a bit unsatisfied after my last post on Walsh–Hadamard Transform and Tests for Randomness of Financial Return-Series, leaving you all with a slow version of the Walsh–Hadamard Transform (WHT). Someone wise once said: in order to become a champion, you need to fight one round longer. So here I go, one more time, on WHT in Python. Please excuse me or learn from it. The choice is yours.

This post is not only a great lesson on how one can convert an algorithm that is slow into its faster version, but also how to time it and take benefits out of its optimisation for speed. Let’s start from the beginning.

1. From Slow WHT to Fast WHT

In the abovementioned post we outlined that the Slow Walsh-Hadamard Transform of any signal $x(t)$ of length $n=2^M$, where $M\in\mathbb{Z}^+$, $M \gt 2$, may be derived as:
$$
WHT_n = \mathbf{x} \bigotimes_{i=1}^M \mathbf{H_2}
$$ where $\mathbf{H_2}$ is Hadamard matrix of order 2 and signal $x(t)$ is discrete and real-valued.

The Kronecker product of two matrices plays a key role in the definition of WH matrices. Thus, the Kronecker product of $\mathbf{A}$ and $\mathbf{B}$, where the former is a square matrix of order $n$ and the latter is of order $m$, is the square matrix $\mathbf{C}$ of order $nm$ such that:
$$
\mathbf{C} = \mathbf{A} \bigotimes \mathbf{B}
$$ Fino & Algazi (1976) outlined that if $\mathbf{A}$ and $\mathbf{B}$ are unitary matrices and thus $\mathbf{C}$ is also unitary and can be defined by the fast algorithm as shown below:

fwht

which can be computed in place. The technical details of it are beyond the scope of this post however the paper of Fino & Algazi (1976) is a good place to start your research on Fast Walsh-Hadamard Transform algorithm.

The algorithm requires a bit-reversal permutation. This tricky subject has been covered by Ryan Compton in his post entitled Bit-Reversal Permutation in Python. For me, it’s a gateway (or a shortcut) to a Python counterpart of Matlab’s bitrevorder function, which permutes data into bit-reversed order. Therefore, for any Python list we obtain its bit-reversed order making use of Ryan’s functions, namely:

def bit_reverse_traverse(a):
    # (c) 2014 Ryan Compton
    # ryancompton.net/2014/06/05/bit-reversal-permutation-in-python/
    n = a.shape[0]
    assert(not n&(n-1) ) # assert that n is a power of 2
    if n == 1:
        yield a[0]
    else:
        even_index = np.arange(n/2)*2
        odd_index = np.arange(n/2)*2 + 1
        for even in bit_reverse_traverse(a[even_index]):
            yield even
        for odd in bit_reverse_traverse(a[odd_index]):
            yield odd
 
def get_bit_reversed_list(l):
    # (c) 2014 Ryan Compton
    # ryancompton.net/2014/06/05/bit-reversal-permutation-in-python/
    n = len(l)
    indexs = np.arange(n)
    b = []
    for i in bit_reverse_traverse(indexs):
        b.append(l[i])
    return b

that can be called for any Python’s list or 1D NumPy object as follows:

from random import randrange, seed
from WalshHadamard import *
 
seed(2015)
 
X=[randrange(-1,2,2) for i in range(2**3)]
print("X = ")
print(X)
 
x=get_bit_reversed_list(X)
x=np.array(x)
print("\nBit Reversed Order of X = ")
print(x)

what returns:

X = 
[1, 1, 1, -1, 1, -1, 1, -1]
 
Bit Reversed Order of X = 
[ 1  1  1  1  1 -1 -1 -1]

i.e. exactly the same output as we can obtain employing for the same signal of $X$ the Matlab’s function of bitrevorder(X), namely:

>> X=[1, 1, 1, -1, 1, -1, 1, -1]
X =
     1     1     1    -1     1    -1     1    -1
 
>> bitrevorder(X)
ans =
     1     1     1     1     1    -1    -1    -1

2. Fast Walsh–Hadamard Transform in Python

Given the above, we get the Fast WHT adopting a Python version of the Mathworks’ algorithm and making use of Ryan’s bit reversed order for any real-valued discrete signal of $x(t)$, as follows:

def FWHT(X):
    # Fast Walsh-Hadamard Transform for 1D signals
    # of length n=2^M only (non error-proof for now)
    x=get_bit_reversed_list(X)
    x=np.array(x)
    N=len(X)
 
    for i in range(0,N,2):
        x[i]=x[i]+x[i+1]
        x[i+1]=x[i]-2*x[i+1]
 
    L=1
    y=np.zeros_like(x)
    for n in range(2,int(log(N,2))+1):
        M=2**L
        J=0; K=0
        while(K<N):
            for j in range(J,J+M,2):
                y[K]   = x[j]   + x[j+M]
                y[K+1] = x[j]   - x[j+M]
                y[K+2] = x[j+1] + x[j+1+M]
                y[K+3] = x[j+1] - x[j+1+M]
                K=K+4
            J=J+2*M
        x=y.copy()
        L=L+1
 
    y = x/float(N)   # normalise by the signal length
    return y

where an exemplary call of this function can take place in your Python’s main program as here:

from random import randrange, seed
from WalshHadamard import *
 
seed(2015)
 
X=[randrange(-1,2,2) for i in range(2**3)]
 
print("X = ")
print(X)
print("Slow WHT(X) = ")
print(WHT(X)[0])
print("Fast WHT(X) = ")
print(FWHT(X))

returning the output, e.g.:

X = 
[1, 1, 1, -1, 1, -1, 1, -1]
Slow WHT(X) = 
[ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]
Fast WHT(X) = 
[ 0.25  0.75  0.25 -0.25  0.25 -0.25  0.25 -0.25]

3. Slow vs. Fast Walsh-Hadamard Transform in Python

When it comes to making speed comparisons I always feel uncomfortable due to one major factor: the machine I use to run the test. And since I do not have a better one, I use what I’ve got: my Apple MacBook Pro 2013, with 2.6 GHz Intel Core i7, and 16 GB 1600 MHz DDR3. That’s it. Theodore Roosevelt once said: Do what you can, with what you have, where you are. So, that’s the best what I have where I am.

Let’s design a simple test which compares the performance of both the Fast and Slow WHT in Python as defined above. We will be interested in the computation times of both transforms for a variable length of the discrete signal $X(t)$ (kept identical between runs thanks to Python’s seed function, which freezes the “random” signal to be the same every time it is generated), defined as $n=2^M$ for $M$ in the interval $[4,16]$ (see maxM in the code below).

First we will plot a physical time it takes to get both transforms followed by the graph presenting the speedup we gain by employing Fast WHT. The main code that executes our thought process looks as follows:

from random import randrange, seed
from WalshHadamard import *
import time
 
maxM=16
 
m=[]; s=[]; f=[]
for M in range(4,maxM+1):
    shwt=fwht=t0=fast=slow=0
    # generate random binary (-1,1) signal X
    # of length n=2**M
    seed(2015)
    X=[randrange(-1,2,2) for i in range(2**M)]
    # compute Slow WHT for X
    t0=time.time()
    shwt,_,_=WHT(X)
    slow=time.time()-t0 # time required to get SWHT
    s.append(slow)
    del shwt, slow, t0
    # compute Fast WHT for X
    t0=time.time()
    fwht=FWHT(X)
    fast=time.time()-t0 # time required to get FWHT
    m.append(M)
    f.append(fast)
    del fwht, fast, t0
 
speedup=np.array(s)/np.array(f)
 
f1=plt.figure(1)
plt.plot(m,s,"ro-", label='Slow WHT')
plt.hold(True)
plt.plot(m,f,"bo-", label='Fast WHT')
plt.legend(loc="upper left")
ax = plt.gca()
ax.set_yscale("log")
plt.xlabel("Signal length order, M")
plt.ylabel("Computation time [sec]")
 
f2=plt.figure(2)
plt.plot(m,speedup,'go-')
plt.xlabel("Signal length order, M")
plt.ylabel("Speedup (x times)")
plt.hold(True)
plt.plot((4,M),(1,1),"--k")
plt.xlim(4,M)
 
plt.show()

where we obtain:
figure_1
and the speedup plot:
figure_2
From the graph we notice an immediate benefit of Fast WHT when it comes to much much longer signals. Simplicity of complexity in action. Piece of cake, a walk in a park.

Fast & Furious 8? Well, QuantAtRisk.com delivers you the trailer before the official trailer. Enjoy! But only if feel the need… the need for speed!

DOWNLOAD
     WalshHadamard.py

REFERENCES
     Fino B. J., Algazi, V. R., 1976, Unified Matrix Treatment of the Fast Walsh-Hadamard Transform, IEEE Transactions on Computers (pdf)

Walsh–Hadamard Transform and Tests for Randomness of Financial Return-Series

Randomness. A magic that happens behind the scenes. An incomprehensible little black box that does the job for us. Quants. Many use it every day without thinking, without considering how those beautifully uniformly distributed numbers are drawn?! Why so fast? Why so random? Is randomness a byproduct of chaos and order tamed somehow? How much trust can we place in this seemingly minor command of random() as a piece of our code?


Randomness Built-in. Not only is that the name of a chapter in my latest book but also the main reason I’m writing this article: wandering sideways around the subject led me to new, undiscovered territories! In general, the output in the form of a random number comes from a dedicated function. To be perfectly honest, it is like a refined gentleman, of sophisticated quality, recognised by its efficiency when a great number of draws is requested. The numbers are not truly random. They are pseudo-random. That means the devil resides in the details. A typical pseudo-random number generator (PRNG) is designed for speed but defined by its underlying algorithm. In most computer languages the Mersenne Twister, developed by 松本 眞 (Makoto Matsumoto) and 西村 拓士 (Takuji Nishimura) in 1997, has become the standard. As one might suspect, an algorithm must repeat itself, and in the case of our Japanese heroes, its period is superbly long, i.e. $2^{19937}-1$. A reliable implementation in C or C++ guarantees enormous speedups in pseudo-random number generation. Interestingly, the Mersenne Twister is not perfect: its use has been discouraged when it comes to obtaining cryptographic random numbers. Can you imagine that?! But that’s another story. Ideal for a book chapter indeed!

In this post, I dare to present the very first, meaningful, and practical application of the Walsh–Hadamard Transform (WHT) in quantitative finance. Remarkably, this tool, of marginal use in digital signal processing, had been shown to serve as a great facility in testing any binary sequence for its statistically significant randomness!

Within the following sections we introduce the transform to finance. Following Oprina et al. (2009) we encode the WHT Significance Tests in Python (see the Download section) and verify the hypothesis of randomness, as generated by the Mersenne Twister PRNG, against the alternative one at very high significance levels of $\alpha\le 0.00001$. Next, we show a practical application of the WHT framework in the search for randomness in any financial time-series, by the example of 30-min FX return-series, and compare the results to those obtained from the PRNG available in Python by default. Curious about the findings? Welcome to the world of magic!

1. Walsh–Hadamard Transform And Binary Sequences

I wish I had for you a great opening story on how Jacques Hadamard and Joseph L. Walsh teamed up with Jack Daniels on a Friday night in a corner pub somewhere in San Francisco, coming up with a memorable breakthrough in the theory of numbers. I wish I had! Well, not this time. However, to make up for it, below I present in a nutshell the theoretical part of WHT I’ve been studying for the past three weeks and found truly mind-blowing because of its simplicity.

Let’s consider any real-valued discrete signal $X(t_i)$ where $i=0,1,…,N-1$. Its trimmed version, $x(t_i)$, of the total length of $n=2^M$ such that $2^M\le(N-1)$ and $M\in\mathbb{Z}^+$ at $M\ge 2$ is considered as an input signal for the Walsh–Hadamard Transform, the latter defined as:
$$
WHT_n = \mathbf{x} \bigotimes_{i=1}^M \mathbf{H_2}
$$ where the Hadamard matrix of order $n=2^M$ is obtainable recursively by the formula:
$$
\mathbf{H_{2^M}} =
\begin{pmatrix}
H_{2^{M-1}} & H_{2^{M-1}} \\
H_{2^{M-1}} & -H_{2^{M-1}}
\end{pmatrix}
\ \ \ \ \ \text{therefore} \ \ \ \ \ \mathbf{H_2} =
\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix}
$$ and $\otimes$ denotes the Kronecker product between two matrices. Given that, $WHT_n$ is the dot product between the signal (a 1D array; vector) and the matrix resulting from the Kronecker product of $M$ copies of $\mathbf{H_2}$ (Johnson & Puschel 2000).
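As a minimal illustration of that definition, for $M=2$ (i.e. $n=4$) one may write (a sketch; the normalisation by $n$ follows the convention of the WHT(x) function used later in this post):

import numpy as np
 
H2 = np.array([[1,  1],
               [1, -1]])
H4 = np.kron(H2, H2)              # Kronecker product of two copies of H_2
x = np.array([0.03, -0.01, 0.02, -0.04])
print(np.dot(H4, x)/4.)           # WHT_4 of the signal x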

1.1. Walsh Functions

The Walsh-Hadamard transform uses the orthogonal square-wave functions, $w_j(x)$, introduced by Walsh (1923), which have only two values $\pm 1$ in the interval $0\le x\lt 1$ and the value zero elsewhere. The original definition of the Walsh functions is based on the following recursive equations:
$$
w_{2j}(x) = w_j(2x)+(-1)^j w_j (2x -1) \quad \mbox{for} \ \ j=1,2,… \\
w_{2j-1}(x) = w_{j-1}(2x)-(-1)^{j-1} w_{j-1} (2x -1) \quad \mbox{for} \ \ j=1,2,…
$$ with the initial condition of $w_0(x)= 1$. You can meet with different ordering of Walsh functions in the literature but in general it corresponds to the ordering of the orthogonal harmonic functions $h_j(x)$, which are defined as:
$$
h_{2j}(x) = \cos(2\pi j x)\quad \mbox{for} \ \ j=0,1,… \\
h_{2j-1}(x) = \sin(2\pi j x)\quad \mbox{for} \ \ j=1,2,…
$$ on the interval $0\le x\lt 1$, and have zero value for all other values of $x$ outside this interval. A comparison of both function classes looks as follows:

hwfunctions

where the first eight harmonic functions and Walsh functions are given in the left and right panels, respectively. Walsh functions with even and odd orders are called the cal and sal functions, respectively, and they correspond to the cosine and sine functions in Fourier analysis (Aittokallio et al. 2001).

1.2. From Hadamard to Walsh Matrix

In Python the Hadamard matrix of order $2^M$ is obtainable as a NumPy 2D array making use of SciPy module as follows:

from scipy.linalg import hadamard
M=3; n=2**M
H=hadamard(n)
print(H)

which in this case returns:

[[ 1  1  1  1  1  1  1  1]
 [ 1 -1  1 -1  1 -1  1 -1]
 [ 1  1 -1 -1  1  1 -1 -1]
 [ 1 -1 -1  1  1 -1 -1  1]
 [ 1  1  1  1 -1 -1 -1 -1]
 [ 1 -1  1 -1 -1  1 -1  1]
 [ 1  1 -1 -1 -1 -1  1  1]
 [ 1 -1 -1  1 -1  1  1 -1]]

Each row of the matrix corresponds to Walsh function. However, the ordering is different (Hadamard ordering). Therefore, to be able to “see” the shape of Walsh functions as presented in the figure above, we need to rearrange their indexing. The resultant matrix is known as Walsh matrix. We derive it in Python:

def Hadamard2Walsh(n):
    # Function computes both Hadamard and Walsh Matrices of n=2^M order
    # (c) 2015 QuantAtRisk.com, coded by Pawel Lachowicz, adopted after
    # au.mathworks.com/help/signal/examples/discrete-walsh-hadamard-transform.html
    import numpy as np
    from scipy.linalg import hadamard
    from math import log
 
    hadamardMatrix=hadamard(n)
    HadIdx = np.arange(n)
    M = log(n,2)+1
 
    for i in HadIdx:
        s=format(i, '#032b')
        s=s[::-1]; s=s[:-2]; s=list(s)
        x=[int(x) for x in s]
        x=np.array(x)
        if(i==0):
            binHadIdx=x
        else:
            binHadIdx=np.vstack((binHadIdx,x))
 
    binSeqIdx = np.zeros((n,M)).T
 
    for k in reversed(xrange(1,int(M))):
        tmp=np.bitwise_xor(binHadIdx.T[k],binHadIdx.T[k-1])
        binSeqIdx[k]=tmp
 
    tmp=np.power(2,np.arange(M)[::-1])
    tmp=tmp.T
    SeqIdx = np.dot(binSeqIdx.T,tmp)
 
    j=1
    for i in SeqIdx:
        if(j==1):
            walshMatrix=hadamardMatrix[i]
        else:
            walshMatrix=np.vstack((walshMatrix,hadamardMatrix[i]))
        j+=1
 
    return (hadamardMatrix,walshMatrix)

Therefore, by calling the function in an exemplary main program:

from WalshHadamard import Hadamard2Walsh
import matplotlib.pyplot as plt
import numpy as np
 
M=3; n=2**M
(H,W)=Hadamard2Walsh(n)
# display Hadamard matrix followed by Walsh matrix (n=8)
print(H); print;
print(W)
 
# a visual comparison of Walsh function (j=2)
M=3;
n=2**M
_,W=Hadamard2Walsh(n)
plt.subplot(2,1,1)
plt.step(np.arange(n).tolist(),W[2],where="post")
plt.xlim(0,n)
plt.ylim(-1.1,1.1); plt.ylabel("order M=3")
 
M=8;
n=2**M
_,W=Hadamard2Walsh(n)
plt.subplot(2,1,2)
plt.step(np.arange(n).tolist(),W[2],where="post")
plt.xlim(0,n)
plt.ylim(-1.1,1.1); plt.ylabel("order M=8")
 
plt.show()

first, we display Hadamard and Walsh matrices of order $n=2^3$, respectively:

[[ 1  1  1  1  1  1  1  1]
 [ 1 -1  1 -1  1 -1  1 -1]
 [ 1  1 -1 -1  1  1 -1 -1]
 [ 1 -1 -1  1  1 -1 -1  1]
 [ 1  1  1  1 -1 -1 -1 -1]
 [ 1 -1  1 -1 -1  1 -1  1]
 [ 1  1 -1 -1 -1 -1  1  1]
 [ 1 -1 -1  1 -1  1  1 -1]]
 
[[ 1  1  1  1  1  1  1  1]
 [ 1  1  1  1 -1 -1 -1 -1]
 [ 1  1 -1 -1 -1 -1  1  1]
 [ 1  1 -1 -1  1  1 -1 -1]
 [ 1 -1 -1  1  1 -1 -1  1]
 [ 1 -1 -1  1 -1  1  1 -1]
 [ 1 -1  1 -1 -1  1 -1  1]
 [ 1 -1  1 -1  1 -1  1 -1]]

and next we visually verify that the shape of the third Walsh function ($j=2$) is preserved for two different orders, here $M=3$ and $M=8$, respectively:
walsh38

The third possibility of ordering the Walsh functions is so that they are arranged in increasing order of their sequencies or number of zero-crossings: sequency order. However, we won’t be interested in it this time.

1.3. Signal Transformations

As we have seen in the beginning, the WHT is able to perform a signal transformation for any real-valued time-series. The sole requirement is that the signal ought to be of length $2^M$. Now, it should be obvious why this is so. When you think about the WHT for a second longer, you may understand its uniqueness as contrasted with the Fourier transform. Firstly, the waveforms are much simpler. Secondly, the computational complexity is significantly reduced. Finally, if the input signal is converted from its original form down to only two discrete values $\pm 1$, we end up with a bunch of trivial arithmetical calculations while computing the WHT.

The Walsh-Hadamard transform found its application in medical signal processing, audio/sound processing, signal and image compression, pattern recognition, and cryptography. In the most simplistic cases, one deals with the input signal to be of the binary form, e.g. 01001011010110. If we consider the binary function $f: \mathbb{Z}_2^n \rightarrow \mathbb{Z}_2$ then the following transformation is possible:
$$
\bar{f}(\mathbf{x}) = 1 - 2f(\mathbf{x}) = (-1)^{f(\mathbf{x})}
$$ therefore $\bar{f}: \mathbb{Z}_2^n \rightarrow \{-1,1\}$. What it does is perform the following conversion, for instance:
$$
\{0,1,0,1,1,0,1,0,0,0,1,…\} \rightarrow \{-1,1,-1,1,1,-1,1,-1,-1,-1,1,…\}
$$ of the initial binary time-series. However, what is more interesting for us when it comes to financial return-series processing is the transformation:
$$
\bar{f}(\mathbf{x}) = \left\{
\begin{array}{l l}
1 & \ \text{if $f(\mathbf{x})\ge 0$}\\
-1 & \ \text{if $f(\mathbf{x})\lt 0$}
\end{array} \right.
$$ Given that, any return-series in the value interval $[-1,1]$ (real-valued) is transformed to the binary form of $\pm 1$, for example:
$$
\{0.031,0.002,-0.018,-0.025,0.011,…\} \rightarrow \{1,1,-1,-1,1,…\} \ .
$$ This simple signal transformation, in connection with the power of the Walsh-Hadamard Transform, opens new possibilities for analysing the underlying true signal. The WHT is all “made of” $\pm 1$ values sleeping in its Hadamard matrices. Coming into close contact with a signal of the same form, this is “where the rubber meets the road” (Durkee, Jacobs, & Meat Loaf 1995).
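In NumPy this conversion of a return-series into a $\pm 1$ sequence is a one-liner, e.g. (a quick sketch):

import numpy as np
 
rets = np.array([0.031, 0.002, -0.018, -0.025, 0.011])
signs = np.where(rets >= 0, 1, -1)    # +1 for non-negative returns, -1 otherwise
print(signs)                          # [ 1  1 -1 -1  1]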

1.4. Discrete Walsh-Hadamard Transform in Python

The Walsh-Hadamard transform is an orthogonal transformation that decomposes a signal into a set of orthogonal, rectangular waveforms (Walsh functions). For any real-valued signal we derive WHT as follows:

def WHT(x):
    # Function computes (slow) Discrete Walsh-Hadamard Transform
    # for any 1D real-valued signal
    # (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
    x=np.array(x)
    if(len(x.shape)<2): # make sure x is 1D array
        if(len(x)>3):   # accept x of min length of 4 elements (M=2)
            # check length of signal, adjust to 2**m
            n=len(x)
            M=trunc(log(n,2))
            x=x[0:2**M]
            h2=np.array([[1,1],[1,-1]])
            for i in xrange(M-1):
                if(i==0):
                    H=np.kron(h2,h2)
                else:
                    H=np.kron(H,h2)
 
            return (np.dot(H,x)/2.**M, x, M)
        else:
            print("HWT(x): Array too short!")
            raise SystemExit
    else:
        print("HWT(x): 1D array expected!")
        raise SystemExit

Despite the simplicity, this solution slows down for signals of length of $n\ge 2^{22}$ where, in case of my MacBook Pro, 16 GB of RAM is just not enough! Therefore, the mechanical derivation of WHT making use of Kronecker products between matrices is often referred to as Slow Walsh-Hadamard Transform. It is obvious that Fast WHT exists but its application for the use of this article (and research) is not required. Why? We will discuss it later on.

We can see our Python’s WHT(x) function in action coding, for instance:

from WalshHadamard import WHT
from random import randrange
 
x1=[0.123,-0.345,-0.021,0.054,0.008,0.043,-0.017,-0.036]
wht,_,_=WHT(x1)
print("x1  = %s" % x1)
print("WHT = %s" % wht)
 
x2=[randrange(-1,2,2) for i in xrange(2**4)]
wht,_,_=WHT(x2)
print;
print("x2  = %s" % x2)
print("WHT = %s" % wht)

what returns

x1  = [0.123, -0.345, -0.021, 0.054, 0.008, 0.043, -0.017, -0.036]
WHT = [-0.023875  0.047125 -0.018875  0.061125 -0.023375  0.051125 
       -0.044875 0.074625]
 
x2  = [1, -1, 1, 1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, -1, -1]
WHT = [ 0.375  0.125  0.125 -0.125 -0.125  0.125 -0.375 -0.125  
        0.375  0.125 -0.375  0.375 -0.125  0.125  0.125  0.375]

where we performed the WHT of a real-valued signal (e.g. a financial return-series) $x_1$ and of a random binary sequence $x_2$. Is it correct? You can always verify those results in Matlab (version used here: 2014b) by executing the corresponding function fwht from Matlab’s Signal Processing Toolbox:

>> x1=[0.123, -0.345, -0.021, 0.054, 0.008, 0.043, -0.017, -0.036]
x1 =
    0.1230  -0.3450  -0.0210  0.0540  0.0080  0.0430  -0.0170  -0.0360
 
>> y1 = fwht(x1,length(x1),'hadamard')
y1 =
   -0.0239    0.0471   -0.0189    0.0611   -0.0234    0.0511   -0.0449    0.0746
 
>> x2=[1, -1, 1, 1, 1, 1, 1, 1, -1, 1, 1, -1, 1, 1, -1, -1]
x2 =
     1 -1  1  1  1  1  1  1  -1  1  1  -1  1  1  -1  -1
 
>> y2 = fwht(x2,length(x2),'hadamard')'
y2 =
    0.3750
    0.1250
    0.1250
   -0.1250
   -0.1250
    0.1250
   -0.3750
   -0.1250
    0.3750
    0.1250
   -0.3750
    0.3750
   -0.1250
    0.1250
    0.1250
    0.3750

All nice, smooth, and in agreement. The difference between WHT(x) and Matlab’s fwht is that the former trims the signal down while the latter allows for padding with zeros. Just keep that in mind if you employ Matlab in your own research.
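If you prefer to stay within Python for such a cross-check, SciPy ships a ready-made Hadamard matrix generator; a quick sketch (assuming scipy is installed) confirming that the Kronecker construction used in WHT(x) reproduces it:

import numpy as np
from scipy.linalg import hadamard
 
M = 3
h2 = np.array([[1, 1], [1, -1]])
H = h2
for i in range(M-1):                        # repeated Kronecker products, as in WHT(x)
    H = np.kron(H, h2)
 
print(np.array_equal(H, hadamard(2**M)))    # True: both give the Sylvester-ordered matrix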

2. Random Sequences and Walsh-Hadamard Transform Statistical Test

You may raise a question. Why do we need to introduce WHT at all, transform return-series into binary $\pm 1$ sequences, and what does it have in common with randomness? The answer is straightforward and my idea is simple, namely: we may test any financial return time-series for randomness using our fancy WHT approach. A bit of creativity before sundown.

The WHT of a binary signal returns a unique pattern. If the signal $x(t)$ is of a random nature, we should not expect to find any repeatable sub-strings within it. That is the main motivation standing behind PRNGs: to imitate the true randomness found in nature.

In 2009 Oprina et al. (hereinafter Op09) proposed a statistical test based on results derived for any binary $\pm 1$ signal $x(t)$. They were smart about it: instead of looking at the signal as a whole, they suggested analysing it chunk by chunk (via signal segmentation into equal-length blocks). The statistical test they designed performs a class of autocorrelation tests with the correlation mask given by the rows of a Hadamard matrix. As a supplement to the methodology presented in NIST Special Publication 800-22 (Rukhin et al. 2010), which specifies 16 independent statistical tests for random and pseudorandom number generators for cryptographic applications, Op09 proposed another two methods, based on confidence intervals, which can detect a more general failure in random and pseudorandom generators. All 5 statistical tests of Op09 form the Walsh-Hadamard Transform Statistical Test Suite and, in what follows, we will encode them in Python, focusing both on their statistical meaning and application.

The idea standing behind the Op09 Suite was a generalised test suitable for different purposes: randomness testing, cryptographic design, crypto-analysis techniques and steganographic detection. In this Section we first concentrate our effort on seeing how this new test suite works for the standard Mersenne Twister PRNG concealed in Python's random function class. Next, we move to real-life financial data as input, where we aim to verify whether any chunk (block) of the time-series can be regarded as of a random nature at a given significance level of $\alpha$. If so, which one? If not, well, well, well… why not?! In the latter case, this test opens a new window of opportunity to look for the non-stochastic nature of, e.g., trading signals and their repeatable properties (if any).

2.1. Signal Pre-Processing

Each test requires some input data. In the case of the Op09 tests, we need to provide: a trimmed signal $x(t)$ of total length $n=2^M$; a chosen sequence (block) size of length $2^m$; a significance level $\alpha$ (rejection level); and a probability $p$ of occurrence of the digit 1. In step 1 we transform (if needed) $x(t_i)$ into a $\pm 1$ sequence as specified in Section 1.3. In step 2 we compute the lower and upper rejection limits of the test, $u_{\alpha/2}$ and $u_{1-\alpha/2}$. In step 3 we compute the number of sequences to be processed, $a=n/2^m$, and split $x(t)$ into $a$ adjacent blocks (sequences).

Since it sounds complicated, let's see how easy it is in fact. We design a function that splits the $X(t)$ signal into $a$ blocks of length $b$:

def xsequences(x):
    x=np.array(x)   # X(t) or x(t)
    if(len(x.shape)<2): # make sure x is 1D array
        if(len(x)>3):   # accept x of min length of 4 elements (M=2)
            # check length of signal, adjust to 2**M if needed
            n=len(x)
            M=trunc(log(n,2))
            x=x[0:2**M]
            a=(2**(M/2))  # a number of adjacent sequences/blocks
            b=2**M/a      # a number of elements in each sequence
            y=np.reshape(x,(a,b))
            #print(y)
            return (y,x,a,b,M)
        else:
            print("xsequences(x): Array too short!")
            raise SystemExit
    else:
        print("xsequences(x): 1D array expected!")
        raise SystemExit

where the conversion of a Python list's (or NumPy 1D array's) values into a $\pm 1$ signal is obtained by:

def ret2bin(x):
    # Function converts list/np.ndarray values into +/-1 signal
    # (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
    Y=[]; ok=False
    if('numpy' in str(type(x)) and 'ndarray' in str(type(x))):
        x=x.tolist()
        ok=True
    elif('list' in str(type(x))):
        ok=True
    if(ok):
        for y in x:
            if(y<0):
                Y.append(-1)
            else:
                Y.append(1)
        return Y
    else:
        print("Error: neither 1D list nor 1D NumPy ndarray")
        raise SystemExit

To be perfectly informed about what took place, we may wish to display a summary of the pre-processed signal as follows:

def line():
    # simple separator helper (assumed implementation: prints a dashed rule)
    print("-"*70)
 
def info(X,xt,a,b,M):
    line()
    print("Signal\t\tX(t)")
    print("  of length\tn = %d digits" % len(X))
    print("trimmed to\tx(t)")
    print("  of length\tn = %d digits (n=2^%d)" % (a*b,M))
    print("  split into\ta = %d sub-sequences " % a)
    print("\t\tb = %d-digit long" % b)
    print

Now, let's see how this section of our data processing works in practice. As an example, first, we generate a random signal $X(t)$ of length $N=2^{10}+3$ with values spread over the interval $[-0.2,0.2]$. That should give us some feeling of a financial return time-series (e.g. collected daily from stock or FX trading). Next, employing the function xsequences, we will trim the signal $X(t)$ down to $x(t)$ of length $n=2^{10}$, first converting the return-series into a $\pm 1$ sequence denoted by $x'(t)$. Finally, we split $x(t)$ into $a$ sub-sequences of length $b$, stored in $x_{\rm seq}$. An exemplary main program executing those steps could be coded as follows:

from WalshHadamard import ret2bin, xsequences, info
from random import uniform, seed
import numpy as np
 
# generate a random signal X(t)
seed(4515)
X=[uniform(-0.2,0.2) for i in range(2**10)]; X=X+[0.12,-0.04,0.01]
Xorg=X
 
# convert its values into +/-1 sequence
X=ret2bin(X)  # x'(t)
 
# split X'(t) into a blocks; save result in xseq 2D array
(xseq,x,a,b,M) = xsequences(X)
 
print("X(t) =")
for i in xrange(0,len(Xorg)):
    print("%10.5f" % Xorg[i])
print("\nx'(t) =")
print(np.array(X))
print("\nx(t) =")
print(x)
print("\nxseq =")
print(xseq)
print
 
info(X,x,a,b,M)

returning:

X(t) =
   0.17496
   0.07144
  -0.15979
   0.11344
   0.08134
  -0.01725
     ...
  -0.16005
   0.01419
  -0.08748
  -0.03218
  -0.07908
  -0.02781
   0.12000
  -0.04000
   0.01000
 
x'(t) =
[ 1  1 -1 ...,  1 -1  1]
 
x(t) =
[ 1  1 -1 ..., -1 -1 -1]
 
xseq =
[[ 1  1 -1 ..., -1 -1  1]
 [-1 -1 -1 ...,  1 -1  1]
 [-1 -1  1 ..., -1  1  1]
 ..., 
 [-1  1  1 ...,  1 -1  1]
 [-1  1  1 ..., -1 -1 -1]
 [ 1 -1 -1 ..., -1 -1 -1]]
 
----------------------------------------------------------------------
Signal		X(t)
  of length	n = 1027 digits
trimmed to	x(t)
  of length	n = 1024 digits (n=2^10)
  split into	a = 32 sub-sequences 
		b = 32-digit long

Having such a framework for the initial input data, we are ready to program the WHT Statistical Test based on the Op09 recipe.

2.2. Test Statistic

The 2D matrix $x_{\rm seq}$ holding our signal under investigation is the starting point for its tests for randomness. The Op09 test is based on the computation of the WHT for each row of $x_{\rm seq}$ and on the t-statistics, $t_{ij}$, as a test function built from the Walsh-Hadamard transforms of all sub-sequences of $x(t)$.

It is assumed that for any signal $y(t_i)$ where $i=0,1,…$ the WHT returns a sequence $\{w_i\}$ and: (a) for $w_0$ the mean value is $m_0=2^m(1-2p)$; the variance is given by $\sigma^2_0=2^{m+2}p(1-p)$ and the distribution of $(w_0-m_0)/\sigma_0 \sim N(0,1)$ for $m>7$; (b) for $w_i$ ($i\ge 1$) the mean value is $m_i=0$; the variance is $\sigma^2_i=2^{m+2}p(1-p)$ and the distribution of $(w_i-m_i)/\sigma_i \sim N(0,1)\ $ for $m>7$. Recalling that $p$ stands for the probability of occurrence of the digit 1 in $x_{{\rm seq},j}$, for $p = 0.5$ (our desired test probability) the mean value of $w_i$ is equal to 0 for every $i$.

In the $x_{\rm seq}$ array, for every $j=0,1,…,(a-1)$ and for every $i=0,1,…,(b-1)$ we compute the t-statistic as follows:
$$
t_{ij} = \frac{w_{ij} - m_i}{\sigma_i}
$$ where $w_{ij}$ is the $i$-th Walsh-Hadamard transform component of the block $j$. In addition, we convert all $t_{ij}$ into $p$-values:
$$
p{\rm -value} = P_{ij} = {\rm Pr}(X\ge t_{ij}) = 1-\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t_{ij}} e^{-x^2/2} dx
$$ such that $t_{ij}\sim N(0,1)$, i.e. it has a normal distribution with zero mean and unit standard deviation. Are you still with me? Great! The Python code for this part of our analysis may look the following way:

from math import sqrt
from scipy.special import ndtr   # the CDF of the standard normal distribution
 
def tstat(x,a,b,M):
    # specify the probability of occurrence of the digit "1"
    p=0.5
    print("Computation of WHTs...")
    for j in xrange(a):
        hwt, _, _ = WHT(x[j])
        if(j==0):
            y=hwt
        else:
            y=np.vstack((y,hwt))   # WHT for xseq
    print("  ...completed")
    print("Computation of t-statistics..."),
    t=[];
    for j in xrange(a):     # over sequences/blocks (rows)
        for i in xrange(b): # over sequence's elements (columns)
            if(i==0):
                if(p==0.5):
                    m0j=0
                else:
                    m0j=(2.**M/2.)*(1.-2.*p)
                sig0j=sqrt((2**M/2)*p*(1.-p))
                w0j=y[j][i]
                t0j=(w0j-m0j)/sig0j
                t.append(t0j)
            else:
                sigij=sqrt((2.**((M+2.)/2.))*p*(1.-p))
                wij=y[j][i]
                tij=wij/sigij
                t.append(tij)
    t=np.array(t)
    print("completed")
    print("Computation of p-values..."),
    # standardised t-statistics; t_{i,j} ~ N(0,1)
    t=(t-np.mean(t))/(np.std(t))
    # p-values = 1-[1/sqrt(2*pi)*integral[exp(-x**2/2),x=-inf..t]]
    P=1-ndtr(t)
    print("completed\n")
    return(t,P,y)

and returns, as the output, two 1D arrays storing the $t_{ij}$ statistics for every $w_{ij}$ (the t variable) and the corresponding $p$-values $P_{ij}$ (the P variable), together with the 2D array of WHTs (the y variable).

Continuing the above example, we extend its body by adding what follows:

from WalshHadamard import tstat
import matplotlib.pyplot as plt
 
(t, P, xseqWHT) = tstat(xseq,a,b,M)
 
print("t =")
print(t)
print("\np-values =")
print(P)
print("\nw =")
print(xseqWHT)
 
# display WHTs for "a" sequences
plt.imshow(xseqWHT, interpolation='nearest', cmap='PuOr')
plt.colorbar()
plt.xlabel("Sequence's Elements"); plt.ylabel("Sequence Number")
plt.title("Walsh-Hadamard Transforms")
plt.show()

to derive:

Computation of WHTs...
  ...completed
Computation of t-statistics... completed
Computation of p-values... completed
 
t =
[ 0.26609308  0.72730106 -0.69960094 ..., -1.41305193 -0.69960094
  0.01385006]
 
p-values =
[ 0.39508376  0.23352077  0.75791172 ...,  0.92117977  0.75791172
  0.4944748 ]
 
w =
[[ 0.125   0.125  -0.125  ...,  0.125  -0.125  -0.125 ]
 [ 0.1875 -0.1875 -0.0625 ..., -0.0625  0.0625 -0.0625]
 [ 0.0625  0.0625 -0.4375 ..., -0.0625 -0.0625 -0.0625]
 ..., 
 [-0.1875 -0.3125  0.1875 ...,  0.1875  0.1875 -0.1875]
 [-0.125   0.125  -0.125  ..., -0.125  -0.125  -0.375 ]
 [ 0.      0.125  -0.25   ..., -0.25   -0.125   0.    ]]

and, at the end, we display the WHTs computed for every single (horizontal) sequence of the $x_{\rm seq}$ array, as stored in the $w$ matrix by the tstat function:
[Figure: Walsh-Hadamard Transforms computed for each of the $a$ sub-sequences of $x_{\rm seq}$]
Again, every element (a square) in the above figure corresponds to a $w_{ij}$ value. I would also like to draw your attention to the fact that both arrays $t$ and $P$ are 1D vectors storing all t-statistics and $p$-values, respectively. In the following WHT tests we will make the additional effort to split them into $a\times b$ matrices corresponding exactly to the $w$-like arrangement. That's an easy part, so don't bother too much about it.
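For completeness, a minimal sketch of that re-arrangement, continuing the running example in which t, P, a and b are already defined (the same calls reappear inside the test functions below):

rt = np.reshape(t, (a, b))   # t-statistics as an (a x b) matrix; row j = sub-sequence j
rP = np.reshape(P, (a, b))   # p-values arranged in the same way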

2.3. Statistical Test Framework

In general, here, we are interested in verification of two opposite statistical hypotheses. The concept of hypothesis testing is given in every textbook on the subject. We test our binary signal $x(t)$ for randomness. Therefore, $H_0$: $x(t)$ is generated by a binary memory-less source i.e. the signal does not contain any predictable component; $H_1$: $x(t)$ is not produced by a binary memory-less source, i.e. the signal contains a predictable component.

Within the standard framework of hypothesis testing we can make two errors. The first one refers to $\alpha$ (so-called significance level) and denotes the probability of occurrence of a false positive result. The second one refers to $\beta$ and denotes the probability of the occurrence of a false negative result.

The testing procedure applied here is: for a fixed value of $\alpha$ we find a confidence region for the test statistic and check whether the value of the test statistic falls inside it. The confidence limits are computed using the quantiles $u_{\alpha/2}$ and $u_{1-\alpha/2}$ (unless specified otherwise in the text of a given test). Alternatively, if $t_{\rm stat}$ is the value of the test statistic (test function), we may compare its $p{\rm -value}={\rm Pr}(X\ge t_{\rm stat})$ with $\alpha$ and decide in favour of randomness when the $p$-value $\ge\alpha$.
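For instance, for a given $\alpha$ the two quantiles used throughout the tests can be obtained from SciPy as follows (a quick sketch; the value of $\alpha$ matches the confidence level used in the examples later on):

from scipy.stats import norm
 
alpha = 0.000001                 # significance level (cl = 99.9999%)
u1 = norm.ppf(alpha/2.)          # u_{alpha/2}
u2 = norm.ppf(1.-alpha/2.)       # u_{1-alpha/2}
print("%.5f %.5f" % (u1, u2))    # approximately -4.89 and +4.89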

2.4. Crude Decision (Test 1)

The first WHT test from the Op09 suite is a crude decision, or majority decision. For a chosen $\alpha$, with $u_\alpha$ denoting the quantile of order $\alpha$ of the normal distribution, if:
$$
t_{ij} \notin [u_{\alpha/2}; u_{1-\alpha/2}]
$$ then reject the hypothesis of randomness regarding the $i$-th test statistic of the signal $x(t)$ at the significance level of $\alpha$, and jot down both $j$ and $i$, corresponding to the sequence number and the sequence's element, respectively. Op09 suggests that this test is suitable for small numbers $a<1/\alpha$, which is generally always fulfilled for our data. We code this test in Python as follows:

from scipy.stats import norm
 
def test1(cl,t,a,b,otest):
    alpha=1.-cl/100.
    u1=norm.ppf(alpha/2.)
    u2=norm.ppf(1-alpha/2.)
    Results1=[]
    for l in t:
        if(l<u1 or l>u2):
            Results1.append(0)
        else:
            Results1.append(1)
    nfail=a*b-np.sum(Results1)
    print("Test 1 (Crude Decision)")
    print("  RESULT: %d out of %d test variables stand for " \
          "randomness" % (a*b-nfail,a*b))
    if((a*b-nfail)/float(a*b)>.99):
        print("\t  Signal x(t) appears to be random")
    else:
        print("\t  Signal x(t) appears to be non-random")
    otest.append(100.*(a*b-nfail)/float(a*b)) # gather per cent of positive results
    print("\t  at %.5f%% confidence level" % (100.*(1.-alpha)))
    print
    return(otest)

Op09's decision rule on the rejection of $H_0$ is too stiff. In the function we calculate the number of $t_{ij}$'s falling outside the test interval. If their fraction exceeds 1%, we claim a lack of evidence of randomness for $x(t)$ as a whole.

2.5. Proportion of Sequences Passing a Test (Test 2)

Recall that for each row (sub-sequence of $x(t)$) and its elements we have computed both the $t_{ij}$'s and the $P_{ij}$ values. Let's use the latter here. In this test, first, we check for every row of the (re-shaped) $P$ 2D array the number of $p$-values satisfying $P_{ij}\lt\alpha$. If this number is greater than zero, we reject the $j$-th sub-sequence of $x(t)$, at the significance level of $\alpha$, as one that fails the test. Over all $a$ sub-sequences we count the total number of those which did not pass the test, $n_2$. If:
$$
n_2 \notin \left[ a\alpha + \sqrt{a \alpha (1-\alpha)}\, u_{\alpha/2};\ a\alpha + \sqrt{a \alpha (1-\alpha)}\, u_{1-\alpha/2} \right]
$$ then there is evidence that the signal $x(t)$ is non-random.

We code this test simply as:

def test2(cl,P,a,b,otest):
    alpha=1.-cl/100.
    u1=norm.ppf(alpha/2.)
    u2=norm.ppf(1-alpha/2.)
    Results2=[]
    rP=np.reshape(P,(a,b))  # turning P 1D-vector into (a x b) 2D array!
    for j in xrange(a):
        tmp=rP[j][(rP[j]<alpha)]
        #print(tmp)
        if(len(tmp)>0):
            Results2.append(0)   # fail for sub-sequence
        else:
            Results2.append(1)   # pass
 
    nfail2=a-np.sum(Results2)  # total number of sub-sequences which failed (n_2)
    print("Test 2 (Proportion of Sequences Passing a Test)")
    b1=alpha*a+sqrt(a*alpha*(1-alpha))*u1  # lower confidence limit for n_2
    b2=alpha*a+sqrt(a*alpha*(1-alpha))*u2  # upper confidence limit for n_2
    if(nfail2<b1 or nfail2>b2):
        print("  RESULT: Signal x(t) appears to be non-random")
        otest.append(0.)
    else:
        print("  RESULT: Signal x(t) appears to be random")
        otest.append(100.)
    print("\t  at %.5f%% confidence level" % (100.*(1.-alpha)))
    print
    return(otest)

This test is also described well by Rukhin et al. (2010) though we follow the method of Op09 adjusted for the proportion of $p$-values failing the test for randomness as counted sub-sequence by sub-sequence.

2.6. Uniformity of p-values (Test 3)

In this test, the distribution of $p$-values is examined to ensure uniformity. This may be visually illustrated using a histogram, whereby the interval between 0 and 1 is divided into $K=10$ sub-intervals, and the $p$-values, i.e. in our case the $P_{ij}$'s, that lie within each sub-interval are counted and displayed.

Uniformity may also be determined via an application of a $\chi^2$ test and the determination of a $p$-value corresponding to the Goodness-of-Fit Distributional Test on the $p$-values obtained for an arbitrary statistical test (i.e., a $p$-value of the $p$-values). We accomplish that via computation of the test statistic:
$$
\chi^2 = \sum_{i=1}^{K} \frac{\left( F_i-\frac{a}{K} \right)^2}{\frac{a}{K}}
$$ where $F_i$ is the number of $P_{ij}$ in the histogram’s bin of $i$, and $a$ is the number of sub-sequences of $x(t)$ we investigate.
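As a minimal illustration of the statistic itself, here is a sketch with made-up $P_{ij}$ values for a single column $i$ (the test3 function below performs the same counting via plt.hist):

import numpy as np
 
a = 32                                          # number of sub-sequences
K = 10                                          # number of histogram bins
pvals = np.random.uniform(0., 1., a)            # toy P_{ij} values for one column i
F, _ = np.histogram(pvals, bins=np.arange(0.0, 1.1, 0.1))
chi2_test = np.sum((F - a/float(K))**2/(a/float(K)))
print("chi2 = %.3f" % chi2_test)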

We reject the hypothesis of randomness regarding the $i$-th test statistic of $x(t)$ at the significance level of $\alpha$ if:
$$
\chi^2_i \notin [0; \chi^2(\alpha, K-1)] \ .
$$ Here $\chi^2(\alpha, K-1)$ denotes the upper quantile of order $\alpha$ of the $\chi^2(K-1)$ distribution. In Python we may calculate it in the following way:

from scipy.stats import chi2
 
def Chi2(alpha,df):
    # helper returning the upper quantile of order alpha of the chi2(df)
    # distribution (assumed implementation; used by test3 and test5 below)
    return chi2.ppf(1.-alpha,df)
 
alpha=0.001
K=10
print(Chi2(alpha,K-1))

If our Test 3 value satisfies $\chi_i^2\le \chi^2(\alpha, K-1)$, then we count the $i$-th statistic as not standing against the randomness of $x(t)$. This is equivalent to testing whether the corresponding $\chi^2$ $p$-value satisfies $p{\rm -value}\ge\alpha$. The latter can be computed in Python as:

from scipy.special import gammainc
alpha=0.001
K=10
# for some derived variable of chi2_test
pvalue=1-gammainc((K-1)/2.,chi2_test/2.)

where gammainc(a,x) stands for the regularized lower incomplete gamma function defined as:
$$
P(a,x) = \frac{1}{\Gamma(a)} \int_0^x e^{-t} t^{a-1} dt
$$ and $\Gamma(a)$ denotes the standard gamma function.
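The same quantity can be cross-checked against SciPy's $\chi^2$ survival function, since 1-gammainc((K-1)/2, x/2) is exactly the upper-tail probability of the $\chi^2(K-1)$ distribution; a quick sketch for an arbitrary test value:

from scipy.special import gammainc
from scipy.stats import chi2
 
K = 10
chi2_test = 14.2                                # an arbitrary test value
p1 = 1. - gammainc((K-1)/2., chi2_test/2.)
p2 = chi2.sf(chi2_test, K-1)                    # survival function of chi2(K-1)
print("%.6f %.6f" % (p1, p2))                   # both print the same number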

Given that, we code Test 3 of Uniformity of $p$-values in the following way:

from scipy.special import gammainc
import matplotlib.pyplot as plt
 
def test3(cl,P,a,b,otest):
    alpha=1.-cl/100.
    rP=np.reshape(P,(a,b))
    rPT=rP.T
    Results3=0
    for i in xrange(b):
        (hist,bin_edges,_)=plt.hist(rPT[i], bins=list(np.arange(0.0,1.1,0.1)))
        F=hist
        K=len(hist)  # K=10 for bins as defined above
        S=a
        chi2=0
        for j in xrange(K):
            chi2+=((F[j]-S/K)**2.)/(S/K)
        pvalue=1-gammainc((K-1)/2.,chi2/2.)  # p-value of the chi2 statistic
        if(pvalue>=alpha and chi2<=Chi2(alpha,K-1)):
            Results3+=1
    print("Test 3 (Uniformity of p-values)")
    print("  RESULT: %d out of %d test variables stand for randomness"\
          % (Results3,b))
    if((Results3/float(b))>.99):
        print("\t  Signal x(t) appears to be random")
    else:
        print("\t  Signal x(t) appears to be non-random")
    otest.append(100.*(Results3/float(b)))
    print("\t  at %.5f%% confidence level" % (100.*(1.-alpha)))
    print
    return(otest)

where, again, we allow up to 1% of all results to fail without rejecting $x(t)$ as a random signal.

Please note the transposition of the rP matrix. The reason for testing the WHT values (converted to $t_{ij}$'s or $P_{ij}$'s) column-wise is to detect autocorrelation patterns in the tested signal $x(t)$. The same approach is applied by Op09 in Test 4 and Test 5, as discussed in the following sections, and constitutes the core of the invention and creativity added to Rukhin et al. (2010)'s suite of 16 tests for signals generated by various PRNGs to meet the cryptographic levels of acceptance as close-to-truly random.

2.7. Maximum Value Decision (Test 4)

This test is again based on the confidence-interval approach. Let $T_{i}=\max_j t_{ij}$; then if:
$$
T_{i} \notin \left[ u_{\left(\frac{\alpha}{2}\right)^{a^{-1}}}; u_{\left(1-\frac{\alpha}{2}\right)^{a^{-1}}} \right]
$$ we reject the hypothesis of randomness (regarding the $i$-th test function) of the signal $x(t)$ at the significance level of $\alpha$. Its Python encoding could not be simpler:

def test4(cl,t,a,b,otest):
    alpha=1.-cl/100.
    rt=np.reshape(t,(a,b))
    rtT=rt.T
    Results4=0
    for i in xrange(b):
        tmp=np.max(rtT[i])
        u1=norm.ppf((alpha/2.)**(1./a))
        u2=norm.ppf((1.-alpha/2.)**(1./a))
        if not(tmp<u1 or tmp>u2):
            Results4+=1
    print("Test 4 (Maximum Value Decision)")
    print("  RESULT: %d out of %d test variables stand for randomness" % (Results4,b))
    if((Results4/float(b))>.99):
        print("\t  Signal x(t) appears to be random")
    else:
        print("\t  Signal x(t) appears to be non-random")
    otest.append(100.*(Results4/float(b)))
    print("\t  at %.5f%% confidence level" % (100.*(1.-alpha)))
    print
    return(otest)

Pay attention to how this test looks at the results derived from the WHTs. It is sensitive to the distribution of the maximal values along the $i$-th elements of the $t$-statistics.

2.8. Sum of Square Decision (Test 5)

The final test makes use of the $C$-statistic defined as:
$$
C_i = \sum_{j=0}^{a-1} t_{ij}^2 \ .
$$ If $C_i \notin [0; \chi^2(\alpha, a)]$, we reject the hypothesis of randomness of $x(t)$ at the significance level of $\alpha$ regarding the $i$-th test function. In Python this test takes the form:

def test5(cl,t,a,b,otest):
    alpha=1.-cl/100.
    rt=np.reshape(t,(a,b))
    rtT=rt.T
    Results5=0
    for i in xrange(b):
        Ci=0
        for j in xrange(a):
           Ci+=(rtT[i][j])**2.
        if(Ci<=Chi2(alpha,a)):
            Results5+=1
    print("Test 5 (Sum of Square Decision)")
    print("  RESULT: %d out of %d test variables stand for randomness" % (Results5,b))
    if((Results5/float(b))>.99):
        print("\t  Signal x(t) appears to be random")
    else:
        print("\t  Signal x(t) appears to be non-random")
    otest.append(100.*(Results5/float(b)))
    print("\t  at %.5f%% confidence level" % (100.*(1.-alpha)))
    print
    return(otest)

Again, we allow for 1% of false negative results.

2.9. The Overall Test for Randomness of Binary Signal

We accept signal $x(t)$ to be random if the average passing rate from all five WHT statistical tests is greater than 99%, i.e. 1% can be due to false negative results, at the significance level of $\alpha$.

def overalltest(cl,otest):
    alpha=1.-cl/100.
    line()
    print("THE OVERALL RESULT:")
    if(np.mean(otest)>=99.0):
        print("   Signal x(t) displays an evidence for RANDOMNESS"),
        T=1
    else:
        print("   Signal x(t) displays an evidence for NON-RANDOMNESS"),
        T=0
    print("at %.5f%% c.l." % (100.*(1.-alpha)))
    print("   based on Walsh-Hadamard Transform Statistical Test\n")
    return(T)

and run all 5 tests by calling the following function:

def WHTStatTest(cl,X):
    (xseq,xt,a,b,M) = xsequences(X)
    info(X,xt,a,b,M)
    if(M<7):
        line()
        print("Error:  Signal x(t) too short for WHT Statistical Test")
        print("        Acceptable minimal signal length: n=2^7=128\n")
    else:
        if(M>=7 and M<19):
            line()
            print("Warning: Statistically advisable signal length: n=2^19=524288\n")
        line()
        print("Test Name: Walsh-Hadamard Transform Statistical Test\n")
        (t, P, _) = tstat(xseq,a,b,M)
        otest=test1(cl,t,a,b,[])
        otest=test2(cl,P,a,b,otest)
        otest=test3(cl,P,a,b,otest)
        otest=test4(cl,t,a,b,otest)
        otest=test5(cl,t,a,b,otest)
        T=overalltest(cl,otest)
        return(T)  # 1 if x(t) is random

fed by the binary $\pm 1$ signal $X$ (see the example in Section 2.1). The last function returns the variable T storing $1$ for the overall decision that $x(t)$ is random and $0$ otherwise. It can be used for a great number of repeated WHT tests over different signals in a loop, and thus for the determination of the ratio of instances in which the WHT Statistical Test passed.
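A minimal sketch of such a loop, here fed (for illustration) with fresh Mersenne Twister sequences like those tested in the next Section; any other $\pm 1$ signal source can be substituted:

from WalshHadamard import WHTStatTest
from random import randrange
 
cl = 99.9999
ntrials = 20
passed = 0
for k in range(ntrials):
    X = [randrange(-1, 2, 2) for i in range(2**16)]   # a fresh +/-1 signal
    passed += WHTStatTest(cl, X)                      # adds 1 if the overall test passed
print("Pass ratio: %.2f%%" % (100.*passed/ntrials))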

3. Randomness of random()

I know what you are thinking right now: I have just spent an hour reading all that stuff, so how about some real-life tests? I'm glad you asked! Here we go! We start with the Mersenne Twister algorithm, the Rolls-Royce engine of Python's random() function (and its derivatives). The whole fun of the theoretical part given above comes down to a few lines of code as given below.

Let’s see our WHT Suite of 5 Statistical Tests in action for a very long (of length $n=2^{21}$) random binary signal of $\pm 1$ form. Let’s run the exemplary main program calling the test:

# Walsh-Hadamard Transform and Tests for Randomness of Financial Return-Series
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
#
# Mersenne Twister PRNG test using WHT Statistical Test
 
from WalshHadamard import WHTStatTest, line
from random import randrange
 
# define confidence level for WHT Statistical Test
cl=99.9999
 
# generate random binary signal X(t)
X=[randrange(-1,2,2) for i in xrange(2**21)]
line()
print("X(t) =")
for i in range(20):
    print(X[i]),
print("...")
 
WHTStatTest(cl,X)

what returns lovely results, for instance:

----------------------------------------------------------------------
X(t) =
-1 -1 1 1 1 -1 1 -1 1 1 1 1 1 -1 -1 1 1 -1 1 1 ...
----------------------------------------------------------------------
Signal		X(t)
  of length	n = 2097152 digits
trimmed to	x(t)
  of length	n = 2097152 digits (n=2^21)
  split into	a = 1024 sub-sequences 
		b = 2048-digit long
 
----------------------------------------------------------------------
Test Name: Walsh-Hadamard Transform Statistical Test
 
Computation of WHTs...
  ...completed
Computation of t-statistics... completed
Computation of p-values... completed
 
Test 1 (Crude Decision)
  RESULT: 2097149 out of 2097152 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 2 (Proportion of Sequences Passing a Test)
  RESULT: Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 3 (Uniformity of p-values)
  RESULT: 2045 out of 2048 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 4 (Maximum Value Decision)
  RESULT: 2047 out of 2048 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 5 (Sum of Square Decision)
  RESULT: 2048 out of 2048 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
----------------------------------------------------------------------
THE OVERALL RESULT:
   Signal x(t) displays an evidence for RANDOMNESS at 99.99990% c.l.
   based on Walsh-Hadamard Transform Statistical Test

where we assumed the significance level of $\alpha=0.000001$. Impressive indeed!

At the same $\alpha$, however, for shorter signal sub-sequences ($a=256$; $n=2^{16}$), we still get a significant number of passed tests supporting the randomness of random(), for instance:

----------------------------------------------------------------------
X(t) =
-1 -1 1 -1 1 1 1 1 1 -1 -1 -1 -1 -1 -1 1 1 -1 1 -1 ...
----------------------------------------------------------------------
Signal		X(t)
  of length	n = 65536 digits
trimmed to	x(t)
  of length	n = 65536 digits (n=2^16)
  split into	a = 256 sub-sequences 
		b = 256-digit long
 
----------------------------------------------------------------------
Warning: Statistically advisable signal length: n=2^19=524288
 
----------------------------------------------------------------------
Test Name: Walsh-Hadamard Transform Statistical Test
 
Computation of WHTs...
  ...completed
Computation of t-statistics... completed
Computation of p-values... completed
 
Test 1 (Crude Decision)
  RESULT: 65536 out of 65536 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 2 (Proportion of Sequences Passing a Test)
  RESULT: Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 3 (Uniformity of p-values)
  RESULT: 254 out of 256 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 4 (Maximum Value Decision)
  RESULT: 255 out of 256 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 5 (Sum of Square Decision)
  RESULT: 256 out of 256 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
----------------------------------------------------------------------
THE OVERALL RESULT:
   Signal x(t) displays an evidence for RANDOMNESS at 99.99990% c.l.
   based on Walsh-Hadamard Transform Statistical Test

For this random binary signal $x(t)$, the Walsh-Hadamard Transforms for the first 64 signal sub-sequences reveal pretty “random” distributions of $w_{ij}$ values:
[Figure: Walsh-Hadamard Transforms for the first 64 sub-sequences of the random binary signal]

4. Randomness of Financial Return-Series

Eventually, in the grand finale, we stand face to face with the question: can financial return time-series be of a random nature? More precisely, if we convert any return-series (regardless of time step, frequency of trading, etc.) to a binary $\pm 1$ signal, do the corresponding positive and negative returns occur randomly or not? Good question.

Let's see, using the example of a 30-min FX return-series of the USDCHF currency pair traded between Sep/2009 and Nov/2014, how our WHT test for randomness works. We use tick-data downloaded from Pepperstone.com and rebinned into an evenly distributed 30-min price series, as described in my earlier post Rebinning Tick-Data for FX Algo Traders. We read the data in Python and convert the price-series into a return-series.

What follows is similar to what we have done in the previous examples. First, we convert the return-series into a binary signal. As we will see, the signal $X(t)$ is 65731 points long and can be split into 256 sub-sequences, each 256 points long. Therefore $n=2^{16}$ stands for the trimmed signal $x(t)$. For the sake of clarity, we plot the first 64 segments (256 points long) of the USDCHF price-series, marking all segments with vertical lines. Next, we run the WHT Statistical Test for all $a=256$ sequences but display the WHTs only for the first 64 blocks, as visualised for the price-series. The main code takes the form:

# Walsh-Hadamard Transform and Tests for Randomness of Financial Return-Series
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
#
# 30-min FX time-series (USDCHF) traded between 9/2009 and 11/2014
 
from WalshHadamard import WHTStatTest, ret2bin, line as Line
import matplotlib.pyplot as plt
import numpy as np
import csv
 
# define confidence level for WHT Statistical Test
cl=99.9999
 
# open the file and read in the 30min price of USD/CHF
P=[]
with open("USDCHF.30m") as f:
    c = csv.reader(f, delimiter=' ', skipinitialspace=True)
    for line in c:
        price=line[6]
        P.append(price)
 
x=np.array(P,dtype=np.float128) # convert to numpy array
r=x[1:]/x[0:-1]-1.  # get a return-series
Line()
print("r(t) =")
for i in range(7):
    print("%8.5f" % r[i]),
print("...")
 
X=ret2bin(r)
print("X(t) =")
for i in range(7):
    print("%8.0f" %X[i]),
print("...")
 
plt.plot(P)
plt.xlabel("Time (from 1/05/2009 to 27/09/2010)")
plt.ylabel("USDCHF (30min)")
#plt.xlim(0,len(P))
plt.ylim(0.95,1.2)
plt.xlim(0,64*256)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
for x in range(0,64*256,256):   # a vertical line at the start of each plotted 256-point block
    plt.hold(True)
    plt.plot((x,x), (0,10), 'k-')
plt.show()
plt.close("all")
 
WHTStatTest(cl,X)

and displays both plots as follows: (a) USDCHF clipped price-series,
[Figure: USDCHF 30-min price-series, first 64 blocks of 256 points marked with vertical lines]
and (b) the WHTs for the first 64 sequences, each 256 points long,
[Figure: Walsh-Hadamard Transforms for the first 64 sub-sequences of the USDCHF return-series]

From the comparison of both figures one can understand the level of detail at which the WHT results are derived. Interestingly, for the FX return-series, the WHT picture seems to be quite non-uniform, suggesting that our USDCHF return-series is random. Is it so? The final answer is delivered by the results of the WHT statistical test, summarised by our program as follows:

----------------------------------------------------------------------
r(t) =
-0.00092 -0.00033 -0.00018  0.00069 -0.00009 -0.00003 -0.00086 ...
X(t) =
      -1       -1       -1        1       -1       -1       -1 ...
----------------------------------------------------------------------
Signal		X(t)
  of length	n = 65731 digits
trimmed to	x(t)
  of length	n = 65536 digits (n=2^16)
  split into	a = 256 sub-sequences 
		b = 256-digit long
 
----------------------------------------------------------------------
Warning: Statistically advisable signal length: n=2^19=524288
 
----------------------------------------------------------------------
Test Name: Walsh-Hadamard Transform Statistical Test
 
Computation of WHTs...
  ...completed
Computation of t-statistics... completed
Computation of p-values... completed
 
Test 1 (Crude Decision)
  RESULT: 65536 out of 65536 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 2 (Proportion of Sequences Passing a Test)
  RESULT: Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 3 (Uniformity of p-values)
  RESULT: 250 out of 256 test variables stand for randomness
	  Signal x(t) appears to be non-random
	  at 99.99990% confidence level
 
Test 4 (Maximum Value Decision)
  RESULT: 255 out of 256 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
Test 5 (Sum of Square Decision)
  RESULT: 256 out of 256 test variables stand for randomness
	  Signal x(t) appears to be random
	  at 99.99990% confidence level
 
----------------------------------------------------------------------
THE OVERALL RESULT:
   Signal x(t) displays an evidence for RANDOMNESS at 99.99990% c.l.
   based on Walsh-Hadamard Transform Statistical Test

Nice!! Wasn't it worth all the effort to see this result?!

I'll leave you with the pure joy of using the software I created. There is a lot to say and a lot to verify. Even the code itself can be amended a bit and adjusted for different numbers of sequences $a$ and lengths $b$. If you discover strong evidence for non-randomness, post it in the comments or drop me an e-mail. I can't do your homework. It's your turn. I need to sleep sometimes… ;-)

DOWNLOAD
     WalshHadamard.py, USDCHF.30m

REFERENCES
    to be added

GPU-Accelerated Finance in Python with NumbaPro Library. Really?

When I lived in Singapore in 2010, my life path crossed with some great guys working in the field of High-Performance Computing: Łukasz Orłowski, Marek T. Michalewicz, and Iain Bell from Quadrant Capital. They all pointed my attention towards GPU computations utilizing the Nvidia CUDA architecture. There was one problem. Everything was wrapped up in C syntax, with a promise to do it more efficiently in C++. Soon.


So I waited and studied C/C++ at least to the level allowing me to understand some CUDA code. Years were passing by until the day I discovered an article by Mark Harris, NumbaPro: High-Performance Python with CUDA Acceleration, delivering Python-friendly CUDA solutions to all my nightmares involving C/C++ coding. At this point you just need to understand one thing: not every quant or algo trader is a fan of C/C++, the same way some people prefer Volvo to Audi, myself included ;)

Let’s have a sincere look at what the game is about. It is more than tempting to put your hands on a piece of code that allows you to speed-up some quantitative computations in Python making use of a new library.

Accelerate your Python

I absolutely love Continuum Analytics for the mission they stand for: making Python language easily accessible and used by everyone worldwide! It is a great language with a great syntax, easy to pick up, easy to be utilised in the learning process of the fundamentals of programming. Thanks to them, now you can download and install Python’s distribution of Anaconda for Windows, Mac OS X, or Linux just in few minutes (see my earlier post on Setting up Python for Quantitative Analysis in OS X 10.10 Yosemite as an example).

When you visit their webpage you can spot Anaconda’s Add-Ons, three additional software packages to their Python distribution. Among them, they offer Accelerate module containing NumbaPro library. Once you read the description, and I quote,

Accelerate is an add-on to Continuum’s free enterprise Python distribution, Anaconda. It opens up the full capabilities of your GPU or multi-core processor to Python. Accelerate includes two packages that can be added to your Python installation: NumbaPro and MKL Optimizations. MKL Optimizations makes linear algebra, random number generation, Fourier transforms, and many other operations run faster and in parallel. NumbaPro builds fast GPU and multi-core machine code from easy-to-read Python and NumPy code with a Python-to-GPU compiler.

NumbaPro Features
– NumbaPro compiler targets multi-core CPU and GPUs directly from
    simple Python syntax
– Easily move vectorized NumPy functions to the GPU
– Multiple CUDA device support
– Bindings for CUDA libraries, including cuBlas, cuRand, cuSparse, and cuFFT
– Support for array slicing and fast array math
– Use multiple threads without worrying about the GIL
– Supported on NVIDIA CUDA-enabled GPUs with compute capability 2.0
    or above on Intel/AMD (x86) processors.

your blood pressure increases and the level of endorphins skyrockets. Why? Simply because of the promise to do some tasks faster utilising the GPU in a parallel mode! If you are new to GPU or CUDA, I recommend you read some well-written posts on Mike's website, for instance, Installing Nvidia CUDA on Mac OSX for GPU-based Parallel Computing or Monte Carlo Simulations in CUDA – Barrier Option Pricing. You will grasp the essence of what it is all about. In general, much ado about CUDA still revolves around making use of your GPU and proving that extra oomph in speed-up. If you have any quantitative problem in mind that can be executed in parallel, NumbaPro is a tool you need to look at, but not every engine sounds the same. Hold on till the end of this post. It will be worth it.

Selling the Speed

When you approach a new concept or a new product and someone tries to sell it to you, he needs to impress you to win your attention and boost your curiosity. Imagine for a moment that you have no idea about GPU or CUDA and you want to add two vectors. In Python you can do it as follows:

import numpy as np
from timeit import default_timer as timer
 
def VectorAdd(a,b,c):
    for i in xrange(a.size):
        c[i]=a[i]+b[i]
 
def main():
 
    N=32000000
 
    A=np.ones(N, dtype=np.float32)
    B=np.ones(N, dtype=np.float32)
    C=np.zeros(N, dtype=np.float32)
 
    start=timer()
    VectorAdd(A,B,C)
    totaltime=timer()-start
 
    print("\nCPU time: %g sec" % totaltime)
 
if __name__ == '__main__':
    main()

We all know that once you run the code, Python does not compile it; it goes line by line and interprets what it reads. So, we aim at adding two vectors, $A$ and $B$, containing 32 million elements each. I get the $C=A+B$ vector on my MacBook Pro (2.6 GHz Intel Core i7, 16 GB 1600 MHz DDR3 RAM, NVIDIA GeForce GT 650M 1GB) after:

CPU time: 9.89753 sec

Can we do it better? With NumbaPro the required changes to the code itself are minor. All we need to add is a function decorator that tells NumbaPro how and where the function should be executed. In fact, what NumbaPro does is "compile" the VectorAdd function on the fly and deploy the computations to the GPU:

import numpy as np
from timeit import default_timer as timer
from numbapro import vectorize
 
@vectorize(["float32(float32,float32)"], target="gpu")
def VectorAdd(a,b):
    return a+b
 
def main():
 
    N=32000000
 
    A=np.ones(N, dtype=np.float32)
    B=np.ones(N, dtype=np.float32)
    C=np.zeros(N, dtype=np.float32)
 
    start=timer()
    C=VectorAdd(A,B)
    totaltime=timer()-start
 
    print("\nGPU time: %g sec" % totaltime)
 
if __name__ == '__main__':
    main()

We get

GPU time: 0.286101 sec

i.e. a 34.6x speed-up. Not bad, right?! Not bad if you're a sales person indeed! But, hey, what's that?:

import numpy as np
from timeit import default_timer as timer
 
def main():
 
    N=32000000
 
    A=np.ones(N, dtype=np.float32)
    B=np.ones(N, dtype=np.float32)
    C=np.zeros(N, dtype=np.float32)
 
    start=timer()
    C=A+B
    totaltime=timer()-start
 
    print("\nCPU time: %g sec" % totaltime)
 
if __name__ == '__main__':
    main()

Run it to discover that:

CPU time: 0.0592878 sec

i.e. 4.82x faster than using GPU. Oh, boy! CUDA:NumPy (0:1).

Perfect Pitch

When Nvidia introduced CUDA, among the exemplary C codes utilising CUDA programming we could find the immortal Black-Scholes model for option pricing. In this Nobel-prize winning solution, we derive the call option price for a non-dividend-paying underlying stock:
$$
C(S,t) = N(d_1)S - N(d_2)Ke^{-r(T-t)}
$$
$$
d_1 = \frac{\ln(S/K) + \left(r + \frac{\sigma^2}{2}\right)(T-t)}{\sigma\sqrt{T-t}} \ , \quad d_2 = d_1 - \sigma\sqrt{T-t}
$$
where $(T-t)$ is the time to maturity (scalar), $r$ is the risk-free rate (scalar), $S$ is the spot price of the underlying asset, $K$ is the strike price, and $\sigma$ is the volatility of returns of the underlying asset. $N(\cdot)$ is the cumulative distribution function (cnd) of the standard normal distribution, evaluated in the code below with a polynomial approximation. A classical way to code it in Python is:

import numpy as np
import time
 
RISKFREE = 0.02
VOLATILITY = 0.30
 
def cnd(d):
    A1 = 0.31938153
    A2 = -0.356563782
    A3 = 1.781477937
    A4 = -1.821255978
    A5 = 1.330274429
    RSQRT2PI = 0.39894228040143267793994605993438
    K = 1.0 / (1.0 + 0.2316419 * np.abs(d))
    ret_val = (RSQRT2PI * np.exp(-0.5 * d * d) *
               (K * (A1 + K * (A2 + K * (A3 + K * (A4 + K * A5))))))
    return np.where(d > 0, 1.0 - ret_val, ret_val)
 
def black_scholes(callResult, putResult, stockPrice, optionStrike, optionYears,
                  Riskfree, Volatility):
    S = stockPrice
    X = optionStrike
    T = optionYears
    R = Riskfree
    V = Volatility
    sqrtT = np.sqrt(T)
    d1 = (np.log(S / X) + (R + 0.5 * V * V) * T) / (V * sqrtT)
    d2 = d1 - V * sqrtT
    cndd1 = cnd(d1)
    cndd2 = cnd(d2)
 
    expRT = np.exp(- R * T)
    callResult[:] = (S * cndd1 - X * expRT * cndd2)
 
def randfloat(rand_var, low, high):
    return (1.0 - rand_var) * low + rand_var * high
 
def main (*args):
    OPT_N = 4000000
    iterations = 10
    if len(args) >= 2:
        iterations = int(args[0])
 
    callResult = np.zeros(OPT_N)
    putResult = -np.ones(OPT_N)   # placeholder (puts are not computed here)
    stockPrice = randfloat(np.random.random(OPT_N), 5.0, 30.0)
    optionStrike = randfloat(np.random.random(OPT_N), 1.0, 100.0)
    optionYears = randfloat(np.random.random(OPT_N), 0.25, 10.0)
 
    time0 = time.time()
    for i in range(iterations):
        black_scholes(callResult, putResult, stockPrice, optionStrike,
                      optionYears, RISKFREE, VOLATILITY)
    time1 = time.time()
    print("Time: %f msec per option" % ((time1-time0)/iterations/OPT_N*1000))
 
if __name__ == "__main__":
    import sys
    main(*sys.argv[1:])

what returns

Time: 0.000192 msec per option

The essence of this code is to derive 4 million independent results based on feeding the function with random stock prices, option strike prices, and times to maturity. They enter the game under cover as row vectors with randomised values (see the randfloat calls in main). Anaconda Accelerate's CUDA solution for the same code is:

import numpy as np
import math
import time
from numba import *
from numbapro import cuda
from blackscholes import black_scholes # save the previous code as
                                       # blackscholes.py
 
RISKFREE = 0.02
VOLATILITY = 0.30
 
A1 = 0.31938153
A2 = -0.356563782
A3 = 1.781477937
A4 = -1.821255978
A5 = 1.330274429
RSQRT2PI = 0.39894228040143267793994605993438
 
@cuda.jit(argtypes=(double,), restype=double, device=True, inline=True)
def cnd_cuda(d):
    K = 1.0 / (1.0 + 0.2316419 * math.fabs(d))
    ret_val = (RSQRT2PI * math.exp(-0.5 * d * d) *
               (K * (A1 + K * (A2 + K * (A3 + K * (A4 + K * A5))))))
    if d > 0:
        ret_val = 1.0 - ret_val
    return ret_val
 
@cuda.jit(argtypes=(double[:], double[:], double[:], double[:], double[:],
                    double, double))
def black_scholes_cuda(callResult, putResult, S, X,
                       T, R, V):
#    S = stockPrice
#    X = optionStrike
#    T = optionYears
#    R = Riskfree
#    V = Volatility
    i = cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x
    if i >= S.shape[0]:
        return
    sqrtT = math.sqrt(T[i])
    d1 = (math.log(S[i] / X[i]) + (R + 0.5 * V * V) * T[i]) / (V * sqrtT)
    d2 = d1 - V * sqrtT
    cndd1 = cnd_cuda(d1)
    cndd2 = cnd_cuda(d2)
 
    expRT = math.exp((-1. * R) * T[i])
    callResult[i] = (S[i] * cndd1 - X[i] * expRT * cndd2)
 
def randfloat(rand_var, low, high):
    return (1.0 - rand_var) * low + rand_var * high
 
def main (*args):
    OPT_N = 4000000
    iterations = 10
 
    callResultNumpy = np.zeros(OPT_N)
    putResultNumpy = -np.ones(OPT_N)
    stockPrice = randfloat(np.random.random(OPT_N), 5.0, 30.0)
    optionStrike = randfloat(np.random.random(OPT_N), 1.0, 100.0)
    optionYears = randfloat(np.random.random(OPT_N), 0.25, 10.0)
    callResultNumba = np.zeros(OPT_N)
    putResultNumba = -np.ones(OPT_N)
    callResultNumbapro = np.zeros(OPT_N)
    putResultNumbapro = -np.ones(OPT_N)
 
# Numpy ----------------------------------------------------------------
    time0 = time.time()
    for i in range(iterations):
        black_scholes(callResultNumpy, putResultNumpy, stockPrice,
                      optionStrike, optionYears, RISKFREE, VOLATILITY)
    time1 = time.time()
    dtnumpy = ((1000 * (time1 - time0)) / iterations)/OPT_N
    print("\nNumpy Time            %f msec per option") % (dtnumpy)
 
# CUDA -----------------------------------------------------------------
    time0 = time.time()
    blockdim = 1024, 1
    griddim = int(math.ceil(float(OPT_N)/blockdim[0])), 1
    stream = cuda.stream()
    d_callResult = cuda.to_device(callResultNumbapro, stream)
    d_putResult = cuda.to_device(putResultNumbapro, stream)
    d_stockPrice = cuda.to_device(stockPrice, stream)
    d_optionStrike = cuda.to_device(optionStrike, stream)
    d_optionYears = cuda.to_device(optionYears, stream)
 
    time2 = time.time()
 
    for i in range(iterations):
        black_scholes_cuda[griddim, blockdim, stream](
            d_callResult, d_putResult, d_stockPrice, d_optionStrike,
            d_optionYears, RISKFREE, VOLATILITY)
        d_callResult.to_host(stream)
        d_putResult.to_host(stream)
        stream.synchronize()
 
    time3 = time.time()
    dtcuda = ((1000 * (time3 - time2)) / iterations)/OPT_N
 
    print("Numbapro CUDA Time    %f msec per option (speed-up %.1fx)
             \n") % (dtcuda, dtnumpy/dtcuda)
#   print(callResultNumbapro)
 
 
if __name__ == "__main__":
    import sys
    main(*sys.argv[1:])

returning

Numpy Time            0.000186 msec per option
Numbapro CUDA Time    0.000024 msec per option (speed-up 7.7x)

It is not so difficult to understand why CUDA wins over NumPy this time. First, we have a programmable analytical form of the problem. We deploy it to the GPU and perform exhaustive calculations involving the cnd_cuda function for the estimation of the cumulative distribution function of the standard normal distribution. Splitting the task into many concurrently running threads on the GPU reduces the time. Again, it is possible because all option prices can be computed independently. CUDA:NumPy (1:1).

Multiplied Promises

In finance, the concept of portfolio optimization is well established (see my ebook on that, Applied Portfolio Optimization with Risk Management, as an example). The idea standing behind is to find such a vector of weights, $w$, for all assets that the derived estimated portfolio risk ($\sigma_P$) and return ($\mu_P$) meets our needs or expectations.

An alternative (but not greatly recommended) approach would involve optimization through randomisation of the $w$ vectors. We could generate a big number of them, say $N$, in order to obtain:
$$
\mu_{P,i} = m w_i^T \ \ \ \mbox{and} \ \ \ \sigma_{P,i}^2 = w_iM_2w_i^T
$$ for $i=1,…,N$. Here, $m$ is a row-vector holding the estimated expected returns of all assets in portfolio $P$, and from the return-series we obtain the covariance matrix $M_2$ ($M\times M$, where $M$ is the number of assets in $P$). In the first case we multiply a row-vector by a transposed row-vector of weights, whereas in the latter we multiply the row-vector by a square matrix (as the first operation). If $N$ is really big, say a couple of millions, the computation could be accelerated using the GPU. Therefore, we need code for matrix multiplication on the GPU in Python.
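Before moving to the GPU, a plain NumPy sketch of that randomised search might look as follows (the numbers in m and $M_2$ are toy values; in practice both would be estimated from return-series):

import numpy as np
 
Na = 3                                         # number of assets in P
N  = 100000                                    # number of random weight vectors
m  = np.array([0.08, 0.05, 0.11])              # toy expected returns
M2 = np.array([[0.04, 0.01, 0.00],
               [0.01, 0.02, 0.01],
               [0.00, 0.01, 0.09]])            # toy covariance matrix
 
w = np.random.random((N, Na))
w /= w.sum(axis=1)[:, None]                    # each row of weights sums to 1
 
mu  = np.dot(w, m)                             # mu_{P,i} = m w_i^T for all i at once
var = np.einsum('ij,jk,ik->i', w, M2, w)       # sigma^2_{P,i} = w_i M2 w_i^T
sig = np.sqrt(var)
print(mu[:3])
print(sig[:3])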

Let's consider the more advanced concept of a ($K\times K$)$\times$($K\times K$) matrix multiplication. If that works faster, then our random-portfolios problem should be even faster. Continuum Analytics provides a ready-to-use solution:

import numpy as np
from numbapro import cuda
import numba
from timeit import default_timer as timer
from numba import float32
 
bpg = 32
tpb = 32
 
n = bpg * tpb
 
shared_mem_size = (tpb, tpb)
griddim = bpg, bpg
blockdim = tpb, tpb
 
@numba.cuda.jit("void(float32[:,:], float32[:,:], float32[:,:])")
def naive_matrix_mult(A, B, C):
    x, y = cuda.grid(2)
    if x >= n or y >= n:
        return
 
    C[y, x] = 0
    for i in range(n):
        C[y, x] += A[y, i] * B[i, x]
 
 
@numba.cuda.jit("void(float32[:,:], float32[:,:], float32[:,:])")
def optimized_matrix_mult(A, B, C):
 
    # Declare shared memory
    sA = cuda.shared.array(shape=shared_mem_size, dtype=float32)
    sB = cuda.shared.array(shape=shared_mem_size, dtype=float32)
 
    tx = cuda.threadIdx.x
    ty = cuda.threadIdx.y
    x, y = cuda.grid(2)
 
    acc = 0
    for i in range(bpg):
        if x < n and y < n:
            # Prefill cache
            sA[ty, tx] = A[y, tx + i * tpb]
            sB[ty, tx] = B[ty + i * tpb, x]
 
        # Synchronize all threads in the block
        cuda.syncthreads()
 
        if x < n and y < n:
            # Compute product
            for j in range(tpb):
                acc += sA[ty, j] * sB[j, tx]
 
        # Wait until all threads finish the computation
        cuda.syncthreads()
 
    if x < n and y < n:
        C[y, x] = acc
 
 
# Prepare data on the CPU
A = np.array(np.random.random((n, n)), dtype=np.float32)
B = np.array(np.random.random((n, n)), dtype=np.float32)
 
print "(%d x %d) x (%d x %d)" % (n, n, n, n)
 
# Prepare data on the GPU
dA = cuda.to_device(A)
dB = cuda.to_device(B)
dC = cuda.device_array_like(A)
 
# Time the unoptimized version
s = timer()
naive_matrix_mult[griddim, blockdim](dA, dB, dC)
numba.cuda.synchronize()
e = timer()
unopt_ans = dC.copy_to_host()
tcuda_unopt = e - s
 
# Time the optimized version
s = timer()
optimized_matrix_mult[griddim, blockdim](dA, dB, dC)
numba.cuda.synchronize()
e = timer()
opt_ans = dC.copy_to_host()
tcuda_opt = e - s
 
assert np.allclose(unopt_ans, opt_ans)
print "CUDA without shared memory:", "%.2f" % tcuda_unopt, "s"
print "CUDA with shared memory   :", "%.2f" % tcuda_opt, "s"
 
s = timer()
np.dot(A,B)
e = timer()
npt=e-s
print "NumPy dot product         :", "%.2f" % npt, "s"

what returns

(1024 x 1024) x (1024 x 1024)
CUDA without shared memory: 0.76 s
CUDA with shared memory   : 0.25 s
NumPy dot product         : 0.06 s

and leads to a CUDA:NumPy (1:2) score of the game. The natural questions arise. Is it about the matrix size? Maybe it is too simple a problem to be solved with such a tool? Or is it the way we approach matrix allocation and deployment to the GPU itself?

The last question made me dig deeper. In Python you can create one big array holding a number of smaller matrices. The following code tries to perform 4 million $(2\times 2)$ matrix multiplications where the matrix $B$ is randomised every single time (see our random portfolio problem). In Anaconda Accelerate we achieve it as follows:

import numbapro
import numba.cuda
import numpy as np
from timeit import default_timer as timer
# Use the builtin matrix_multiply in NumPy for CPU test
import numpy.core.umath_tests as ut
 
 
@numbapro.guvectorize(['void(float32[:,:], float32[:,:], float32[:,:])'],
                      '(m, n),(n, p)->(m, p)', target='gpu')
def batch_matrix_mult(a, b, c):
    for i in range(c.shape[0]):
        for j in range(c.shape[1]):
            tmp = 0
            for n in range(a.shape[1]):
                 tmp += a[i, n] * b[n, j]
            c[i, j] = tmp
 
 
def main():
 
    n   = 4000000
    dim = 2
 
    sK=0
    KN=10
 
    for K in range(KN):
 
        # Matrix Multiplication:   c = a x b
 
        a = np.random.random(n*dim*dim).astype(np.float32).reshape(n,dim,dim)
        c = np.random.random(n*dim*dim).astype(np.float32).reshape(n,dim,dim)
 
        # NUMPY -------------------------------------------------------------
        start = timer()
        b = np.random.random(n*dim*dim).astype(np.float32).reshape(n,dim,dim)
        d=ut.matrix_multiply(a, b)
        np_time=timer()-start
 
        # CUDA --------------------------------------------------------------
        dc = numba.cuda.device_array_like(c)
        da = numba.cuda.to_device(a)
 
        start = timer()
        b = np.random.random(n*dim*dim).astype(np.float32).reshape(n,dim,dim)
        db = numba.cuda.to_device(b)
        batch_matrix_mult(da, db, out=dc)
        numba.cuda.synchronize()
        dc.copy_to_host(c)
        cuda_time=timer()-start
 
        sK += np_time/cuda_time
 
        del da, db
 
    print("\nThe average CUDA speed-up: %.5fx") % (sK/KN)
 
 
if __name__ == '__main__':
    main()

leading us to

The average CUDA speed-up: 0.79003x

i.e. a deceleration. Playing with the sizes of the matrices and their number may result in an error caused by the GPU memory required for the allocation of matrices $A$ and $B$ on the GPU. It seems we have CUDA:NumPy (1:3).

Black Magic in Black Box

I approached the NumbaPro solution as a complete rookie. I spent a considerable amount of time searching for Anaconda Accelerate's GPU codes demonstrating the massive speed-ups as promised. I found different fragments in different places across the Web. Using the best method known among all beginners, namely copy and paste, I re-ran what I found. Then modified and re-ran again. And again, and again. I felt the need, the need for speed! But I failed to find my tail wind.

This post may demonstrate my lack of understanding of what is going on or reveal a blurred picture standing behind: a magic that works if you know all the tricks. I hope that at least you enjoyed the show!

Covariance Matrix for N-Asset Portfolio fed by Quandl in Python

The construction of your quantitative workshop in Python requires a lot of coding, or at least spending a considerable amount of time assembling different blocks together. There are many simple fragments of code reused many times. The calculation of a covariance matrix is not a problem once NumPy is engaged, but it gains its meaning once you add some background idea of what you are trying to achieve.

In this lesson of the Accelerated Python for Quants tutorial, let's see how to use the Quandl.com data provider to construct any $N$-asset portfolio based on SEC securities and how, for its return-series, to calculate the corresponding covariance matrix.

Quandl and SEC Stock List

Quandl is a great source of data. With their ambition to become the largest data provider on the planet free of charge, no doubt they do an amazing job. You can use their Python API to feed your code directly with Open, High, Low, and Close prices for any SEC stock. To begin with, we will need a list of companies (tickers) and, unfortunately, the corresponding internal call-tickers as referred to by Quandl. The .csv file containing all the information can be downloaded from this website or directly here: secwiki_tickers.csv. The file contains the following header:

Ticker       Name       Sector       Industry       Price       Collection

where we are interested in matching the Ticker with the Price field. The latter, for the AAPL stock, displays the “WIKI/AAPL” code. That’s all we need to grab for now.

Our Portfolio

Let’s say we have the freedom to select any of the 2277 stocks from our SEC universe (provided in the secwiki_tickers.csv file). For the sake of simplicity I’ll select only three and save them in a plain text file portfolio.lst containing:

AAPL
IBM
TXN

I do it on purpose in order to show you how easily we can read in this list from a file in Python. As usual, we start our adventure with the data pre-processing part:

 1 # Covariance Matrix for N-Asset Portfolio fed by Quandl in Python
 2 # (c) 2014 QuantAtRisk, by Pawel Lachowicz
 3 
 4 import Quandl
 5 import numpy as np
 6 import pandas as pd
 7 
 8 df=pd.read_csv('secwiki_tickers.csv')
 9 
10 dp=pd.read_csv('portfolio.lst',names=['pTicker'])
11 pTickers=dp.pTicker.values  # extracts the tickers as a NumPy array

In order to install the Quandl module in your Mac/Linux environment simply type pip install Quandl (for more information see here). In the code above, we employ pandas’ read_csv function both for reading in the data from the .csv file and from a plain text file. For the latter, in line #10, we add a name for the column, pTicker, to point at the portfolio tickers, and next we extract them from the DataFrame as a NumPy array.

Now we are going to use the power of Python over Matlab to search for the corresponding Quandl (Price) ticker codes:

13 tmpTickers=[]
14 for i in range(len(pTickers)):
15     test=df[df.Ticker==pTickers[i]]
16     if not(test.empty):
17         tmp=test.Price.values+'.4'  # of <type 'numpy.ndarray'>
18         tmp2=tmp.tolist()
19         tmpTickers.append(tmp2)
20 
21 print(tmpTickers)

This happens in line #15 almost automatically. The result of the search is a DataFrame record, either empty or containing the corresponding information on the stock as read from the (df) .csv data source. Simply, if a ticker from our portfolio.lst file does not exist in the universe, it is skipped. Quandl allows you to retrieve a stock’s Open, High, Low, Close, or Volume by calling the (Price) Quandl ticker in the following forms, respectively:

'WIKI/AAPL.1'
'WIKI/AAPL.2'
'WIKI/AAPL.3'
'WIKI/AAPL.4'
'WIKI/AAPL.5'

Below we will stick to the Close prices. That is why, in line #17, we add ‘.4’ to the string. Please note that we should get the same data by calling the ‘GOOG/NASDAQ_AAPL.4’ and ‘WIKI/AAPL.4’ Quandl tickers. Because the variable tmp is of numpy.ndarray type, in line #18 we convert it into a list. The final list tmpTickers therefore contains all corresponding Quandl tickers in the form:

[['WIKI/AAPL.4'], ['WIKI/IBM.4'], ['WIKI/TXN.4']]

and before they can be used for return-series retrieval from the Quandl database, we need to trim each item a bit in the following way:

tmp=[]
for i in range(len(tmpTickers)):
    tmp2=str(tmpTickers[i]).strip('[]')
    print(tmp)
    tmp.append(str(tmp2).strip('\'\''))
 
QuandlTickers=tmp
print(QuandlTickers)

which returns:

['WIKI/AAPL.4', 'WIKI/IBM.4', 'WIKI/TXN.4']

Return-Series Download

We fetch the data pretty easily:

data= Quandl.get(QuandlTickers, authtoken='YourAuthToken', \
                 trim_start='2014-10-01', trim_end='2014-11-04', \
                 transformation='rdiff')
 
d=data.values.T
print(d)

where rdiff enforces a transformation of the price-series into a return-series expressed as the row-on-row % change, y'[t] = (y[t] - y[t-1])/y[t-1] (see more here: Using the Quandl API). By default we download daily returns:

[[-0.01558313  0.00725953 -0.0028028   0.         -0.00873319  0.02075949
   0.00218254 -0.00287072 -0.00913333 -0.01062018 -0.01225316 -0.01312282
   0.01464783  0.02139859  0.0271652   0.00507466  0.01786581  0.00372031
  -0.00104543  0.01550756  0.00562114 -0.00335383  0.00953449  0.01296296
  -0.00731261]
 [-0.01401254 -0.00138911  0.0094163   0.0019611  -0.01761532  0.0196543
  -0.01552598 -0.00262847 -0.01296187  0.00152572 -0.01115343 -0.01050894
   0.0122887  -0.0711343  -0.03471319 -0.00882191  0.00241053 -0.0006166
  -0.00129566  0.01068759 -0.00085575  0.00544476  0.00030423 -0.00024331
  -0.01040399]
 [-0.01698469         nan         nan -0.00416489 -0.01319035  0.020213
  -0.01959949 -0.07127336 -0.0189518   0.00906272  0.01063578  0.01941066
   0.00183528  0.01694527  0.05314118 -0.00320718  0.00815101  0.01212766
   0.00798823  0.01147028 -0.00350515 -0.01655287  0.0448138   0.00845751
   0.00638978]]

You may notice some missing data denoted by nan. It may happen for any dataset. Life is not perfect. Quandl is only an option. As a first approximation we may fill the missing returns with zeros using pandas:

for i in range(d.shape[0]):
    df=pd.DataFrame(d[i])
    df.fillna(0,inplace=True)
    d[i]=df.values.T
 
print(d)

which works quickly and efficiently:

[[-0.01558313  0.00725953 -0.0028028   0.         -0.00873319  0.02075949
   0.00218254 -0.00287072 -0.00913333 -0.01062018 -0.01225316 -0.01312282
   0.01464783  0.02139859  0.0271652   0.00507466  0.01786581  0.00372031
  -0.00104543  0.01550756  0.00562114 -0.00335383  0.00953449  0.01296296
  -0.00731261]
 [-0.01401254 -0.00138911  0.0094163   0.0019611  -0.01761532  0.0196543
  -0.01552598 -0.00262847 -0.01296187  0.00152572 -0.01115343 -0.01050894
   0.0122887  -0.0711343  -0.03471319 -0.00882191  0.00241053 -0.0006166
  -0.00129566  0.01068759 -0.00085575  0.00544476  0.00030423 -0.00024331
  -0.01040399]
 [-0.01698469  0.          0.         -0.00416489 -0.01319035  0.020213
  -0.01959949 -0.07127336 -0.0189518   0.00906272  0.01063578  0.01941066
   0.00183528  0.01694527  0.05314118 -0.00320718  0.00815101  0.01212766
   0.00798823  0.01147028 -0.00350515 -0.01655287  0.0448138   0.00845751
   0.00638978]]

Having data is better than missing them. Still, it is important to check how many nans each return-series has. In our case of TXN, the missing information constitutes 8% of the 25 data points. A good rule of thumb is to keep it below 3%; otherwise the impact may propagate significantly into any further calculations. At this point it is good to recall Bruce Lee who said: “Empty your mind, be formless, shapeless, like water. If you put water into a cup, it becomes the cup. If you put water in a bottle, it becomes the bottle. If you put it in a teacup, it becomes the teacup. Now, water can flow, and it can crush. Be water, my friend.” The two added zeros contribute to the central part of the TXN return distribution; the more zeros we “fill in”, the more peaked the distribution becomes and the thinner the tails of a fitted normal distribution. Therefore, in such a case, I would agree with Bruce only on the “crush” part of his wisdom.
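As a quick, hedged sanity check of that percentage (to be run on d before the zero-filling step above), a minimal sketch reporting the fraction of missing returns per series might look as follows; the 3% figure is the rule of thumb mentioned above, not a hard limit.

import numpy as np

# fraction of NaNs per return-series (rows of d), computed before filling them with zeros
for i in range(d.shape[0]):
    frac = np.mean(np.isnan(d[i]))
    print("series %d: %.1f%% of data points missing" % (i, 100*frac))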

Covariance Matrix

Keeping the above short note on some dirty tricks in mind, we obtain the desired covariance matrix simply and painlessly,

covmat=np.cov(d)
print(covmat)

id est,

[[  1.44660397e-04  -2.13905277e-05   1.31742330e-04]
 [ -2.13905277e-05   3.12211511e-04  -5.62146677e-05]
 [  1.31742330e-04  -5.62146677e-05   5.44348868e-04]]

which accomplishes our efforts.
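As a brief follow-up, the covariance matrix plugs straight into the portfolio variance $wCw^T$; the sketch below assumes purely hypothetical equal weights for our three stocks, just to show the mechanics.

import numpy as np

w = np.array([1/3., 1/3., 1/3.])          # hypothetical equal weights for AAPL, IBM, TXN
port_var = np.dot(w, np.dot(covmat, w))   # portfolio variance, w C w^T
print("Daily portfolio volatility: %.4f%%" % (100*np.sqrt(port_var)))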

 

Recommended Reading:
   Applied Portfolio Optimization with Risk Management using Matlab

Related QaR Posts:
   2-Asset Portfolio Construction
   Create a Portfolio of Stocks based on Google Finance Data fed by Quandl

GARCH(1,1) Model in Python

In my previous article GARCH(p,q) Model and Exit Strategy for Intraday Algorithmic Traders we described the essentials of GARCH(p,q) model and provided an exemplary implementation in Matlab. In general, we apply GARCH model in order to estimate the volatility one time-step forward, where:
$$
\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2
$$ based on the most recent updates of $r$ and $\sigma$, where $r_{t-1} = \ln({P_{t-1}}/{P_{t-2}})$ and $P$ corresponds to the asset price. For any financial time-series $\{r_j\}$, the estimation of the $(\omega,\alpha,\beta)$ parameters can be conducted utilising the maximum likelihood method. The latter is an iterative process that looks for the parameter set maximising the sum:
$$
\sum_{i=3}^{N} \left[ -\ln(\sigma_i^2) - \frac{r_i^2}{\sigma_i^2} \right]
$$ where $N$ denotes the length of the return series $\{r_j\}$ ($j=2,…,N$) under study.

Let’s assume we have a test array of input data, $\{r_j\}$, stored in the Python variable r, and we write a function, GARCH11_logL, that will be used in the optimisation process. It takes two input parameters: param, a 3-element array with trial values corresponding to $(\omega,\alpha,\beta)$, and r, the return-series.

 1 # GARCH(1,1) Model in Python
 2 #   uses maximum likelihood method to estimate (omega,alpha,beta)
 3 # (c) 2014 QuantAtRisk, by Pawel Lachowicz; tested with Python 3.5 only
 4 
 5 import numpy as np
 6 from scipy import optimize
 7 import statistics as st
 8 
 9 r = np.array([0.945532630498276,
10               0.614772790142383,
11               0.834417758890680,
12               0.862344782601800,
13               0.555858715401929,
14               0.641058419842652,
15               0.720118656981704,
16               0.643948007732270,
17               0.138790608092353,
18               0.279264178231250,
19               0.993836948076485,
20               0.531967023876420,
21               0.964455754192395,
22               0.873171802181126,
23               0.937828816793698])
24 
25 def GARCH11_logL(param, r):
26     omega, alpha, beta = param
27     n = len(r)
28     s = np.ones(n)*0.01
29     s[2] = st.variance(r[0:3])
30     for i in range(3, n):
31         s[i] = omega + alpha*r[i-1]**2 + beta*(s[i-1])  # GARCH(1,1) model
32     logL = -((-np.log(s) - r**2/s).sum())
33     return logL

At this point it is important to note that in line #32 we multiply the sum by $-1$ in order to find the maximal value of the expression. Why? Because the optimize.fmin function from SciPy’s optimize module performs minimisation, so maximising the log-likelihood is equivalent to minimising its negative. Therefore, we seek the best estimates as follows:

34 o = optimize.fmin(GARCH11_logL,np.array([.1,.1,.1]), args=(r,), full_output=1)

and we display the results,

R = np.abs(o[0])
print()
print("omega = %.6f\nbeta  = %.6f\nalpha = %.6f\n" % (R[0], R[2], R[1]))

which, for the array r given in the code, returns the following results:

Optimization terminated successfully.
         Current function value: 14.705098
         Iterations: 88
         Function evaluations: 162
 
omega = 0.788244
beta  = 0.498230
alpha = 0.033886
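With the fitted parameters in hand, the one-step-ahead variance forecast follows directly from the model equation above. A minimal, hedged sketch (reusing R and r from the code; the sample variance of the last three returns serves here as a stand-in for $\sigma_{t-1}^2$, which is purely an assumption of this illustration):

omega, alpha, beta = R[0], R[1], R[2]

s_prev = st.variance(r[-3:])                   # assumed proxy for sigma_{t-1}^2
s_next = omega + alpha*r[-1]**2 + beta*s_prev  # sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2
print("One-step-ahead volatility forecast: %.6f" % np.sqrt(s_next))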

Python vs. Matlab Solution

Programming requires caution. It is always a good practice to test the outcome of one algorithm against alternative solutions. Let’s run the GARCH(1,1) model estimation for the same input array and compare Python and Matlab results:

% GARCH(1,1) Model in Matlab 2013a
% (c) 2014 QuantAtRisk, by Pawel Lachowicz
 
clear all; close all; clc;
 
r=[0.945532630498276, ...
   0.614772790142383, ...
   0.834417758890680, ...
   0.862344782601800, ...
   0.555858715401929, ...
   0.641058419842652, ...
   0.720118656981704, ...
   0.643948007732270, ...
   0.138790608092353, ...
   0.279264178231250, ...
   0.993836948076485, ...
   0.531967023876420, ...
   0.964455754192395, ...
   0.873171802181126, ...
   0.937828816793698]';
 
% GARCH(p,q) parameter estimation
model = garch(1,1) % define model
[fit,VarCov,LogL,Par] = estimate(model,r);
% extract model parameters
parC=Par.X(1); % omega
parG=Par.X(2); % beta (GARCH)
parA=Par.X(3); % alpha (ARCH)
% estimate unconditional volatility
gamma=1-parA-parG;
VL=parC/gamma;
volL=sqrt(VL);
% redefine model with estimated parameters
model=garch('Constant',parC,'GARCH',parG,'ARCH',parA)

which returns:

model = 
 
    GARCH(1,1) Conditional Variance Model:
    --------------------------------------  
    Distribution: Name = 'Gaussian'
               P: 1
               Q: 1
        Constant: NaN
           GARCH: {NaN} at Lags [1]
            ARCH: {NaN} at Lags [1]
 
____________________________________________________________
   Diagnostic Information
 
Number of variables: 3
 
Functions 
Objective:       @(X)OBJ.nLogLikeGaussian(X,V,E,Lags,1,maxPQ,T,nan,trapValue)
Gradient:        finite-differencing
Hessian:         finite-differencing (or Quasi-Newton)
 
Constraints
Nonlinear constraints:                do not exist
 
Number of linear inequality constraints:    1
Number of linear equality constraints:      0
Number of lower bound constraints:          3
Number of upper bound constraints:          3
 
Algorithm selected
   sequential quadratic programming
 
 
____________________________________________________________
   End diagnostic information
                                                          Norm of First-order
 Iter F-count            f(x) Feasibility  Steplength        step  optimality
    0       4    1.748188e+01   0.000e+00                           5.758e+01
    1      27    1.723863e+01   0.000e+00   1.140e-03   6.565e-02   1.477e+01
    2      31    1.688626e+01   0.000e+00   1.000e+00   9.996e-01   1.510e+00
    3      35    1.688234e+01   0.000e+00   1.000e+00   4.099e-02   1.402e+00
    4      39    1.686305e+01   0.000e+00   1.000e+00   1.440e-01   8.889e-01
    5      44    1.685246e+01   0.000e+00   7.000e-01   2.379e-01   5.088e-01
    6      48    1.684889e+01   0.000e+00   1.000e+00   9.620e-02   1.379e-01
    7      52    1.684835e+01   0.000e+00   1.000e+00   2.651e-02   2.257e-02
    8      56    1.684832e+01   0.000e+00   1.000e+00   8.389e-03   7.046e-02
    9      60    1.684831e+01   0.000e+00   1.000e+00   1.953e-03   7.457e-02
   10      64    1.684825e+01   0.000e+00   1.000e+00   7.888e-03   7.738e-02
   11      68    1.684794e+01   0.000e+00   1.000e+00   3.692e-02   7.324e-02
   12      72    1.684765e+01   0.000e+00   1.000e+00   1.615e-01   5.862e-02
   13      76    1.684745e+01   0.000e+00   1.000e+00   7.609e-02   8.429e-03
   14      80    1.684740e+01   0.000e+00   1.000e+00   2.368e-02   4.072e-03
   15      84    1.684739e+01   0.000e+00   1.000e+00   1.103e-02   3.142e-03
   16      88    1.684739e+01   0.000e+00   1.000e+00   1.183e-03   2.716e-04
   17      92    1.684739e+01   0.000e+00   1.000e+00   9.913e-05   1.378e-04
   18      96    1.684739e+01   0.000e+00   1.000e+00   6.270e-05   2.146e-06
   19      97    1.684739e+01   0.000e+00   7.000e-01   4.327e-07   2.146e-06
 
Local minimum possible. Constraints satisfied.
 
fmincon stopped because the size of the current step is less than
the default value of the step size tolerance and constraints are 
satisfied to within the selected value of the constraint tolerance.
 
<stopping criteria details>
 
 
    GARCH(1,1) Conditional Variance Model:
    ----------------------------------------
    Conditional Probability Distribution: Gaussian
 
                                  Standard          t     
     Parameter       Value          Error       Statistic 
    -----------   -----------   ------------   -----------
     Constant       0.278061       26.3774      0.0105417
     GARCH{1}       0.457286       49.4915      0.0092397
      ARCH{1}      0.0328433       1.65576      0.0198358
 
model = 
 
    GARCH(1,1) Conditional Variance Model:
    --------------------------------------  
    Distribution: Name = 'Gaussian'
               P: 1
               Q: 1
        Constant: 0.278061
           GARCH: {0.457286} at Lags [1]
            ARCH: {0.0328433} at Lags [1]

id est
$$
(\omega,\beta,\alpha)_{\rm Matlab} = (0.278061,0.457286,0.0328433) \ .
$$This slightly differs from the Python solution, which was
$$
(\omega,\beta,\alpha)_{\rm Py} =(0.788244,0.498230,0.033886) \ .
$$At this stage it is difficult to assess which solution is “better”. The two algorithms and the applied methodologies may simply differ, which is usually the case. Having that in mind, further extensive tests are required, for example, of the dependence of the Python solution on the trial input $(\omega,\alpha,\beta)$ values as displayed in line #34 of the Python code.
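One quick, hedged way to probe that dependence is to rerun the estimation from a few different starting points and compare the resulting triples; the grid of guesses below is purely illustrative.

# illustrative starting points (not part of the original estimation)
for guess in ([.1, .1, .1], [.05, .2, .5], [.5, .05, .3]):
    o = optimize.fmin(GARCH11_logL, np.array(guess), args=(r,), full_output=1, disp=0)
    omega, alpha, beta = np.abs(o[0])
    print("start %-16s -> omega=%.4f  alpha=%.4f  beta=%.4f  (objective=%.4f)"
          % (guess, omega, alpha, beta, o[1]))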

Deriving Limits in Python


Some quant problems require intensive work with mathematical (time-)series given initial conditions. In this short lesson on Python, let’s consider the following problem and solve it analytically and with the aid of Python.

Problem

Given a series $\{a_n\}$, where $n\in N^{+}$, and $a_1=\sqrt{2}$ and $a_{n+1} = \sqrt{2}^{\log_2 {a_n}}$, solve:
$$
\lim_{n \to \infty} b_n \ \ \ \ \mbox{where} \ \ b_n = a_1\cdot a_2\cdot … \cdot a_n \ .
$$

Solution

By employing
$$
a^{\log_a b} = b \ \ \ \ \mbox{where} \ \ a\in \Re^{+} \setminus \{1\} \ \ \ \mbox{and} \ \ \ b\in \Re^{+}
$$and taking into account the initial conditions for our problem, we derive:
$$
a_1 = \sqrt{2} = 2^{\frac{1}{2}} \\
a_2 = \sqrt{2}^{\log_2 a_1} = 2^{\frac{1}{2}\log_2 2^{\frac{1}{2}}} = 2^{\log_2 2^\frac{1}{4}} = 2^\frac{1}{2^2} \\
a_3 = \sqrt{2}^{\log_2 a_2} = 2^{\frac{1}{2}\log_2 2^{\frac{1}{2^2}}} = 2^\frac{1}{2^3} \\
… \\
a_n = \sqrt{2}^{\log_2 a_{n-1}} = 2^\frac{1}{2^n}
$$ therefore
$$
a_n = 2^\frac{1}{2^n} \ .
$$
We find
$$
b_n = a_1\cdot a_2\cdot … \cdot a_n = 2^{\frac{1}{2}} \cdot 2^\frac{1}{2^2} \cdot … \cdot 2^\frac{1}{2^n} =
$$
$$
= 2^{\frac{1}{2}+\frac{1}{4}+…+\frac{1}{2^n}} = 2^{1-\left( \frac{1}{2} \right)^n} \ .
$$
Having that, we might code in Python the limit for $\{b_n\}$-series in an iterative way:

# Deriving Limits in Python, Accelerated Python for Quants Tutorial, Lesson 5
# (c) 2014 QuantAtRisk
 
from sympy import *
 
for m in xrange(1,40):
    print 2**(1-0.5**m)

which results in the output:

1.41421356237
1.68179283051
1.83400808641
1.91520656140
1.95714412418
1.97845602639
1.98919884697
1.99459211217
1.99729422578
1.99864665501
1.99932321299
1.99966157786
1.99983078177
1.99991538910
1.99995769410
1.99997884694
1.99998942344
1.99999471171
1.99999735586
1.99999867793
1.99999933896
1.99999966948
1.99999983474
1.99999991737
1.99999995869
1.99999997934
1.99999998967
1.99999999484
1.99999999742
1.99999999871
1.99999999935
1.99999999968
1.99999999984
1.99999999992
1.99999999996
1.99999999998
1.99999999999
1.99999999999
2.0

while a more elegant solution for $\{b_n\}$ may be obtained as follows:

n=Symbol('n')
expr=2**(1-0.5**n)
a=limit(expr, n, oo)
print a

revealing

2

confirmed by an analytical solution:
$$
\lim_{n \to \infty} b_n = \lim_{n \to \infty} 2^{1-\left( \frac{1}{2} \right)^n} = 2 \ \ .
$$
Please note that SymPy’s limit command requires the variable $n$ to be explicitly defined as a symbol before the calculation, and the sympy package needs to be imported.
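For completeness, a small hedged sketch that lets SymPy confirm the partial-sum step used in the derivation, $\frac{1}{2}+\frac{1}{4}+\dots+\frac{1}{2^n}=1-\left(\frac{1}{2}\right)^n$:

from sympy import Symbol, Rational, summation, simplify

n = Symbol('n', positive=True, integer=True)
k = Symbol('k', positive=True, integer=True)

partial_sum = summation(Rational(1, 2)**k, (k, 1, n))
print(simplify(partial_sum))  # expected: 1 - 2**(-n), up to SymPy's preferred form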

Asset Allocation for Tangent Portfolio with Risk-Free Asset in Python


Studying a new programming language is an adventurous journey. It takes time in the case of Python. Even for me. Not every concept is easy to grasp, and testing a piece of code with its endless modifications may take a longer while. We have two choices: either get a good book like the upcoming Python for Quants or sit down and apply a copy-and-paste-and-test method. The latter approach is much longer but rewarding in the middle of our learning process. Hit and try. So, why not try?


You cannot write code which performs complex operations while omitting the necessary set of functions. There is no need to reinvent the wheel. In Python, the community works hard to deliver the best programming solutions. In this lesson, we will accelerate by conducting an investigation of Python code aimed at finding optimised weights for the tangent portfolio problem.

In QaR ebook on Applied Portfolio Optimization with Risk Management using Matlab we discussed in great detail the theory and practical calculations for various cases of portfolios with different objective functions. Markowitz in 1952 underlined that the goal of portfolio choice was either to look for such portfolio which could deliver maximum return at a given level of risk or minimum risk for a given level of return. Based on the theoretical works of Sharpe in 1964, Lintner in 1965 and Tobin in 1958, the importance of the risk-free asset in the portfolio has been proved to equip the investor with a better control over risk.

We can split the budget into a fraction of our capital designated for an investment in the risk-free option (e.g. a savings account in a bank) while the rest is delegated to other assets with diversified risk levels. It was shown that for any portfolio with the risk-free component, the expected return is:
$$
R_P = mw^T + (1-{\bf 1}w^T)r_f \ \ \ \mbox{at} \ \ \ \sigma_P^2 = wCw^T
$$ where by $C$, $m$, and $w$ the covariance matrix, the vector of individual expected asset returns, and the asset weights have been denoted accordingly. If we aim at the variance, $\sigma_P^2$, to be minimised:
$$
\min_{w} \ \ wCw^T
$$ subject to $mw^T + (1-{\bf 1}w^T)r_f = r_{\rm target}$, we formulate the minimum variance portfolio optimisation problem. It occurs that all minimum variance portfolios are a combination of the risk-free asset and a given risky portfolio. The latter is often called the tangent portfolio, and it has been shown that it must contain all assets available to investors (each held in a quantity proportional to its market value relative to the total market value of all assets). That gives the tangent portfolio its second name: the market portfolio. The objective function of the form:
$$
\max_{w} \ \ \frac{mw^T-r_f}{\sqrt{wCw^T}}
$$ subject to ${\bf 1}w^T=1$ is the Sharpe ratio, and its maximisation corresponds directly to the market portfolio. The risk-free asset is connected with the tangent portfolio by a straight line, therefore providing an investor with a good blend of risk-controlled portfolios. Modern Portfolio Theory tells us that the tangent portfolio weights are proportional to:
$$
{\bf w} = C^{-1}({\bf m} - {\bf 1}r_f)
$$ where the vector ${\bf w}$ stores the computed weights for each asset in portfolio $P$ (normalised so that they sum to one). Since finding the tangent portfolio for $N$ assets is, luckily, an analytical problem (i.e. it requires no solver), this task is straightforward to code in Python as follows.

 1 # Asset Allocation for Tangent Portfolio with Risk-Free Asset in Python
 2 # Accelerated Python for Quants Tutorial, Lesson 4
 3 # (c) 2014 QuantAtRisk
 4 
 5 from numpy import matrix, power
 6 from math import sqrt
 7 
 8 def TangentPortfolio(m,C,rf):
 9     # find number of rows and columns for the covariance matrix
10     (nr,nc)=C.shape
11     A=matrix([[0.0] for r in xrange(nr)])
12     A=(1/C)*(m-rf)    # note: 1/C is the elementwise reciprocal of C
13     (nr,nc)=A.shape   #       (the matrix inverse itself would be C.I)
14     A=A/sum(A[r,0] for r in xrange(nr))  # normalise so the weights sum to one
15     w=[A[r,0] for r in xrange(nr)]
16     pret=m.T*A                           # expected portfolio return
17     prsk=power(A.T*(C*A),0.5)            # portfolio risk (volatility)
18     return matrix(w),pret,prsk

Here, we can see a new element of the Python language, namely the definition and usage of a function. We start its syntax with def, next write the desired function name with input parameters in brackets, and end the line with a colon. The body of the function must always be indented (a minimum of 4 spaces; don’t use tabs!) and if some results are intended to be sent out of the function, a return statement should be specified at the end, listing all variables of interest.

In addition we make use of the numpy module, from which we import only the two functions we will be using. The first one is matrix, which allows us to implement matrix or vector notation explicitly (we avoid Python’s lists or arrays at this stage). The dimensions of any matrix M can be returned as a tuple, as shown in line #10.

Please note how Python eases our life. In line #15 we create a new list referring directly to certain elements of another matrix by putting a for…in loop (a list comprehension) inside the definition of w itself. How brilliant it is! Lastly, using the power function we raise its first argument to the power of 0.5, i.e. we compute a square root.

To see some action, let us first define an exemplary covariance matrix and vector with expected returns corresponding to 3-assets in the portfolio:

20 cov=matrix([[0.04, 0.004, 0.02],[0.004, 0.09, 0.09],[0.02,0.09,0.16]])
21 mu=matrix([[0.13],[0.11],[0.19]])
22 rf=0.05
23 
24 w,ret,rsk=TangentPortfolio(mu,cov,rf)

where investing at the risk-free rate of 5% has been added to complete the grand picture of the problem we discuss here. Line #24 shows the way we call our function and assign the values calculated inside it to outer variables (their names can be different). We display the results on the screen by typing:

print("Portfolio weights")
print(w.T)
 
print("Expected Portfolio Return and Risk")
print ret,rsk

which returns:

Portfolio weights
[[ 0.46364368]
 [ 0.4292997 ]
 [ 0.10705661]]
 
Expected Portfolio Return and Risk
[[ 0.1278374]] [[ 0.19715402]]

which simply tells us that the expected portfolio return equals 12.8% at 19.7% risk if we allocate roughly 46%, 43%, and 11% to assets number 1, 2, and 3, respectively.

We find that the Sharpe ratio,

sharpe=(ret-rf)/rsk
print(sharpe)

equals 0.3948, which we can compare with the Sharpe ratios of the individual assets:

for r in xrange(3): print((mu[r,0]-rf)/sqrt(cov[r,r]))

0.4
0.2
0.35

Finally, if we denote by $\zeta$ the fraction of capital we want to invest in risky assets, leaving $(1-\zeta)$ in the bank at the $r_f=5\%$ rate, then the expected portfolio return will be:
$$
\zeta {\bf w}{\bf m}+(1-\zeta)r_f \ \ \ \mbox{at} \ \ \ \zeta\sqrt{wCw^T}
$$ of risk, therefore for two different cases, for example:

alpha=0.7
print(((matrix(alpha)*w)*mu)+(1-alpha)*rf)
print(matrix(alpha)*power(w*cov*w.T,1))
 
alpha=0.25
print(((matrix(alpha)*w)*mu)+(1-alpha)*rf)
print(matrix(alpha)*power(w*cov*w.T,1))

we get

[[ 0.10448618]]
[[ 0.0272088]]
 
[[ 0.06945935]]
[[ 0.00971743]]

which tells us that putting, for instance, 70% of our capital into the three stocks should result in an expected gain of 10.4%, while an allocation of only 25% to stocks (75% of the capital in the bank) promises a 6.9% return, i.e. approaching the earlier defined risk-free rate of 5% pa. Note that the second number printed in each case is $\zeta(wCw^T)$, i.e. the $\zeta$-scaled portfolio variance; an exponent of 0.5 in power() would turn it into the $\zeta\sqrt{wCw^T}$ risk from the formula above.
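A minimal, corrected sketch of that blend which computes the risk as $\zeta\sqrt{wCw^T}$, exactly as in the formula above (it reuses w, mu, cov and rf from the code; the two $\zeta$ values are the same illustrative choices):

for zeta in (0.7, 0.25):
    blended_return = float(zeta*(w*mu) + (1-zeta)*rf)    # zeta*w*m + (1-zeta)*r_f
    blended_risk   = float(zeta*power(w*cov*w.T, 0.5))   # zeta*sqrt(w C w^T)
    print("zeta=%.2f: expected return=%.4f, risk=%.4f" % (zeta, blended_return, blended_risk))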

Hungry for Python? Want some more? Stay tuned!

A new ebook Python for Quants is coming this July! Don’t miss the full introductory course to Python with many other practical examples ready-to-rerun and use for your projects:

[Python for Quants ebook cover]

Hacking Google Finance in Real-Time for Algorithmic Traders

Forecasting risk in algorithmic stock trading is of paramount importance for everyone. You should always look for the ways how to detect sudden price changes and take immediate actions to protect your investments.

Imagine you opened a new long position last Wednesday for NASDAQ:NVDA, buying 1500 shares at the market price of USD 16.36. The next day the price goes down to USD 15.75 by the end of the session. You are down 3.7%, or almost a grand, in one day. If you can handle that, it’s okay, but what if the drop were steeper? Another terrorist attack, an unforeseen political event, a North Korean nuclear strike? Then what? You need to react!

If you have information, you have options in your hands. In this post we will see how one can use real-time data of stock prices displayed on Google Finance website, fetch and record them on your computer. Having them, you can build your own warning system for sudden price swings (risk management) or run the code in the background for a whole trading session (for any stock, index, etc. accessible through Google Finance) and capture asset prices with an intraday sampling (e.g. every 10min, 30min, 1h, and so on). From this point only your imagination can stop you from using all collected data.

Hacking with Python

If you ever dreamt of becoming a hacker, this is your chance to shine! I have got my inspiration after reading the book of Violent Python: A Cookbook for Hackers, Forensic Analysts, Penetration Testers and Security Engineers by TJ O’Connor. A powerful combination of the beauty and the beast.

The core of our code will be contained in a small function which does the job. For a specified Google-style ticker (query), it fetches the data directly from the server returning the most current price of an asset:

# Hacking Google Finance in Real-Time for Algorithmic Traders
# 
# (c) 2014 QuantAtRisk.com, by Pawel Lachowicz
 
import urllib, time, os, re, csv
 
def fetchGF(googleticker):
    url="http://www.google.com/finance?&q="
    txt=urllib.urlopen(url+googleticker).read()
    k=re.search('id="ref_(.*?)">(.*?)<',txt)
    if k:
        tmp=k.group(2)
        q=tmp.replace(',','')
    else:
        q="Nothing found for: "+googleticker
    return q

Just make sure that the Google ticker is correctly specified (as we will see below). Next, let’s display our local time on the screen and force a change of the system time to the one corresponding to New York City, NY. We make the latter assumption because we would like to track the intraday prices of stock(s) traded on the NYSE or NASDAQ. However, if you are tracking the FTSE 100 index, the Universal Time (UTC) of London is advisable as an input parameter.

# display time corresponding to your location
print(time.ctime())
print
 
# Set local time zone to NYC
os.environ['TZ']='America/New_York'
time.tzset()
t=time.localtime() # string
print(time.ctime())
print

Having that, let us define a side-function combine which we will use to glue all fetched data together into Python’s list variable:

def combine(ticker):
    quote=fetchGF(ticker) # use the core-engine function
    t=time.localtime()    # grasp the moment of time
    output=[t.tm_year,t.tm_mon,t.tm_mday,t.tm_hour,  # build a list
            t.tm_min,t.tm_sec,ticker,quote]
    return output

As an input, we define Google ticker of our interest:

36 ticker="NASDAQ:AAPL"

for which we open a new text file where all queries will be saved in real-time:

# define file name of the output record
fname="aapl.dat"
# remove a file, if exist
os.path.exists(fname) and os.remove(fname)

Eventually, we construct the final loop over the trading time. Here, we fetch the last data at 16:00:59 New York time. The key parameter in the game is the freq variable, where we specify the intraday sampling (in seconds). From my tests, using a private Internet provider, I have found that the most optimal sampling was 600 sec (10 min). Somehow, for shorter time intervals, Google Finance detected too-frequent queries sent from my IP address. The same test succeeded from a different IP location, so feel free to play with your local Internet network to find out the lowest available sampling time for your geolocation.

43 freq=600 # fetch data every 600 sec (10 min)
44 
45 with open(fname,'a') as f:
46     writer=csv.writer(f,dialect="excel") #,delimiter=" ")
47     while(t.tm_hour<=16):
48         if(t.tm_hour==16):
49             while(t.tm_min<1):
50                 data=combine(ticker)
51                 print(data)
52                 writer.writerow(data) # save data in the file
53                 time.sleep(freq); t=time.localtime()  # wait, then refresh the clock
54             else:
55                 break
56         else:
57             # single-ticker version; the loop over many tickers appears below
58             data=combine(ticker)
59             print(data)
60             writer.writerow(data) # save data in the file
61             time.sleep(freq); t=time.localtime()  # wait, then refresh the clock
62 
63 f.close()

To see how the above code works in practice, I conducted a test on Jan/9 2014, starting at 03:31:19 Sydney/Australia time, corresponding to 11:31:19 New York time. Setting the sampling frequency to 600 sec, I was able to fetch the data in the following form:

Thu Jan  9 03:31:19 2014
 
Wed Jan  8 11:31:19 2014
 
[2014, 1, 8, 11, 31, 19, '543.71']
[2014, 1, 8, 11, 41, 22, '543.66']
[2014, 1, 8, 11, 51, 22, '544.22']
[2014, 1, 8, 12, 1, 23, '544.80']
[2014, 1, 8, 12, 11, 24, '544.32']
[2014, 1, 8, 12, 21, 25, '544.86']
[2014, 1, 8, 12, 31, 27, '544.47']
[2014, 1, 8, 12, 41, 28, '543.76']
[2014, 1, 8, 12, 51, 29, '543.86']
[2014, 1, 8, 13, 1, 30, '544.00']
[2014, 1, 8, 13, 11, 31, 'Nothing found for: NASDAQ:AAPL']
[2014, 1, 8, 13, 21, 33, '543.32']
[2014, 1, 8, 13, 31, 34, '543.84']
[2014, 1, 8, 13, 41, 36, '544.26']
[2014, 1, 8, 13, 51, 37, '544.10']
[2014, 1, 8, 14, 1, 39, '544.30']
[2014, 1, 8, 14, 11, 40, '543.88']
[2014, 1, 8, 14, 21, 42, '544.29']
[2014, 1, 8, 14, 31, 45, '544.15']
...

As you can notice, they were displayed on the screen (line #59 in the code) in the form of a Python list. It is important to note that the time we capture and associate with the fetched asset price (query) is the computer’s system time, so please don’t expect the regular time intervals one may get from a verified market data provider. We are hacking in real-time! However, if you think about the data themselves, this time precision is not of great importance. As long as we fetch the data every freq seconds, that is sufficient to build a risk management system or even to measure a rolling volatility of an asset. Your trading model will benefit anyway.
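As a hedged illustration of that last point, here is a minimal sketch that reads the recorded aapl.dat file back in with pandas and computes a simple rolling volatility of the fetched quotes; the column layout follows what the combine function writes, and the 6-sample window is an arbitrary choice of this example.

import pandas as pd

cols = ['year','month','day','hour','minute','second','ticker','quote']
df = pd.read_csv('aapl.dat', names=cols)

# drop failed queries ('Nothing found for: ...') and convert quotes to floats
df['quote'] = pd.to_numeric(df['quote'], errors='coerce')
df = df.dropna(subset=['quote'])

ret = df['quote'].pct_change()             # simple return between consecutive fetches
rolling_vol = ret.rolling(window=6).std()  # rolling standard deviation over 6 samples (about 1 hour at 10-min sampling)
print(rolling_vol.tail())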

Note also that if our Internet connection fails or there is some other kind of disturbance, we will miss the data for that query, as visible in the example above.

Looks exciting? Give me High Five! and say Hell Yeah!

Code Modification: Portfolio of Assets

The presented Python code can be very easily modified if you wish to try fetching data for a couple of assets concurrently every freq seconds. Simply extend and amend all the lines starting at row #36, for example in the following form:

tickers=["NASDAQ:AAPL","NASDAQ:GOOG","NASDAQ:BIDU","NYSE:IBM", \
         "NASDAQ:INTC","NASDAQ:MSFT","NYSEARCA:SPY"]
 
# define the name of an output file
fname="portfolio.dat"
# remove a file, if exist
os.path.exists(fname) and os.remove(fname)
 
freq=600 # fetch data every 600 sec (10 min)
 
with open(fname,'a') as f:
    writer=csv.writer(f,dialect="excel") #,delimiter=" ")
    while(t.tm_hour<=16):
        if(t.tm_hour==16):
            while(t.tm_min<1):
                for ticker in tickers:
                    data=combine(ticker)
                    print(data)
                    writer.writerow(data)
                time.sleep(freq); t=time.localtime()  # wait, then refresh the clock
            else:
                break
        else:
            for ticker in tickers:
                data=combine(ticker)
                print(data)
                writer.writerow(data)
            time.sleep(freq); t=time.localtime()  # wait, then refresh the clock
 
f.close()

That’s it! For the sake of real-time verification, here is a screenshot of how it works:

Thu Jan  9 07:01:43 2014
 
Wed Jan  8 15:01:43 2014
 
[2014, 1, 8, 15, 1, 44, 'NASDAQ:AAPL', '543.55']
[2014, 1, 8, 15, 1, 44, 'NASDAQ:GOOG', '1140.30']
[2014, 1, 8, 15, 1, 45, 'NASDAQ:BIDU', '182.65']
[2014, 1, 8, 15, 1, 45, 'NYSE:IBM', '187.97']
[2014, 1, 8, 15, 1, 46, 'NASDAQ:INTC', '25.40']
[2014, 1, 8, 15, 1, 47, 'NASDAQ:MSFT', '35.67']
[2014, 1, 8, 15, 1, 47, 'NYSEARCA:SPY', '183.43']
[2014, 1, 8, 15, 11, 48, 'NASDAQ:AAPL', '543.76']
[2014, 1, 8, 15, 11, 49, 'NASDAQ:GOOG', '1140.06']
[2014, 1, 8, 15, 11, 49, 'NASDAQ:BIDU', '182.63']
[2014, 1, 8, 15, 11, 50, 'NYSE:IBM', '187.95']
[2014, 1, 8, 15, 11, 51, 'NASDAQ:INTC', '25.34']
[2014, 1, 8, 15, 11, 52, 'NASDAQ:MSFT', '35.67']
[2014, 1, 8, 15, 11, 53, 'NYSEARCA:SPY', '183.34']
...

where we can see that we were able to grab the prices of 6 stocks and 1 ETF (an Exchange-Traded Fund tracking the S&P 500 index) every 10 min.
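A hedged sketch of how the recorded portfolio.dat file could be reshaped into a price table with one column per ticker (the file layout follows the rows printed above; flooring the timestamps to the minute groups each fetch round together):

import pandas as pd

cols = ['year','month','day','hour','minute','second','ticker','quote']
df = pd.read_csv('portfolio.dat', names=cols)
df['quote'] = pd.to_numeric(df['quote'], errors='coerce')

# one timestamp per fetch round (floored to the minute), one column per ticker
df['timestamp'] = pd.to_datetime(df[['year','month','day','hour','minute','second']]).dt.floor('min')
prices = df.pivot_table(index='timestamp', columns='ticker', values='quote')
print(prices.head())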

Reflection

You may wonder whether hacking is legal or not? The best answer I find in the words of Gordon Gekko: Someone reminded me I once said “Greed is good”,

Hacking Google Finance in Python

PART II
    Hacking Google Finance in Real-Time for Algo Traders. (2) Pre-Market Trading.

Retrieve the Data of Fund Performance utilizing Google and Python

Do you remember my post on Get the Data of Fund Performance directly into Excel utilizing VBA and Google? If not, have a look, as this time we will do the same but in Python. In short, given a list of APIR codes (referring to different investment option performances) we want to fetch the publicly available data at the InvestSMART.com.au website. Previously we built the code in Excel/VBA. It works, but it is slow, as the process makes use of Excel’s plug-ins for establishing connections with external websites and downloading the data into Excel’s workbooks. We can improve it significantly using the Python language.

The second reason for doing it in Python is purely educational. We will see below how we can read the data from a CSV file, send a query from Python directly to Google Search, retrieve a list of responses, get the URL of the page we are interested in, download its HTML code, parse it to get the content of the webpage without the HTML ornaments, scan the content for the required information (in our case: the most recent fund performance figures available online), and finally save everything to a file.

This is our roadmap for today. Are you ready for the journey?

In Search of Sunrise

To get the job done, we will need to utilize some existing modules for Python. The most important one is the google module, available for download from here. We will also need the mechanize module, a stateful programmatic web-browsing library for Python. If you need to install BeautifulSoup, here is all you are looking for. Okay then, let’s roll:

# Retrieve the Data of Fund Performance utilizing Google and Python
#  
# (c) 2014 QuantAtRisk.com, by Pawel Lachowicz
 
import mechanize, csv, os, unicodedata
from google import search as gs
from bs4 import BeautifulSoup
 
# remove file, if exists
os.path.exists("investsmart.csv") and os.remove("investsmart.csv")

where the last two lines refer to the comma-separated-values file in which we will save all the fetched data. Sometimes it’s good practice to remove it if it already exists in the local directory. As you will see in a moment, we do it at the beginning because whatever we retrieve from the web and filter for content will be appended to the output file investsmart.csv in a loop.

We feed our algo with a list of APIR codes stored in a text file of codes.csv of the exemplary structure:

AMP1232AU
AMP1524AU
AMP0167AU
AMP1196AU
...

In order to read each line from this file, we commence with:

12 with open('codes.csv', 'rb') as f:
13     reader = csv.reader(f) # reads the lines as a list
14     for irow in reader:
15         apircode=irow[0]
16         print(apircode)
17 
18         # define query for Google Search
19         phrase="investsmart com au "+apircode

where line #14 iterates through file line-by-line reading in the APIR code (line #15) as the first element from the irow list. Having code in the variable of apircode we define a query for search engine (line #19) and open an inner operational loop, here referring to the output file:

21         with open('investsmart.csv', 'a') as csvfile:
22             writer=csv.writer(csvfile, dialect="excel")
23 
24             i=0
25             # Search in Google for InvestSMART APIR code
26             for url in gs(phrase, stop=1):
27                 i+=1
28                 if(i==1):
29                     link=url # store the first URL from Google results
30                     print("%s\n" % link)

The whole action starts from line #24, where we begin the procedure of sending the query to Google (the gs command in line #26). As you look at this creature, it is, in fact, a masterpiece, beauty, and Python elegance all in one. Not only do we hit the search engine with our phrase, but we also filter the incoming results from Google Search into a list of URLs and extract the URL corresponding to the first one found by Google (line #29). Next, we browse through that first link,

32             br=mechanize.Browser()  # open a virtual browser
33             resp=br.open(link)
34             content=resp.get_data()
35             #print content

and download the HTML code of the website (line #34). To have a better feeling what the variable content contains for the first APIR code from our codes.csv input file, just uncomment and execute code in line #35.

For AMP1232AU investment option, the content displays the website HTML code starting from:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
 
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7" lang="en"> <![endif]--> 
<!--[if IE 7]>    <html class="no-js lt-ie9 lt-ie8" lang="en"> <![endif]--> 
<!--[if IE 8]>    <html class="no-js lt-ie9" lang="en"> <![endif]--> 
<!--[if gt IE 8]><!--> <html class="no-js" lang="en"> <!--<![endif]--> 
 
<html> 
<head> 
<title>AMP SigSup - AMP Super Cash - Managed fund Profile - Managed funds - InvestSMART - managed funds, shares and investment news</title> 
 
<meta http-equiv="X-UA-Compatible" content="IE=9" /> 
<meta name="verify-v1" content="xgkff+3TBcugNz7JE2NiJoqkiVs1PHybWgFkaBuhblI=" /> 
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" /> 
<meta name="title" content="Discount access and research on more than 1,000 top performing managed funds" /> 
<meta name="description" content="Rebate of entry fees on the majority of Australian managed funds. Research on managed funds and shares." /> 
 
...

and containing (somewhere in the middle) the information we would like to extract:

...
 
<!-- ****************************** --> 
<!-- ***   Performance Table    *** --> 
<!-- ****************************** --> 
<br /><br /> 
<table class="Container" width="100%" cellpadding="0" cellspacing="0" border="0"> 
  <tr> 
    <td class="ContainerHeader" align="left"><b>Fund Performance</b> (as at 30th Nov 2013)&nbsp;<a href="javascript:PopupWindow('/education/GlossaryPopup.asp?id=133', '_blank', 700, 450)" class="glossaryLnk" title="More information...">&nbsp;</a></td> 
    <td class="ContainerHeader" align="right">NOTE : returns for periods greater than 1 year are annualised</td> 
  </tr> 
  <tr> 
    <td colspan="2" width="100%" align="left" valign="top" class="ContainerBody"> 
      <table width="100%" cellpadding="0" cellspacing="0" border="0" class="DataTable"> 
        <tr class="DataTableHeader"> 
          <td align="left"  valign="top" nowrap>&nbsp;</td> 
          <td align="right" valign="top" nowrap>1 Month<br />%</td> 
          <td align="right" valign="top" nowrap>3 Month<br />%</td> 
          <td align="right" valign="top" nowrap>6 Month<br />%</td> 
          <td align="right" valign="top" nowrap>1 Year<br />% p.a.</td> 
          <td align="right" valign="top" nowrap>2 Year<br />% p.a.</td> 
          <td align="right" valign="top" nowrap>3 Year<br />% p.a.</td> 
          <td align="right" valign="top" nowrap>5 Year<br />% p.a.</td> 
          <!--<td align="right" valign="top" nowrap>7 Year<br />% p.a.</td>--> 
          <td align="right" valign="top" nowrap>10 Year<br />% p.a.</td> 
        </tr> 
 
 
        <tr class="DataTableRow" onMouseOver="this.className = 'DataTableRowHighlight';" onMouseOut ="this.className = 'DataTableRow';"> 
          <td align="left"  valign="top"><b>Total Return</b></td> 
          <td align="right" valign="top"><b>0.12</b></td> 
          <td align="right" valign="top"><b>0.37</b></td> 
          <td align="right" valign="top"><b>0.8</b></td> 
          <td align="right" valign="top"><b>1.77</b></td> 
          <td align="right" valign="top"><b>2.24</b></td> 
          <td align="right" valign="top"><b>2.62</b></td> 
          <td align="right" valign="top"><b>-</b></td> 
          <!--<td align="right" valign="top"><b>-</b></td>--> 
          <td align="right" valign="top"><b>-</b></td> 
        </tr>
 
...

As we can see the HTML code is pretty nasty and digging up numbers from a subsection “Total Returns” seems to be troublesome. Well, not for Python. Just have a look how easily you can strip HTML from all those Christmas Tree ornaments:

37             # transform code into beautiful soup ;-)
38             soup = BeautifulSoup(content)
39 
40             # perform post-processing for stripped data
41             outputline=[]
42             row=0
43             roww=0
44             for string in soup.stripped_strings:
45                 if(string=="Fund Performance"):
46                     print(string)
47                     j=1
48                     row=1
49                 if(row!=0):
50                     row+=1
51                 if(row==3):
52                     updated=string
53                     print(string)
54                 if(string=="Total Return") and (j==1):
55                     print(string)
56                     roww=1
57                     j=0
58                 if(roww!=0):
59                     roww+=1
60                 if(roww>=3) and (roww<11):
61                     if(roww==3):
62                         print(string)
63                         s=string.lstrip('(').rstrip(')').lstrip('as at ')
64                     else:
65                         s=string
66                     outputline.append(s)
67                     print(string)

In line #38 we make use of BeautifulSoup – a Python library designed for quick turnaround projects like screen-scraping. The transformed data we can scan line-by-line (the loop starting in line #44) where the outer function of .stripped_strings makes all HTML bumpers and stickers vanish leaving us with pure text! From that point, all following lines of Python code (#45-67) are designed to extract specific information, print it on the screen, and append to Python’s list of outputline. For AMP1232AU investment option from our query, Python displays:

AMP1232AU
http://www.investsmart.com.au/managed-funds/profile.asp?function=print&FundID=16654
 
Fund Performance
(as at 30th Nov 2013)
Total Return
0.12
0.12
0.37
0.8
1.77
2.24
2.62
-
-
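As a side note, here is a tiny, self-contained illustration (on made-up HTML, not the InvestSMART page) of what stripped_strings delivers:

from bs4 import BeautifulSoup

html = "<table><tr><td><b>Total Return</b></td><td>0.12</td><td>0.37</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")

for string in soup.stripped_strings:
    print(string)  # -> Total Return, 0.12, 0.37 (one item per line, with no HTML left)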

If Google returns a first link that is something completely different from the InvestSMART webpage for the requested APIR code,

            # check validity of desired data for APIR code
            if(len(outputline)>0):
                rowout=[]
                outputline.insert(0,apircode) # add APIR code to the list
                outputline.append(updated)    # add the performance numbers
                for item in outputline:
                    rowout.append(item.encode('ascii')) # convert <type:unicode>
                                                        # to <type:string>
                print(rowout) # show final version of the list on the screen
                if(len(rowout)>0):
                    writer.writerow(rowout) # if non-zero, save/append the list
                                            # with data in "investsmart.csv" 
                                            # output file, otherwise ignore
                                            # this query's results
            csvfile.close()
 
f.close()

the outputline list will be of zero length. With a few extra lines of post-processing, the output file stores the subject of our small endeavour, i.e.:

pawels-mbp:Py pawel$ cat investsmart.csv 
AMP1232AU,0.12,0.37,0.8,1.77,2.24,2.62,-,-,(as at 30th Nov 2013)
AMP1524AU,-2.47,0.14,-1.58,8.63,14.09,9.42,-,-,(as at 30th Nov 2013)
AMP0167AU,0.39,1.96,3.18,8.45,7.72,5.82,5.09,4.89,(as at 30th Nov 2013)
...

which ends our journey In Search of Sunrise. The total time for scanning 1000 APIR codes is cut down by 60% by using Python instead of VBA. That brings to my mind an Air Austral commercial I saw a year ago: Arrive fast, then slow down.

First Contact with Python

Python does miracles. Really. The more I dive into the abilities of this programming language, the more I am left speechless. It is the sort of feeling you get when you already swim among the sharks in the ocean, and then one day you jump out of a plane and discover skydiving for the very first time. You are flying! Sounds appealing enough? Great! Welcome onboard!


Before the air of Python coding becomes your second nature, you need to start your experience with the basics. They are not so exciting, but they provide you with the tastes and flavours of the language itself. With the promise to fly anywhere, and a motivation standing behind it, it is solely up to you how hungry you are! How hungry you are to leave your comfort zone of current programming habits in Java, C++ or Matlab, and step into the zone of magic! This is a proving ground, the judgment day of your character: are you strong enough to accelerate your life, to gain a new skill and do something for yourself that the world will thank you for tomorrow?!

Say “I’m ready!” if you are ready because I’m ready, then we shall begin…

Knockin’ on Heaven’s Door

If you went through my previous post on Python, you learned that there are at least two ways of running Python in a Unix/Linux/MacOSX environment: an interactive method (using the Terminal) or an active method (using IDLE, an integrated development environment). The latter solution utilizes a text editor to keep a track record of your codes, while the former acts upon what you type and forgets about it as soon as you exit Python. To visualize both cases, simply see the clip at the end of this post where I provide a quick guide to running Python codes in the two aforementioned scenarios.

Now, we will compress our ability to assimilate Python’s fundamentals into the following couple of lines, referring to codes executable from the text-editor level. Let the guys from MIT not think they are the best among the rest.

Python interprets every single line of code, line-by-line. It assumes you are responsible for what you wish to do or calculate, so it doesn’t analyse your whole program up front, flash a red light and warn you. It trusts you! As it goes through your code and finds unknown variables, wrong types, or unrecognisable keywords or functions, it simply halts there. A pretty fair deal. The rules of the game are easy to grasp quickly. Let’s see the basics.

That’s one small Step for a Man, …

It would be too much of a hassle to install and use Python as a simple calculator. But you can do it. Just add, subtract, and do anything that follows the general rules of arithmetic:

 1 # Accelerated Python for Quants, Part 3
 2 # (c) 2013 Pawel Lachowicz, QuantAtRisk.com
 3 
 4 # Variables (BTW, the comments in Python start with #)
 5 x=2
 6 y=9
 7 z=13.513
 8 
 9 a=x+x
10 b=(x-3*x)*7.32+x/y
11 c=y**(1/x)
12 txt= 'is derived' # string
13 
14 print a
15 print "b= ", b
16 print("The value of a equals to %g where c= %f %s" %(a,c,txt))

where we defined three variables $x, y$ and $z$, assigned some values to them and performed simple operations. The numerical results have been assigned to three new variables, namely $a, b$, and $c$, with the intention to display them on the screen using Python’s print command. Running the code returns:

4
b=  -29.28
The value of a equals to 4 where c= 1.000000 is derived

Now, this is your first lesson on what Python assumes about your style of talking to it. It resembles a never-ending discussion among all women: does the size matter? I don’t know. Just ask them! But all I know is that for Python the type of the variables matters! Our variable of $x$ has been defined as an integer so $a=x+x=4$ as expected. But if you had tried to calculate:

print y/x

you would get:

4

instead of 4.5. To tell Python what you want, simply mark the so-called float variable(s) to ensure that the calculation is carried out at floating-point precision by the processor. Each of the following operations will force the outcome to be of the float type. Just check them out:

print 9.0/2.0, 9.0/2, 9./2, 9/2.0, 9/2.

Use the command of type to verify your variables anytime, for instance:

print(type(a))
print(type(b))
print(type(c))

should display

<type 'int'>
<type 'float'>
<type 'int'>

Note that in line #11 of the code, the value of $c$ is integer despite the fact we tried to perform the calculation of
$$
c=y^{1/x} \equiv \sqrt[x]{y} = 3 \ .
$$ Moreover, it delivered the outcome of $c=1$, which is far away from 3, right? This is all because of the difference in type, and you need to be aware of it while coding in Python. How to fix it? If your code churns out lots of variables of different kinds, you need to keep a track record of all of them and make sure you know what you want to obtain as a result. A simple trick to force floating-point calculations is the application of the float() function acting directly on a variable or expression, for example:

c=float(y**(1/float(x)))
print c
c_i=int(c)
print c_i

will give you:

3.0
3

where the last line shows how to convert the now correctly derived value of $c$, equal to 3, back into an integer type assigned to a new variable $c_i$.

Take also a closer look at lines #14-16 of the code, where I showed you how differently you can display information on the screen. Line #16 is an old-school way of displaying a sentence as a string which contains some text of yours and includes the values of two variables, namely $a$ and $c$. Inside the string you place markers of the form %_ where %g, %f, and %s inform Python that it is asked to display the variables a, c, and txt in a general numeric, float, and string format, respectively.

Drilling the topic one level deeper, in your code you can check the type of any variable applying a conditional verification of some expressions which evaluate to the boolean type, i.e. True (=1) or False (=0).

So, here we go with a new Python structure of if-elif-else:

18 test=False
19 c=str(float(y**(1/float(x))))
20 if isinstance(c, int):
21     print 'c is an integer'
22 elif(isinstance(c,str)):
23     print 'c is a string'
24     test=True
25 else:
26     print "c is probably of a float-type"

resulting in:

c is a string

where we formulated a logical English-language-style question checking first condition whether $c$ was an integer, if not then checking another condition of $c$ being of the string type, and if that had failed the code would display an information letting us know about $c$ being probably a float. Easy? At least extremely intuitive.

Notice some differences in notation. Python accepts strings encapsulated in single or double quotation marks. That’s very kind of it. Whatever you like, use it in Python! Secondly, white space doesn’t matter unless it violates the syntax or spoils the output you desire to get. Thirdly, note the colons (:) at the end of lines #20, #22, and #25. Forgetting about them is like jotting down an integral $\int x^2$ without the $dx$ at the end. Fourthly, we can see in lines #19, #20, and #22 how we apply the idea of a function acting on inner arguments, which are evaluated first in the order of nesting: inside$\rightarrow$out.

Lastly, in Python we use indentation for many fixed language structures like the if-elif-else mentioned above. The indentation means putting 4 white spaces (never a Tab!) and then typing (see e.g. line #21). Why is it so important? It occurs that some scientists conducted experiments on programmers and code developers and came up with a stunning result: when we code, our brain is nearly blind to spaces or curly braces, but it is sharp as the morning sun to the way we nest lines of code, which of course improves readability. The results were literally so mind-blowing that the people who oversee Python’s evolution established a (minimum) 4-space indentation as the standard. Keep that story in mind.

At the end we can make use of a variable test we made partially alive in the line #18. It appears again in the block of statements during if-elif-else conditional verification (line #24) and changes its value to True if $c$ appears to be of string type. Since in our code this is happening we send an additional order to Houston, where the guys from NASA commence the countdown of the Python rocket using another important language structure of while-loop as follows:

x = 10
while x >0:
    print(x)
    x-=1 
    if(x==0):
        print("Lift off!")

sending it into space when the launch status is reached:

10
9
8
7
6
5
4
3
2
1
Lift off!

 

HOMEWORK

Write a code in Python for the following problems making use of knowledge from this lesson. Submit your results as a comment to this post. Challenge your brain. We will accelerate next time, Warp 2, so buckle up!

(a) Given $|x-3|>21$ solve for $x$.
(b) The monthly rate of return for the cash investment option is $r=0.367$% and is fixed over the course of next 12 months. Find the annualized rate of return for that option. Hint- use the following formula in your code but applying while structure:
$$ R = \left[ \prod_{i=1}^{12} (1+r) \right] -1 $$
(c) A grandpa and a grandma are together 147 years old. The grandpa is twice as old as the grandma was when he was as old as the grandma is right now. Find how old the grandpa and the grandma are today.

 

VIDEO

A quick guide to the ways one can run Python code in a MacOS/Unix-like environment:

[Embedded video]

 

Talking to Python: TextMate Code Editor

 

Programming in Python requires a decent text editor, and “decent” is a powerful word here. With the dramatic improvement in this field over the past decade, anyone who wishes to start their adventure with coding should consider what sort of Integrated Development Environment would be most suitable for their needs. It is pretty difficult to address all the related questions when you are a beginner in Python. A quick scan of the Web suggests one of the following solutions for Mac: Vico, TextWrangler, Emacs, TextMate, Eclipse, etc. It is also possible to run a Python project under Xcode (see here).


Within this Python for Quants course we will make use of the fairly simple but dynamically evolving TextMate code editor. There are no special criteria behind this choice, but it does a hell of a job for us as we gain our expertise in Python. The best way to summarize its capabilities is to quote James Gray: TextMate is a full-featured text editor available for Mac OS X that can greatly enhance your text manipulation skills. TextMate is actually a thin shell over a personalized team of robot ninjas ready to do your bidding. It uses built-in automations for Python. You can manage all the files in your projects, fly through your files with easy navigation techniques, master quick and dirty text editing with strong regular expression integration.

Sound appealing, convincing? Well, you need to start somewhere, and this is a good place to start.

Installation

One sentence should be enough at this point. Visit TextMate’s website and proceed with the download and installation. But one sentence is never enough, is it? If you have already migrated to OS X Mavericks, the team recommends v2.0-alpha for this platform. You can download and start using it for free in a stripped configuration. This will be more than sufficient for us at the beginning. But if you think more seriously about coding as your new life’s calling, you should consider going for an upgrade.

Editor

We will need a plain sheet of electronic paper to write the code on. As soon as we start writing Python code, a good practice is to save the file at the very beginning. A worldwide convention among coders is to name all Python source listings with the extension .py. To convince yourself of TextMate’s capabilities across different languages, explore Bundles in the main menu:

[Screenshot: the Bundles menu in TextMate]

Now, if you type into the editor the simplest two lines of Python:

# Your very first script
 
print('Hello, world!')

and press ⌘R, you should be able to execute the code. A side window pops up with the desired outcome:

[Screenshot: the TextMate output window with the script’s result]

Please note that code execution in TextMate does not support interactive commands such as input in the following example:

foo=input('Please enter a value:')

It may sound like a terrible limitation, but professional coding nowadays reduces such interactions to the absolute minimum. You can always run interactive scripts from the python command line in the Terminal:

[Screenshot: an interactive session run from the Terminal]
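For completeness, a minimal sketch of such an interactive script; the file name ask.py is purely hypothetical, and in Python 2 raw_input() returns the typed text as a string (in Python 3 you would use input() instead):

# ask.py -- run from the Terminal as:  python ask.py
foo = raw_input('Please enter a value: ')
print('You entered: ' + foo)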

Additional Configuration

If you followed the configuration steps from the previous Part 1 and installed all the additional Python libraries in the folder of your choice, you may run into some difficulties reaching them from TextMate. It took me 15 minutes to find a way to correct that inconvenience, which was not so obvious at the beginning.

Firstly, make sure the PATH is set properly and includes access to the Python executables. From the menu select Preferences…, go to the ‘Variables’ tab, and edit it accordingly, for instance:

[Screenshot: the Variables tab in TextMate Preferences]
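For instance, assuming the virtualenv created in Part 1 lives under ~/Python/env (yourname below is only a placeholder for your own user name), the PATH value could read something like:

PATH   /Users/yourname/Python/env/bin:/usr/local/bin:/usr/bin:/bin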

Secondly, you will amend one line in the TextMate script responsible for running your code when you press ⌘R. Go in the menu to Bundles, then to Select Bundle Item…, and in the search box of the new window type ‘Run Script’. You should see a limited set of options pointing at the Python reference. A double click will take you to the ⌘R command editor.

[Screenshot: the Run Script bundle editor for Python]

Around line #13, by default, TextMate looks for the python installed system-wide in OS X and tries to execute the line:

TextMate::Executor.run(ENV["TM_PYTHON"] || "python", "-u", ENV["TM_FILEPATH"] ...

We just need to eliminate the inner reference to the variables ENV["TM_PYTHON"] || "python" and replace it with a path to your own python, e.g.:

TextMate::Executor.run("~/env/bash27/bin/python", "-u", ENV["TM_FILEPATH"] ...

That should resolve all major issues related to getting access to the additional libraries you installed manually.

Stay Tuned!

A new ebook on Python for Quants is coming this July! Don’t miss the full introductory course to Python with 300+ practical examples, ready to run and reuse for your own code building:

[Book cover: Python for Quants]

Setting up Python for Quantitative Analysis in OS X 10.9 Mavericks

Welcome!

 

Someone, not so long ago, turned my attention from Matlab coding towards Python programming. He used a strong argument about the enormous flexibility of the language and its equally dynamic capabilities as contrasted with Matlab, not to mention the zero-dollar cost and the power behind the art of computing. That got my attention. When I did my own in-depth research on Python’s applications to quantitative finance and risk modelling, I came back to my friend saying: Pal, you got my attention, but now you have my curiosity. It’s difficult to divide a heart between two women and love them both the same way: Matlab and Python. However, since Python enters the salons of financial coding in full splendour, it deserves more respect than one would initially grant it.


With this article, my wish is to commence a new thread at Quant At Risk: a tutorial for all quants and financial risk managers wishing to pick up the language’s fundamentals, tastes, and ingredients, backed by a number of useful examples along the way. As this is a new path of wisdom for me too, I hope we will enjoy this journey together.

Assuming that Apple, Inc. and its powerful computers will continue to become increasingly popular for financial applications, let me add the extra motivation standing behind this post: we will set up a complete quant programming Python environment in Apple’s OS X 10.9 Mavericks operating system.

Steps to Success

Let me provide you with a complete list of the steps you need to take to download, install, and set up Python on a Mac. I will keep all descriptions to the required minimum. The whole process should take no longer than thirty minutes, depending on your Internet connection.

1. Download the .dmg file with Python 2.7.5 for Mac OS X 64-bit/32-bit from here.
Execute it and install on your hard drive.
2. Xcode
If you don’t have it, just visit the Mac App Store for a free download. It may require you to register first as a developer, but that comes with all the privileges for your future programming career. Once installed, open Xcode, go to Preferences… and click on the Downloads tab. We will need the Command Line Tools downloaded and installed, therefore if you don’t have them, follow that path.
Open the Terminal for the next steps.
3. sudo easy_install virtualenv
This command installs virtualenv and allows us to set up a temporary environment assisting with the further installation process.

[Screenshot: Terminal output of the virtualenv installation]

4. mkdir -p ~/Python/env; virtualenv ~/Python/env
5. pip install mpmath
6. pip install numpy
7. pip install scipy
8. /usr/bin/ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
9. brew install pkg-config
10. brew install freetype
11. brew install libpng
12. brew install ffmpeg
13. pip install matplotlib
14. brew install zeromq
15. pip install pyzmq
16. pip install Tornado
17. pip install readline
18. pip install azure
19. pip install curses
20. pip install cython
21. pip install jinja2
22. pip install pexpect
23. pip install pygments
24. pip install pymongo
25. pip install sphinx
26. pip install sqlite3
27. pip install wx
28. pip install zmq
29. pip install sympy
30. pip install patsy
31. pip install scikit_learn
32. pip install statsmodels
33. pip install pandas
34. pip install ipython
We will be using all these packages extensively, separately or in combination. They supplement each other and equip us with a powerful toolbox for sophisticated coding in Python.
35. export PY_ENV_DIR=~/Python/env
Add this line to your ~/.bash_profile file.
By typing the command which python we should get: ~/Python/env/bin/python. If so, it looks like we went through the storm smoothly and we are go for launch!
In the Terminal, type and execute the following command. It should give us access to Python coding:
[Screenshot: the Python interpreter launched in the Terminal]
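Once the interpreter is up, a quick sanity check, sketched below, confirms that the key libraries installed above import correctly (the printed version numbers will depend on what pip fetched for you):

# quick sanity check of the scientific stack installed above
import numpy, scipy, matplotlib, pandas, statsmodels
print('numpy %s, scipy %s, pandas %s' % (numpy.__version__, scipy.__version__, pandas.__version__))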
In the next post, we will set up a nice and cosy Integrated Development Environment (IDE) which should be sufficient for our early stages of scripting in Python.