Quantitative Analysis, Risk Management, Modelling, Algo Trading, and Big Data Analysis

## Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders (1)

The probability of improbable events. The simplicity amongst complexity. The purity in its best form. The ultimate cure for those who trade, for those who invest. Does it exist? Can we compute it? Is it really something impossible? In this post we push ourselves to the frontiers of accessible statistics and data analysis in order to find the best computable solutions to this enigmatic but mind-draining problem. Since there is no certainty, everything remains shrouded in the veil of probability. The probability we have to face, hoping that something unlikely, determined to take place anyhow, eventually will not happen.

Our goal is to calculate the probability of a very rare event (e.g. a heavy and/or extreme loss) in the trading market (e.g. of a stock plummeting 5% or much more) in a specified time-horizon (e.g. on the next day, in one week, in one month, etc.). The probability. Not the certainty of that event.

In this Part 1, we first look at the tail of an asset's return distribution and compress our knowledge of Value-at-Risk (VaR) to extract the essence required to understand why VaR-stuff is not the best card in our deck. Next, we move to the classical Bayes’ theorem, which helps us derive the conditional probability of a rare event given… yep, another event that (hypothetically) will take place. Eventually, in Part 2, we will hit the bull between the eyes with an advanced concept taken from the Bayesian approach to statistics and map, in real-time, the loss probabilities for any return-series. Again, the probabilities, not certainties.

1. VaR (not) for Rare Events

In the framework of VaR we take into consideration $T$ days of the trading history of an asset. Next, we derive a number (VaR) that describes a loss that is likely to take place with a probability of approximately $\alpha$. “To take place” does not mean here that it will take place. In this approach we try to provide some likelihood (a quantitative measure) of the rare event in a specified time-horizon (e.g. on the next day, if daily return-series are under our VaR investigation; the scenario considered in this post).

If by $L$ we denote a loss (in percent) an asset can experience on the next day, then:
$$\mbox{Pr}(L \le -\mbox{VaR}_{1-\alpha}) = \alpha \ .$$ That is, with probability $\alpha$ (for instance, $\alpha=0.05$) the dollar loss $L\times D$ will be equal to or worse than $-\mbox{VaR}_{1-\alpha}\times D$, where $D$ is the position size (money invested in the asset in terms of physical currency, e.g. in dollars). This quantity is also referred to as the $(1-\alpha)$ VaR measure: $\mbox{VaR}_{95}$, etc. In other words, the historical data can help us find $\mbox{VaR}_{95}$ given $\alpha$, assuming a 5% chance that $\mbox{VaR}_{1-\alpha}$ will be exceeded on the next day.
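Numerically, $\mbox{VaR}_{1-\alpha}$ is nothing but a low quantile of the historical return distribution. A minimal sketch of that idea, on synthetic Normal returns standing in for real data (`np.percentile` plays the role of the quantile estimator here):

```python
import numpy as np

rng = np.random.default_rng(7)
ret = rng.normal(0.0005, 0.02, 2500)   # synthetic daily returns (placeholder data)

alpha = 0.05
VaR = -np.percentile(ret, alpha*100.)  # (1-alpha) VaR as the alpha-quantile of returns

D = 100000.                            # position size in dollars
print("VaR_95 = %.2f%% of the position, i.e. ca. $%.0f" % (VaR*100., VaR*D))
```

By construction, roughly 5% of the historical days closed with a loss at or below $-\mbox{VaR}_{95}$.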

In order to illustrate that case and its shortcomings when it comes to the analysis of rare events, let’s look at the 10-year trading history of two stocks in the NASDAQ market: the highly volatile CAAS (China Automotive Systems, Inc.; market capitalisation of $247M) and the highly liquid AAPL (Apple Inc.; market capitalisation of $750B). First, we fetch their adjusted close price-series from Yahoo! Finance and derive the corresponding daily return-series utilising Python:

```python
# Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
#
# heavy1.py

import pandas.io.data as web
import matplotlib.pyplot as plt
import numpy as np
from pyvar import findvar, findalpha


# ---1. Data Processing

# fetch and download daily adjusted-close price series for CAAS
# and AAPL stocks using Yahoo! Finance public data provider
caas = web.DataReader("CAAS", data_source='yahoo',
                      start='2005-05-13', end='2015-05-13')['Adj Close']
aapl = web.DataReader("AAPL", data_source='yahoo',
                      start='2005-05-13', end='2015-05-13')['Adj Close']

CAAScp = np.array(caas.values)
AAPLcp = np.array(aapl.values)

f = open("data1.dat", "wb")
np.save(f, CAAScp)
np.save(f, AAPLcp)
f.close()

# read in the data from a file
f = open("data1.dat", "rb")
CAAScp = np.load(f)
AAPLcp = np.load(f)
f.close()

# compute return-series
retCAAS = CAAScp[1:]/CAAScp[:-1]-1
retAAPL = AAPLcp[1:]/AAPLcp[:-1]-1
```

The best way to understand the data is by plotting them:

```python
# plotting (figure #1)
# adjusted-close price-series
fig, ax1 = plt.subplots(figsize=(10, 6))
plt.xlabel("Trading days 13/05/2005-13/05/2015")
plt.plot(CAAScp, '-r', label="CAAS")
plt.axis("tight")
plt.legend(loc=(0.02, 0.8))
plt.ylabel("CAAS Adj Close Price (US$)")
ax2 = ax1.twinx()
plt.plot(AAPLcp, '-', label="AAPL")
plt.legend(loc=(0.02, 0.9))
plt.axis("tight")
plt.ylabel("AAPL Adj Close Price (US$)")

# plotting (figure #2)
# daily return-series
plt.figure(num=2, figsize=(10, 6))
plt.subplot(211)
plt.grid(True)
plt.plot(retCAAS, '-r', label="CAAS")
plt.axis("tight")
plt.ylim([-0.25, 0.5])
plt.legend(loc="upper right")
plt.ylabel("CAAS daily returns")
plt.subplot(212)
plt.grid(True)
plt.plot(retAAPL, '-', label="AAPL")
plt.legend(loc="upper right")
plt.axis("tight")
plt.ylim([-0.25, 0.5])
plt.ylabel("AAPL daily returns")
plt.xlabel("Trading days 13/05/2005-13/05/2015")
```

We obtain the price-series

and return-series

respectively. For the latter plot, by fixing the scaling of both $y$-axes we immediately gain a chance to inspect the number of daily trades closing with heavy losses; well, at least at first glance, and for both directions of trading. In this post we will consider long positions only.

Having our data pre-processed, we may implement two different strategies to make use of the VaR framework in order to work out the probabilities for tail events. The first one is based on setting the $\alpha$ level and finding $\mbox{VaR}_{1-\alpha}$. Let’s assume $\alpha=0.01$; then the following piece of code

```python
# ---2. Computation of VaR given alpha

alpha = 0.01

VaR_CAAS, bardata1 = findvar(retCAAS, alpha=alpha, nbins=200)
VaR_AAPL, bardata2 = findvar(retAAPL, alpha=alpha, nbins=100)

cl = 100.*(1-alpha)
```

computes the corresponding numbers, making use of the function:

```python
def findvar(ret, alpha=0.05, nbins=100):
    # Function computes the empirical Value-at-Risk (VaR) for return-series
    # (ret) defined as NumPy 1D array, given alpha
    # (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
    #
    # compute a normalised histogram (\int H(x)dx = 1)
    # nbins: number of bins used (recommended nbins>50)
    hist, bins = np.histogram(ret, bins=nbins, density=True)
    wd = np.diff(bins)
    # cumulative sum from -inf to +inf
    cumsum = np.cumsum(hist * wd)
    # find an area of H(x) for computing VaR
    crit = cumsum[cumsum <= alpha]
    n = len(crit)
    # (1-alpha)VaR
    VaR = bins[n]
    # supplementary data of the bar plot
    bardata = hist, n, wd
    return VaR, bardata
```

Here, we create the histogram with a specified number of bins, and $\mbox{VaR}_{1-\alpha}$ is found in an empirical manner. For many reasons this approach is much better than fitting the Normal Distribution to the data and finding VaR based on the integration of that continuous function. It is a well-known fact that such a function would underestimate the probabilities in the far tail of the return distribution. And the game is all about capturing what is going on out there, right?
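To quantify that underestimation, one can compare the two models analytically: a Student-t distribution with 3 degrees of freedom (a common stand-in for fat-tailed daily returns) against a Normal distribution matched to the same standard deviation. The numbers below are purely illustrative, not fitted to CAAS or AAPL:

```python
import numpy as np
from scipy import stats

sd = 0.02                               # assumed daily volatility (illustrative)
nu = 3                                  # Student-t degrees of freedom (fat tails)
scale_t = sd / np.sqrt(nu/(nu-2.))      # scale the t so both models share the same sd

thr = -0.10                             # a -10% daily loss
p_t = stats.t.cdf(thr/scale_t, df=nu)   # fat-tailed model: Pr(L < -10%)
p_norm = stats.norm.cdf(thr/sd)         # Normal model:     Pr(L < -10%)

print("Pr(L < -10%%): Student-t %.5f%% vs Normal %.7f%%" % (p_t*100., p_norm*100.))
```

For these settings the fat-tailed model assigns orders of magnitude more probability to a -10% day than the Normal does, which is exactly why the empirical histogram is the safer route.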

We display the results of the computation as follows:

```python
print("%g%% VaR (CAAS) = %.2f%%" % (cl, VaR_CAAS*100.))
print("%g%% VaR (AAPL) = %.2f%%\n" % (cl, VaR_AAPL*100.))
```

i.e.

```
99% VaR (CAAS) = -9.19%
99% VaR (AAPL) = -5.83%
```

In order to gain a good feel for those numbers, we display the left tails of both return distributions

```python
# plotting (figure #3)
# histograms of daily returns; H(x)
#
plt.figure(num=3, figsize=(10, 6))
c = (.7, .7, .7)  # grey color (RGB)
#
# CAAS
ax = plt.subplot(211)
hist1, bins1 = np.histogram(retCAAS, bins=200, density=False)
widths = np.diff(bins1)
b = plt.bar(bins1[:-1], hist1, widths, color=c, edgecolor="k", label="CAAS")
plt.legend(loc=(0.02, 0.8))
#
# mark in red all histogram bars where int_{-infty}^{VaR} H(x)dx = alpha
hn, nb, _ = bardata1
for i in range(nb):
    b[i].set_color('r')
    b[i].set_edgecolor('k')
plt.text(-0.225, 30, "VaR$_{%.0f}$ (CAAS) = %.2f%%" % (cl, VaR_CAAS*100.))
plt.xlim([-0.25, 0])
plt.ylim([0, 50])
#
# AAPL
ax2 = plt.subplot(212)
hist2, bins2 = np.histogram(retAAPL, bins=100, density=False)
widths = np.diff(bins2)
b = plt.bar(bins2[:-1], hist2, widths, color=c, edgecolor="k", label="AAPL")
plt.legend(loc=(0.02, 0.8))
#
# mark in red all histogram bars where int_{-infty}^{VaR} H(x)dx = alpha
hn, nb, wd = bardata2
for i in range(nb):
    b[i].set_color('r')
    b[i].set_edgecolor('k')
plt.text(-0.225, 30, "VaR$_{%.0f}$ (AAPL) = %.2f%%" % (cl, VaR_AAPL*100.))
plt.xlim([-0.25, 0])
plt.ylim([0, 50])
plt.xlabel("Stock Returns (left tail only)")
plt.show()
```

where we mark our $\alpha=$1% regions in red:

As you may notice, so far we haven’t done much new: a classical textbook example coded in Python. However, the last figure reveals the main players of the game. For instance, there is only 1 event of a daily loss larger than 15% for AAPL, while CAAS experienced 4 heavy losses. The much higher 99% VaR for CAAS takes those 4 historical events into account and puts more weight on the 1-day VaR estimated in our calculation.

Imagine now that we monitor all those tail extreme/rare events day by day. It’s not too difficult to notice (based on the inspection of Figure #2) that in the case of AAPL the stock recorded its first serious loss of $L \lt -10$% approximately 650 days after the beginning of our “monitoring”, which commenced on May 13, 2005. In contrast, CAAS was much more volatile and you needed to wait only ca. 100 days to record a loss of the same magnitude.

As long as such an event has not yet happened (e.g. $L \lt -10$%), a VaR-like measure is a highly inadequate measure of probabilities for rare events. Once the event takes place, the estimation of VaR changes (is updated), but then it decreases in time until a new rare event occurs. Let’s illustrate it with Python. This is our second VaR strategy: finding $\alpha$ given the threshold for rare events. We write a simple function that does the job for us:

```python
def findalpha(ret, thr=1, nbins=100):
    # Function computes the probability P(X<thr)=alpha given threshold
    # level (thr) and return-series (NumPy 1D array). X denotes the
    # returns as a rv and nbins is number of bins used for histogram
    # (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
    #
    # compute normalised histogram (\int H(x)dx=1)
    hist, bins = np.histogram(ret, bins=nbins, density=True)
    # compute a default histogram
    hist1, bins1 = np.histogram(ret, bins=nbins, density=False)
    wd = np.diff(bins1)
    x = np.where(bins1 < thr)
    y = np.where(hist1 != 0)
    z = list(set(x[0]).intersection(set(y[0])))
    crit = np.cumsum(hist[z]*wd[z])
    # find alpha
    try:
        alpha = crit[-1]
    except Exception:
        alpha = 0
    # count number of events falling into (-infty, thr] intervals
    nevents = np.sum(hist1[z])
    return alpha, nevents
```

We call it in our main program:

```python
# ---3. Computation of alpha, given the threshold for rare events

thr = -0.10

alpha1, ne1 = findalpha(retCAAS, thr=thr, nbins=200)
alpha2, ne2 = findalpha(retAAPL, thr=thr, nbins=200)

print("CAAS:")
print(" Pr( L < %.2f%% ) = %.2f%%" % (thr*100., alpha1*100.))
print(" %g historical event(s)" % ne1)
print("AAPL:")
print(" Pr( L < %.2f%% ) = %.2f%%" % (thr*100., alpha2*100.))
print(" %g historical event(s)" % ne2)
```

which returns the following results:

```
CAAS:
 Pr( L < -10.00% ) = 0.76%
 19 historical event(s)
AAPL:
 Pr( L < -10.00% ) = 0.12%
 3 historical event(s)
```
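As a cross-check, note that the histogram route inside findalpha approximates a plain empirical frequency; a bin-free equivalent (my own helper, not part of the pyvar.py library) can be written as:

```python
import numpy as np

def findalpha_simple(ret, thr=-0.10):
    # empirical Pr(X < thr) and the raw number of events, without any binning
    nevents = int(np.sum(ret < thr))
    alpha = nevents/float(len(ret))
    return alpha, nevents

# illustrative call on placeholder returns
ret = np.array([-0.12, -0.03, 0.01, 0.04, -0.11, 0.02, 0.00, -0.05])
alpha, ne = findalpha_simple(ret, thr=-0.10)
print(alpha, ne)   # 2 events out of 8, i.e. 0.25
```

The binned version converges to this value as nbins grows; differences for small nbins come purely from bin-edge effects.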

And that’s great. However, all these numbers are given as a summary of 10 years of analysed data (May 13, 2005 to May 13, 2015 in our case). The final picture could be better understood if we could dynamically track in time the changes of both probabilities and the number of rare events. We achieve it with the following (not exactly state-of-the-art) code:

```python
# ---4. Mapping alphas for Rare Events

alphas = []
nevents = []
for t in range(1, len(retCAAS)-1):
    data = retCAAS[0:t]
    alpha, ne = findalpha(data, thr=thr, nbins=500)
    alphas.append(alpha)
    nevents.append(ne)

alphas2 = []
nevents2 = []
for t in range(1, len(retAAPL)-1):
    data = retAAPL[0:t]
    alpha, ne = findalpha(data, thr=thr, nbins=500)
    alphas2.append(alpha)
    nevents2.append(ne)

# plotting (figure #4)
# running probability for rare events
#
plt.figure(num=4, figsize=(10, 6))
ax1 = plt.subplot(211)
plt.plot(np.array(alphas)*100., 'r')
plt.plot(np.array(alphas2)*100.)
plt.ylabel("Pr( L < %.2f%% ) [%%]" % (thr*100.))
plt.axis('tight')
ax2 = plt.subplot(212)
plt.plot(np.array(nevents), 'r', label="CAAS")
plt.plot(np.array(nevents2), label="AAPL")
plt.ylabel("# of Events with L < %.2f%%" % (thr*100.))
plt.axis('tight')
plt.legend(loc="upper left")
plt.xlabel("Trading days 13/05/2005-13/05/2015")
plt.show()
```

revealing the following picture:

It’s a great way of looking at, and understanding, the far left-tail volatility of the asset under your current investigation. The probability between two rare/extreme events decreases for an obvious reason: along the time axis we include more and more data, so the return distribution evolves and shifts its mass to the right, leaving left-tail events less and less probable.

A problem remains: if an asset has never experienced an extreme loss of a given magnitude, the probability of such an event, within our VaR framework, simply remains zero! For example, for losses $L \lt -20$%,

CAAS displays 2 extreme events while AAPL shows none! Therefore, our estimation of a superbly rare daily loss of -20% (or more) for AAPL stock based on 10 years of data is zero, or undefined, or… completely unsound. Can we do better than this?

2. Classical Conditional Prediction for Rare Events

Let’s consider a case where we want to predict the probability of an asset/stock (CAAS; traded on the NASDAQ exchange) falling more than $-L$%. Previously we achieved such an estimation through the integration of its probability density function. An alternative way is via derivation of the conditional probability: i.e., that CAAS will lose more than $-L$% given that the NASDAQ index drops more than $-L$% on the next day.

Formula? Well, Bayes’ formula. That’s all we need:
$$\mbox{Pr}(B|R) = \frac{ \mbox{Pr}(R|B)\ \mbox{Pr}(B) } { \mbox{Pr}(R) } .$$
Great. Now, the meaning of each term. Let $A$ denote the event of the stock’s daily return being between $-L$% and 0%. Let $B$ be the event of the stock’s return being $\lt -L$%. Let $R$ be the event of the NASDAQ daily return being $\lt -L$%. Given that, based on both the CAAS and NASDAQ historical time-series, we are able to compute $\mbox{Pr}(A)$, $\mbox{Pr}(B)$, $\mbox{Pr}(R|A)$, and $\mbox{Pr}(R|B)$, therefore
$$\mbox{Pr}(R) = \mbox{Pr}(R|A)\mbox{Pr}(A) + \mbox{Pr}(R|B)\mbox{Pr}(B)$$ as well. Here, $\mbox{Pr}(R|A)$ would stand for the probability of NASDAQ falling down more than $-L$% given the observation of the CAAS return to be in $(-L;0)$% interval and $\mbox{Pr}(R|B)$ would denote the probability of NASDAQ falling down more than $-L$% given the observation of the CAAS return also dropping more than $-L$% on the same day.
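Before touching real data, it is worth verifying the arithmetic of the formula with made-up numbers (purely illustrative probabilities, not estimates from any series):

```python
# hypothetical inputs: Pr(A), Pr(B) and the conditionals Pr(R|A), Pr(R|B)
pA, pB = 0.95, 0.05
pRA, pRB = 0.02, 0.30

pR = pRA*pA + pRB*pB     # total probability: Pr(R)
pBR = pRB*pB/pR          # Bayes' theorem:    Pr(B|R)

print("Pr(R)   = %.4f" % pR)
print("Pr(B|R) = %.4f" % pBR)
```

Even though event $B$ itself is rare (5%), observing $R$ lifts its conditional probability to about 44%; this is exactly the leverage we want to exploit for CAAS and NASDAQ.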

With Bayes’ formula we reverse the engineering, aiming at the answer for $\mbox{Pr}(B|R)$ given all available data. Ready to code it? Awesome. Here we go:

```python
# Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz
#
# heavy2.py

import pandas.io.data as web
import matplotlib.pyplot as plt
import numpy as np

# ---1. Data Processing

# fetch and download daily adjusted-close price series for CAAS stock
# and NASDAQ index using Yahoo! Finance public data provider
'''
caas = web.DataReader("CAAS", data_source='yahoo',
                      start='2005-05-13', end='2015-05-13')['Adj Close']
nasdaq = web.DataReader("^IXIC", data_source='yahoo',
                        start='2005-05-13', end='2015-05-13')['Adj Close']

CAAScp = np.array(caas.values)
NASDAQcp = np.array(nasdaq.values)

f = open("data2.dat", "wb")
np.save(f, CAAScp)
np.save(f, NASDAQcp)
f.close()
'''

f = open("data2.dat", "rb")
CAAScp = np.load(f)
NASDAQcp = np.load(f)
f.close()

# compute the return-series
retCAAS = CAAScp[1:]/CAAScp[:-1]-1
retNASDAQ = NASDAQcp[1:]/NASDAQcp[:-1]-1
```

The same code as used in Section 1 but instead of AAPL data, we fetch NASDAQ index daily close prices. Let’s plot the return-series:

```python
# plotting (figure #1)
# return-series for CAAS and NASDAQ index
#
plt.figure(num=2, figsize=(10, 6))
plt.subplot(211)
plt.grid(True)
plt.plot(retCAAS, '-r', label="CAAS")
plt.axis("tight")
plt.ylim([-0.25, 0.5])
plt.legend(loc="upper right")
plt.ylabel("CAAS daily returns")
plt.subplot(212)
plt.grid(True)
plt.plot(retNASDAQ, '-', label="NASDAQ")
plt.legend(loc="upper right")
plt.axis("tight")
plt.ylim([-0.10, 0.15])
plt.ylabel("NASDAQ daily returns")
plt.xlabel("Trading days 13/05/2005-13/05/2015")
plt.show()
```

i.e.,

where we observe different trading dynamics for the NASDAQ index between the 750th and 1000th day (as counted from May 13, 2005). Interestingly, the heaviest losses of NASDAQ are not ideally correlated with those of CAAS. That makes this case study more exciting! Please also note that CAAS is not a component of the NASDAQ index. Bayes’ formula is designed around independent events and, within a fair approximation, we may think of CAAS as an asset ticking off that box here.

What remains and takes a lot of caution is the code that “looks at” the data in a desired way. Here is its final form:

```python
# ---2. Computations of Conditional Probabilities for Rare Events

# isolate return-series displaying negative returns solely
# set 1 for time stamps corresponding to positive returns
nretCAAS = np.where(retCAAS < 0, retCAAS, 1)
nretNASDAQ = np.where(retNASDAQ < 0, retNASDAQ, 1)

# set threshold for rare events
thr = -0.065

# compute the sets of events
A = np.where(nretCAAS < 0, nretCAAS, 1)
A = np.where(A >= thr, A, 1)
B = np.where(nretCAAS < thr, retCAAS, 1)
R = np.where(nretNASDAQ < thr, retNASDAQ, 1)
nA = float(len(A[A != 1]))
nB = float(len(B[B != 1]))
n = float(len(nretCAAS[nretCAAS != 1]))  # n must equal nA + nB
# (optional) check; if True then proceed further
print(nA, nB, n == (nA + nB))
print(len(A), len(B), len(R))
print()

# compute the probabilities
pA = nA/n
pB = nB/n

# compute the conditional probabilities
pRA = np.sum(np.where(R+A < 0, 1, 0))/n
pRB = np.sum(np.where(R+B < 0, 1, 0))/n

pR = pRA*pA + pRB*pB

# display results
print("Pr(A)\t = %5.5f%%" % (pA*100.))
print("Pr(B)\t = %5.5f%%" % (pB*100.))
print("Pr(R|A)\t = %5.5f%%" % (pRA*100.))
print("Pr(R|B)\t = %5.5f%%" % (pRB*100.))
print("Pr(R)\t = %5.5f%%" % (pR*100.))

if pR > 0:
    pBR = pRB*pB/pR
    print("\nPr(B|R)\t = %5.5f%%" % (pBR*100.))
else:
    print("\nPr(B|R) impossible to be determined. Pr(R)=0.")
```

Python’s NumPy library helps us in a tremendous way with its smartly designed where function, which we employ when building the event arrays (A, B, R) and the joint counts behind pRA and pRB. First we test a logical condition; if it evaluates to True we grab the right data (actual returns), else we return 1. That opens up a couple of shortcuts for counting specific events and for making sure we are still on the right side of the force.

As you can see, we specified our threshold level of $L = -6.5$% (thr = -0.065). Given that, we derive the probabilities:

```
(1216.0, 88.0, True)
(2516, 2516, 2516)

Pr(A)    = 93.25153%
Pr(B)    = 6.74847%
Pr(R|A)  = 0.07669%
Pr(R|B)  = 0.15337%
Pr(R)    = 0.08186%

Pr(B|R)  = 12.64368%
```

The level of -6.5% is pretty arbitrary but delivers an interesting finding. Namely, based on 10 years of data, there is a 12.6% chance that on May 14, 2015 CAAS will lose more than -6.5% if NASDAQ drops by the same amount. How much exactly? We don’t know. It’s not certain. Only probable, at 12.6%.
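As a sanity check, the quoted 12.6% can be re-derived by hand from the terms printed above:

```python
# values as printed by heavy2.py, converted from percent
pB  = 6.74847/100.
pRB = 0.15337/100.
pR  = 0.08186/100.

pBR = pRB*pB/pR          # Bayes' theorem: Pr(B|R)
print("Pr(B|R) = %.5f%%" % (pBR*100.))
```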

The outcome of $\mbox{Pr}(R|B)$, which is close to zero, may support our assumption that CAAS has a negligible “influence” on NASDAQ itself. On the other side of the rainbow, it’s much more difficult to interpret $\mbox{Pr}(R)$, the probability of an “isolated” rare event (i.e., $L<-6.5$%), since its estimation is solely based on two players in the game: CAAS and NASDAQ. However, when computed for $N\ge 100$ stocks outside the NASDAQ index but traded at NASDAQ, such a distribution of $\mbox{Pr}(R)$'s would be of great value as another alternative estimator for rare events (I should write a distinct post about it). Now, back to business. There is one problem with our conditional estimation of rare-event probabilities as outlined within this Section. A quick check for $L = -10$% reveals:

```
Pr(A)    = 98.69632%
Pr(B)    = 1.30368%
Pr(R|A)  = 0.00000%
Pr(R|B)  = 0.00000%
Pr(R)    = 0.00000%

Pr(B|R) impossible to be determined. Pr(R)=0.
```

Recall that $R$ is the event of the NASDAQ daily return being $\lt -L$%. A quick look at the NASDAQ return-series tells us the whole story and addresses the question that comes to your mind: why, why, why zero? Well, simply because there was no event in the past 10-year history of the index where it slid more than -10%. And we are cooked in the water.

A visual representation of that problem can be obtained by executing the following piece of code:

```python
from pyvar import cpr  # not-at-all PEP8 style ;)

prob = []
for t in range(2, len(retCAAS)-1):
    ret1 = retCAAS[0:t]
    ret2 = retNASDAQ[0:t]
    pBR, _ = cpr(ret1, ret2, thr=thr)
    prob.append(pBR)

# plotting (figure #2)
# the conditional probability for rare events given threshold
#
plt.figure(num=2, figsize=(10, 6))
plt.plot(np.array(prob)*100., 'r')
plt.ylabel("Pr(B|R) [%]")
plt.axis('tight')
plt.title("L$_{thr}$ = %.2f%%" % (thr*100.))
plt.xlabel("Trading days 13/05/2005-13/05/2015")
plt.show()
```

where we moved all the Bayes’-based calculations into a function cpr (a part of the pyvar.py local library; see the Download section below) and repeated our previous experiment, this time, however, tracking $\mbox{Pr}(B|R)$ changing in time given the threshold of $L=-6.5$%. The resulting plot would be:

Before ca. 800th day the NASDAQ index was lacking any event of $L\lt -6.5$% therefore the $\mbox{Pr}(R|B)$ could not be determined. After that period we got some heavy hits and the conditional probability could be derived. The end value (as for May 13, 2015) is 12.64%.

Hope amongst Hopelessness?

There is no better title to summarise the outcomes we derived. Our goal was to come up with some (running) probability for very rare event(s) that, in our case, would be expressed as a 1-day loss of $-L$% in trading for a specified financial asset. In the first attempt we engaged the VaR-related framework and estimated the occurrence of the rare event based on the PDF integration. In our second attempt we sought an alternative solution, making use of Bayes’ conditional prediction of the asset’s loss of $-L$% (or more) given the observation of the same event somewhere else (the NASDAQ index). For the former we ended up with a running probability of $\mbox{Pr}(L \le L_{thr})$, while for the latter with $\mbox{Pr}(B|R)$ given $L_{thr}$ for both the B and R events.

Assuming $L_{thr}$ to be equal to -6.5%, we can present the corresponding computations in the following chart:

Well, the situation looks hopeless. It doesn’t get better even if, as discussed earlier, the probability $\mbox{Pr}(R)$ is added. Both methodologies seem to deliver the answer to the same question; however, somehow, we are left confused, disappointed, misled, in tasteless despair…

Stay tuned as in Part 2 we will see the light at the end of the tunnel. I promise.

Download: heavy1.py, heavy2.py, pyvar.py

## How to Find a Company Name given a Stock Ticker Symbol utilising Quandl API

Quandl.com offers an easy solution to that task. Their WIKI database contains 3339 major stock tickers and the corresponding company names. They can be fetched via the secwiki_tickers.csv file. Given a plain file portfolio.lst storing the list of your tickers, for example:

```
AAPL
IBM
JNJ
MSFT
TXN
```

you can scan the .csv file for the company names with the following Python code:

```python
# How to Find a Company Name given a Stock Ticker Symbol utilising Quandl API
# (c) 2015 QuantAtRisk.com, by Pawel Lachowicz

import pandas as pd

df = pd.read_csv('secwiki_tickers.csv')
dp = pd.read_csv('portfolio.lst', names=['pTicker'])

pTickers = dp.pTicker.values  # converts into an array of tickers

for i in range(len(pTickers)):
    test = df[df.Ticker == pTickers[i]]
    if not test.empty:
        print("%-10s%s" % (pTickers[i], list(test.Name.values)[0]))
```

which returns:

```
AAPL      Apple Inc.
IBM       International Business Machines Corporation
JNJ       Johnson & Johnson
MSFT      Microsoft Corporation
TXN       Texas Instruments Inc.
```
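As a side note, the same lookup can be done without an explicit loop via a pandas merge; a sketch under the assumption that the two inputs carry the columns used above (Ticker, Name and pTicker; the inline frames below merely stand in for the files):

```python
import pandas as pd

# stand-ins for secwiki_tickers.csv and portfolio.lst
df = pd.DataFrame({'Ticker': ['AAPL', 'IBM', 'TXN'],
                   'Name': ['Apple Inc.',
                            'International Business Machines Corporation',
                            'Texas Instruments Inc.']})
dp = pd.DataFrame({'pTicker': ['AAPL', 'TXN']})

# one-shot lookup preserving the portfolio order
out = dp.merge(df, left_on='pTicker', right_on='Ticker', how='left')
print(out[['pTicker', 'Name']].to_string(index=False))
```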

Please note that it is possible to combine more stocks from other Quandl resources in one file. For more information, see Quandl.com's documentation online.

I will dedicate a separate chapter to hacking financial websites in my next book, Python for Quants. Volume II, due in mid-2015.

## Applied Portfolio VaR Decomposition. (2) Impact vs Moving Elements.

Calculations of daily Value-at-Risk (VaR) for any $N$-asset portfolio, as we have already studied in Part 1, heavily depend on the covariance matrix we need to estimate. This estimation requires historical return time-series. An often neglected but superbly important question one should ask here is: How long?! How long ought those return-series to be in order to provide us with fair and sufficient information on the daily return distribution of each asset held in the portfolio?

It is intuitive that taking data covering solely the past 250 trading days versus, for example, the past 2500 days (the moving element) would have a substantial impact on the calculation of the current portfolio VaR. In this short post, we will examine this relation.

Case Study (Supplementary Research)

Using the example of the 7-asset portfolio from Part 1, we modify our Matlab code to download the history of stock (adjusted) close prices covering the past 10 calendar years (ca. 3600 days),

```matlab
% Applied Portfolio VaR Decomposition. (2) Impact vs Moving Elements.
% (c) 2015 QuantAtRisk.com, by Pawel Lachowicz

clear all; close all; clc;
format short

% Read the list of the portfolio components
fileID = fopen('portfolio.lst');
tmp = textscan(fileID, '%s');
fclose(fileID);
pc = tmp{1};  % a list as a cell array

% fetch stock data for last 10 years since:
t0 = 735976;  % 12-Jan-2015
date2 = datestr(t0, 'yyyy-mm-dd');         % to
date1 = datestr(t0-10*365, 'yyyy-mm-dd');  % from

%{
% create an empty array for storing stock data
stockd = {};
% scan the tickers and fetch the data from Quandl.com
for i=1:length(pc)
    quandlc = ['WIKI/', pc{i}];
    fprintf('%4.0f %s\n', i, quandlc);
    % fetch the data of the stock from Quandl
    % using recommended Quandl's command and
    % saving them directly into Matlab's FTS object (fts)
    fts = 0;
    [fts, headers] = Quandl.get(quandlc, 'type', 'fints', ...
                     'authcode', 'YourQuandlCode', ...
                     'start_date', date1, 'end_date', date2);
    stockd{i} = fts;  % entire FTS object in an array's cell
end
save data2
%}
load data2
```

Assuming the same initial holdings (position sizes) for each asset:

```matlab
% an initial dollar exposure per position
d = [  55621.00; ...
      101017.00; ...
       23409.00; ...
     1320814.00; ...
      131145.00; ...
      321124.00; ...
     1046867.00];
```

we aim at calculation of the portfolio VaR,
$$\mbox{VaR}_P = 1.65 \sigma_P C = 1.65\left(\boldsymbol{d}' \boldsymbol{M}_2 \boldsymbol{d} \right)^{1/2}$$ as a function of time or, alternatively speaking, as a function of the varying length of the return time-series:

```matlab
% examine VaR_P as a function of historical data
% taken into account
e = [];
for td=(10*250):-1:60   % loop over trading days
    rs = [];
    for i=1:length(pc)
        cp = fts2mat(stockd{i}.Adj_Close, 0);
        rv = cp(2:end,1)./cp(1:end-1,1)-1;
        rs = [rs rv(end-td:end)];
    end
    rs(isnan(rs)) = 0.0;
    % covariance matrix
    M2 = cov(rs);
    % the portfolio volatility
    vol_P = sqrt(d'*M2*d);
    % diversified portfolio VaR
    VaR_P = 1.65*vol_P;
    e = [e; td VaR_P];
end
```
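For readers following along in Python rather than Matlab, the core of the loop, i.e. the formula $\mbox{VaR}_P = 1.65(\boldsymbol{d}'\boldsymbol{M}_2\boldsymbol{d})^{1/2}$, can be sketched with NumPy. The two-asset exposures and covariance matrix below are hypothetical placeholders, not the post's 7-asset portfolio:

```python
import numpy as np

# hypothetical dollar exposures and daily-return covariance matrix (2 assets)
d = np.array([55621.0, 101017.0])
M2 = np.array([[4.0e-4, 1.5e-4],
               [1.5e-4, 2.5e-4]])

vol_P = np.sqrt(d @ M2 @ d)   # portfolio dollar volatility
VaR_P = 1.65*vol_P            # diversified 95% portfolio VaR
print("VaR_P = $%.2f" % VaR_P)
```

Because d is expressed in dollars, the quadratic form already returns a dollar variance, so no extra multiplication by the capital $C$ is needed.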

The meaning of the loop is straightforward: how does $\mbox{VaR}_P$ change if we take only the last 60 trading days into account, and what is its value when we include more and more data, going back in time as far as 10 years?

The best way to illustrate the results is to compare the running portfolio VaR with the stock market itself, here reflected by the SPY ETF,

```matlab
% download close price time-series for SPY (ETF)
% tracking S&P500 Index
[fts, headers] = Quandl.get('GOOG/NYSE_SPY', 'type', 'fints', ...
                 'authcode', 'YourQuandlCode', ...
                 'start_date', date1, 'end_date', date2);
SPY_cp = fts2mat(fts.Close, 0);

% plot results
subplot(2,1,1)
xtmp = (2501:-1:1)./250;
plot(xtmp, SPY_cp(end-10*250:end,1));
xlim([min(xtmp) max(xtmp)]);
set(gca, 'xdir', 'reverse');
ylabel('SPY Close Price [$]')
subplot(2,1,2)
plot(e(:,1)./250, e(:,2), 'r');
set(gca, 'xdir', 'reverse');
ylabel('VaR_P [$]')
xlabel('Time since 12-Jan-2015 [trading years]')
```

uncovering the big picture as follows:

And now all becomes clear! Firstly, $\mbox{VaR}_P$ of our 7-asset portfolio is a function of the data. Secondly, $\mbox{VaR}_P$ changes in a way we might expect when contrasted with the market's behaviour. At this point, note an evident increase in the $\mbox{VaR}_P$ value if we include not 6 but 7 years of data. Why? The market went down in 2008 and the impact of it had been imprinted in the daily losses of single stocks. This contribution enriched the left tail of their return distributions. Including this information in our covariance matrix naturally increases the level of $\mbox{VaR}_P$.

So, how long? How much data should we include to get a fair estimation of $\mbox{VaR}_P$? It's not so easy to address. I would go with a lower bound of at least 1 year. As for the upper limit, it depends. If you are optimistic that in 2015 stocks will keep their upward momentum, look back up to 5 years. If you are a cautious investor, include 7 to 10 years of data.

Footnote

Some of you may still wonder why Impact vs Moving Elements?! Well, I believe the inspiration for a good title may come from the most unexpected direction. It was 2013 when I heard that song for the first time and I loved it! Through ages, beautiful women have always inspired men, therefore let me share my inspiration with you. You can become inspired too ;-)

## Applied Portfolio VaR Decomposition. (1) Marginal and Component VaR.

Risk. The only ingredient of life that makes us grow and push outside our comfort zones. In finance, taking risk is a risky business. Once your money has been invested, you need to keep your eyes on the ball that is rolling. Controlling risk is both an art and a science: a quantitative approach that allows us to ease the pressure. With some proper tools we can manage our positions in the markets. No matter whether you invest over a long-term horizon or trade many times per day, your portfolio is subject to constantly changing levels of risk and asset volatility.

VaR. Value-at-Risk. The most hated and most adored quantitative measure of risk. Your running security amongst the unpredictability of the floor. A drilling pain in the ass; a few sleepless nights you have become so familiar with. The omnipresent and rarely confirmed assumption of asset returns being normally distributed. Well, we all know the truth but, somehow, we love to settle for the analytical solutions that allow us to "see a less attractive girl" as more attractive.

In this post we will stay within the mystic circle of those assumptions as a starting point. Don't blame me. This is what they teach us in the top-class textbooks on financial risk management, and what the FRM and CFA exams test our knowledge on. You may pass the exam, but do you really, I mean really, understand the effects of what you have just studied in real portfolio trading?

We kick off with the theory of VaR decomposition in the framework of an active $N$-asset portfolio, where our initial capital $C$ has been fully invested in the stock market (as an example). Next, we investigate the portfolio VaR and, by computing its marginal VaR, derive new positions in our portfolio that minimise the portfolio VaR.

The Analytical VaR for N-Asset Portfolio. The Beginning.

Let’s keep our theoretical considerations to the absolute but essential minimum. At the beginning you have $C$ dollars in hand. You decide to invest them all into $N$ assets (stocks, currencies, commodities, etc.). Great. Say, you pick US equities and buy shares of $N=7$ stocks. The number of shares depends on the stock price, and each position size (in dollars) varies depending on your risk-return appetite. Therefore, the position size can be understood as a weighting factor, $w_i$, for $i=1,…,N$, such that:
$$\sum_{i=1}^N w_i = 1 \ \ \mbox{or} \ \ 100\mbox{%}$$ or $d_i=w_iC$ in dollars and $\sum_i d_i=C$. Once the orders have been fulfilled, you are a proud holder of $N$-asset portfolio of stocks. Congratulations!

From now on, every day when the markets close, you check the daily rate of return for individual assets, $r_{i,t} = P_{i,t}/P_{i,{t-1}}-1$, where by $P$ we denote the stock close-price time-series. Your portfolio rate of return from day $t-1$ to $t$ will be:
$$R_{p,t} = \sum_{i=1}^N w_ir_{i,t} = \boldsymbol{w}'\boldsymbol{r}$$ or alternatively:
$$R_{p,t} = \sum_{i=1}^N d_ir_{i,t} = \boldsymbol{d}'\boldsymbol{r}$$ which provides us with the updated portfolio value. Note that this differs from the portfolio expected return defined as:
$$\mu_P = E\left(\sum_{i=1}^{N} w_ir_i \right) = \sum_{i=1}^{N} w_i\mu_i = \boldsymbol{w}'\boldsymbol{\mu}$$ where the expected daily return of the $i$-th asset, $\mu_i$, needs to be estimated from its historical returns (a return-series). This is the most fragile spot in the practical application of Modern Portfolio Theory. If we estimate,
$$\mu_i = \frac{1}{T}\sum_{t'=t-T}^{t-1} r_{i,t'}$$ looking back at the last $T$ trading days, it is obvious that we will derive different $\mu_i$'s for $T=252$ (1 year) and for $T=1260$ (5 years). That directly affects the computation of the portfolio covariance matrix and thus, as we will see below, the (expected) portfolio volatility. Keep that crucial remark in mind!
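To make the sensitivity to $T$ concrete, here is a tiny illustrative sketch in plain Python. The return-series is synthetic and purely for demonstration; it is not data from this post.

```python
def mean_return(returns, T):
    """Average of the last T daily returns, as in the formula above."""
    window = returns[-T:]
    return sum(window) / len(window)

# a toy return-series: a flat first year followed by a year of +0.1% daily returns
returns = [0.0] * 252 + [0.001] * 252

mu_1y = mean_return(returns, 252)  # estimate using T=252 (1 year)
mu_2y = mean_return(returns, 504)  # estimate using T=504 (2 years)
print(mu_1y, mu_2y)
```

The two estimates differ by a factor of two, and that difference propagates into every quantity built on top of them.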

The expected portfolio variance is:
$$\sigma_P^2 = \boldsymbol{w}’ \boldsymbol{M}_2 \boldsymbol{w}$$
where $\boldsymbol{M}_2$ is the $N\times N$ covariance matrix with the individual covariances $c_{ij}=\mbox{cov}(\{r_i\},\{r_j\})$ between securities $i$ and $j$, computed from the return-series. Therefore,
$$\sigma_P^2 = [w_1,…,w_N] \left[ \begin{array}{cccc} c_{11} & c_{12} & … & c_{1N} \\ … & … & … & … \\ c_{N1} & c_{N2} & … & c_{NN} \end{array} \right] \left[ \begin{array}{c} w_{1} \\ … \\ w_{N} \end{array} \right]$$ which can be expressed in terms of the dollar exposure as:
$$\sigma_P^2C^2 = \boldsymbol{d}' \boldsymbol{M}_2 \boldsymbol{d} \ .$$ In order to move from the portfolio variance to the portfolio VaR, we need to know the distribution of the portfolio returns. We assume it to be normal (the delta-normal model). Given that, the portfolio VaR at the 95% confidence level ($\alpha=1.65$) is:
$$\mbox{VaR}_P = 1.65 \sigma_P C = 1.65\left(\boldsymbol{d}' \boldsymbol{M}_2 \boldsymbol{d} \right)^{1/2}$$ which can be understood as the diversified VaR, since we use our portfolio to reduce the overall risk of the investment. On the other hand, the 95% individual VaR describing the individual risk of each asset in the portfolio is:
$$\mbox{VaR}_i = 1.65 |\sigma_i d_i|$$ where $\sigma_i$ represents the asset's volatility over the past period of $T$ days.

It is extremely difficult to obtain a portfolio with perfect correlation (unity) among all its components. In such a case we would talk about the undiversified VaR, which can be computed as the sum of the $\mbox{VaR}_i$'s provided, as an additional condition, that no short position is held in the portfolio. Risk managers often report the risk of a live portfolio by quoting both the diversified and undiversified VaR.
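As a quick numerical illustration, here is a hedged sketch, with made-up exposures, volatilities, and correlation (not the portfolio analysed later in this post), of the diversified versus undiversified 95% VaR of a toy 2-asset portfolio, following the formulas above:

```python
from math import sqrt

alpha = 1.65                     # 95% confidence, delta-normal model
d = [200000.0, 100000.0]         # hypothetical dollar exposures d_i
vol = [0.02, 0.03]               # assumed daily volatilities sigma_i
rho = 0.5                        # assumed correlation between the two assets

# covariance matrix M2 built from volatilities and correlation
M2 = [[vol[0]**2,         rho*vol[0]*vol[1]],
      [rho*vol[0]*vol[1], vol[1]**2        ]]

# diversified VaR: alpha * sqrt(d' M2 d)
var_dollar = sum(d[i]*M2[i][j]*d[j] for i in range(2) for j in range(2))
VaR_P = alpha * sqrt(var_dollar)

# undiversified VaR: sum of individual VaRs (valid with no short positions)
uVaR_P = sum(alpha*abs(vol[i]*d[i]) for i in range(2))

print(VaR_P, uVaR_P)  # the diversified VaR is the smaller of the two
```

With any correlation below unity, the diversified VaR comes out strictly lower than the sum of the individual VaRs.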

Now, imagine that we add one unit of the $i$-th asset to our portfolio and we aim at understanding the impact of this action on the portfolio VaR. In this case we talk about the marginal contribution to risk, which can be derived by differentiating the portfolio volatility with respect to $w_i$:
$$\frac{\mbox{d}\sigma_P}{\mbox{d}w_i} = \frac{\mbox{d}}{\mbox{d}w_i} \left[ \sum_{i=1}^{N} w_i^2\sigma_i^2 + \sum_{i=1}^N \sum_{j=1, j\ne i}^{N} w_iw_jc_{ij} \right]^{1/2}$$
$$= \frac{1}{2} \left[ 2w_i\sigma_i^2 + 2 \sum_{j=1, j\ne i}^{N} w_jc_{ij} \right] \left[ \sum_{i=1}^{N} w_i^2\sigma_i^2 + \sum_{i=1}^N \sum_{j=1, j\ne i}^{N} w_iw_jc_{ij} \right]^{-1/2}$$
$$= \frac{w_i\sigma_i^2 + \sum_{j=1, j\ne i}^{N} w_j c_{ij} } {\sigma_P} = \frac{c_{iP}}{\sigma_P}$$ where, again, $c_{ij}$ denotes the covariance between the $i$-th and $j$-th assets, and $c_{iP}$ that between the $i$-th component and the portfolio. This allows us to define the marginal VaR for the $i$-th asset in the portfolio in the following way:
$$\Delta\mbox{VaR}_i = \frac{\mbox{dVaR}_P}{\mbox{d}d_i} = \frac{\mbox{dVaR}_P}{\mbox{d}w_iC} = \frac{\mbox{d}(\alpha\sigma_P C)}{\mbox{d}w_iC} = \alpha\frac{C}{C} \frac{\mbox{d}\sigma_P}{\mbox{d}w_i} = \alpha \frac{c_{iP}}{\sigma_P}$$ that has a close relation to the systematic risk of $i$-th asset as confronted with the portfolio itself, described by:
$$\beta_i = \frac{c_{iP}}{\sigma_P^2}$$ therefore
$$\Delta\mbox{VaR}_i = \alpha \sigma_P \beta_i = \beta_i \frac{ \mbox{VaR}_P } {C}$$ The most practical interpretation of the marginal VaR calculated for all positions in the portfolio is: the higher the $\Delta\mbox{VaR}_i$, the more the exposure of the corresponding $i$-th component should be reduced in order to lower the overall portfolio VaR. Simple as that. Hold on, we will see that at work in the calculations a little bit later.

Since the portfolio volatility is a highly nonlinear function of its components, the simplistic computation of individual VaRs and adding them all up injects error into the portfolio risk estimation. However, with the help of the marginal VaR, we gain a tool to capture the fireflies amongst the darkness of the night.

We define the component VaR as:
$$\mbox{cVaR}_i = \Delta\mbox{VaR}_i d_i = \Delta\mbox{VaR}_i w_i C = \beta_i \frac{ \mbox{VaR}_P w_i C } {C} = \beta_i w_i \mbox{VaR}_P$$ utilising the observation that:
$$\sigma_P^2 = \sum_{i=1}^{N} w_i c_{iP} = \sum_{i=1}^{N} w_i \beta_i \sigma_P^2 = \sigma_P^2 \sum_{i=1}^{N} w_i \beta_i$$ i.e. that $\sum_{i=1}^{N} w_i\beta_i = 1$, which very nicely leads us to:
$$\sum_{i=1}^{N} \mbox{cVaR}_i = \mbox{VaR}_P \sum_{i=1}^{N} w_i \beta_i = \mbox{VaR}_P \ .$$ Having that, we get the percent contribution of the $i$-th asset to the portfolio VaR as:
$$\mbox{cVaR}_i\,[\%] = \frac{ \mbox{cVaR}_i } { \mbox{VaR}_P } = w_i \beta_i \ .$$ It is possible to calculate the required changes in the position sizes based on our VaR analysis. We obtain them by re-evaluating the portfolio's marginal VaRs so that they meet the following condition:
$$\Delta\mbox{VaR}_i = \beta_i \frac{ \mbox{VaR}_P } { C } = \ \mbox{constant,}$$ which is the subject of finding a solution to the "Risk-Minimising Position" problem. All right. The end of the lecture. Time for coding!
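The decomposition identities above can be sanity-checked with a short sketch (toy numbers, not the case-study portfolio below): the component VaRs, built from the marginal VaRs, must sum exactly to the diversified portfolio VaR:

```python
from math import sqrt

alpha = 1.65
d = [200000.0, 100000.0]        # hypothetical dollar exposures
M2 = [[0.0004, 0.0003],         # assumed covariance matrix
      [0.0003, 0.0009]]

# dollar covariance of each asset with the portfolio: (M2 d)_i
Md = [sum(M2[i][j]*d[j] for j in range(2)) for i in range(2)]
vol_p = sqrt(sum(d[i]*Md[i] for i in range(2)))   # portfolio dollar volatility

VaR_P = alpha*vol_p
DVaR  = [alpha*Md[i]/vol_p for i in range(2)]     # marginal VaR per asset
cVaR  = [DVaR[i]*d[i] for i in range(2)]          # component VaR (in dollars)

# the sum of component VaRs recovers the portfolio VaR
print(sum(cVaR), VaR_P)
```

This is exactly the additivity property that makes the component VaR, unlike the individual VaR, a consistent attribution of the portfolio risk.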

Case Study

Do you like sex? I do! So, let's do it in Matlab this time! First, we pick seven stocks at random among US equities (for the simplicity of this example); their tickers are given in the portfolio.lst text file as follows:

AAPL DIS IBM JNJ KO NKE TXN

and next we download 3 years of their historical data using Quandl.com as the data provider:

```matlab
% Applied Portfolio VaR Decomposition. (1) Marginal and Component VaR.
%
% (c) 2015 QuantAtRisk.com, by Pawel Lachowicz

clear all; close all; clc;
format short

% Read the list of the portfolio components
fileID = fopen('portfolio.lst');
tmp = textscan(fileID, '%s');
fclose(fileID);
pc=tmp{1};  % a list as a cell array

% fetch stock data for last 3 years since:
t0=735976;  % 12-Jan-2015
date2=datestr(t0,'yyyy-mm-dd');        % to
date1=datestr(t0-3*365,'yyyy-mm-dd');  % from

% create an empty array for storing stock data
stockd={};
% scan the tickers and fetch the data from Quandl.com
for i=1:length(pc)
    quandlc=['WIKI/',pc{i}];
    fprintf('%4.0f %s\n',i,quandlc);
    % fetch the data of the stock from Quandl
    % using recommended Quandl's command and
    % saving them directly into Matlab's FTS object (fts)
    fts=0;
    [fts,headers]=Quandl.get(quandlc,'type','fints', ...
                  'authcode','YourQuandlCode',...
                  'start_date',date1,'end_date',date2);
    stockd{i}=fts; % entire FTS object in an array's cell
end

% limit data to 3 years of trading days, select the adjusted
% close price time-series and calculate return-series
rs=[];
for i=1:length(pc)
    cp=fts2mat(stockd{i}.Adj_Close,0);
    rv=cp(2:end,1)./cp(1:end-1,1)-1;
    rs=[rs rv(end-(3*5*4*12):end)];
end
rs(isnan(rs))=0.0;
```

The covariance matrix, $\boldsymbol{M}_2$, computed based on 7 return-series is:

```matlab
% covariance matrix
M2=cov(rs);
```

Let’s now assume some random dollar exposure, i.e. a position size for each asset,

```matlab
% exposure per position
d=[   55621.00; ...
     101017.00; ...
      23409.00; ...
    1320814.00; ...
     131145.00; ...
     321124.00; ...
    1046867.00]

% invested capital of C
C=sum(d)
```

returning our total exposure of C = \$2,999,997 in the stock market, based on the number of shares purchased and/or our risk-return appetite (in Part 2 of this series, we will witness how that works in simulated trading). We turn the long theory of VaR decomposition into numbers within just a few lines of code:

```matlab
% the portfolio volatility
vol_p=sqrt(d'*M2*d);
% diversified portfolio VaR
VaR_P=1.65*vol_p;
% volatility of assets (based on daily returns)
v=sqrt(diag(cov(rs)));
% individual and undiversified portfolio VaR
VaRi=1.65*abs(v.*d);
uVaR_P=sum(VaRi);
% the portfolio betas
beta=C*(M2*d)/vol_p^2;
% the portfolio marginal VaR
DVaR=1.65*(M2*d)/vol_p;
% the component VaR [percent]
cVaR=100*DVaR.*d/VaR_P;
% the component VaR [in dollars] (used later by the risk report)
cVaRd=DVaR.*d;
% initial positions of assets in portfolio [percent]
orgpos=100*d/C
```

The last column vector provides us with information on the original (currently held) exposure of our invested capital as a function of the asset number in the portfolio:

```
orgpos =
    1.85
    3.37
    0.78
   44.03
    4.37
   10.70
   34.90
```

Before we generate our very first risk report, let's develop a short piece of code that finds a solution to the "Risk-Minimising Position" problem. Again, we want to find new position sizes (different from orgpos) at the lowest possible level of portfolio risk.
We construct a dedicated function that does the job for us:

```matlab
% Finding a solution for the "Risk-Minimising Position" problem
% based on portfolio VaR
% (c) 2015 QuantAtRisk, by Pawel Lachowicz

function [best_x,best_dVaR,best_VaRp,best_volp]=rmp(M2,d,VaR_p)
    c=sum(d);
    N=length(d);
    best_VaRp=VaR_p;
    for i=1:100   % an arbitrary number (recommended >50)
        k=0;
        while(k~=1)
            d=random('uniform',0,1,N,1);
            d=d/sum(d)*c;
            vol_p=sqrt(d'*M2*d);
            % diversified VaR (portfolio)
            VaRp=1.65*vol_p;
            dVaR=1.65*(M2*d)/vol_p;
            test=fix(dVaR*100);   % set precision here
            m=fix(mean(test));
            t=(test==m);
            k=sum(t)/N;
            if(VaRp<best_VaRp)
                best_x=d;
                best_dVaR=dVaR;
                best_VaRp=VaRp;
                best_volp=vol_p;
            end
        end
    end
end
```

We feed it with three input parameters: the covariance matrix $\boldsymbol{M}_2$, the current dollar exposure per asset $\boldsymbol{d}$, and the current portfolio VaR. Inside, we randomly generate a new dollar-exposure column vector and re-calculate the portfolio VaR and marginal VaRs until the condition
$$\Delta\mbox{VaR}_i = \beta_i \frac{ \mbox{VaR}_P } { C } = \ \mbox{constant}$$
is met. We need to be sure that the new portfolio VaR is lower than the original one and, as far as possible, the lowest among a large number of random candidates. That is why we repeat the step 100 times. As a technical remark, I would suggest using more than 50 iterations and experimenting with the precision required for the marginal VaRs (or $\beta$'s) to be (nearly) equal. The greater both golden numbers are, the longer the wait for the final solution. It is also the apex of the day in a bank's daily portfolio evaluation process, which assesses the bank's market risk and global exposure. A risk manager knows how long it takes to find new risk-minimised positions, and how fast C/C++/GPU solutions need to be implemented to cut the time of those computations. Here, with our Matlab function rmp, we can feel the pain of this game.
Eventually, we finalise our computations:

```matlab
[nd,nDVaR,nVaR_P,nvol_P]=rmp(M2,d,VaR_P);
redVaR=100*(nVaR_P-VaR_P)/VaR_P;
newpos=100*nd/C;
```

followed by a nicely formatted report:

```matlab
fprintf('\n============================================\n');
fprintf(' Portfolio Risk Report as of %s\n',datestr(t0));
fprintf('============================================\n\n');
fprintf('1-day VaR estimation at %1.0f%% confidence level\n\n',95.0)
fprintf('Number of assets:\t\t %1.0f \n',length(d))
fprintf('Current exposure:\t\t %1s\n',bankformat(C));
fprintf('Portfolio VaR (undiversified):\t %1s\n',bankformat(uVaR_P));
fprintf('Portfolio VaR (diversified):\t %1s\n',bankformat(VaR_P));
fprintf('Component VaR [individual VaR]\n');
for i=1:length(d)
    fprintf(' %s\t%6.2f%% %-10s\t[%-10s]\n',pc{i},cVaR(i), ...
            bankformat(cVaRd(i)),bankformat(VaRi(i)));
end
fprintf('\nRisk-Minimising Position scenario\n');
fprintf('--------------------------------------------------------------');
fprintf('\n\t\t Original Marginal New Marginal\n');
fprintf('\t\t position VaR position VaR\n');
for i=1:length(d)
    fprintf('\t %-4s\t %6.2f%% %1.5f\t %6.2f%% %1.2f\n', ...
            pc{i},orgpos(i), ...
            DVaR(i),newpos(i),nDVaR(i))
end
fprintf('div VaR\t\t %1s\t\t %1s\n',bankformat(VaR_P), ...
        bankformat(nVaR_P));
fprintf('\t\t\t\t\t%10.2f%%\n',redVaR);
fprintf('ann Vol\t\t%10.2f%%\t\t%10.2f%%\n', ...
        sqrt(252)*vol_p/1e6*1e2,sqrt(252)*nvol_P/1e6*1e2);
fprintf('--------------------------------------------------------------');
fprintf('\n\n');
```

where the bankformat function you can find here. As a portfolio risk manager you are interested in numbers, numbers, and numbers that, in fact, reveal a lot.
It is of paramount importance to analyse the portfolio risk report with attention before taking any further steps or decisions:

```
============================================
 Portfolio Risk Report as of 12-Jan-2015
============================================

1-day VaR estimation at 95% confidence level

Number of assets:               7
Current exposure:               2,999,997.00
Portfolio VaR (undiversified):  54,271.26
Portfolio VaR (diversified):    33,027.94
Component VaR [individual VaR]
 AAPL    0.61%      202.99    [ 1,564.29]
 DIS     1.71%      564.13    [ 1,885.84]
 IBM     0.26%       84.43    [   429.01]
 JNJ    30.99%   10,235.65    [17,647.63]
 KO      1.19%      392.02    [ 2,044.05]
 NKE     9.10%    3,006.55    [ 7,402.58]
 TXN    56.14%   18,542.15    [23,297.83]

Risk-Minimising Position scenario
--------------------------------------------------------------
                Original  Marginal       New  Marginal
                position       VaR  position       VaR
         AAPL     1.85%    0.00365     6.02%      0.01
         DIS      3.37%    0.00558     1.69%      0.01
         IBM      0.78%    0.00361    10.79%      0.01
         JNJ     44.03%    0.00775    37.47%      0.01
         KO       4.37%    0.00299    27.36%      0.01
         NKE     10.70%    0.00936     8.13%      0.01
         TXN     34.90%    0.01771     8.53%      0.01
div VaR       33,027.94              26,198.39
                                       -20.68%
ann Vol          31.78%                 25.21%
--------------------------------------------------------------
```

From the risk report we can spot a difference of \$21,243.32 between the undiversified and diversified portfolio VaR, and the largest contribution to the overall portfolio VaR from the Texas Instruments Incorporated stock (TXN), although it is the Johnson & Johnson stock (JNJ) that occupies the largest dollar position in our portfolio. The original marginal VaR is largest for TXN, NKE, and JNJ. Therefore, in order to minimise the portfolio VaR, we should cut and/or reduce those positions. On the other hand, the exposure should be increased in KO, IBM, AAPL, and DIS, which display the lowest marginal VaR. Both suggestions find their solution in the derived new holdings, which would reduce the portfolio VaR by 20.68% while concurrently decreasing the annualised portfolio volatility by 6.57%. Indeed. Risk is like sex.
The more the merrier! Unless you are a risk manager and you calculate your "positions".

## Gap-on-Open Profitable Trading Strategy

After a longer while, QuantAtRisk is back to business. As an algo trader I have always been tempted to test a gap-on-open trading strategy. There were various reasons behind it, but the most popular one has always been omni-discussed: good/bad news on the stock. And what? The stock price skyrocketed/dropped on the following days. When we approach such price patterns, we talk about triggers or triggered events. The core of the algorithm's activity is trigger identification and taking the proper action: going long or short. That's it. In both cases we want to make money.

In this post we will design the initial conditions for our gap-on-open trading strategy, acting as the triggers, and we will backtest a realistic scenario of betting our money on those stocks that opened higher on the next trading day. Our goal is to find the most optimal holding period for such trades closed with a profit.

Portfolio

Our strategy can be backtested using any $N$-asset portfolio. Here, for simplicity, let us use a random subset of 10 stocks (portfolio.lst) being part of the current Dow Jones Index:

AXP CSCO DIS IBM JNJ KO NKE PG UTX XOM

In Matlab, we fetch the stock prices from the Google Finance data provider, accessible via Quandl.com's Matlab API (see this post for its setup in Matlab).
We commence writing our main backtesting code as follows:

```matlab
 1 % Gap on Open Trading Strategy
 2 % Fetching stock prices via Quandl and Strategy Backtesting
 3 %
 4 % (c) 2014 by Pawel Lachowicz, QuantAtRisk.com
 5
 6 clear all; close all; clc;
 7
 8 fname=['portfolio.lst'];
 9
10
11 % Model's parameter #1 (years)
12 parm1=1;
13 ndays=parm1*365;
14 lday=datenum('2014-08-05');
15 % fetching stock data
16 [Top,Thp,Tlp,Tcp,N,ntdays]=FetchQuandl(fname,ndays,lday);
```

where we use a pre-designed function, FetchQuandl, to import four separate price-series for each stock: open (Top), high (Thp), low (Tlp), and close (Tcp) daily prices:

```matlab
function [Top,Thp,Tlp,Tcp,N,ntdays]=FetchQuandl(fname,ndays,lday)
    % Read the list of Dow Jones components
    fileID = fopen(fname);
    tmp = textscan(fileID,'%s');
    fclose(fileID);
    components=tmp{1}; % a list as a cell array
    % Read in the list of tickers and internal codes from Quandl.com
    [~,text,~] = xlsread('QuandlStockCodeListUS.xlsx');
    quandlc=text(:,1);      % again, as a list in a cell array
    quandlcode=text(:,3);   % corresponding Quandl's Price Code
    % fetch stock data for last 'ndays'
    date2=datestr(lday,'yyyy-mm-dd');        % to
    date1=datestr(lday-ndays,'yyyy-mm-dd');  % from
    Top={}; Tcp={};
    % scan all tickers and fetch the data from Quandl.com
    for i=1:length(components)
        for j=1:length(quandlc)
            if(strcmp(components{i},quandlc{j}))
                fprintf('%4.0f %s\n',i,quandlc{j});
                fts=0;
                [fts,headers]=Quandl.get(quandlcode{j},'type','fints', ...
                              'authcode','PutHereYourQuandlCode',...
                              'start_date',date1,'end_date',date2);
                cp=fts2mat(fts.Close,1); Tcp{i}=cp; % close price-series
                op=fts2mat(fts.Open,1);  Top{i}=op; % open price-series
                hp=fts2mat(fts.High,1);  Thp{i}=hp; % high price
                lp=fts2mat(fts.Low,1);   Tlp{i}=lp; % low price
                %Rcp{i}=cp(2:end,2)./cp(1:end-1,2)-1; % return-series cp
            end
        end
    end
    N=length(components);
    ntdays=length(Tcp{1});
end
```

Please note that in line #12 we specified the number of years, i.e. how far backwards in time our backtest should extend (or the number of calendar days; see line #13), counting from the day specified in line #14 (the last day).

Trading Model

First, let us design the trading strategy. We scan the four price-series concurrently for each stock separately. We define the strategy's trigger as follows:
$$O_t > C_{t-1} \ \ \mbox{and} \ \ L_t > H_{t-1}$$
i.e. the stock's open price on day $t$ was higher than its close price on day $t-1$, and its lowest price on day $t$ was higher than its highest price on day $t-1$. Having that, we make a BUY LONG decision! We buy the stock on the next day at its market price (close price). This approach should effectively remove the slippage bias (see more on slippage in stock trading here).

Now, we run the backtest over each stock and each open trade. We select the second parameter (parm2) to be the number of days we hold the stock. In the following piece of code, let us allow the stock to be sold after/between 1 and 21 calendar days ($\pm$ weekends or public holidays):

```matlab
18 % pre-defined matrix for backtest final results
19 results=[];
20
21 for parm2=0:20
22     cR=[];
23
24     for i=1:N
25
26         % just for a purpose of plotting of price-series
27         if(i==1)
28             % open (blue color)
29             plot(Top{i}(:,1),Top{i}(:,2),'')
30             hold on
31             % close (red color)
32             plot(Tcp{i}(:,1),Tcp{i}(:,2),'r')
33             hold on
34             % high (green color)
35             plot(Thp{i}(:,1),Thp{i}(:,2),'g')
36             %
37             xlabel('Days');
38             ylabel('AXP Stock Prices [US$]');
39         end
40
41         Tbuy=[];
42         for t=2:ntdays
43             % define indicators
44             ind1=Tcp{i}(t-1,2); % cp on (t-1)day
45             ind2=Thp{i}(t-1,2); % hp on (t-1)day
46             ind3=Top{i}(t,2);   % op on (t)day
47             ind4=Tlp{i}(t,2);   % lp on (t)day
48             % detect trigger
49             if(ind1<ind3)&&(ind2<ind4)
50                 % plotting only for AXP
51                 if(i==1)
52                     hold on; plot(Top{i}(t,1),Top{i}(t,2),'o');
53                 end
54                 % date of a trigger
55                 tday=Top{i}(t,1);
56                 nextbusdate=busdate(tday,1); % find next trading date
57                 Tbuy=[Tbuy; nextbusdate];
58             end
59         end
60         Tsell=busdate(Tbuy+parm2,1);
```

Here, in lines #57 and #60, we constructed time arrays storing the dates of those trades. Now we will use them to check the price at each trade's open and close and derive the profit and loss for each stock:

```matlab
62         R=[];
63         for k=1:length(Tbuy)
64             j=find(Tbuy(k)==Tcp{i}(:,1));
65             pbuy=Tcp{i}(j,2);
66             j=find(Tsell(k)==Tcp{i}(:,1));
67             psell=Tcp{i}(j,2);
68             ret=(psell/pbuy-1); % return per trade
69             R=[R; ret];
70         end
71
72         compR=prod(R+1)-1; % compound return per stock
73         cR=[cR; compR];
74
75     end
76
77     results=[results cR];
78
79 end
```

In the inner loop (lines #24 to #75, i.e. tracking the stocks in the portfolio; index $i$, here 1 to 10) we capture all trades per stock (lines #63 to #70) and calculate the multi-period compound return (line #72), as if we were trading that stock solely using our model.

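The compounding in line #72 is simply $\prod_k (1+R_k) - 1$ over all closed trades of a stock; a minimal Python equivalent, with made-up trade returns purely for illustration, reads:

```python
def compound_return(per_trade_returns):
    """Multi-period compound return, prod(R+1)-1, as in the Matlab code."""
    total = 1.0
    for r in per_trade_returns:
        total *= (1.0 + r)
    return total - 1.0

# e.g. three hypothetical closed trades: +2%, -1%, +3%
R = [0.02, -0.01, 0.03]
print(compound_return(R))
```

Note that compounding is order-independent but not additive: the three trades above yield slightly more than the naive 4% sum.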
For instance, for stock $i=1$ (AXP) from our portfolio, our code displays 1-year price-series:
where the days meeting our trigger criteria have been denoted by open-circle markers. If you now re-run the backtest, making a gentle substitution in line #24 so that it reads:

```matlab
24     for i=1:1
```

we can inspect the result for AXP alone by running a few extra lines of code:

```matlab
81 figure(2)
82 stem((0:20),100*results)
83 xlabel('Holding Period [days]');
84 ylabel('AXP: Compound Return [%]');
```

we obtain an appealing result:

The chart reveals that for AXP, over the past 251 trading days (counting backwards from Aug 4, 2014), we had 16 triggers and therefore 16 trades and, surprisingly, regardless of the holding period, the compound return from all closed trades was highly positive (profitable).

This is not the case if we consider, for example, $i=4$, IBM stock:
This result indicates that for different holding periods (and different stocks, of course) certain additional trading indicators should be applied to limit the losses (e.g. profit targets).

If we traded the whole portfolio using our gap-on-open model, we would end up with a very encouraging result:
where for each holding period we display the compound return averaged over the 10 stocks. Taking into account the global up-trend in the US stock markets between August 2013 and August 2014, this strategy is worth consideration, with further modifications (e.g. short or both long and short triggers, FX time-series, etc.).

Someone wise once said: Sometimes you win. Sometimes you learn. In algo trading we all learn to win.

In next post…
Marginal Value-at-Risk for Portfolio Managers


## Visualisation of N-Asset Portfolio in Matlab

Many of you who wished to use Matlab's Financial Toolbox for portfolio visualisation most probably realised that something was wrong: the annualised risk and return for different time-series did not always match what was expected. Below you can find an improved piece of code which does the job correctly.

The input sampling of your financial time-series can now be specified directly as daily, weekly, monthly, or quarterly, and the risk-return diagram will be displayed with annualised results. Alternatively, instead of annualised values, you can specify another time-horizon. All smooth and easy.
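For reference, the annualisation convention used here is: volatility scales with the square root of the number of periods, while returns compound geometrically. A small sketch of that convention, with hypothetical daily figures:

```python
from math import sqrt

def annualise(period_ret, period_vol, periods=252):
    """Annualised return and volatility from per-period figures."""
    ann_ret = (1.0 + period_ret)**periods - 1.0   # geometric compounding
    ann_vol = period_vol * sqrt(periods)          # square-root-of-time rule
    return ann_ret, ann_vol

ret, vol = annualise(0.0005, 0.01)   # assumed daily mean return and volatility
print(ret, vol)
```

Multiplying the mean daily return by 252 instead of compounding it is exactly the kind of shortcut that produces the mismatches mentioned above.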

```matlab
% Specialised plotting function for visualisation of portfolios
% based on the Financial Toolbox's portfolio objects
%
% Modified by Pawel Lachowicz, 2014/03/01
%
% Remarks: Essential corrections introduced in the calculation and plotting
%   of the annualised risk and returns coded incorrectly by the team at
%   The MathWorks, Inc.

function plotportfolio(varargin)

    plottitle = varargin{1};   % plot title, e.g. 'Efficient Frontier'
    plotfreq  = varargin{2};   % frequency of raw data (dt; or 'daily', etc.)
    plotfreq2 = varargin{3};   % e.g. 'annualised'
    plotfreq3 = varargin{4};
    plotlegend = [];

    switch plotfreq2
        case 'annualised'
            switch plotfreq
                case 'daily'
                    periods=252;
                case 'weekly'
                    periods=52;
                case 'monthly'
                    periods=12;
                case 'quarterly'
                    periods=4;
            end
        otherwise
            periods=plotfreq3;
    end

    for i = 5:nargin
        plotinfo = varargin{i};

        plottype = plotinfo{1};
        plotrsk  = plotinfo{2};
        plotret  = plotinfo{3};
        if numel(plotinfo) > 3
            plotlabel = plotinfo{4};
        else
            plotlabel = [];
        end
        if numel(plotinfo) > 4
            plotstyle = plotinfo{5};
            if isempty(plotstyle)
                plotstyle = 'b';
            end
        else
            if strcmpi(plottype,'line')
                plotstyle = 'b';
            else
                plotstyle = 'g';
            end
        end
        if numel(plotinfo) > 5
            plotline = plotinfo{6};
            if isempty(plotline)
                plotline = 1;
            end
        else
            if strcmpi(plottype,'line')
                plotline = 2;
            else
                plotline = [];
            end
        end

        % line plot
        if strcmpi(plottype,'line')
            hold on;
            for k = 1:size(plotrsk,2)
                plot(plotrsk(:,k)*sqrt(periods),((plotret(:,k)+1).^periods-1), ...
                     plotstyle, 'LineWidth', plotline);
                if i == 2 && k == 1
                    hold on
                end
                if ~isempty(plotlabel) && ~isempty(plotlabel{k})
                    plotlegend = [ plotlegend, {plotlabel{k}} ];
                end
            end

        % scatter plot
        else
            if any(plotstyle == '.')
                scatter(plotrsk*sqrt(periods),((plotret+1).^periods-1),'or','Filled');
            else
                scatter(plotrsk*sqrt(periods),((plotret+1).^periods-1),'og','Filled');
            end
            if i==2
                hold on
            end
            if ~isempty(plotlabel)
                for k = 1:numel(plotrsk)
                    if ~isempty(plotlabel{k})
                        text(plotrsk(k)*sqrt(periods)+0.005, ...
                             ((plotret(k)+1).^periods-1), plotlabel{k},'FontSize', 9);
                    end
                end
            end
        end
    end

    if ~isempty(plotlegend)
        legend(plotlegend,'Location','SouthEast');
    end
    if strcmp(plotfreq2,'annualised')
        xlabel('Annualised Volatility');
        ylabel('Annualised Expected Returns');
    else
        xlabel('Volatility');
        ylabel('Returns');
    end
    grid on

end
```

Now you can call it with various parameters of interest, matching your data directly, for instance:

```matlab
plotportfolio('Portfolio',dt,'annualised',[], ...
    {'line', prsk, pret}, ...
    {'scatter',CashRsk,CashRet, {'Cash'}}, ...
    {'scatter', sqrt(diag(p.AssetCovar)), p.AssetMean, p.AssetList,'.r'});
```

For some awesome practical application of this function in the process of construction and analysis of $N$-asset portfolio grab Applied Portfolio Optimization with Risk Management using Matlab ebook today.

## Asset Allocation for Tangent Portfolio with Risk-Free Asset in Python


Studying a new programming language is an adventurous journey. It takes time in the case of Python. Even for me. Not every concept is easy to grasp, and testing a piece of code with its endless modifications may take a longer while. We have two choices: either get a good book like the upcoming Python for Quants, or sit down and apply a copy-and-paste-and-test method. The latter approach is much longer but rewarding in the middle of our learning process. Hit and try. So, why not try?

You cannot write code that performs complex operations while omitting the necessary set of functions. There is no need to reinvent the wheel. In Python, the community works hard to deliver the best programming solutions. In this lesson, we will accelerate by investigating Python code aimed at finding the optimised weights for the tangent portfolio problem.

In the QaR ebook on Applied Portfolio Optimization with Risk Management using Matlab, we discussed in great detail the theory and practical calculations for various cases of portfolios with different objective functions. Markowitz (1952) underlined that the goal of portfolio choice was either to look for a portfolio that delivers maximum return at a given level of risk, or minimum risk for a given level of return. Based on the theoretical works of Sharpe (1964), Lintner (1965), and Tobin (1958), the risk-free asset was shown to equip the investor with better control over risk.

We can split the budget into fractions of our capital designated for an investment in the risk-free option (e.g. the savings account in a bank) while the rest will be delegated to other assets with diversified risk levels. It was shown that for any portfolio with the risk-free component, the expected return is:
$$R_P = mw^T + (1-{\bf 1}w^T)r_f \ \ \ \mbox{at} \ \ \ \sigma_P^2 = wCw^T$$ where by $C$, $m$, and $w$ the covariance matrix, the vector of individual expected asset returns, and the asset weights have been denoted, respectively. If we aim at the variance, $\sigma_P^2$, to be minimised:
$$\min_{w} \ \ wCw^T$$ subject to $mw^T + (1-{\bf 1}w^T)r_f = r_{\rm target}$, we formulate the minimum variance portfolio optimisation problem. It occurs that all minimum variance portfolios are a combination of the risk-free asset and a given risky portfolio. The latter is often called the tangent portfolio, and it has been shown that it must contain all assets available to investors (each held in quantity proportional to its market value relative to the total market value of all assets). That gives the tangent portfolio its second name: the market portfolio. The objective function of the form:
$$\max_{w} \ \ \frac{mw^T-r_f}{\sqrt{wCw^T}}$$ subject to ${\bf 1}w^T=1$ is called the Sharpe ratio and corresponds to the market portfolio directly. The risk-free asset is connected with the tangent portfolio by a straight line, therefore providing an investor with a good blend of risk-controlled portfolios. Modern Portfolio Theory tells us that the tangent portfolio is given by:
$${\bf w} = C^{-1}({\bf m} - {\bf 1}r_f)$$ (subsequently rescaled so that the weights sum to one), where the vector ${\bf w}$ stores the computed weights for each asset in portfolio $P$. Since finding the tangent portfolio given $N$ assets is, luckily, an analytical problem (i.e. one not requiring a solver), this makes it a straightforward task to code in Python, as follows.

```python
 1  # Asset Allocation for Tangent Portfolio with Risk-Free Asset in Python
 2  # Accelerated Python for Quants Tutorial, Lesson 4
 3  # (c) 2014 QuantAtRisk
 4
 5  from numpy import matrix, power
 6  from math import sqrt
 7
 8  def TangentPortfolio(m,C,rf):
 9      # find number of rows and columns for the covariance matrix
10      (nr,nc)=C.shape
11      A=matrix([[0.0] for r in xrange(nr)])
12      A=(1/C)*(m-rf)
13      (nr,nc)=A.shape
14      A=A/sum(A[r,0] for r in xrange(nr))
15      w=[A[r,0] for r in xrange(nr)]
16      pret=m.T*A
17      prsk=power(A.T*(C*A),0.5)
18      return matrix(w),pret,prsk
```

Here we can see a new element of the Python language: the definition and usage of a function. We start its syntax with def, then write the desired function name with input parameters in brackets, and end the line with a colon. The body of the function must always be indented (minimum 4 spaces; don't use tabs!), and if some results are intended to be sent out of the function, a return statement listing all variables of interest should be specified at the end.

In addition, we make use of the numpy module, from which we import only the two functions that we will be using. The first one is matrix, which allows us to implement matrix and vector notation explicitly (we avoid Python's lists and arrays at this stage). The dimensions of any matrix M can be returned as a tuple, as shown in line #10.

Please note how Python eases our life. In line #15 we create a new one-row vector (matrix) referring directly to selected elements of another matrix by placing a for…in loop inside the definition of w itself. How brilliant it is! Lastly, using the power function, we raise its first argument to the power of 1/2, i.e. we compute a square root.

To see some action, let us first define an exemplary covariance matrix and a vector of expected returns corresponding to 3 assets in the portfolio:

```python
20  cov=matrix([[0.04, 0.004, 0.02],[0.004, 0.09, 0.09],[0.02, 0.09, 0.16]])
21  mu=matrix([[0.13],[0.11],[0.19]])
22  rf=0.05
23
24  w,ret,rsk=TangentPortfolio(mu,cov,rf)
```

where investing at the risk-free rate of 5% has been added to complete the grand picture of the problem we discuss here. Line #24 shows the way we call our function and assign the values calculated within it to outer variables (their names can be different). We display the results on the screen by typing:

```python
26  print("Portfolio weights")
27  print(w.T)
28
29  print("Expected Portfolio Return and Risk")
30  print ret,rsk
```

which returns:

```
Portfolio weights
[[ 0.46364368]
 [ 0.4292997 ]
 [ 0.10705661]]

Expected Portfolio Return and Risk
[[ 0.1278374]] [[ 0.19715402]]
```

which simply communicates that the expected portfolio return equals 12.8% at 19.7% of risk if we allocate 46%, 43%, and 11% to assets number 1, 2, and 3, respectively.

We find that the Sharpe ratio,

```python
32  sharpe=(ret-rf)/rsk
33  print(sharpe)
```

equals 0.3948, which is closest to the Sharpe ratio of the first asset:

```python
35  for r in xrange(3): print((mu[r,0]-rf)/sqrt(cov[r,r]))
```

which gives:

```
0.4
0.2
0.35
```

Finally, if we denote by $\zeta$ the fraction of capital we want to invest in risky assets, leaving $(1-\zeta)$ in the bank at the $r_f=5\%$ rate, then the expected portfolio return will be:
$$\zeta\, mw^T+(1-\zeta)r_f \ \ \ \mbox{at} \ \ \ \zeta\sqrt{wCw^T}$$ of risk; therefore, for two different cases, for example:

```python
37  alpha=0.7
38  print(((matrix(alpha)*w)*mu)+(1-alpha)*rf)
39  print(matrix(alpha)*power(w*cov*w.T,1))
40
41  alpha=0.25
42  print(((matrix(alpha)*w)*mu)+(1-alpha)*rf)
43  print(matrix(alpha)*power(w*cov*w.T,1))
```

we get

```
[[ 0.10448618]]
[[ 0.0272088]]

[[ 0.06945935]]
[[ 0.00971743]]
```

which confirms that putting, for instance, 70% of our capital into the three stocks should result in a 10.4% gain at a 2.7% rate of risk, while an allocation of 75% of the capital in the bank promises a 6.9% return, i.e. approaching the earlier defined risk-free rate of 5% p.a.
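As a quick sanity check of that arithmetic, a few lines of plain Python (not part of the original listing; the values of ret and rf are taken from the printout earlier in this lesson) reproduce the blended expected returns:

```python
# zeta-blend of the tangent-portfolio expected return with the risk-free rate;
# ret and rf are the values printed earlier in this lesson
ret, rf = 0.1278374, 0.05

for zeta in (0.7, 0.25):
    blended = zeta * ret + (1 - zeta) * rf
    print(round(blended, 8))
```

The loop prints 0.10448618 and 0.06945935, matching the first and third numbers of the output above.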

Hungry for Python? Want some more? Stay tuned!

A new ebook, Python for Quants, is coming this July! Don't miss the full introductory course to Python with many other practical examples ready to run and use in your projects:

## Roy’s Safety-First Criterion in Portfolio Optimization Problem

The main objective of portfolio selection is the construction of a portfolio that maximises expected return given a certain tolerance for risk. There is an intuitive problem with the portfolio variance as a measure of risk: used in the portfolio optimization context, it deems outcomes that are above the expected portfolio return as risky as outcomes that are below it. The use of portfolio variance as a risk measure has therefore been the subject of many discussions. There's a thin line we step onto.

Risk measures for financial portfolios have been divided into two sub-branches, namely dispersion and downside risk measures. Roy's safety-first criterion (after the paper of Arthur Roy, Safety First and the Holding of Assets, Econometrica, July 1952, 431–450) belongs to the latter: it refers to the investor's control over the investment in such a way that the investor makes sure a certain amount of the invested principal is preserved, so the investor tries to minimise the probability that the return earned is less than or equal to the threshold $t$. Therefore, the optimization problem for a portfolio of $N$ assets with associated weights $w$ is:
$$\min_{w} Pr(r_p\le t)$$ or
$$\min_{w} Pr\left( \sum_{i=1}^{N} w_ir_i \le t \right)$$ where, by a clever observation, one can notice that Roy's safety-first criterion is a special case of the lower-partial moment risk measure,
$$E\left[\max\{t-r_p,0\}^k\right] \equiv \int_{-\infty}^{t} (t-r)^k f(r)\, dr$$ taken at $k=0$. In financial risk management, Roy's safety-first criterion is referred to as shortfall risk; additionally, for $t=0$, the shortfall risk is called the risk of loss.
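The criterion is straightforward to estimate empirically. A minimal Python sketch (the helper names and the sample of monthly returns below are hypothetical, introduced only for illustration) shows that the lower-partial moment of order $k=0$ reduces to the shortfall probability $Pr(r_p \le t)$:

```python
import numpy as np

def shortfall_probability(returns, t=0.0):
    """Roy's safety-first objective: the empirical Pr(r_p <= t)."""
    return np.mean(np.asarray(returns) <= t)

def lower_partial_moment(returns, t=0.0, k=2):
    """Empirical lower-partial moment of order k around threshold t."""
    r = np.asarray(returns)
    return np.mean(np.where(r <= t, (t - r) ** k, 0.0))

# hypothetical monthly portfolio returns
r = [0.02, -0.01, 0.03, -0.04, 0.01, 0.00, -0.02, 0.05]

print(shortfall_probability(r, t=0.0))      # risk of loss (t = 0): 0.5
print(lower_partial_moment(r, t=0.0, k=0))  # k = 0 recovers Pr(r_p <= t): 0.5
```

Minimising the first quantity over the portfolio weights $w$ is exactly the optimization problem stated above.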

For a practical application of this risk measure, check QuantAtRisk's ebook on Applied Portfolio Optimization and Risk Management, available soon from the QaR eBook Store!

## Extreme VaR for Portfolio Managers

Regardless of who you are, either an algo trader hidden in the wilderness of Alaska or an active portfolio manager in a large financial institution in São Paulo, your investments are subject to sudden and undesired movements in the markets. There is one fear. We know it very well. It's a low-probability, high-impact event. An extreme loss. An avalanche hitting your account balance hard. Really hard.

Financial risk management has developed a set of quantitative tools helping to raise awareness of the worst-case scenarios. The two most popular are Value-at-Risk (VaR) and Expected Shortfall (ES). Both numbers tell you the amount of stress you put on your shoulders when the worst indeed takes place.

Quoting a $VaR_{0.05}^{\rm 1-day}$ value, we assume that in 95% of cases we should feel confident that the maximal loss for our portfolio, $C\times VaR_{0.05}^{\rm 1-day}$, will not be exceeded in trading on the next day (1-day VaR), where $C$ is the current portfolio value (e.g. $\$$35,292.13). However, if the unforeseen happens, our best guess is that we will lose $C\times ES_{0.05}^{\rm 1-day}$ dollars, where $ES_{0.05}^{\rm 1-day}$ stands for the 1-day expected shortfall estimate. Usually $ES_{0.05}^{\rm 1-day} > VaR_{0.05}^{\rm 1-day}$. We illustrated this case previously in the VaR and Expected Shortfall vs. Black Swan article using IBM stock trading data. We also discussed a superbly rare case of extreme loss in the markets on Oct 19, 1987 (-22%), far beyond any measure.

In this post we underline an additional quantitative risk measure, quite often overlooked by portfolio managers: the extreme value-at-risk (EVaR). Using Matlab and an exemplary portfolio of stocks, we are going to confront VaR, ES, and EVaR for a better understanding of the amount of risk involved in the game. Hitting hard is inevitable under drastic market swings, so you need to know the highest pain threshold: EVaR.

Generalised Extreme-Value Theory and Extreme VaR

The extreme-value theory (EVT) plays an important role in today's financial world. It provides a tailor-made solution for those who wish to deal with highly unexpected events. In a classical VaR approach, the risk manager is concentrated and concerned around central-tendency statistics. We are surrounded by them every day: mean values, standard deviations, $\chi^2$ distributions, etc. They are blood under the skin. The point is that they are governed by central limit theorems which, on a further note, cannot be applied to extremes. That is where EVT enters the game. From the statistical point of view, the problem is simple.
If we assume that $X$ is a random variable of the loss and that $X$ is independent and identically distributed (iid), we thereby also assume a general framework in which $X$ comes from an unknown distribution $F(x)=Pr(X\le x)$. EVT makes use of the fundamental Fisher-Tippett theorem and delivers a practical solution for a large sample of $n$ extreme random loss variables, known as the generalised extreme-value (GEV) distribution, defined as:
$$ H(x;z,a,b) = \left\{ \begin{array}{lr} \exp\left[-\left( 1+ z\frac{x-b}{a} \right)^{-1/z} \right] & : z \ne 0\\ \exp\left[-\exp\left(-\frac{x-b}{a} \right) \right] & : z = 0 \\ \end{array} \right. $$
Under the condition that $x$ meets $1+z(x-b)/a > 0$, both parameters $a$ and $b$ in the GEV distribution ought to be understood as the scale and location parameters of the limiting distribution $H$. Their meaning is close to, but distinct from, the standard deviation and the mean. The last parameter, $z$, corresponds to a tail index. The tail is important in $H$. Why? Simply because it points our attention to the heaviness of extreme losses in the data sample. In our previous article on Black Swan and Extreme Loss Modeling we applied $H$ with $z=0$ to fit the data. This case of GEV is known as the Gumbel distribution, referring to $F(x)$ having exponential (light) tails. If the underlying data fitted with the $H$ model reveal $z>0$, we talk about the Fréchet distribution. From a practical point of view, the Fréchet case is more useful as it takes heavy tails into account more carefully. By plotting the standardised Gumbel and Fréchet distributions ($a=1$, $b=0$), you can notice that the gross mass of $H$ is contained between $x$ values of $-2$ and $6$, which translates into the interval from $b-2a$ to $b+6a$. With $z>0$, the tail contributes more significantly to the production of very large $X$-values. The remaining case, $z<0$, the Weibull distribution, we leave out of our interest here (the case of tails lighter than normal).
If we set the left-hand side of the GEV distribution to $p$, take logs of both sides of the above equations and rearrange them accordingly, we end up with a setup of:
$$ \ln(p) = \left\{ \begin{array}{lr} -\left(1+zk \right)^{-1/z} & : z \ne 0\\ -\exp(-k) & : z = 0 \\ \end{array} \right. $$
where $k=(x-b)/a$ and $p$ denotes any chosen cumulative probability. From this point we can derive the $x$ values in order to obtain the quantiles associated with $p$, namely:
$$ x = \left\{ \begin{array}{ll} b - \frac{a}{z} \left[1-(-\ln(p))^{-z}\right] & : ({\rm Fréchet};\ z>0) \\ b - a\ln\left[-\ln(p)\right] & : ({\rm Gumbel};\ z=0) \\ \end{array} \right. $$
Note that when $z\rightarrow 0$ the Fréchet quantiles tend to their Gumbel equivalents. Also, please notice that, for instance, for the standardised Gumbel distribution the 95% quantile is $-\ln[-\ln(0.95)] = 2.9702$ and points at its location in the right tail, and not in the left tail as usually associated with the heaviest losses. Don't worry. Gandhi says: relax. In practice we fit the GEV distribution to the loss data with reversed sign, or we reverse the sign of the 0.95th quantile for the random loss variables $x$. That point will become clearer with the examples that follow.

We allow ourselves to define the Extreme Value-at-Risk (EVaR), associated directly with the GEV distribution $H_n$ fitted to the data ($n$ data points), by passing
$$ Pr[H_n < H'] = p = {Pr[X < H']}^n = [\alpha]^n $$
where $\alpha$ is the VaR confidence level associated with the threshold $H'$, in the following way:
$$ \mbox{EVaR} = \left\{ \begin{array}{ll} b_n - \frac{a_n}{z_n} \left[1-(-n\ln(\alpha))^{-nz_n}\right] & : ({\rm Fréchet};\ z>0) \\ b_n - a_n\ln\left[-n\ln(\alpha)\right] & : ({\rm Gumbel};\ z=0) \\ \end{array} \right. $$
These formulas provide a practical way of estimating VaR based on the analysis of the distribution and frequency of extreme losses in the data sample, and are meant to work best at very high confidence levels.
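For illustration, the quantile and Fréchet EVaR formulas above translate directly into a few lines of Python (a sketch; gev_quantile and evar_frechet are hypothetical helper names, not part of the Matlab listings in this post):

```python
from math import log

def gev_quantile(p, a=1.0, b=0.0, z=0.0):
    """Quantile x such that H(x) = p; z = 0 is Gumbel, z > 0 is Frechet."""
    if z == 0.0:
        return b - a * log(-log(p))                     # Gumbel case
    return b - (a / z) * (1.0 - (-log(p)) ** (-z))      # Frechet case

def evar_frechet(alpha, a, b, z, n):
    """EVaR for the Frechet case, transcribing the formula above."""
    return b - (a / z) * (1.0 - (-n * log(alpha)) ** (-n * z))

# the standardised Gumbel 95% quantile quoted in the text: -ln[-ln(0.95)]
print(gev_quantile(0.95))

# Frechet quantiles tend to their Gumbel equivalents as z -> 0
print(gev_quantile(0.95, z=1e-9))
```

As a cross-check, plugging in the Fréchet fit reported later in this section ($a=0.0130$, $b=-0.0326$, $z=0.8378$ after the sign flip, $n=150$), `evar_frechet(0.95, 0.0130, -0.0326, 0.8378, 150)` returns approximately $-0.0481$, in line with the Matlab derivation below.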
Additionally, as pointed out by Dowd and Hauksson, EVaR does not obey the square-root-of-holding-period scaling rule that is applicable to a standard VaR derived for (log-)normal distributions of return series.

1-day EVaR for Assets in your Portfolio

All right. Time to connect the dots. Imagine you hold a portfolio of 30 blue chips belonging to the Dow Jones Industrial index. Before the market opens, regardless of the quantity and direction of the positions you hold in each of those 30 stocks, you, as a cautious portfolio manager, want to know an estimate of the extreme loss that can take place on the next trading day. You look at the last 252 trading days of every stock; therefore, you are able to extract a sample of $N=30$ largest losses (one per stock over the last 365 calendar days). Given this sample, we can fit it with the GEV distribution and find the best estimates of the $z$, $a$, and $b$ parameters. However, in practice, $N=30$ is a small sample. There is nothing wrong with extracting, for each stock, the 5 worst losses that took place within the last 252 trading days. That allows us to consider a sample of $n=150$ points (see the code below).
Let's use Matlab and the approach described in the Create a Portfolio of Stocks based on Google Finance Data fed by Quandl post, here re-quoted in a shorter form:

```matlab
 1  % Extreme Value-at-Risk
 2  %
 3  % (c) 2014 QuantAtRisk.com, by Pawel Lachowicz
 4
 5  close all; clear all; clc;
 6
 7  % Read the list of Dow Jones components
 8  fileID = fopen('dowjones.lst');
 9  tmp = textscan(fileID, '%s');
10  fclose(fileID);
11  dowjc=tmp{1};  % a list as a cell array
12
13  % Read in the list of tickers and internal code from Quandl.com
14  [ndata, text, alldata] = xlsread('QuandlStockCodeListUS.xlsx');
15  quandlc=text(:,1);    % again, as a list in a cell array
16  quandlcode=text(:,3)  % corresponding Quandl's Price Code
17
18  % fetch stock data for last 365 days
19  date2=datestr(today,'yyyy-mm-dd')      % to
20  date1=datestr(today-365,'yyyy-mm-dd')  % from
21
22  stockd={};
23  % scan the Dow Jones tickers and fetch the data from Quandl.com
24  for i=1:length(dowjc)
25      for j=1:length(quandlc)
26          if(strcmp(dowjc{i},quandlc{j}))
27              fprintf('%4.0f %s\n',i,quandlc{j});
28              fts=0;
29              [fts,headers]=Quandl.get(quandlcode{j},'type','fints', ...
30                            'authcode','YourQUANDLtocken',...
31                            'start_date',date1,'end_date',date2);
32              cp=fts2mat(fts.Close,0);        % close price
33              R{i}=cp(2:end)./cp(1:end-1)-1;  % return-series
34          end
35      end
36  end
37
38  N=length(dowjc);
```

where we make use of two external files: dowjones.lst (tickers of the stocks in our portfolio) and QuandlStockCodeListUS.xlsx (see here for more details on utilising Quandl.com's API for fetching Google Finance stock data). The Matlab cell array R holds 30 return-series (each 251 points long). Having that, we find a set of the top 5 maximal daily losses for each stock, ending up with a data sample of $n=150$ points, as mentioned earlier.
```matlab
40  Rmin=[];
41  for i=1:N
42      ret=sort(R{i});
43      Mn=ret(1:5);  % extract 5 worst losses per stock
44      Rmin=[Rmin; Mn];
45  end
```

Now, we fit the GEV distribution,
$$ H(x;z,a,b) = \exp\left[-\left( 1+ z\frac{x-b}{a} \right)^{-1/z} \right] \ \ , $$
employing a ready-to-use function, gevfit, from Matlab's Statistics Toolbox:

```matlab
47  % fit GEV distribution
48  [parmhat,parmci] = gevfit(Rmin)
49  z=parmhat(1);  % tail index
50  a=parmhat(2);  % scale parameter
51  b=parmhat(3);  % location parameter
```

which returns, for date2 = 2014-03-01,

```
parmhat =
   -0.8378    0.0130   -0.0326

parmci =
   -0.9357    0.0112   -0.0348
   -0.7398    0.0151   -0.0304
```

id est, the best estimates of the model's parameters:
$$ z_{150} = -0.8378^{+0.0980}_{-0.0979}, \ \ a_{150} = 0.0130^{+0.0021}_{-0.0018}, \ \ b_{150} = -0.0326^{+0.0022}_{-0.0022} \ \ . $$
At this point, it is essential to note that the negative $z$ indicates the Fréchet distribution, since we fitted the data with their negative signs left intact. Plotting the data with the model,

```matlab
53  % plot data with the GEV distribution for found z,a,b
54  x=-1:0.001:1;
55  hist(Rmin,-1:0.01:0);
56  h=findobj(gca,'Type','patch');
57  set(h,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.6 0.6 0.6]);
58  h=findobj(gca,'Type','box');
59  set(h,'Color','k');
60  % GEV pdf
61  pdf1=gevpdf(x,z,a,b);
62  y=0.01*length(Rmin)*pdf1;
63  line(x,y,'color','r');
64  box on;
65  xlabel('R_{min}');
66  ylabel(['f(R_{min}|z,a,b)']); xlim([-0.15 0]);
```

illustrates our efforts in a clear manner. Eventually, performing the derivation,

```matlab
68  n=length(Rmin); z=-z;
69  EVaR=b-(a/z)*(1-(-n*log(0.95))^(-n*z))
```

we calculate the best estimate of the 1-day EVaR as discussed:
$$ \mbox{EVaR} = b_{150} - \frac{a_{150}}{-z_{150}} \left[1-(-n\ln(0.95))^{nz_{150}}\right] = -0.0481 \ \ . $$
This number tells us that among the 30 blue chips in our portfolio we should expect an extreme loss of 4.81% on the following day (estimated using the last 252 days of trading history). Now, is 4.81% a lot or not?
That reminds me of an immortal question asked by one of my colleagues many years ago, a guy who loved studying philosophy: Is 4 kilograms of potatoes a lot or not? The best way to address it is by comparison. Let us again quickly look at the return-series of the 30 stocks in our portfolio (the R cell array in the code). For each individual series we derive its normalised distribution (histogram), fit it with a normal probability density function, and calculate the corresponding 95% VaR (the var local variable) in the following way:

```matlab
 71  %--figure;
 72  VaR=[];
 73  for i=1:N
 74      rs=R{i};
 75      % create the histogram for return series
 76      [H,h]=hist(rs,100);
 77      sum=0;
 78      for i=1:length(H)-1
 79          sum=sum+(H(i)*(h(i+1)-h(i)));
 80      end
 81      H=H/sum;
 82      %--bar(h,H,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.7 0.7 0.7])
 83      % fit the histogram with a normal distribution
 84      [muhat,sigmahat] = normfit([rs]');
 85      Y=normpdf(h,muhat(1),sigmahat(1));
 86      %--hold on; plot(h,Y,'b-');  % and plot the fit using a blue line
 87      % find VaR for stock 'i'
 88      sum=0;
 89      i=1;
 90      while (i<length(h))
 91          sum=sum+(Y(i)*(h(i+1)-h(i)));
 92          if(sum>=0.05)
 93              break;
 94          end
 95          i=i+1;
 96      end
 97      var=h(i)
 98      %--hold on; plot(h(i),0,'ro','markerfacecolor','r');  % mark VaR_fit
 99      VaR=[VaR; var];
100  end
```

The vector VaR then holds all thirty 95% VaR values. By running the last piece of code,

```matlab
102  figure;
103  [H,h]=hist(VaR*100,20);
104  line=[EVaR*100 0; EVaR*100 6];
105  bar(h,H,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.7 0.7 0.7])
106  hold on; plot(line(:,1),line(:,2),'r:');
107  xlim([-6 0]); ylim([0 6]);
108  xlabel('95% VaR [percent]'); ylabel('Histogram');
```

we plot their distribution and compare them to the EVaR of -4.81%, here marked by a red dotted line in the chart. The calculated 1-day 95% EVaR of -4.81% places itself much further away from all the classical 95% VaRs derived for the 30 stocks. It's an extreme case indeed.
You can use the 1-day EVaR to calculate the expected loss in your portfolio simply by assuming that every single stock will drop by -4.81% on the next day. This number (in terms of the currency of your portfolio) will surely make you more alert. Richard Branson once said: If you're gonna fail, fail well. Don't listen to him.

## Pre-Processing of Asset Price Series for Portfolio Optimization

Portfolio optimization is a significant component of Matlab's Financial Toolbox. It provides us with a ready-to-use solution for finding the optimal weights of the assets we consider for trading, deriving them from the historical asset performance. From a practical point of view, we can include it in our algorithmic trading strategy and backtest its applicability under different initial conditions. This is the subject of my next upcoming post. However, before we can enjoy the view from the peak, we need to climb the mountain first.

In Matlab, the portfolio is created as a dedicated object of the same name. It doesn't read raw stock data. We need to feed that beast. Two major ingredients satisfy the input: a vector of expected asset returns and a covariance matrix. Matlab helps us estimate these moments, but first we need to deliver the asset data in a digestible form. In this post we will see how one can quickly download stock data from the Internet based on our own stock selection and pre-process them for solving the portfolio optimization problem in Matlab.

Initial Setup for Portfolio Object

Let's say that at any point in time you have your own list of stocks you wish to buy. For simplicity, let's also assume that the list contains stocks traded on NYSE or NASDAQ. Since you have been a great fan of this game, now you are almost ready to buy what you jotted down on your ShoppingList.lst. Here, an example of 10 tech stocks:

AAPL AOL BIDU GOOG HPQ IBM INTC MSFT NVDA TXN

They will constitute your portfolio of stocks.
The problem of portfolio optimization requires a look back in time at the returns obtained in trading by each stock. Based on them, the Return Proxy and Risk Proxy can be found. The return matrix $R$ of dimensions $(N-1)\times M$, where $N$ stands for the number of historical prices (e.g. sampled daily, monthly, etc.) and $M$ for the number of stocks in our portfolio, is required by Matlab as an input. We will see how it works in the next post. For now, let's focus solely on the creation of this matrix.

In the article Create a Portfolio of Stocks based on Google Finance Data fed by Quandl I discussed Quandl.com as an attractive data provider for US stocks. Here, we will follow this solution, making use of Quandl resources to pull out the stock-price series for our shopping list. Ultimately, we aim at building a function, here QuandlForPortfolio, that does the job for us:

```matlab
% Pre-Processing of Asset Price Series for Portfolio Optimization in Matlab
% (c) 2013, QuantAtRisk.com, by Pawel Lachowicz

clear all; close all; clc;

% Input Parameters
n=1*365;
tickers='ShoppingList.lst';
qcodes='QuandlStockCodeListUS.xlsx';

[X,Y,R,AssetList] = QuandlForPortfolio(n,tickers,qcodes);
```

We call this function with three input parameters. The first one, n, denotes the number of calendar days back from today for which we wish to retrieve the stock data. Usually, 365 days will correspond to about 250-252 trading days. The second parameter is the path/file name of our list of stocks (those desired to be taken into account in the portfolio optimisation process), while the last input defines the path/file name of the file storing the stocks' tickers and their associated Quandl Price Codes (see here for more details).

Feeding the Beast

The QuandlForPortfolio Matlab function is an extended version of the previously discussed solution. It contains an important correcting procedure for the data fetched from the Quandl servers.
First, let's have a closer look at the function itself:

```matlab
 1  % Function assists in fetching Google Finance data from the Quandl.com
 2  % server for a given list of tickers of stocks traded on NYSE or
 3  % NASDAQ. Data are retrieved for last 'n' days with daily sampling.
 4  %
 5  % INPUT
 6  %   n       : number of calendar days from 'today' (e.g. 365 would
 7  %             correspond to about 252 business days)
 8  %   tickers : a path/file name of a text file listing tickers
 9  %   qcodes  : a path/file name of Excel workbook (.xlsx) containing a list
10  %             of tickers and Quandl Price Codes in the format of
11  %             [Ticker,Stock Name,Price Code,Ratios Code,In Market?]
12  % OUTPUT
13  %   X0        : [Nx1] column vector with days
14  %   Y0        : [NxM] matrix with Close Prices for M stocks
15  %   R0        : [(N-1)xM] matrix of Returns
16  %   AssetList : a list of tickers (cell array)
17  %
18  % (c) 2013, QuantAtRisk.com, by Pawel Lachowicz
19
20  function [X0,Y0,R0,AssetList0] = QuandlForPortfolio(n,tickers,qcodes)
21  fileID = fopen(tickers);
22  tmp = textscan(fileID, '%s');
23  fclose(fileID);
24  AssetList=tmp{1};  % a list as a cell array
25
26  % Read in the list of tickers and internal Quandl codes
27  %
28  [~,text,~] = xlsread(qcodes);
29  quandlc=text(:,1);     % again, as a list in a cell array
30  quandlcode=text(:,3);  % corresponding Quandl's Price Code
31
32  date1=datestr(today-n,'yyyy-mm-dd');  % from
33  date2=datestr(today,'yyyy-mm-dd');    % to
34
35  % Fetch the data from Quandl.com
36  %
37  QData={};
38  for i=1:length(AssetList)
39      for j=1:length(quandlc)
40          if(strcmp(AssetList{i},quandlc{j}))
41              fprintf('%4.0f %s\n',i,quandlc{j});
42              fts=0;
43              [fts,headers]=Quandl.get(quandlcode{j},'type','fints', ...
44                  'authcode','x',...
45                  'start_date',date1,'end_date',date2,'collapse','daily');
46              QData{i}=fts;
47          end
48      end
49  end
50
51  % Post-Processing of Fetched Data
52  %
53  % create a list of days across all tickers
54  TMP=[];
55  for i=1:length(QData)
56      tmp=fts2mat(QData{i},1);
57      tmp=tmp(:,1);
58      TMP=[TMP; tmp];
59  end
60  ut=unique(TMP);
61
62  % use that list to find those days that are not present among all data sets
63  TMP=[];
64  for i=1:length(QData)
65      tmp=fts2mat(QData{i},1);
66      tmp=tmp(:,1);
67      TMP=[TMP; setdiff(ut,tmp)];
68  end
69  ut=unique(TMP);
70
71  % finally, extract Close Prices from FTS object and store them
72  % in Y0 matrix, plus corresponding days in X0
73  X0=[]; Y0=[];
74  for i=1:length(QData)
75      tmp=fts2mat(QData{i},1);
76      cp=[];
77      for j=1:size(tmp,1)
78          [r,~,~]=find(ut==tmp(j,1));
79          if(isempty(r))
80              cp=[cp; tmp(j,5)];  % column 5 corresponds to Close Price
81              if(i<2)
82                  % create a time column vector listing days
83                  % common among all data sets
84                  X0=[X0; tmp(j,1)];
85              end
86          end
87      end
88      Y0=[Y0 cp];
89  end
90
91  R0=tick2ret(Y0);  % transform Close Prices into Returns, R(i)=cp(i)/cp(i-1)-1
92  AssetList0=AssetList';
93  end
```

The main bottleneck comes from the fact that Matlab's portfolio object demands an equal number of historical returns, $(N-1)$, in the matrix $R$ for all $M$ assets. We design the function in such a way that it sets a common timeframe for all stocks on our shopping list. Of course, we ensure that all stocks were traded in the markets for about the last n days (a rough estimation); the timeframe of the last n days should be understood as a first approximation. We fetch the data from Quandl (numeric date, Open, High, Low, Close, Volume) and save them in the cell array QData (lines #37-49), for each stock separately, as FTS objects (Financial Time-Series objects; see the Financial Toolbox). However, it may occur that not every stock we fetched displays the same amount of data. That is why we need to investigate for which days and for which stocks we miss data. We achieve that by scanning each FTS object and creating a unique list of all days for which we have data (lines #54-60).
Next, we loop again over the same data sets, but now we compare that list with the list of all dates for each stock individually (lines #63-69), capturing (line #67) those dates that are missing. Their complete list is stored as a vector in line #69. Eventually, given that, we are able to compile the full data set (e.g. Close Prices; here line #80) for all stocks in our portfolio, ensuring that we include only those dates for which we have prices across all $M$ assets (lines #70-91).

Beast Unleashed

We test our data pre-processing simply by running the block of code listed above, engaging the QuandlForPortfolio function, and we check the results in Matlab's command window as follows:

```
>> whos X Y R AssetList
  Name           Size      Bytes   Class    Attributes

  AssetList      1x10       1192   cell
  R            250x10      20000   double
  X            251x1        2008   double
  Y            251x10      20080   double
```

which confirms the correctness of the dimensions, as expected. At this stage, the aforementioned function can be used two-fold. First, we are interested in the portfolio optimisation and we look back at the last n calendar days since the most current one (today). The second usage is handy too: we consider the stocks on our shopping list and fetch, say, their last $n=7\times365$ days of data. If all stocks were traded over the past 7 years, we should be able to collect a rich data set. If not, the function will adjust the beginning and end dates to meet the initial time constraints as required for the construction of the $R$ matrix. For the latter case, we can use the 7-year data sample for direct backtesting of algo models utilizing portfolio optimization. Stay tuned as we will rock this land in the next post!

Any questions? Share them across QuantCove.com – the official Forum of QuantAtRisk.

## Information Ratio and its Relative Strength for Portfolio Managers

Risk and return are the two most significant quantities investigated within investment performance. We expect the return to be highest at the minimum risk.
In practice, a set of various combinations is always achieved, depending on a number of factors related to the investment strategies and market behaviours. The portfolio manager or investment analyst is interested in applying the most relevant tools in order to describe the overall performance. The arsenal is rich, but we need to know the character of the outcomes we wish to underline prior to fetching the proper instruments from the workshop.

In general, there are two schools of investment performance. The first one is strictly related to the analysis of time-series on a tick-by-tick basis. All I mean by tick is the time scale of your data, e.g. minutes or months. The analyst is concerned with changes in investments and risk tracked continuously. A good example of this sort of approach is the measurement of Value-at-Risk and Expected Shortfall, Below Target Risk, or the higher moments of Coskewness and Cokurtosis. It all supplements the n-Asset Portfolio Construction with the Efficient Frontier diagnostic.

The second school in investment analysis is the macro-perspective. It includes the computation of n-period rolling median returns (or corresponding quantities), preferably annualized for the sake of comparison. This approach carries more meaning between the lines than pure tick time-series analysis. For the former, we track sliding changes in accumulation and performance, while for the latter we focus more closely on tick-by-tick risks and returns. For the sake of the global picture and performance (a market-oriented vantage point), a rolling investigation of risk and return reveals mutual tracking factors between the different scenarios considered. In this article I scratch the surface of benchmark-oriented analysis with a special focus on the Information Ratio (IR) as an investment performance measure.
Based on market data, I test the IR in action and introduce a new supplementary measure, the Information Ratio Relative Strength, working as a second dimension for the IR results.

1. Benchmark-oriented Analysis

Let's assume you are a portfolio manager and that over the last 10 years you have managed 11 investment strategies. Each strategy is different, corresponding to different markets, styles of investing, and asset allocations. You wish to get a macro-picture of how well your investments performed and whether they are beating the benchmark. You collect a set of 120 monthly returns for each investment strategy plus their benchmarks. The easiest way to kick off the comparison process is the calculation of n-period median (or mean) returns.

Annualized Returns

The rolling nY (n-Year) median return is defined as:
$$ \mbox{rR}_{j}(n\mbox{Y}) = \left[ \prod_{i=1}^{12n} (r_{i,j}+1)^{1/n} \right] -1 \ \ \ \ \ \mbox{for}\ \ \ j=12n,...,N\ \ (\mbox{at}\ \Delta j =1) $$
where $r$ are the monthly median returns meeting the initial data selection criteria (if any) and $N$ denotes the total number of data available (in our case, $N=120$). Any pre-selection (pre-filtering) of the input data becomes more insightful when contrasted with the benchmark performance. Hereinafter, by the benchmark we assume a relative measure we will refer to (e.g. the S\&P500 Index for US stocks, etc.) calculated as the rolling nY median. Therefore, for instance, the 1Y (rolling) returns can be compared against the corresponding benchmark (index). The selection of a good, proper benchmark belongs to us.

Active Returns

We define active returns on a monthly basis as the difference between the actual and benchmark returns,
$$ a_i = r_i - b_i , $$
available at $i$, where $i$ denotes a continuous counting of months over the past 120 months.

Annualized Tracking Risk (Tracking Error)

We assume the widely accepted definition of tracking risk as the standard deviation of the active returns.
For the sake of comparison, we transform it into the annualized tracking risk in the following way:$$ \mbox{TR}_{j}(n\mbox{Y}) = \sqrt{12} \times \sqrt{ \frac{1}{12n-1} \sum_{i=1}^{12n} \left( a_{i,j}-\langle a_{i,j} \rangle \right)^2 } $$where the square root of 12 secures a proper normalization between calculations and j=12n,…,N. Annualized Information Ratio (IR) The information ratio is often referred to as a variation or generalised version of the Sharpe ratio. It tells us how much excess return (a_i) is generated from the amount of excess risk taken relative to the benchmark. We calculate the annualized information ratio by dividing the annualized active returns by the annualized tracking risk as follows:$$ \mbox{IR}_j(n\mbox{Y}) = \frac{ \left[ \prod_{i=1}^{12n} (a_{i,j}+1)^{1/n} \right] -1 } { TR_j(n\mbox{Y}) } \ \ \ \ \ \mbox{for}\ \ \ j=12n,…,N $$We assume the following definition of investment under-performance and out-performance, P, based on IR:$$ P_j = \begin{cases} P_j^{-} =\mbox{under-performance}, & \text{if } \mbox{IR}_j(n\mbox{Y}) \le 0 \\ P_j^{+} =\mbox{out-performance}, & \text{if } \mbox{IR}_j(n\mbox{Y}) > 0 . \end{cases} $$Information Ratio Relative Strength of Performance Since P_j only marks in time those periods (months) where a given quantity under investigation achieved better results than the benchmark or not, it does not tell us anything about the importance (weight; strength) of this under/out-performance. The case could be, for instance, that we observe out-performance for 65\% of the time over 10 years while barely beating the benchmark in the same period. Therefore, we introduce an additional measure, namely, the IR relative strength, defined as:$$ \mbox{IRS} = \frac{ \int P^{+} dt } { \left| \int P^{-} dt \right| } .
$$The IRS measures the area under the \mbox{IR}_j(n\mbox{Y}) curve where \mbox{IR}_j(n\mbox{Y}) is positive (out-performance) relative to the absolute area confined between the \mbox{IR}_j(n\mbox{Y}) curve and the \mbox{IR}_j(n\mbox{Y})=0 line where it is negative (under-performance). Thanks to that measure, we are able to estimate the relevance of the$$ \frac { n(P^{+}) } { n(P^{-}) + n(P^{+}) } $$ratio, i.e. the fraction of time when the examined quantity displayed out-performance over the specified period of time (e.g. 10 years of data within our analysis). Here, n(P^{+}) counts the number of P_j^{+} instances. 2. Case Study Let's demonstrate the aforementioned theory in practice. Making use of real data, we perform a benchmark-oriented analysis for all 11 investment strategies. We derive 1Y rolling median returns supplemented by the corresponding 1Y tracking risk and information ratio measures. The following figure presents the results for the 1Y returns of the Balanced Options: It is interesting to notice that Investment Strategy-2 (Inv-2) displays an increasing tracking risk between mid-2007 and mid-2009, corresponding to the Global Financial Crisis (GFC). This consistent surge in TR is explained by worse-than-average performance of the individual options, most probably dictated by higher-than-expected volatility of the assets within the portfolio. The 1Y returns of Inv-2 were able to beat the benchmark over only 25% of the time, i.e. about 2.25 years since Jun 30, 2004. We derive that conclusion based on a quantitative inspection of the Information Ratio plot (bottom panel).
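The rolling measures defined above fit in a few lines of code. The post's own examples are in Matlab; below is a hedged Python sketch of the annualized active return, tracking risk, and IR for a single 12n-month window (the function and variable names are mine, not from the post):

```python
import numpy as np

def information_ratio(r, b, n=1):
    """Annualized IR over the last 12n monthly returns r vs benchmark b."""
    r = np.asarray(r, dtype=float)[-12 * n:]
    b = np.asarray(b, dtype=float)[-12 * n:]
    a = r - b                                      # monthly active returns
    ann_active = np.prod(1.0 + a) ** (1.0 / n) - 1.0  # annualized active return
    tr = np.sqrt(12.0) * a.std(ddof=1)             # annualized tracking risk
    return ann_active / tr

# toy example: a strategy adding, on average, 20 bps per month over its benchmark
rng = np.random.default_rng(0)
b = rng.normal(0.005, 0.02, 120)       # 120 months of benchmark returns
r = b + rng.normal(0.002, 0.01, 120)   # strategy returns
print(information_ratio(r, b))         # sign > 0 marks out-performance (P+)
```

Classifying each rolling window's IR by its sign then reproduces the P- / P+ split used in the IRS definition.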
In order to understand the overall performance of all eleven investment strategies, we compute the under-performance and out-performance ratios,$$ \frac { n(P^{-}) } { n(P^{-}) + n(P^{+}) } \ \ \ \mbox{and} \ \ \ \frac { n(P^{+}) } { n(P^{-}) + n(P^{+}) } , $$and plot these ratios in red and blue, respectively, in the following figure: Only two investment strategies, Inv-7 and Inv-11, managed to keep the out-performance time longer than their under-performance. Inv-8 denoted the highest ratio of under-performance to out-performance, but Inv-2 (considered above) performed equally badly. The investigation of the IR Relative Strength measure for all investment strategies adds a third dimension to this picture. We derive and display the results in the next figure: The plot reveals a close-to-linear correlation between the two considered quantities. The Inv-7 and Inv-11 strategies confirm their relative strength of out-performance, while Inv-2 and Inv-8 confirm their weakness in delivering strong returns over their short periods of out-performance. It is of great interest to compare the Inv-3, Inv-4, and Inv-10 strategies. While the relative strength of out-performance of Inv-4 remains lower as contrasted with the other two (and can be explained by the low volatility within this sort of investment strategy), Inv-10 denotes much firmer gains over the benchmark than Inv-3, despite the fact that their out-performance periods were nearly the same (\sim40%). Only an additional cross-correlation of the IR Relative Strength outcomes as a function of the average year-over-year return is able to address the drivers of this observed relation. Discuss this topic on QaR Forum You can discuss the current post with other quants within a new Quant At Risk Forum. ## Create a Portfolio of Stocks based on Google Finance Data fed by Quandl There is an old saying: Be careful what you wish for. Many quants and traders wished for a long time for a better quality of publicly available data.
It wasn’t an easy task to accomplish, but with a minimum of effort it has become possible to get everything you truly dreamt of. When I started my experience with daily backtesting of my algo strategies, I initially relied on Yahoo! data. Yeah, easily accessible, but of low quality. Since the inception of Google Finance the same dream has been reborn: more precise data, covering more aspects of trading, platforms, markets, instruments. But how to get them all? Quandl.com provides us with enormous access to an enormous database. Just visit the webpage to find out what I mean by barely managing to put my thoughts together in one sentence. In fact, the portal’s accessibility via different programming languages (e.g. Matlab, C++, Python, etc.) makes that database more alive than ever. The space of parameters the quants get access to becomes astronomical in number! Alleluia! Let’s ride this horse. Enough of watching the online charts changing the direction of our investing moods with a one-minute time resolution. We were born to be smarter than the stochastic processes taking control over our daily strategies. In this short post, making use of the Matlab R2013a platform, I want to show you how quickly you can construct any portfolio of US stocks taken from the Dow Jones Index universe (as an example). We will use the Google Finance stock data fed by Quandl.com. Google Finance Stock Data for Model Feeding Similarly to Yahoo! Finance’s resources, Quandl.com has its own system of coding the names of all US stocks. Therefore, we need to know it before taking any further step. The .csv file containing all of the codes for Quandl’s stock data can be obtained via the API for Stock Data webpage. The file contains a header of the form: Ticker | Stock Name | Price Code | Ratios Code | In Market? linking the Google Finance ticker with Quandl’s code (Price Code).
Doing some extra job with data filtering, I created a pre-processed set of all tickers, excluding those non-active in the markets and removing all invalid data. You can download it here: QuandlStockCodeListUS.xlsx as an Excel workbook. The second list we need is a list of the potential stocks we want to deal with in our portfolio. As for today, the current list of the Dow Jones Index's components you can download here: dowjones.lst. Let's start with it:

```matlab
 1  % Create a Portfolio from Dow Jones Stocks based on Google Finance
 2  % Data fed by Quandl.com
 3  %
 4  % (c) 2013 QuantAtRisk.com, by Pawel Lachowicz
 5
 6  close all; clear all; clc;
 7
 8  % Read the list of Dow Jones components
 9
10  fileID = fopen('dowjones.lst');
11  tmp = textscan(fileID, '%s');
12  fclose(fileID);
13  dowjc=tmp{1};  % a list as a cell array
```

The guys from MathWorks suggest making a new, good practice of using the textscan command (lines 10-12) instead of dataread for reading data from text files. The latter will be removed in the next Matlab release. Next, we need to import the current list of tickers for US stocks and their corresponding codes in Quandl's database. We make use of my pre-processed data set, QuandlStockCodeListUS.xlsx, as mentioned earlier, in the following way:

```matlab
15  % Read in the list of tickers and internal codes from Quandl.com
16  [ndata,text,alldata] = xlsread('QuandlStockCodeListUS.xlsx');
17  quandlc=text(:,1);     % tickers, as a list in a cell array
18  quandlcode=text(:,3);  % corresponding Quandl Price Codes
```

Note line 18: it picks up the Price Code column, which we will need later for the actual data calls. For the simplicity of our example, we will be considering all 30 stocks as a complete sample set of stocks in our portfolio. A portfolio construction is not solely a selection of stocks. It usually links a time period of the past performance of each individual stock. In the following move, we will collect daily stock returns over the last 365 calendar days (corresponding to about 252 trading days). This framework is pretty handy. Why?
If you build or run a live model, most probably your wish is to update your database when US markets are closed. You fetch Quandl.com for data. Now, if your model employs risk management or portfolio optimisation as an integrated building-block, it is very likely that you are interested in risk(-return) estimation, e.g. by calculating Value-at-Risk (VaR) or similar risk measures. Of course, the choice of the time-frame is up to you, based on your strategy. Having said that,

```matlab
19  % fetch stock data for the last 365 days
20  date2=datestr(today,'yyyy-mm-dd')      % to   (end date)
21  date1=datestr(today-365,'yyyy-mm-dd')  % from (start date)
```

which employs Matlab's improved datestr command with formatting as a parameter. It is really useful when it comes to data fetching from Quandl.com, as the date format they like to see in your code is yyyy-mm-dd. The above two commands return in Matlab the following values: date2 = 2013-11-03 date1 = 2012-11-03 as for the time of writing this article (string type). Get an Access to Quandl's Resources Excellent! Let's catch some fishes in the pond! Before doing it, we need to make sure that Quandl's software is installed and linked to our Matlab console. In order to download the Quandl Matlab package, just visit this website and follow the installation procedure at the top of the page. Add the proper path in the Matlab environment too. The best practice is to register yourself on Quandl. That allows you to raise the daily number of data calls from 50 to 500. If you think you will need more data requests per day, from the same webpage you can send your request. When registered, in your Profile section on Quandl, you will gain access to your individual Authorization Token, for example: You will need to replace 'yourauthenticationtoken' in the Matlab code (see line 35) with your token in order to get the data. Fetching Stock Data From this point the road stops winding.
For all 30 stocks belonging to the current composition of the Dow Jones Index, we get their Open, High, Low, and Close prices (as retrieved from Google Finance) in the following way:

```matlab
23  % create an empty array for storing stock data
24  stockd={};
25  % scan the Dow Jones tickers and fetch the data from Quandl.com
26  for i=1:length(dowjc)
27    for j=1:length(quandlc)
28      if(strcmp(dowjc{i},quandlc{j}))
29        fprintf('%4.0f %s\n',i,quandlc{j});
30        % fetch the data of the stock from Quandl
31        % using the recommended Quandl's command and
32        % saving them directly into Matlab's FTS object (fts)
33        fts=0;
34        [fts,headers]=Quandl.get(quandlcode{j},'type','fints', ...
35                      'authcode','yourauthenticationtoken', ...
36                      'start_date',date1,'end_date',date2);
37        stockd{i}=fts; % entire FTS object in an array's cell
38      end
39    end
40  end
41  % use the 'save' command to save all actual variables from memory to disc,
42  % if you need it for multiple code runs
43  % save data
```

The code in lines 34 to 36 is an example of how you call Quandl for data and fetch them into FINTS (a financial time series object from the Financial Toolbox in Matlab). So easy. If you wish to change the time frequency, apply a transformation, or get the data not as FINTS but in .csv format, all is available as a parameter at the level of calling the Quandl.get() command (explore all possibilities here).
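As a side note on the design: the double loop above performs an O(n²) string match between tickers and codes. In Python the same lookup collapses to a dictionary, as in the sketch below, where the codes are illustrative placeholders of mine, not actual entries from the Quandl file:

```python
# Hypothetical ticker -> Price Code map (placeholder values, for illustration only)
code_of = {"IBM": "GOOG/NYSE_IBM", "MSFT": "GOOG/NASDAQ_MSFT"}

# the index components we want in the portfolio ("XYZ" has no code and is skipped)
dowjones = ["IBM", "MSFT", "XYZ"]

# O(n) matching instead of the O(n^2) nested strcmp loop
matched = {t: code_of[t] for t in dowjones if t in code_of}
print(matched)  # {'IBM': 'GOOG/NYSE_IBM', 'MSFT': 'GOOG/NASDAQ_MSFT'}
```

The same idea carries over to Matlab via containers.Map if the stock universe grows beyond a few dozen tickers.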
Having that, you can easily plot the Google Finance data, for instance for the IBM stock, straight away:

```matlab
% display a High-Low chart for IBM
highlow(stockd{11})
title('IBM');
ylabel('Stock Price ($)');
```

to get the chart, or prepare a return-risk plot for your further portfolio optimisation:

```matlab
ret=[];
for i=1:length(stockd)
  % extract the Close Price
  tmp=fts2mat(stockd{i}.Close,0);
  % calculate a vector of daily returns
  r=tmp(2:end)./tmp(1:end-1)-1;
  ret=[ret r];
end
% calculate the annualized mean returns for 30 stocks
% (the mean scales with time, the std dev with its square root)
mret=mean(ret)*250;
msig=std(ret)*sqrt(250);
% plot the return-risk diagram
p=100*[msig; mret]'
plot(p(:,1),p(:,2),'b.');
xlim([0 30]);
xlabel('Annualized Risk of Stock (%)')
ylabel('Annualized Mean Return (%)');
```

providing us with the return-risk diagram. Forum: Further Discussions You can discuss the current post with other quants within a new Quant At Risk Forum (Data Handling and Pre-Processing section). ## Simulation of Portfolio Value using Geometric Brownian Motion Model Having in mind the upcoming series of articles on building a backtesting engine for algo traded portfolios, today I decided to drop a short post on a simulation of the portfolio realised profit and loss (P&L). In the future I will use some results obtained below for a discussion of key statistics used in the evaluation of P&L at any time it is required by the portfolio manager. Assuming that we trade a portfolio of any assets, its P&L can be simulated in a number of ways. One of the quickest methods is the application of the geometric Brownian motion (GBM) model with a drift in time of \mu_t and a process standard deviation of \sigma_t over its total time interval. The model takes the following form:$$ dS_t = \mu_t S_t dt + \sigma_t S_t dz $$where dz\sim N(0,dt) and the process has variance equal to dt (the process is Brownian). Let t be the present time and let the portfolio have an initial value of S_t dollars.
The target time is T, therefore the portfolio evaluation horizon is \tau=T-t, split into N time steps. Since the GBM model assumes no correlations between the portfolio values on two consecutive days (in general, over time), by integrating dS/S over a finite interval we get a discrete change of the portfolio value:$$ \Delta S_t = S_{t-1} (\mu_t\Delta t + \sigma_t\epsilon \sqrt{\Delta t}) \ . $$For simplicity, one can assume that both parameters of the model, \mu_t and \sigma_t, are constant over time, and that the random variable \epsilon\sim N(0,1). In order to simulate a path of the portfolio value, we go through N iterations following the formula:$$ S_{t+1} = S_t + S_t(\mu_t\Delta t + \sigma_t \epsilon_t \sqrt{\Delta t}) $$where \Delta t denotes the time step, here taken as 1/N (one trading year split into N steps), and t=1,…,N. Example Let's assume that the initial portfolio value is S_1=\$10,000 and it is being traded over 252 days. We allow the underlying process to have a drift of \mu_t=0.05 and an overall volatility of \sigma_t=5\%, both constant over time. Since the simulation in every one of the 252 steps depends on \epsilon drawn from the normal distribution N(0,1), we can obtain any number of possible realisations of the simulated portfolio value path. Coding the above model quickly in Matlab,

```matlab
mu=0.05;     % drift
sigma=0.05;  % std dev over the total interval
S1=10000;    % initial capital ($)
N=252;       % number of time steps (trading days)
K=1;         % number of simulations
dt=1/N;      % time step

St=[S1];
for k=1:K
  St=[S1];
  for i=2:N
    eps=randn;
    S=St(i-1)+St(i-1)*(mu*dt+sigma*eps*sqrt(dt));
    St=[St; S];
  end
  hold on; plot(St);
end
xlim([1 N]);
xlabel('Trading Days')
ylabel('Simulated Portfolio Value ($)');
```

leads us to one of the possible process realisations, quite a decent one, with an annual rate of return of about 13%.
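For cross-checking the iteration, here is a hedged Python sketch of the same scheme (the post's code is Matlab; `gbm_path` and its defaults are my naming, with the time step again taken as 1/N):

```python
import numpy as np

def gbm_path(s0=10_000.0, mu=0.05, sigma=0.05, n=252, seed=42):
    """One realisation of the discretised GBM portfolio-value path,
    S_{t+1} = S_t * (1 + mu*dt + sigma*eps*sqrt(dt)), with dt = 1/n."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    s = np.empty(n)
    s[0] = s0
    for t in range(1, n):
        eps = rng.standard_normal()   # eps ~ N(0, 1), redrawn at every step
        s[t] = s[t - 1] * (1.0 + mu * dt + sigma * eps * np.sqrt(dt))
    return s

path = gbm_path()
print(path[-1])  # terminal portfolio value of this particular realisation
```

Rerunning with different seeds (or vectorising over K paths) reproduces the fan of possible realisations discussed above.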
The visual inspection of the path suggests no major drawdowns and low volatility, therefore pretty good trading decisions made by the portfolio manager or the trading algorithm. Of course, the simulation does not tell us anything about the model, the strategy involved, etc., but the result we obtained is sufficient for a further discussion on portfolio key metrics and VaR evolution. ## VaR and Expected Shortfall vs. Black Swan It is one of the most fundamental approaches to measuring risk, and it is truly worth revising its calculation. Value-at-Risk (VaR). A magical number quoted at the end of the day by the banks' financial risk managers, portfolio risk managers, and risk managers simply concerned about the expected risk threshold not to be, hopefully, exceeded on the next day. What is so magical about it? According to the definition, given a specific time horizon and taking into account the last N days of our trading activity, we can always calculate a number which provides a simple answer to the question: if something really unexpected happened on the next day, what would be the loss margin we could suffer? Well, welcome to the world of VaR! More formally, given a daily (or weekly, monthly, etc.) distribution of returns (of, for example, a single stock, a portfolio of assets, etc.), we are interested in finding the value VaR_\alpha for, say, \alpha=0.05 (five per cent), which would tell us that there is a 5% chance of a loss beyond VaR_{0.05} in trading on the next day. This value is located in the left tail of the distribution and, by convention, it is reported as a positive number (after a sign flip). Therefore, we define VaR_\alpha as the \alpha-quantile of the return distribution:$$ P[X\le VaR_\alpha] = \alpha $$where X is a random variable and can, as in our assumed case, represent a 1-day return on a traded asset or a portfolio of assets. Beautiful! We have the risk threshold provided as a number. But in practice, how should we calculate VaR_\alpha?
Here some of the shortcomings of VaR come into the spotlight. VaR depends on the time-frame of the daily returns we examine: VaR_\alpha as a number will be different if you include the last 252 trading days in the estimation compared to the last 504 trading days (2 years). Intuitively, the more data (more days, years, etc.) we have, the better the estimation of VaR_\alpha, right? Yes and no. Keep in mind that the dynamics of the trading market change continuously. In the early 80s there was no high-frequency trading (HFT). It came to dominate trading traffic after the year 2000, and now HFT plays a key role (an influential factor) in making an impact on prices. So, it would be a bit unfair to compare VaR_\alpha estimated in the 80s with what is going on right now. In its simplest form, VaR_\alpha also assumes that the distribution of returns is normal: not the most beautiful assumption, but the most straightforward for computing VaR_\alpha. Should we use VaR_\alpha? Yes, but with caution. Okay then, having VaR_\alpha calculated, we know how far the loss could reach at the 1-\alpha confidence level. The next super important question in risk management every risk manager should ask, or at least be interested in reporting, is: if the VaR_\alpha event occurs on the next day, what is the expected loss we might suffer (say, in dollars)? VaR_\alpha is the threshold. Now we are interested in the expected value of the loss given the exceedance. That is the definition of the expected shortfall, and it is based on the concept of conditional expectation as follows:$$ ES = E[X\ |\ X \le VaR_\alpha] \ . $$In general, if our daily distribution of returns can be described by a function f(x) representing its probability density function (pdf), then:$$ ES = \frac{1}{\alpha} \int_{-\infty}^{VaR_\alpha} xf(x)dx \ . $$Given any data set of returns, R_t\ (t=1,…,N), calculated from our price history,$$ R_t = \frac{P_{t+1}}{P_t} -1 \ , $$both numbers, VaR_\alpha and ES, can in fact be calculated in two ways.
The first method is based on the empirical distribution, i.e. using the data as given:$$ VaR_\alpha = h_{i^*} \ \ \ \mbox{where}\ \ \ i^* = \max\left\{ i:\ \sum_{k=1}^{i} H_k(h_{k+1}-h_k) \le \alpha \right\} $$where H represents the normalized histogram of R_t (i.e., its integral is equal to 1), h the corresponding bin positions, and M the number of histogram bins. Similarly for ES, we get:$$ ES = \frac{1}{\alpha} \sum_{k=1}^{i^*} h_k H_k(h_{k+1}-h_k) \ . $$The second method is based on the same integrations, given the best fit to the histogram of R_t with f(x) taken to be a normal distribution. As we will see in the practical example below, the two approaches return different values, and extra caution should be taken while reporting the final risk measures. Case Study: IBM Daily Returns in 1987 The year 1987 went down in history as one of the most dramatic in stock trading. Why? Simply because of the so-called Black Monday on Oct 19, when markets denoted over 20% losses in a single trading day. Let's analyse the daily returns and their distribution for the IBM stock, as seen by a risk manager after the close of the market on Thu, Dec 31, 1987. The risk manager is interested in the estimation of the 1-day 95% VaR threshold, i.e. VaR_{0.05}, and the expected shortfall if on the next trading day (Jan 4, 1988) the exceedance event were to occur. Therefore, let's assume that our portfolio is composed solely of the IBM stock and that we have an allocation of capital of C dollars in it as for Dec 31, 1987. We use a record of historical trading, IBM.dat, of the form: IBM DATES OPEN HIGH LOW CLOSE VOLUME 03-Jan-1984 0.000000 0.000000 0.000000 30.437500 0.000000 04-Jan-1984 0.000000 0.000000 0.000000 30.968750 0.000000 05-Jan-1984 0.000000 0.000000 0.000000 31.062500 0.000000 06-Jan-1984 0.000000 0.000000 0.000000 30.875000 0.000000 ...
04-Mar-2011 0.000000 0.000000 0.000000 161.830000 0.000000 07-Mar-2011 0.000000 0.000000 0.000000 159.930000 0.000000 containing the close price of the IBM stock. In Matlab, we extract the data for the year 1987, then construct the vector of daily returns R_t on the stock:

```matlab
% Reading in the data
fn=['IBM.dat'];
ftsD=ascii2fts(fn,1,2);
range=[datenum('02-Jan-1987'),datenum('31-Dec-1987')];
drr=[datestr(range(1)),'::',datestr(range(2))];
fts=ftsD(drr);
data=fts2mat(fts,1);
% R_t vector
rdata=data(2:end,5)./data(1:end-1,5)-1;

% Plotting
figure(1);
subplot(2,1,1);
plot(data(:,1)-datenum('2-Jan-1987'),data(:,5));
xlim([0 365]); ylabel('IBM Close Price ($)');
subplot(2,1,2);
plot(data(2:end,1)-datenum('2-Jan-1987'),rdata);
xlim([0 365]); ylabel('R_t'); xlabel('t (days)')
```

and present the data graphically. Having our data, let's find VaR_{0.05} and the corresponding ES:

```matlab
% Construct the normalized histogram
[H,h]=hist(rdata,100);
s=0;  % (renamed from 'sum' to avoid shadowing Matlab's built-in function)
for i=1:length(H)-1
  s=s+(H(i)*(h(i+1)-h(i)));
end
figure(2)
H=H/s;
bar(h,H,'FaceColor',[0.7 0.7 0.7],'EdgeColor',[0.7 0.7 0.7])

% Calculate VaR 95% based on the empirical distribution
s=0; i=1;
while (i<length(H))
  s=s+(H(i)*(h(i+1)-h(i)));
  if(s>=0.05)
    break;
  end
  i=i+1;
end
VaR=h(i)
hold on; plot(h(i),0,'ro'); % mark the empirical VaR in the plot

% Fit the normal distribution to the R_t data
[muhat,sigmahat]=normfit(rdata);
Y=normpdf(h,muhat,sigmahat);
hold on; plot(h,Y,'b-'); % and plot the fit using a blue line

% Find VaR 95% for the fitted N(muhat,sigmahat)
s=0; i=1;
while (i<length(h))
  s=s+(Y(i)*(h(i+1)-h(i)));
  if(s>=0.05)
    break;
  end
  i=i+1;
end
VaR_fit=h(i)
hold on; plot(h(i),0,'ro','markerfacecolor','r'); % mark VaR_fit
ivar=i;

% Calculate the Expected Shortfall based on the fitted normal distribution
s=0; i=1;
while (i<=ivar)
  s=s+(h(i)*Y(i)*(h(i+1)-h(i)));
  i=i+1;
end
ES_fit=s/0.05
hold on; plot(ES_fit,0,'ko','markerfacecolor','k'); % mark ES_fit

% Add numbers to the plot
text(-0.23,25,['1-Day VaR and Expected Shortfall']);
text(-0.23,23.5,['as for 31/12/1987']);
text(-0.23,20,['VaR = ',num2str(VaR*-1,3),' (',num2str(VaR*-100,2),'%)']);
text(-0.23,18,['ES  = ',num2str(ES_fit*-1,3),' (',num2str(ES_fit*-100,2),'%)']);
text(-0.23,13,['\mu = ',num2str(muhat,2)]);
text(-0.23,11,['\sigma = ',num2str(sigmahat,2)]);
xlabel('R_t');
```

marking all numbers in the resulting common plot. In the plot, the filled red and black markers correspond to VaR_{0.05} and ES, respectively, both calculated using the fitted normal distribution, and both values are displayed in the plot as well. In addition, the plot contains a red open circle marker denoting the value of VaR_{0.05} obtained from the discrete integration of the empirical histogram. Please note how the choice of the method influences the value of VaR. The derived numbers tell the risk manager that in 95% of cases one should feel confident that the value of VaR_{0.05} will not be exceeded in trading IBM on Jan 4, 1988 (the next trading day). C\times VaR_{0.05} = 0.028C would be the loss amount (in dollars) if such an undesired event occurred. And if that were indeed the case, the expected loss (in dollars) we would potentially suffer would be C\times ES = 0.058C. Now it is easy to understand why the Black Swan event of Oct 19, 1987 falls into the category of the least likely moves in the markets. It places itself in the far left tail of both the empirical and the fitted distributions, denoting an impressive loss of 23.52% for IBM on that day. If we assume that the daily price changes are normally distributed, then the probability of a drop of at least 22 standard deviations is$$ P(Z\le -22) = \Phi(-22) = 1.4 \times 10^{-107} \ . $$This is an astronomically rare event!
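The empirical route boils down to a tail quantile and a conditional mean. Below is a hedged Python sketch of both estimates, together with the 22-sigma tail probability; simulated returns stand in for the IBM series, and the variable names are mine:

```python
import math
import numpy as np

# stand-in daily returns (the post uses IBM 1987 data; here N(0, 0.02) draws)
rng = np.random.default_rng(7)
r = rng.normal(0.0, 0.02, 250)

alpha = 0.05
var = -np.quantile(r, alpha)   # 95% VaR, quoted as a positive number
es = -r[r <= -var].mean()      # mean loss beyond the VaR threshold
print(var, es)                 # es >= var by construction

# probability of a drop of at least 22 standard deviations under normality
p = 0.5 * math.erfc(22.0 / math.sqrt(2.0))  # Phi(-22), about 1.4e-107
print(p)
```

The sample-quantile estimator sidesteps the histogram binning used in the Matlab listing, which is why the two empirical numbers can differ slightly for the same data.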
They say that lightning does not strike twice, but if a 1-day drop of the same magnitude had occurred again in the market on Jan 4, 1988, our long position in IBM would have caused us to suffer not a loss of 0.058C dollars but approximately 0.23C. Ouch! ## Coskewness and Cokurtosis Computation for Portfolio Managers Suppose you are responsible for the management of a current company's portfolio P consisting of N securities (e.g. equities) traded actively on NASDAQ. As a portfolio manager you are probably interested in portfolio performance, new asset allocation, and risk control. Therefore, your basic set of statistics should include the portfolio expected return, \mu_P, and the portfolio standard deviation, \sigma_P, as two key numbers you want to calculate every time you evaluate your portfolio performance. Coskewness, s_P, and cokurtosis, k_P, are two additional multivariate higher moments (co-moments) important in the asset allocation process and portfolio management. Similarly to the fundamental meaning of a sample's skewness and kurtosis, the coskewness and cokurtosis provide a portfolio manager with the ability to test the same portfolio under different compositions in order to facilitate the changes required to be introduced (e.g. the elimination of some allocations in badly performing securities causing a more pronounced negative coskewness). In this post I will provide the basic matrix algebra covering the computational solution for the four portfolio statistics, with a Matlab code example as a practical implementation of mathematics in action. Expected Return on Portfolio Let P be our portfolio containing N securities K_i such that P=\{K_1,K_2,…,K_N\} and i=1,…,N. Each security has a trading history over the last T days (or months, or years) and we are looking at its T-1 rates of return r_t for t=1,…,T-1.
Therefore, let us denote by K_i a column vector of dimensions (T-1,1) such that:$$ K_i = [r_1,r_2,…,r_{T-1}]^T\ , $$and the portfolio matrix, P, containing all securities as:$$ P = [K_1\ |\ K_2\ |\ …\ |\ K_N ] = \left[ \begin{array}{cccc} r_{1,1} & r_{1,2} & … & r_{1,N} \\ r_{2,1} & r_{2,2} & … & r_{2,N} \\ … & … & … & … \\ r_{T-1,1} & r_{T-1,2} & … & r_{T-1,N} \end{array} \right] \ . $$Our portfolio can be described in terms of its weights:$$ w_i = \frac{y_iS_i(0)}{V(0)} $$where the numerator describes the product of the number of shares and the stock purchase price in ratio to the amount initially invested in the portfolio. Rewriting it into a one-row matrix we have:$$ w = [\ w_1\ \ w_2\ \ … \ \ w_N\ ] $$where, for an obvious reason, the sum of the individual weights is one, i.e. 1=uw^T for u=[\ 1\ \ 1\ \ …\ \ 1\ ]. For each security we can calculate its expected return \mu_i=E(K_i) and form the vector:$$ m = [\ \mu_1\ \ \mu_2\ \ … \ \ \mu_N\ ] \ . $$Finally, the expected return on portfolio P is given as:$$ \mu_P = E\left(\sum_{i=1}^{N} w_iK_i \right) = \sum_{i=1}^{N} w_i\mu_i = mw^T \ . $$Portfolio Variance By evaluation of the variance, we are able to show that the portfolio variance can be expressed as:$$ \sigma_P^2 = w M_2 w^T $$where M_2 is the covariance matrix (N,N) with the individual covariances denoted by c_{ij}=Cov(K_i,K_j). At this point, each element of M_2 could be calculated as c_{ij}=\rho_{ij}\sigma_i\sigma_j, i.e. with a knowledge of the coefficient of correlation, \rho_{ij}, between securities i and j, and the corresponding standard deviations calculated from the history provided by the P matrix.
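As a quick numerical cross-check of \mu_P=mw^T and \sigma_P^2=wM_2w^T, here is a hedged Python/numpy sketch on simulated returns (the post's full worked example further below is in Matlab; the data and names here are mine):

```python
import numpy as np

# toy returns matrix P of shape (T-1, N): rows are periods, columns are securities
rng = np.random.default_rng(1)
P = rng.normal(0.001, 0.02, size=(251, 3))
w = np.array([0.2, 0.5, 0.3])   # portfolio weights, summing to 1

m = P.mean(axis=0)              # vector of expected returns, one per security
M2 = np.cov(P, rowvar=False)    # covariance matrix M2 of shape (N, N)

mu_P = float(m @ w)             # expected portfolio return,  m w^T
var_P = float(w @ M2 @ w)       # portfolio variance,         w M2 w^T
sig_P = float(np.sqrt(var_P))
print(mu_P, sig_P)
```

np.cov uses the same (T-2) denominator as the c_{ij} estimator discussed next, so the two routes agree to numerical precision.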
Just as an educational remark, let me remind you that the elements c_{ij} can be obtained in the following way:$$ c_{ij} = \frac{1}{(T-1)-1} \sum_{t=1}^{T-1} (K_{t,i}-\mu_i)(K_{t,j}-\mu_j) \ , $$therefore$$ M_2 = \left[ \begin{array}{cccc} c_{11} & c_{12} & … & c_{1N} \\ c_{21} & c_{22} & … & c_{2N} \\ … & … & … & … \\ c_{N1} & c_{N2} & … & c_{NN} \end{array} \right] . $$Portfolio Coskewness It can be calculated as the product:$$ s_P = w M_3 (w^T\otimes w^T) $$where the symbol \otimes denotes the Kronecker product between w and w, and$$ M_3 = E[(r-\mu)(r-\mu)^T\otimes(r-\mu)^T] = \{a_{ijk}\} \ , \\ a_{ijk} = E[(r_i-\mu_i)(r_j-\mu_j)(r_k-\mu_k)] \ \ \mbox{for}\ \ i,j,k=1,…,N. $$The coskewness matrix M_3 of dimensions (N,N^2) can be represented by N matrices A_{ijk} of dimensions (N,N) such that:$$ M_3 = [\ A_{1jk}\ A_{2jk}\ …\ A_{Njk}\ ] $$where j,k=1,…,N for each index i. The individual elements of the matrix can be obtained as:$$ a_{ijk} = \frac{1}{T-1} \sum_{t=1}^{T-1} (K_{t,i}-\mu_i)(K_{t,j}-\mu_j)(K_{t,k}-\mu_k) \ . $$Portfolio Cokurtosis Cokurtosis can be obtained by calculation of the following products:$$ k_P = w M_4 (w^T\otimes w^T\otimes w^T) $$where$$ M_4 = E[(r-\mu)(r-\mu)^T\otimes(r-\mu)^T\otimes(r-\mu)^T] = \{b_{ijkl}\} \ , \\ b_{ijkl} = E[(r_i-\mu_i)(r_j-\mu_j)(r_k-\mu_k)(r_l-\mu_l)] \ \ \mbox{for}\ \ i,j,k,l=1,…,N. $$The cokurtosis matrix M_4 of dimensions (N,N^3) can be represented by N^2 matrices B_{ijkl} of dimensions (N,N) such that:$$ M_4 = [\ B_{11kl}\ B_{12kl}\ …\ B_{1Nkl}\ |\ B_{21kl}\ B_{22kl}\ …\ B_{2Nkl}\ |\ …\ |\ B_{N1kl}\ B_{N2kl}\ …\ B_{NNkl}\ ] $$where k,l=1,…,N. The individual elements of the matrix can be obtained as:$$ b_{ijkl} = \frac{1}{T-1} \sum_{t=1}^{T-1} (K_{t,i}-\mu_i)(K_{t,j}-\mu_j)(K_{t,k}-\mu_k)(K_{t,l}-\mu_l) \ . $$Example Below I present a simple Matlab code which computes the four key portfolio statistics discussed above, based on N=3 securities traded within a given P.
The choice of portfolio content is not solely limited to equities; P can contain any asset class under management, or even a mixture of them. Since the expected portfolio return and risk, \mu_P and \sigma_P, respectively, are assumed to be expressed in percent, their transformation into a physical unit of money is straightforward.

```matlab
% Program demonstrates the calculation of four key portfolio statistics:
% the expected portfolio return (muP), portfolio standard deviation (sigP),
% portfolio coskewness (sP), and portfolio cokurtosis (kP).
%
% (c) 2013, QuantAtRisk.com, by Pawel Lachowicz
%
% Assumptions: * portfolio comprises 252 trading days, i.e. T=252-1 values
%                of daily returns
%              * daily returns are simulated from the normal distribution
%                with zero mean and unit standard deviation (in practice
%                one can use real market data)
%              * N=3, number of securities in portfolio P
%
% Remark: the code can be easily modified for any N securities considered
%         in your portfolio P; note that the calculation of all four
%         statistics can also be performed by risk managers who manage a
%         portfolio P of portfolios P_i such that P=[P_1 P_2 ... P_N]

clear all; clc; close all;

T=252;

% Simulated returns for 3 securities X,Y,Z expressed in [%]
X=random('normal',0,1,T-1,1);
Y=random('normal',0,1,T-1,1);
Z=random('normal',0,1,T-1,1);

% Portfolio matrix
P=[X Y Z];

% Portfolio weights for all securities (sum=1)
w=[0.2 0.5 0.3];

% m-vector containing the expected returns of securities
m=mean([P(:,1) P(:,2) P(:,3)]);

% Computation of M2, the covariance matrix
S=[];
for i=1:size(P,2)
    for j=1:size(P,2)
        u=0;
        for t=1:T-1
            u=u+((P(t,i)-mean(P(:,i)))*(P(t,j)-mean(P(:,j))));
        end
        S(i,j)=u/((T-1)-1);
    end
end
M2=S;

% Computation of M3, the coskewness matrix
M3=[];
for i=1:size(P,2)
    S=[];
    for j=1:size(P,2)
        for k=1:size(P,2)
            u=0;
            for t=1:T-1
                u=u+((P(t,i)-mean(P(:,i)))*(P(t,j)-mean(P(:,j))) ...
                    *(P(t,k)-mean(P(:,k))));
            end
            S(j,k)=u/(T-1);
        end
    end
    M3=[M3 S];
end

% Computation of M4, the cokurtosis matrix
M4=[];
for i=1:size(P,2)
    for j=1:size(P,2)
        S=[];
        for k=1:size(P,2)
            for l=1:size(P,2)
                u=0;
                for t=1:T-1
                    u=u+((P(t,i)-mean(P(:,i)))*(P(t,j)-mean(P(:,j)))* ...
                        (P(t,k)-mean(P(:,k)))*(P(t,l)-mean(P(:,l))));
                end
                S(k,l)=u/(T-1);
            end
        end
        M4=[M4 S];
    end
end

% Results
muP=m*w'                      % expected return on a portfolio [%]
sigP=sqrt(w*M2*w')            % portfolio standard deviation [%]
sP=w*M3*kron(w',w')           % portfolio coskewness
kP=w*M4*kron(kron(w',w'),w')  % portfolio cokurtosis
```

## Measure of Risk of any Security in Equilibrium

The standard Capital Asset Pricing Model (CAPM) is an equilibrium model whose construction allows us to determine the relevant measure of risk for any asset, and the relationship between expected return and risk, when markets are in equilibrium. Within CAPM the expected return on security i is given by:
$$ \bar{R}_i = R_F + \beta_i(\bar{R}_M-R_F) $$
where \beta_i is argued to be the correct measure of a security’s risk in very well-diversified portfolios. For such portfolios the non-systematic risk tends to zero and the remainder, the systematic risk, is measured by \beta_i.
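As a quick numerical illustration, \beta_i can be estimated from return series as the ratio of the covariance of the security with the market to the market variance, and then plugged into the CAPM formula above. A minimal Python sketch with simulated (purely hypothetical) daily returns; the rates R_F and R_M are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily return series for the market M and a security i
# (simulated here; in practice one would use real market data).
T = 252
r_m = rng.normal(0.0005, 0.010, T)
r_i = 0.8 * r_m + rng.normal(0.0, 0.005, T)   # constructed so beta is near 0.8

# Beta of security i: covariance with the market over the market variance
beta_i = np.cov(r_i, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)

# CAPM expected return, with assumed (hypothetical) annualized rates:
# R_F -- riskless rate of interest, R_M -- expected market return
R_F, R_M = 0.03, 0.08
R_bar_i = R_F + beta_i * (R_M - R_F)

print(round(beta_i, 2))
print(round(R_bar_i, 3))
```

With 0 < \beta_i < 1 the security’s expected return lands, as it should, between the riskless rate and the expected market return.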
For the Market Portfolio its Beta is equal to one. Therefore, the aforementioned equation defines the security market line. It is possible to go one step further and write the same CAPM formula as follows:
$$ \bar{R}_i = R_F + \left( \frac{\bar{R}_M-R_F}{\sigma_M} \right) \frac{\sigma_{iM}}{\sigma_M} $$
which preserves the linear relationship for the expected return, but now in \sigma_{iM}/\sigma_M space. Recall that the term in parentheses is the market price of risk given the riskless rate of interest R_F. \sigma_{iM} denotes the measure of the risk of security i, but when written as \sigma_{iM}/\sigma_M it offers a more intuitive interpretation: it measures how the risk of a security affects the risk of the market portfolio. In other words, \sigma_{iM}/\sigma_M is the measure of risk of any security in equilibrium and, as we show below, it is equal to:
$$ \frac{\sigma_{iM}}{\sigma_M} = \frac{X_i\sigma_i^2 + \sum_{j=1, j\ne i}^{N} X_j \sigma_{ij}} {\sigma_M} . $$
We get that by calculating the first derivative of the standard deviation of the market portfolio \sigma_M, i.e. for
$$ \sigma_M^2 = \sum_{i=1}^{N} X_i^2\sigma_i^2 + \sum_{i=1}^N \sum_{j=1, j\ne i}^{N} X_iX_j \sigma_{ij} $$
the risk of a security is the change in the risk of the market portfolio as the holdings of that security are varied:
$$ \frac{d\sigma_M}{dX_i} = \frac{d}{dX_i} \left[ \sum_{i=1}^{N} X_i^2\sigma_i^2 + \sum_{i=1}^N \sum_{j=1, j\ne i}^{N} X_iX_j\sigma_{ij} \right]^{1/2} \\ = \frac{1}{2} \left[ 2X_i\sigma_i^2 + 2 \sum_{j=1, j\ne i}^{N} X_j\sigma_{ij} \right] \left[ \sum_{i=1}^{N} X_i^2\sigma_i^2 + \sum_{i=1}^N \sum_{j=1, j\ne i}^{N} X_iX_j\sigma_{ij} \right]^{-1/2} \\ = \frac{X_i\sigma_i^2 + \sum_{j=1, j\ne i}^{N} X_j \sigma_{ij} } {\sigma_M} = \frac{\sigma_{iM}}{\sigma_M} . $$
Above, \sigma_{ij} denotes the covariance between securities i and j in the market portfolio, and X_i is the share of capital invested in security i. Interestingly, the only remaining question is: how do we choose the market portfolio? I will try to address the research one may conduct on that topic in a separate post. For now, one may assume that the S&P500 or S&P900 universe of stocks serves as a quite good example. If so, that would also mean that in order to find \sigma_{iM}/\sigma_M for any security we invested our money in, we need to find \sigma_M. Is it hard? Well, if we assume the market to be the S&P500, then for any i its X_i=1/500. Now, all we need is an idea of the security’s variance \sigma_i^2 and the measure of covariance, i.e. how two securities co-move in our market portfolio. A clever reader would ask immediately: but what about the time-window? And that’s where the devil conceals itself.

## Final Frontier of Efficient Frontier

Have you ever considered why, in the risk-return space, the efficient frontier takes a shape resembling some of the Bézier curves? And why does everybody suggest that if you approach the efficient frontier from the right-hand side, your selection of a new investment could be the best choice? Well, as for the latter question, the answer seems to be easy.
First, you plot all your stocks, or combinations of stocks forming various portfolios (i.e. you derive the expected return and volatility of a given stock or portfolio, respectively; check out my article on 2-Asset Portfolio Construction for some fundamentals in that field), in the expected risk-return space. Your result can be similar to the plot that follows, where red dots correspond to specific positions of securities which you actually consider for a future investment. The data in the plot are annualized, which simply means we are looking at the potential return and risk (volatility) in a 1-year time horizon only.

Secondly, from the plot you can read out that UTX, based on historical data, offers the highest potential return but at quite high risk. If you are not a risk-taker, the plot immediately suggests choosing MMM or JNJ, but at a lower expected return. So far so good.

The blue line represents the so-called efficient frontier in the risk-return space. The investor wants to choose a portfolio on the efficient frontier because, for each volatility, there is a portfolio on the efficient frontier that has a higher rate of return than any other portfolio with the same volatility. In addition, by holding a portfolio on the efficient frontier the investor benefits from diversification (see also Riskless Diversification). That brings us to an attempt at answering the very first question we posed at the beginning, about the shape, therefore about:

THE MATHEMATICS OF EFFICIENT FRONTIER

As we will show below, the efficient frontier is a solution to the problem of efficient selection between two securities. It is sufficient to conduct the basic calculation based on two assets considered together. For a higher number of potential securities or portfolios of securities (as presented in the plot above), finding the efficient frontier line is subject to more advanced optimization.
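Before diving into the algebra, the shape of the frontier can be previewed numerically: for two assets one simply scans the weight x_A over [0,1] and records the resulting (volatility, return) pairs for a chosen correlation. A short Python sketch, where the expected returns and volatilities are illustrative assumptions:

```python
import numpy as np

# Illustrative (assumed) annualized parameters of two assets A and B
RA, RB = 0.13, 0.26      # expected returns
sA, sB = 0.30, 0.60      # volatilities


def two_asset_curve(rho, n=101):
    """(volatility, return) pairs of the 2-asset portfolio as x_A runs over [0,1]."""
    xA = np.linspace(0.0, 1.0, n)
    ret = xA * RA + (1.0 - xA) * RB
    var = (xA**2 * sA**2 + (1 - xA)**2 * sB**2
           + 2.0 * xA * (1 - xA) * sA * sB * rho)
    return np.sqrt(var), ret


# The minimum attainable risk shrinks as the correlation decreases;
# for rho = -1 a (nearly) riskless mix exists around x_A = sB/(sA+sB)
for rho in (1.0, 0.4, 0.0, -1.0):
    vol, ret = two_asset_curve(rho)
    print(rho, vol.min())
```

Plotting ret against vol for each rho reproduces the family of curves that the four cases below derive analytically.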
Let’s start from the simple case of two securities, A and B, respectively. In our consideration we will omit the scenario where short sales are allowed. The expected portfolio return would be:
$$ \bar{R}_P = x_A\bar{R}_A + x_B\bar{R}_B $$
where x_A+x_B=1 represents the sum of shares (weights); x_B=1-x_A simply means that our whole initial capital has been fully invested in both securities, and we first decided to put a fraction x_A of our money into security A (say, x_A=0.75 or 75%). By rewriting the above we get:
$$ \bar{R}_P = x_A\bar{R}_A + (1-x_A)\bar{R}_B . $$
The volatility (standard deviation) of that portfolio is therefore equal to:
$$ \sigma_P = [x_A^2\sigma_A^2 + (1-x_A)^2\sigma_B^2 + 2\sigma_A\sigma_B\rho_{AB}x_A(1-x_A)]^{1/2} $$
where by \rho_{AB} we denote the linear correlation between the rates of return of both securities, also written as corr(\bar{R}_A,\bar{R}_B). Please also note that the third term can be jotted down as:
$$ 2\sigma_A\sigma_B\rho_{AB}x_A(1-x_A) = 2x_A(1-x_A)cov(\bar{R}_A,\bar{R}_B) $$
where the relation between covariance and correlation holds as follows:
$$ cov(\bar{R}_A,\bar{R}_B) = \sigma_A\sigma_B\, corr(\bar{R}_A,\bar{R}_B) . $$
The correlation remains a free parameter once we are given, from the historical data of securities A and B, the parameters \sigma_A, \sigma_B and \bar{R}_A, \bar{R}_B. It is important to keep this in mind! Below, we consider four cases leading us to the final mathematical curvature of the efficient frontier.

CASE 1: Perfect Positive Correlation (\rho_{AB}=+1)

Considering two assets only, A and B, respectively, the perfect positive correlation of \rho_{AB}=+1 leads us to the portfolio volatility:
$$ \sigma_P = [x_A^2\sigma_A^2 + (1-x_A)^2\sigma_B^2 + 2\sigma_A\sigma_B\times 1 \times x_A(1-x_A)]^{1/2} $$
where we recognize a^2+b^2+2ab=(a+b)^2, which allows us to rewrite it as:
$$ \sigma_P = x_A\sigma_A + (1-x_A)\sigma_B \ \ \mbox{for}\ \ \rho_{AB}=+1 . $$
Solving for x_A we get:
$$ x_A = \frac{\sigma_P-\sigma_B}{\sigma_A-\sigma_B} , $$
i.e. x_A is linear in \sigma_P. Substituting that into the formula for the expected return of the 2-asset portfolio, we arrive at:
$$ \bar{R}_P = x_A\bar{R}_A + (1-x_A)\bar{R}_B \\ = \frac{\sigma_P-\sigma_B}{\sigma_A-\sigma_B}\bar{R}_A + \left(1-\frac{\sigma_P-\sigma_B}{\sigma_A-\sigma_B}\right)\bar{R}_B \\ = \sigma_P\left(\frac{\bar{R}_A-\bar{R}_B}{\sigma_A-\sigma_B}\right) - \sigma_B\left(\frac{\bar{R}_A-\bar{R}_B}{\sigma_A-\sigma_B}\right)+\bar{R}_B . $$
Therefore, the solution for this case is a linear relation between the expected portfolio return and the portfolio volatility when the portfolio is constructed from two perfectly positively correlated assets:
$$ \bar{R}_P(\sigma_P) \propto \sigma_P . $$

CASE 2: Perfect Negative Correlation (\rho_{AB}=-1)

In this case, we can denote the portfolio volatility as:
$$ \sigma_P = [x_A^2\sigma_A^2 + (1-x_A)^2\sigma_B^2 + 2\sigma_A\sigma_B\times (-1) \times x_A(1-x_A)]^{1/2} $$
where, similarly to Case 1, we recognize a^2+b^2-2ab=(a-b)^2, which leads us to two possible solutions, namely:
$$ \sigma_P = x_A\sigma_A - (1-x_A)\sigma_B \ \ \ \ \ \ \ \mbox{or} \\ \sigma_P = -x_A\sigma_A + (1-x_A)\sigma_B \ . $$
Since \sigma_P must always be non-negative, one of the solutions holds at any time. Please also note that \sigma_P(Case 2) is always less than \sigma_P(Case 1) for all x_A\in\langle 0,1\rangle. That allows us to formulate a very important note: the risk of the portfolio with \rho_{AB}=-1 is smaller than that of the portfolio with \rho_{AB}=+1. Interesting, isn’t it? Hold on and check this out! It is possible to obtain a portfolio with zero risk by setting \sigma_P(Case 2) to zero and solving for x_A (i.e.
what fraction of our money we need to invest in asset A to minimize the risk), as follows:
$$ x_A\sigma_A - (1-x_A)\sigma_B = 0 \\ x_A\sigma_A - \sigma_B + x_A\sigma_B = 0 \\ x_A(\sigma_A+\sigma_B) = \sigma_B $$
which provides us with the solution:
$$ x_A = \frac{\sigma_B}{\sigma_A+\sigma_B} . $$
It simply means that for such a value of x_A the minimal risk of the portfolio occurs, i.e. \sigma_P=0.

CASE 3: No Correlation between returns on the assets (\rho_{AB}=0)

In this case the cross-term vanishes, leaving us with the simplified form of the portfolio volatility:
$$ \sigma_P = [x_A^2\sigma_A^2 + (1-x_A)^2\sigma_B^2 ]^{1/2} . $$
In the graphical presentation, the point marked by a dot is peculiar: this portfolio has minimum risk. It is possible to derive for what value of x_A this is achievable. In order to do so, for the general form of \sigma_P,
$$ \sigma_P = [x_A^2\sigma_A^2 + (1-x_A)^2\sigma_B^2 + 2x_A(1-x_A)\sigma_A\sigma_B\rho_{AB}]^{1/2} , $$
we find x_A by calculating the first derivative,
$$ \frac{d{\sigma_P}}{d{x_A}} = \left(\frac{1}{2}\right) \frac{2x_A\sigma_A^2-2\sigma_B^2+2x_A\sigma_B^2+2\sigma_A\sigma_B\rho_{AB}-4x_A\sigma_A\sigma_B\rho_{AB}} {[x_A^2\sigma_A^2+(1-x_A)^2\sigma_B^2+2x_A(1-x_A)\sigma_A\sigma_B\rho_{AB}]^{1/2}} \\ = \left(\frac{1}{2}\right) \frac{2x_A\sigma_A^2-2\sigma_B^2+2x_A\sigma_B^2+2\sigma_A\sigma_B\rho_{AB}-4x_A\sigma_A\sigma_B\rho_{AB}}{\sigma_P} , $$
and by setting it to zero,
$$ \frac{d{\sigma_P}}{d{x_A}} = 0 , $$
we get:
$$ x_A = \frac{\sigma_B^2-\sigma_A\sigma_B\rho_{AB}}{\sigma_A^2+\sigma_B^2-2\sigma_A\sigma_B\rho_{AB}} $$
which in the case of \rho_{AB}=0 reduces to:
$$ x_A = \frac{\sigma_B^2}{\sigma_A^2+\sigma_B^2} . $$
This solution is also known as the minimum variance portfolio of two assets combined in a portfolio.

CASE 4: All other correlations between two assets (e.g.
\rho_{AB}=0.4)

Given \sigma_A and \sigma_B at \rho_{AB} not equal to 0, -1 or +1, one can find that the formula for the volatility of the 2-asset portfolio takes, in general, the form:
$$ \sigma_P = \sqrt{c_1x_A^2 + c_2x_A + c_3} $$
where c_1, c_2 and c_3 denote coefficients depending on \sigma_A, \sigma_B and \rho_{AB}. Plotted in the risk-return space, the curve is concave, with one particular point of interest: the minimum variance portfolio. Since the investor always chooses the combination offering a bigger rate of return (say, \bar{R}_A\gt\bar{R}_B) at lower risk (\sigma_B\lt\sigma_A), the set of best portfolios obtained by following these rules is called the efficient frontier, in general represented by Case 4’s curve.

## 2-Asset Portfolio Construction

When you start your journey in the world of finance and money investment, sooner or later you hit the concept of a portfolio. The portfolio is simply a term used to briefly describe a set of assets you hold, or plan to hold, in order to make a profit in a given time horizon. If you are a fan of stocks, your desire is to purchase some shares of the company you like or you anticipate will bring you a profit. Make a note here. I said the company. Singular, not plural. If so, having some initial capital set aside for a particular choice of yours, say C=\$100,000, you decided to buy IBM stock at \$100 per share. Great! You own a fraction of the IBM company. Regardless of whether you are a beginner in investing/trading or more advanced in that field, your purchase of IBM shares is now called a long position, i.e. you expect the value of the stock to go up. We will skip the scenario of short-selling (i.e. making a profit when the price of the stock goes down) for the simplicity of this article. All right, you waited a couple of days and, luckily, two wonderful events took place in the meantime.
Firstly, the IBM stock is no longer worth \$100 (the demand for that stock was so high that there were more buyers wanting to purchase a piece of IBM, and their buying demand pushed the stock price higher), and now IBM is worth \$120 per share. And guess what?! You can’t believe it! You’re smiling just because now you can sell your part (share) of the IBM company to someone else (a buyer who is now willing to pay \$120 for the same share of IBM, i.e. \$20 more!). So, you do it! Secondly, in the meantime, the IBM company decided that a part of its net income would be distributed (given to, divided among) all shareholders in the form of dividends, the value of which had been announced by IBM to be \$5 per share held. As a consequence of these two events, your net return turned out to be equal to:$$ R = \frac{S_t+D_t}{S_{t-1}} - 1 = \frac{\$120+\$5}{\$100} - 1 = 0.25 \ \ \mbox{or} \ \ 25\%
$$which in terms of your capital gain simply means you became richer by an extra \$25,000, according to$$
\mbox{Capital updated} \ = C(1+R) = \$100,000(1+0.25) = \$125,000 .
$$To be completely honest, this situation would take place only under one condition regarding the market’s operations of buying and selling the IBM stock. You may ask: what’s that? It is the assumption of perfect financial market(s): no taxes, no transaction costs, no costs of writing and enforcing contracts, no restrictions on investment in securities, no differences in information across investors, and investors who take prices as given because they are too small to affect them by their buying/selling (i.e. there exists good liquidity of the stock in the market). Oh, it’s a rough assumption! It is, indeed. But it has been assumed with the clear purpose of simplifying the basic understanding of what you can gain/lose in the process of trading. For a curious reader, I refer to my article Number of Shares for Limit Orders, which provides a broader feeling for how the extra fees of trading operations influence the initial capital, C, that you wish to invest in a single stock.

Let’s now consider that you want to split your initial capital C between two choices: IBM and NVDA (NVIDIA Corporation). From some resources you have found out that the latter stock is more volatile than IBM (i.e. its price jumps more rapidly; it’s more risky), therefore you decided to invest:
1. \$75,000 in IBM (0.75 share of the portfolio),
2. \$25,000 in NVDA (0.25 share of the portfolio).
Now, also having the information that the expected return in a 1-year horizon equals 13% and 26% for IBM and NVDA, respectively, and that their corresponding volatilities are 30% and 60%, we may first calculate the fundamental portfolio measures based on these two assets (stocks) we wish to hold (invest in and hold).

The Expected Portfolio (P) Return$$
E(R_P) = \sum_{i=1}^{N} w_iE(R_i)
$$which in our case study gives the following expectation for two assets (N=2),$$
E(R_P) = 0.75\times 0.13+0.25\times 0.26 = 0.1625 \ \ \mbox{or} \ \ 16.25\% .
$$The Volatility of the Return of the Portfolio

Before reaching the exact formula for the volatility of the return of our 2-asset portfolio, we need to consider the basic mathematics behind it. Firstly, keep in mind that for the variance of a weighted random variable R_i (here, the return of asset i),$$
Var(w_iR_i) = w_i^2 Var(R_i) .
$$Secondly, keep in mind a high-school formula of (a+b)^2=a^2+b^2+2ab. Got it? Great! Let’s move forward with the variance of the sum of two variables, say a and b:$$
Var(a+b) = E[a+b-E(a+b)]^2 \\
= E[a+b-E(a)-E(b)]^2 \\
= E[a-E(a)+b-E(b)]^2 \\
= E[a-E(a)]^2 + E[b-E(b)]^2 + 2E\{[a-E(a)][b-E(b)]\} .
$$Where does it take us? Well, to the conclusion of the day, i.e.:$$
Var(a+b) = Var(a) + Var(b) + 2Cov(a,b)
$$where Cov(a,b) is the covariance that measures how a and b move together. At this point, we wish to express the covariance through the correlation between the two variables a and b as:$$
Cov(a,b) = Vol(a)\times Vol(b)\times Corr(a,b)
$$which, for the same asset (variable) taken twice, perfectly correlated with itself (i.e. Corr(a,a)=1), returns:$$
Cov(a,a)=Vol(a)\times Vol(a)\times 1 = Var(a) .
$$Keeping all foregoing mathematics in mind, we may define the variance of the portfolio return as follows:$$
Var(P) = Var\left(\sum_{i=1}^{N} w_iR_i\right) = \sum_{i=1}^{N} w_i^2Var(R_i)+\sum_{i=1}^{N}\sum_{j=1,\ j\ne i}^{N} w_iw_jCov(R_i,R_j) ,
$$where again,$$
Cov(R_i,R_j) = \sqrt{Var(R_i)}\,\sqrt{Var(R_j)}\,Corr(R_i,R_j) = Vol(R_i)\,Vol(R_j)\,Corr(R_i,R_j) ,
$$which describes the relation between the covariance of the returns of assets i and j, their corresponding volatilities, and their mutual (linear) correlation, the latter lying within the [-1,1] interval. The formula for Var(P) is extremely important to memorize, as it provides the basic information on whether a combination of two assets is less or more risky than putting all our money into one basket. Therefore, 2-asset portfolio construction constitutes the cornerstone of a basic risk management approach for any investment which requires, in general, a division of the investor’s initial capital C among the N assets (investments) he or she wishes to take positions in. Coming back to our case study of IBM and NVDA investment opportunities combined into a 2-asset portfolio, we find that:$$
Var(P) = (0.75)^2(0.30)^2 + (0.25)^2(0.60)^2 + 2(0.75)(0.25)(0.30)(0.60)\times \\ Corr(\mbox{IBM},\mbox{NVDA}) = 0.1069
$$at an assumed, ad hoc, correlation of 0.5 between these two securities. Since$$
Vol(P)=\sqrt{Var(P)} ,
$$eventually we find that our potential investment in both the IBM and NVDA companies, i.e. the 2-asset portfolio, given their expected rates of return at the derived volatility levels for a 1-year time-frame, is:$$
E(P) \ \ \mbox{at}\ \ Vol(P) = 16.25\% \ \ \mbox{at}\ \ 32.69\% .
$$It turns out to be more rewarding (16.25\%) than investing solely in IBM (the expected return of 13\%), at a slightly higher risk (about a 3\% difference). We will come back to the concept of the 2-asset portfolio during the discussion of the risk-reward investigation for the N-asset investment case.

## Riskless Diversification

Diversification can be viewed by many as an attractive tool for limiting the risk of a given portfolio held by an investor. Dealing with numbers, one can show that if one holds N completely uncorrelated securities, the volatility of such a portfolio, Vol(P), decreases with an increasing number of securities N as follows:$$
Vol(P) = \left[\sum_{i=1}^{N} \left(\frac{1}{N}\right)^2 Var(y_i) \right]^{1/2} = \frac{Vol(y)}{\sqrt{N}} .
$$

What remains unspoken but important to keep in mind is that all securities $i$, where $i=1,…,N$, are assumed to have the same volatility and the same expected return. Under that assumption, by holding sufficiently many uncorrelated securities, one can drive the portfolio volatility arbitrarily close to zero.
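The 1/√N behaviour is easy to verify by simulation: generate N uncorrelated return series of equal volatility, combine them with equal weights 1/N, and compare the measured portfolio volatility with Vol(y)/√N. A Python sketch under exactly these assumptions (the volatility value and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.02          # common (assumed) volatility of every security
n_days = 100_000      # long sample so the volatility estimate is stable

for N in (1, 4, 25, 100):
    # N uncorrelated securities, combined with equal weights w_i = 1/N
    y = rng.normal(0.0, sigma, size=(n_days, N))
    vol_P = y.mean(axis=1).std(ddof=1)     # measured portfolio volatility
    print(N, vol_P, sigma / np.sqrt(N))    # empirical vs. theoretical Vol(P)
```

Each printed pair agrees closely, confirming that the equal-weight portfolio of uncorrelated securities has volatility Vol(y)/√N.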

That is the theory. The investor’s reality is usually far from this ideal situation, as portfolios usually contain a number of investments mutually connected or dependent upon one another in their response to market movements or sentiment. Thus, the portion of risk which cannot be eliminated by the diversification process remains as the systematic risk.