This part is awesome. Trust me! Previously, in Part 1, we examined two independent methods to estimate the probability of a very rare event (a heavy or extreme loss) that an asset could experience on the next day. The first was based on the computation of the tail probability, i.e.: given the upper threshold $L_{thr}$, find the corresponding $\alpha$ such that $\mbox{Pr}(L < -L_{thr}) = \alpha$, where the random variable $L$ denotes the daily asset return (in percent).

In this case, we showed that this approach was strongly dependent on the number of similar events in the past that would contribute to the changes of $\alpha$'s in time. The second method was based on the classical concept of conditional probability. For example, given that the index (benchmark, etc.) drops by $L'$, what is the probability that an asset will lose $L\le L'$? We found that this method fails if the benchmark never experienced a loss of $L'$.

In both cases, the lack of information on the occurrence of a rare event was the major problem. Therefore, is there any tool we can use to **predict an event that has not happened yet**?!

This post addresses an appealing solution and delivers a black-box tool that every portfolio holder can use as an integrated part of a risk-management computer system.

**1. Bayesian Conjugates**

Bayesian statistics treats the problem of finding the probability of an event a little differently than what we know from school. For example, if we flip a fair coin, in the long run we expect the same proportion of heads and tails. Formally, this is known as the frequentist interpretation. The Bayesian interpretation assumes nothing about the coin (whether it is fair or weighted) and, with every toss of the coin, builds up a picture of the underlying probability distribution. And it's fascinating because we can start with the assumption that the coin is made such that we should observe heads more frequently, whereas after $N$ tosses we may find that the coin is fair.

Let me skip the theoretical part of this aspect of Bayesian statistics, which has been beautifully explained by Mike Halls-Moore within his two articles (Bayesian Statistics: A Beginner’s Guide and Bayesian Inference Of A Binomial Proportion – The Analytical Approach), which I recommend you study before reading further.

Looking along the return-series for any asset (spanned over $M$ days), we may denote by $x_i = 1$ ($i=1,…,M$) a day when a rare event took place (e.g. an asset lost more than -16% or so) and by $x_i = 0$ a lack of such event. Treating days as a series of Bernoulli trials with unknown probability $p$, if we start with an **uninformed prior**, such as Beta($\alpha, \beta$) with $\alpha=\beta=1$ then the concept of **Bayesian conjugates** can be applied in order to update a prior belief. The update is given by:

$$
\alpha' = \alpha + n_e = \alpha + \sum_{i=1}^{N\le M} x_i \\
\beta' = \beta + N - n_e = \beta + N - \sum_{i=1}^{N\le M} x_i
$$ which simply means that after $N$ days from some initial moment in the past, having observed a rare event $n_e$ times, the probability distribution is given by Beta($\alpha', \beta'$). Therefore, the **posterior predictive mean** for the Bernoulli likelihood can be interpreted as the new probability of the event taking place on the next day:

$$

\mbox{Pr}(L < -L_{thr}) = \frac{\alpha'}{\alpha' + \beta'}
$$ And that's the key concept we are going to use now for estimating the probability of heavy or extreme losses that an asset can experience in trading.
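In code, both the event indicators $x_i$ and the conjugate update are one-liners. Here is a minimal sketch with a hypothetical five-day return-series and a flat Beta(1, 1) prior (the numbers are made up purely for illustration):

```python
import numpy as np

ret = np.array([0.012, -0.180, 0.004, -0.210, 0.031])  # hypothetical daily returns
Lthr = -0.16                      # rare-event threshold (-16%)
x = (ret < Lthr).astype(int)      # x_i = 1 marks a rare-event day
ne, N = x.sum(), len(x)

alpha0 = beta0 = 1                # uninformed prior: Beta(1, 1)
alpha1 = alpha0 + ne              # posterior alpha' = alpha + n_e
beta1 = beta0 + N - ne            # posterior beta'  = beta + N - n_e
pr = alpha1 / (alpha1 + beta1)    # posterior predictive mean for the next day
print(list(x), alpha1, beta1, pr)
```

Two of the five hypothetical returns fall below the threshold, so the prior Beta(1, 1) is updated to Beta(3, 4).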

As a quick visualisation, imagine you analyse the last 252 trading days of AAPL stock. The stock never lost more than 90% in one day; no such extreme event took place. According to our new method, the probability that AAPL will plummet 90% (or more) in the next trading session is not zero(!) but

$$

\mbox{Pr}(L < -90\%) = \frac{\alpha'}{\alpha' + \beta'} = \frac{1+0}{(1+0) + (1+252-0)} = \frac{1}{254} = 0.3937\%
$$ with 95% confidence interval:
$$
\left[ B^{-1}(0.05, \alpha', \beta');\ B^{-1}(0.95, \alpha', \beta') \right]
$$ where $B^{-1}$ denotes the percent point function (quantile function) of the Beta distribution.

The expected number of events corresponding to a loss of $L < -L_{thr}$ over the next 252 trading days follows as: $$ E(n_e) = \left\lceil 252\, B^{-1}(0.95, \alpha', \beta') \right\rceil $$ where $\lceil \cdot \rceil$ denotes the operation of rounding.
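Plugging the AAPL illustration's numbers ($N = 252$, $n_e = 0$) into these formulas makes a quick sanity check. A sketch using `scipy.stats.beta.ppf` as the Beta quantile function $B^{-1}$ (note I use `np.ceil` for the $\lceil \cdot \rceil$ above):

```python
import numpy as np
from scipy.stats import beta

N, ne = 252, 0                          # 252 trading days, no -90% event observed
alpha1, beta1 = 1 + ne, 1 + N - ne      # posterior: Beta(1, 253)
pr = alpha1 / (alpha1 + beta1)          # posterior predictive mean = 1/254
cl1 = beta.ppf(0.05, alpha1, beta1)     # lower bound of the 95% interval
cl2 = beta.ppf(0.95, alpha1, beta1)     # upper bound of the 95% interval
ne252 = np.ceil(252 * cl2)              # expected event count over the next year
print("Pr = %.4f%%  [%.4f%%, %.4f%%]  E(ne) = %g"
      % (pr*100, cl1*100, cl2*100, ne252))
```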

**2. Analysis of Facebook since its IPO**

Having a tool that incorporates information on whether an event took place or not is powerful. You may also notice that the more information we gather (i.e. $N\to\infty$), the closer we get to the “true” probability function. It also means that the spread around the new posterior predictive mean:

$$
\sigma = \sqrt{ \frac{\alpha' \beta'}{ (\alpha'+\beta')^2 (\alpha'+\beta'+1) } } \ \to 0 .
$$ It is logical that the analysis of an asset traded daily for the past 19 years will deliver a more trustworthy estimate of $\mbox{Pr}(L < -L_{thr})$ than one based on only a limited period of time. However, what if the stock is a newbie in the market?
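The shrinkage of $\sigma$ with growing $N$ is easy to demonstrate numerically. A small sketch, keeping $n_e = 0$ for simplicity (so $\alpha' = 1$ and $\beta' = 1 + N$):

```python
import numpy as np

def posterior_std(alpha1, beta1):
    """Standard deviation of a Beta(alpha1, beta1) posterior."""
    s = alpha1 + beta1
    return np.sqrt(alpha1 * beta1 / (s**2 * (s + 1)))

# grow the sample size and watch sigma shrink towards zero
for N in [252, 892, 5000, 50000]:
    print("N = %5d  sigma = %.6f" % (N, posterior_std(1, 1 + N)))
```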

Let’s investigate the case of Facebook (NASDAQ:FB). The record we have spans a bit over 3.5 years, back to Facebook’s initial public offering (IPO) on May 18th, 2012. From that day until today (December 6, 2015), the return-series (daily data) is $M=892$ points long. Using the analytical formulations given above and writing a Python code:

```python
# Predicting Heavy and Extreme Losses in Real-Time for Portfolio Holders (2)
# (c) 2015 Pawel Lachowicz, QuantAtRisk.com

import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt
import pandas_datareader.data as web

# Fetching Yahoo! Finance for Facebook
data = web.DataReader("FB", data_source='yahoo',
                      start='2012-05-18', end='2015-12-04')['Adj Close']
cp = np.array(data.values)   # daily adj-close prices
ret = cp[1:]/cp[:-1] - 1     # compute daily returns
N = len(ret)

# Plotting FB price- and return-series
plt.figure(num=2, figsize=(9, 6))
plt.subplot(2, 1, 1)
plt.plot(cp)
plt.axis("tight")
plt.ylabel("FB Adj Close [USD]")
plt.subplot(2, 1, 2)
plt.plot(ret, color=(.6, .6, .6))
plt.axis("tight")
plt.ylabel("Daily Returns")
plt.xlabel("Time Period [days]")
#plt.show()

# provide a threshold
Lthr = -0.21

# how many events of L < -21% occurred?
ne = np.sum(ret < Lthr)
# of what magnitude?
ev = ret[ret < Lthr]
# avgloss = np.mean(ev)  # if ev is a non-empty array

# prior
alpha0 = beta0 = 1
# posterior
alpha1 = alpha0 + ne
beta1 = beta0 + N - ne
pr = alpha1/(alpha1+beta1)
cl1 = beta.ppf(0.05, alpha1, beta1)
cl2 = beta.ppf(0.95, alpha1, beta1)
ne252 = np.round(252/(1/cl2))

print("ne = %g" % ne)
print("alpha', beta' = %g, %g" % (alpha1, beta1))
print("Pr(L < %3g%%) = %5.2f%%\t[%5.2f%%, %5.2f%%]" %
      (Lthr*100, pr*100, cl1*100, cl2*100))
print("E(ne) = %g" % ne252)
```

first we visualise both the price- and return-series:

and from the computations we find:

```
ne = 0
alpha', beta' = 1, 893
Pr(L < -21%) =  0.11%	[ 0.01%,  0.33%]
E(ne) = 1
```

There is no record within 892 days of FB trading, $n_e = 0$, of the stock losing 21% or more in a single day. However, the probability that such an event may happen at the end of the next NASDAQ session is 0.11%: non-zero, but nearly zero.

We can quickly verify the probability estimate returned by our Bayesian method by changing the threshold from -0.21 to *Lthr* = 0. Then, we get:

```
alpha', beta' = 422, 472
Pr(L < 0%) = 47.20%	[44.46%, 49.95%]
E(ne) = 126
```

i.e. a 47.2% chance that the stock will close lower on the next day. It is in excellent agreement with what can be calculated directly from the return-series, i.e. there were 421 days with negative daily returns, thus $421/892 = 0.4719$. Really cool, don’t you think?!
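This agreement is no accident: with a flat Beta(1, 1) prior, the posterior predictive mean $(1+n_e)/(2+N)$ differs from the raw frequency $n_e/N$ only by the two pseudo-counts of the prior. A quick check with the numbers above:

```python
ne, N = 421, 892             # negative-return days among all FB daily returns
freq = ne / N                # frequentist estimate: 0.4719...
bayes = (1 + ne) / (2 + N)   # Bayesian posterior predictive mean, Beta(1,1) prior
print("frequentist = %.4f   Bayesian = %.4f" % (freq, bayes))
```

For large $N$ the two estimates converge; the Bayesian one merely remains well-defined even when $n_e = 0$.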

Now, imagine for a second that after the first few days since the Facebook IPO we recalculate the posterior predictive mean, day-by-day, eventually reaching the most current estimate of the probability that the stock may lose -21% or more at the end of the next trading day. Such a procedure allows us to plot the change of $\mbox{Pr}(L < -L_{thr})$ over the whole trading history of FB. We achieve that by running a modified version of the previous code:

```python
import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt
import pandas_datareader.data as web

data = web.DataReader("FB", data_source='yahoo',
                      start='2012-05-18', end='2015-12-05')['Adj Close']
cp = np.array(data.values)    # daily adj-close prices
returns = cp[1:]/cp[:-1] - 1  # compute daily returns

Lthr = -0.21

Pr = CL1 = CL2 = np.array([])
for i in range(2, len(returns)):
    # expanding data window
    ret = returns[0:i]
    N = len(ret)
    ne = np.sum(ret < Lthr)
    # prior
    alpha0 = beta0 = 1
    # posterior
    alpha1 = alpha0 + ne
    beta1 = beta0 + N - ne
    pr = alpha1/(alpha1+beta1)
    cl1 = beta.ppf(0.05, alpha1, beta1)
    cl2 = beta.ppf(0.95, alpha1, beta1)
    #
    Pr = np.concatenate([Pr, np.array([pr*100])])
    CL1 = np.concatenate([CL1, np.array([cl1*100])])
    CL2 = np.concatenate([CL2, np.array([cl2*100])])

plt.figure(num=1, figsize=(10, 5))
plt.plot(Pr, "k")
plt.plot(CL1, "r--")
plt.plot(CL2, "r--")
plt.axis("tight")
plt.ylim([0, 5])
plt.grid(True)
plt.show()
```

which displays:

i.e. after 892 days we reach the value of 0.11% as found earlier. The black line tracks the change of the probability estimate while both red dashed lines denote the 95% confidence interval. The black line is smooth: because of the lack of a -21% event in the past, the recalculated probability never changed its value “drastically”.

This is not the case when we consider the scenario for *Lthr* = -0.11, delivering:

which stays in agreement with:

```
ne = 1
alpha', beta' = 2, 892
Pr(L < -11%) =  0.22%	[ 0.04%,  0.53%]
E(ne) = 1
```

and confirms that only one daily loss of $L < -11\%$ took place during the entire (so far happy) life of Facebook on NASDAQ.

Since we can also count the number of losses falling within a given interval, we have an opportunity to estimate:

$$

\mbox{Pr}(L_1 \le L < L_2)
$$ in the following way:

```python
import numpy as np
from scipy.stats import beta
import pandas_datareader.data as web

data = web.DataReader("FB", data_source='yahoo',
                      start='2012-05-18', end='2015-12-05')['Adj Close']
cp = np.array(data.values)  # daily adj-close prices
ret = cp[1:]/cp[:-1] - 1    # compute daily returns
N = len(ret)

dL = 2  # bin width in percent
for k in range(-50, 0, dL):
    ev = ret[(ret >= k/100) & (ret < (k/100 + dL/100))]
    avgloss = 0
    ne = np.sum((ret >= k/100) & (ret < (k/100 + dL/100)))
    #
    # prior
    alpha0 = beta0 = 1
    # posterior
    alpha1 = alpha0 + ne
    beta1 = beta0 + N - ne
    pr = alpha1/(alpha1+beta1)
    cl1 = beta.ppf(0.05, alpha1, beta1)
    cl2 = beta.ppf(0.95, alpha1, beta1)
    if len(ev) > 0:
        avgloss = np.mean(ev)
        print("Pr(%3g%% < L < %3g%%) =\t%5.2f%%\t[%5.2f%%, %5.2f%%] ne = %4g"
              " E(L) = %.2f%%" %
              (k, k+dL, pr*100, cl1*100, cl2*100, ne, avgloss*100))
    else:
        print("Pr(%3g%% < L < %3g%%) =\t%5.2f%%\t[%5.2f%%, %5.2f%%] ne = "
              "%4g" % (k, k+dL, pr*100, cl1*100, cl2*100, ne))
```

By doing so, we visualise **the distribution of potential losses** (that may occur on the next trading day) along with their magnitudes:

```
Pr(-50% < L < -48%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-48% < L < -46%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-46% < L < -44%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-44% < L < -42%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-42% < L < -40%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-40% < L < -38%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-38% < L < -36%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-36% < L < -34%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-34% < L < -32%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-32% < L < -30%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-30% < L < -28%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-28% < L < -26%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-26% < L < -24%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-24% < L < -22%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-22% < L < -20%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-20% < L < -18%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-18% < L < -16%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-16% < L < -14%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-14% < L < -12%) =	 0.11%	[ 0.01%,  0.33%] ne =    0
Pr(-12% < L < -10%) =	 0.34%	[ 0.09%,  0.70%] ne =    2 E(L) = -11.34%
Pr(-10% < L <  -8%) =	 0.67%	[ 0.29%,  1.17%] ne =    5 E(L) = -8.82%
Pr( -8% < L <  -6%) =	 0.89%	[ 0.45%,  1.47%] ne =    7 E(L) = -6.43%
Pr( -6% < L <  -4%) =	 2.91%	[ 2.05%,  3.89%] ne =   25 E(L) = -4.72%
Pr( -4% < L <  -2%) =	 9.84%	[ 8.26%, 11.53%] ne =   87 E(L) = -2.85%
Pr( -2% < L <   0%) =	33.11%	[30.54%, 35.72%] ne =  295 E(L) = -0.90%
```

For example, based on a complete trading history of FB, there is a **probability** of 0.34% that the stock will close with a loss to be between -12% and -10%, and if so, the “expected” loss would be -11.3% (estimated based solely on 2 events).
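Any of these per-bin figures can be reproduced by hand. For example, for the [-12%, -10%) bin with $n_e = 2$ events out of $N = 892$ returns:

```python
ne, N = 2, 892                        # two losses fell in the [-12%, -10%) bin
alpha1, beta1 = 1 + ne, 1 + N - ne    # per-bin posterior: Beta(3, 891)
pr = alpha1 / (alpha1 + beta1)        # = 3/894
print("Pr(-12%% <= L < -10%%) = %.2f%%" % (pr * 100))  # prints 0.34%
```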

Again. The **probability**, not certainty.