Monday 18 November 2013

VaR II: Parametric normal and lognormal

Let us see how to calculate VaR in practice. We'll start with the simplest cases (which are used from time to time in practice) and work our way up. First, let's again collect the definition of return,
\begin{equation} \label{linret_did}
 \mathrm{r}_t:= \frac{p_{t+1}}{p_t} -1,
\end{equation}
the definition of logreturn,
\begin{equation} \label{logret_did}
\ell_t := \log\left(\frac{p_{t+1}}{p_t}\right),
\end{equation}
and what we figured out about calculating VaR: based on return,
\begin{equation} \label{VaRcont}VaR_{99\%} = F_{\mathrm{r}_t}^{-1}(1\%) = r_{1\%},\end{equation}
and based on logreturn,
\begin{equation}\label{VaRlogret}
 VaR_{99\%} = r_{1\%} = e^{F_{\ell_t}^{-1}(1\%)} -1,
\end{equation}
all beautifully explained in the previous post, VaR I: What is it? Basic setup.

Normally distributed asset
If the underlying asset price \(p_{t+1}\) is normally distributed then, because of the definition of return~\eqref{linret_did}, the return is also normally distributed, and this makes the math easy. Is this a reasonable assumption? Not really: it says that negative prices are possible! I've never seen a cow that cost \(-\$100\)*. But for short time horizons, such as when \(t\) is counting days, it can be a reasonable approximation.

Using historical data** (or perhaps implied information about the future***), let's say that we have fit a normal distribution with mean \(\mu\) and standard deviation \(\sigma\) to our return, thus we're assuming that \( \mathrm{r}_t \sim \mathcal{N}(\mu, \sigma^2).\) Fortunately, there always exists an \(r_{1\%}\) such that \(r_{1\%}= F_{ \mathrm{r}_t}^{-1}(1\%)\), for the normal c.d.f. is continuous and strictly increasing, so it hits every value in \((0,1)\) exactly once. Thus we can use~\eqref{VaRcont} to calculate VaR. The catch is that most mathy software systems, like Matlab or Mathematica, typically hand you the inverse cumulative of the standard normal variable \(Z \sim \mathcal{N}(0,1)\), so we need to figure out what our inverse is. Panic not, unknown reader. You can build any normal variable by appropriately summing and multiplying constants with the standard normal variable \(Z \sim \mathcal{N}(0,1).\) In our case, \(\mathrm{r}_t\) can be written as
\[ \mathrm{r}_t = \mu + \sigma Z.\]
Because of this, you can calculate \(F_{ \mathrm{r}_t}(r)\) using the standard normal cumulative function \(\Phi(z) = \mathbb{P}\left(Z \leq z\right).\) Observe
\begin{align*}
F_{ \mathrm{r}_t}(r) &= \mathbb{P}\left( \mathrm{r}_t \leq r\right) \\
&= \mathbb{P}\left( \mu + \sigma Z \leq r\right)\\
&= \mathbb{P}\left( Z \leq \frac{r-\mu}{\sigma}\right) \quad \mbox{(Subtract \(\mu\) and divide by \(\sigma\) inside)}\\
& = \Phi\left(\frac{r-\mu}{\sigma}\right).  \quad \mbox{(Definition of the standard normal cumulative)}
\end{align*}
To find the inverse of \(F_{ \mathrm{r}_t}(r)\), let's write \(y = F_{ \mathrm{r}_t}(r)\), as is customary in calculus books, and apply \(\Phi^{-1}\), the inverse of the standard normal c.d.f., to both sides of the above equation
\[  \Phi^{-1}(y) = \frac{r-\mu}{\sigma}\]
and isolating \(r\), we find that
\begin{align}\label{norminv}
F_{ \mathrm{r}_t}^{-1}(y) = r = \Phi^{-1}(y)\sigma + \mu.
\end{align}
With this we can plug in \(y = F_{ \mathrm{r}_t}(r_{1\%}) = 1\%\) to find that
\begin{equation}\label{VARnorm1}
VaR_{99\%} = r_{1\%}  = \Phi^{-1}(1\%)\sigma + \mu.
\end{equation}
Here \(\Phi^{-1}(1\%)\) is just some number that can be calculated, approximately \(-2.3263\).
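To make this concrete, here is a minimal sketch in Python of formula~\eqref{VARnorm1}, assuming NumPy and SciPy are available; the sample returns are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical daily returns; in practice these would come from historical data.
returns = np.array([0.001, -0.012, 0.007, -0.025, 0.015, -0.003, 0.009])

mu = returns.mean()          # fitted mean of the normal model
sigma = returns.std(ddof=1)  # fitted standard deviation

# VaR_99% = Phi^{-1}(1%) * sigma + mu
var_99 = norm.ppf(0.01) * sigma + mu
print(f"VaR_99% as a return: {var_99:.4f}")
```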

Lognormally distributed asset
This is a bit more reasonable. At least only positive prices are plausible, for allowing negative prices was positively preposterous! You see, lognormal random variables can only assume positive values, and the lognormal distribution sort of fits past asset prices. Now it's more convenient to work directly with the logreturn~\eqref{logret_did}, for if \(p_{t+1}\) is lognormal then, by the definition of lognormal random variables, \(\ell_t\) is a normal random variable. Say we have fit a normal distribution with mean \(\mu\) and standard deviation \(\sigma\) to our random variable \(\ell_t\), i.e. \(\ell_t \sim \mathcal{N}(\mu,\sigma^2).\) We just worked out the inverse c.d.f. of a normal variable in~\eqref{norminv}, thus we can apply~\eqref{VaRlogret} directly to find
\begin{equation}\label{VARlog1}
VaR_{99\%} = r_{1\%}  = e^{\Phi^{-1}(1\%)\sigma + \mu} -1.
\end{equation}
Interesting fact: The formula for VaR based on normal returns~\eqref{VARnorm1} is the first-order Taylor approximation of the VaR formula for normal logreturns~\eqref{VARlog1}; since \(e^x - 1 \approx x\) for small \(x\), the right-hand side of~\eqref{VARlog1} is approximately \(\Phi^{-1}(1\%)\sigma + \mu\). So what? Well, if you assume that the lognormal distribution of our asset is the ``true'' model, then using~\eqref{VARnorm1} as a proxy is ok as long as \(\sigma\) and \(\mu\) are very small.
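Here is a small sketch comparing~\eqref{VARnorm1} and~\eqref{VARlog1} for the same (made-up) \(\mu\) and \(\sigma\); when both are small the two numbers nearly coincide, as the Taylor argument suggests.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0005, 0.01          # hypothetical daily (log)return parameters

q = norm.ppf(0.01)                # Phi^{-1}(1%), roughly -2.3263
var_normal = q * sigma + mu       # VaR under normally distributed returns
var_lognormal = np.exp(q * sigma + mu) - 1.0  # VaR under normal logreturns

print(var_normal, var_lognormal)  # nearly equal for small mu and sigma
```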

VaR for general and sample distributions
Calculating VaR for a general distribution poses an extra challenge: cumulative distribution functions are not invertible, in general. In other words, there simply might not exist a unique value \(r_{1\%}\) such that \(F_{\mathrm{r}_t}(r_{1\%}) = 1\%\). This often occurs in practice. Why? Because instead of using a known distribution for our return, people often make a nifty change to historical returns to build a sample distribution. Is this ideal? No, but it's also not mere witch-doctory, and it's an improvement over our previous normal and lognormal assumptions. For before, not only did we use past data to build our future distribution, but we also imposed a very specific distribution on our returns (normal or lognormal). I will go into detail in a future post on exactly how to do this, but for now assume we have built a sample c.d.f. such as the one in this figure:


Note that based on the figure, any \(r \in [-0.3694, -0.2294]\) is such that \(F(r) = 1\%\). This is the problem: we can't invert \(F(r).\) So which \(r\) do we choose to be our VaR? People who work in risk are inherently pessimistic, so we choose the worst of these suitable returns to be our VaR. In other words, VaR at \(99\%\) is the smallest return \(r\) such that the probability of the return being at most \(r\) is at least \(1\%\). Formally
\[VaR_{99\%} := \min\{r \in \mathbb{R} \, | \,  F(r) \geq 1\% \}.\]
In the above example this would be \(VaR_{99\%} = -0.3694\); ergo, with \(99\%\) chance, we will not lose more than \(36.94\%\) of our assets.

Similarly, for any given confidence level \(\beta \in [0, 1]\) we can define
\[VaR_{\beta} := \min\{r \in \mathbb{R} \, | \,  F(r) \geq 1- \beta\}.\]
Though we have focused on what to do with a sample distribution, this is in fact the general definition of VaR. For the more mathy type, ask yourself, why is it that VaR given as such is always defined? Does this minimum always exist?
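Here is a sketch of this general definition applied to an empirical c.d.f. built from a sample of returns; the returns below are simulated placeholders rather than real historical data.

```python
import numpy as np

def var_from_sample(returns, beta=0.99):
    """VaR_beta = min{ r : F(r) >= 1 - beta } for the empirical c.d.f."""
    r = np.sort(np.asarray(returns))
    n = len(r)
    # The empirical c.d.f. satisfies F(r[k]) = (k + 1) / n on the sorted
    # sample; find the first index where it reaches 1 - beta.
    k = int(np.ceil((1.0 - beta) * n)) - 1
    return r[max(k, 0)]

# Placeholder "historical" returns.
sample = np.random.default_rng(0).normal(0.0, 0.02, size=1000)
print(var_from_sample(sample, beta=0.99))
```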

Next we'll look at what really matters: Calculating the VaR of a portfolio of assets.

* Ignore possible storage or disposing costs that make negative prices a possibility.
**I would suggest using Exponentially Weighted Moving Average to do this.
***Say what? Interested? Ask for it on the blog.

Sunday 17 November 2013

VaR I: What is it? Basic setup



Will I go bankrupt tomorrow? This question bothers many a business. When you're a big business, it can be hard to come to grips with all that is you. This may sound strange to those who own a small business, say, a doughnut stand. But when your business is a sizable conglomerate, it becomes hard to see how likely it is that failures across the different parts result in a failure of the whole company. The exact question really is: how much liquid, freed-up cash do I have to keep on hand so that I can survive bad shit happening tomorrow?

How do you go about answering this? Here's an idea: let's try and conjure up the worst thing that can happen tomorrow, then we'll check to see if the biz could survive it. You think some more, and realize that the worst thing possible is anything from earthquakes to a rather unlikely quantum event that teleports all of your assets into outer space. Not very useful. So let's try and rule out these truly extreme events. Instead, let's focus on the worst possible event after removing the \(1\%\) worst events. That's the idea behind Value at Risk, also abbreviated to VaR (which is unfortunate, as variance has already called dibs on the abbreviation VAR). We write \(VaR_{99\%}\) for the worst possible outcome ignoring the worst \(1\%\).

A few of you, with hippie tendencies, might already be saying ``but....ohhhhhhh....the \(1\%\) worst cases are where shit really gets messed-up, world-wide-crisis style, and this silly VaR thing ignores it!''. Yes, VaR has no place measuring the occurrence of massively bad things happening. Instead, VaR just prepares you to deal with the next day when business is as usual. Think of a bank putting aside cash every day so that its clients can get their grubby hands on it through cash machines. Well, how much should the b-man put aside every day? Everything, just in case everybody wants some bling bling? But then the big-b can't invest. VaR of \(99\%\) of the amount of cash withdrawals is probably safe enough (ok, \(1\%\) of the time, some douche is not gonna get his dough).

I will address the basic concepts behind the VaR of a single asset, then move onto aggregating VaR across assets and practical methods for calculating it.

VaR of a single asset
 We start by naming things. Let \(p_t\) be the price of your single asset at time \(t\), thus \(p_t \in \mathbb{R}^+\). Let \(t\) be today*. Furthermore, let the return \(\mathrm{r}_t\) from today to tomorrow be defined by
\begin{equation*}
p_t(1+ \mathrm{r}_t) = p_{t+1}.
\end{equation*}

Thus the future value \(p_{t+1}\) can be completely determined by knowing the price today and today's daily return. For this reason, VaR concepts focus on return instead of absolute price. It also puts things in perspective, for stuff has values of different orders, e.g., a cow can cost \$1000 while a single olive can be quite cheap. Return, on the other hand, is something typically between \(-100\%\) and \(100\%\). Isolating the return in the equation above, we have
\begin{equation} \label{linret_did}
 \mathrm{r}_t= \frac{p_{t+1}}{p_t} -1.
\end{equation}

A few of you might read this and argue: Hey, that's not how I define return. Yes, return can be defined in a number of ways that give us this notion of ``how much did the value of my stuff change''. In fact, let's also define the logreturn, which, as its name sort of suggests, is the log of the price ratio:
\begin{equation} \label{logret_did}
\ell_t := \log\left(\frac{p_{t+1}}{p_t}\right) \quad (\mbox{same as saying})\quad p_{t+1} = p_te^{\ell_t}.
\end{equation}
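To keep the two definitions straight, here is a tiny Python sketch computing both from a made-up price series and checking that they agree with the defining relations.

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3])   # hypothetical daily prices p_t

ret = prices[1:] / prices[:-1] - 1                # r_t = p_{t+1}/p_t - 1
logret = np.log(prices[1:] / prices[:-1])         # l_t = log(p_{t+1}/p_t)

# Sanity check: p_{t+1} = p_t (1 + r_t) = p_t e^{l_t}
assert np.allclose(prices[:-1] * (1 + ret), prices[1:])
assert np.allclose(prices[:-1] * np.exp(logret), prices[1:])
```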

To determine the worst possible return tomorrow, excluding the \(1\%\) worst, we need to assume some sort of distribution for \(\mathrm{r}_t\) as it is unknown to us. We will model this uncertainty with standard probability theory.  In this model, we need a probability measure \(\mathbb{P}\left(\right)\) that says how probable an event is, and a probability distribution for \(\mathrm{r}_t\)**. The question of picking a distribution for \(\mathrm{r}_t\) is in itself a tricky one, but assume for now we are given the c.d.f (cumulative distribution function)  \(F_{\mathrm{r}_t}: \mathbb{R} \rightarrow \mathbb{R}^+\) which is defined for each \(r \in \mathbb{R}\) as
\[F_{\mathrm{r}_t}(r) := \mathbb{P}\left( \mathrm{r}_t \leq r\right),\]
so it's a function that, given a possible return \(r\), tells us how likely it is that the return is at most \(r\). \(VaR_{99\%}\) is the cut-off return \(r_{1\%}\) such that the \(1\%\) worst returns are below \(r_{1\%}.\) In other words, we want to find \(r_{1\%} \in \mathbb{R}\) such that \(F_{\mathrm{r}_t}(r_{1\%}) = 1\%.\) Thus \(VaR_{99\%}\) can be defined precisely as
\begin{equation} \label{VaRcont}VaR_{99\%} = F_{\mathrm{r}_t}^{-1}(1\%) = r_{1\%},\end{equation}
where \(F_{\mathrm{r}_t}^{-1}\) is the inverse of the cdf of \(\mathrm{r}_t.\)

Sometimes we will not be working directly with the return \(\mathrm{r}_t\); instead, we will know the distribution of the logreturn~\eqref{logret_did}. So let's figure out what \(\mathbb{P}\left(\mathrm{r}_t \leq r\right)\) is in terms of the logreturn. First we substitute the definition~\eqref{linret_did} of \(\mathrm{r}_t\) to find
\begin{align}
\mathbb{P}\left( \mathrm{r}_t \leq r\right)  & = \mathbb{P}\left( p_{t+1}/p_t -1 \leq r\right) \quad \left(\mbox{Add \(1\) to both sides of the inner inequality}\right) \nonumber \\
& =  \mathbb{P}\left( \log(p_{t+1}/p_t) \leq \log(r+1)\right) \quad (\mbox{Apply log inside }\mathbb{P}\left(\right)) \nonumber\\
 & =  \mathbb{P}\left( \ell_t \leq \log(r+1)\right) \quad (\mbox{Using definition~\eqref{logret_did}}) \nonumber \\
& = F_{\ell_t} \left( \log(r+1) \right),\label{cdflogret}
\end{align}
where \(F_{\ell_t}\) is the cumulative distribution function of the logreturn \(\ell_t\). Again, remember that VaR of \(99\%\) is equal to a certain \(r_{1\%}\) that sets \(\mathbb{P}\left( \mathrm{r}_t \leq r_{1\%}\right) = 1\%.\) From~\eqref{cdflogret}, this is equivalent to finding
\[F_{\ell_t} \left( \log(r_{1\%}+1) \right) = 1\%. \]
Applying the inverse of \(F_{\ell_t}\) to both sides (and assuming we can do this!) we find

\[
\log(r_{1\%}+1)  = F_{\ell_t}^{-1}(1\%).
\]
Isolating \(r_{1\%}\) we find
\begin{equation}\label{VaRlogret}
 VaR_{99\%} = r_{1\%} = e^{F_{\ell_t}^{-1}(1\%)} -1.
\end{equation}
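A quick numerical sanity check of~\eqref{VaRlogret}: simulate logreturns, take the \(1\%\) quantile directly on the returns, and compare with \(e^{F_{\ell_t}^{-1}(1\%)} - 1\). This is only a sketch with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
logret = rng.normal(0.0, 0.02, size=200_000)   # hypothetical logreturns
ret = np.exp(logret) - 1                       # corresponding returns

direct = np.quantile(ret, 0.01)                      # 1% quantile of returns
via_logret = np.exp(np.quantile(logret, 0.01)) - 1   # mapped logreturn quantile

print(direct, via_logret)   # should agree up to sampling noise
```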


But inverting the c.d.f. is not always possible for all distribution functions, and whether it works will depend on exactly what the c.d.f.s \(F_{\ell_t}\) and \(F_{\mathrm{r}_t}\) are. Let's break this down by what we are assuming about our asset so that we may calculate VaR. Now move on to VaR II: Parametric normal and lognormal.


* We'll think in terms of days, but really we can choose the refinement of time, say nano-seconds you trader junkies?
** Formally, we would need the whole measure-space set-up, i.e., name a space of events for \(\mathbb{P}\) that has enough events. If you would like more measure-theoretic rigor, comment on the post.

Saturday 20 April 2013

Jumping into Black-Scholes III: Calculating the Expected Value

We take off from where Jumping into Black-Scholes II: Know your contract and Asset left off,
in that we have established a probability distribution for our future asset price \(S_T\). With this, let's take a look at the expected value of our option's payoff. Let \(p(Z)\) be the pdf (probability density function) of \(Z\). Thus, using our prob-know-how, we have
\begin{eqnarray}
\mathbf{E}\left[  \max\left\{K-S_T,0\right\}\right] &= &
\mathbf{E}\left[  \max\left\{K-S_0 e^{(r-\sigma^2/2)T+\sigma\sqrt{T} Z},0\right\} \right] \nonumber \\
 &= &  \int_{-\infty}^{\infty} \max\left\{K-S_0 e^{(r-\sigma^2/2)T+\sigma\sqrt{T} Z},0\right\} p(Z) dZ \nonumber \\
&= & \int^{-d_2}_{- \infty} \left(K-S_0 e^{(r-\sigma^2/2)T+\sigma\sqrt{T} Z}\right) p(Z) dZ. \label{eq:BS1}
\end{eqnarray}

In this last step a \(d_2\) suddenly appeared. The constant \(-d_2\) has to be chosen so that \(Z \leq -d_2\) implies
\begin{equation} \label{findd2} \left(K-S_0 e^{(r-\sigma^2/2)T+\sigma\sqrt{T} Z}\right) \geq 0, \end{equation}
so we could remove that infernal \(\max(\cdot)\) function. Well, that's easy enough to calculate: equating the expression in~(\ref{findd2}) to zero forces our variable \(Z\) to equal \(-d_2\), where \[d_2 = \frac{\ln(S_0/K) +(r-\sigma^2/2)T}{\sigma \sqrt{T}}.\]
I chose to call it \(d_2\) because that's what everyone else calls it. Why, out of the universe of Greek letters and symbols, did ``they'' choose \(d_2\)? I haven't the foggiest idea, but so as not to be a notation anarchist, let's stick with the same symbol.


Turning back to~(\ref{eq:BS1}), and remembering that integration is linear, in that you can break up sums and throw constants to the ``outside'' of the integral, we get
\begin{eqnarray}
\int^{-d_2}_{- \infty} \left(K-S_0 e^{(r-\sigma^2/2)T+\sqrt{T} Z}\right) p(Z) dZ &= &
K\int^{-d_2}_{- \infty} p(Z)dZ -S_0e^{(r-\sigma^2/2)T} \int^{-d_2}_{- \infty} e^{\sigma\sqrt{T} Z} p(Z) dZ \nonumber \\
&= & \underbrace{K \Phi(-d_2)}_I -S_0e^{(r-\sigma^2/2)T} \underbrace{\int^{-d_2}_{- \infty} e^{\sigma\sqrt{T} Z} p(Z) dZ}_{II} \nonumber  \label{eq:BS2} .
\end{eqnarray}
In part I, \(\Phi\) is the cumulative distribution function of the standard normal distribution. The last thing we need to deal with is the integral denoted by II. Scratching your noodle and/or looking at wiki, we see that the pdf of the standard normal distribution is \(p(Z) = \frac{1}{\sqrt{2\pi}}e^{-Z^2/2}\). With this we can solve the integral II; check out the following steps on how to do this
\begin{eqnarray*}
\int^{-d_2}_{- \infty} e^{\sigma\sqrt{T} Z} p(Z) dZ &= & \frac{1}{\sqrt{2\pi}}\int^{-d_2}_{- \infty} e^{\sigma\sqrt{T} Z} e^{-Z^2/2} dZ\\
&= & \frac{1}{\sqrt{2\pi}}\int^{-d_2}_{- \infty} e^{-(Z -\sigma\sqrt{T})^2/2 +\sigma^2 T/2} dZ \quad [\mbox{Completing the square}] \\
&= & e^{\sigma^2 T/2} \frac{1}{\sqrt{2\pi}}\int^{-d_2-\sigma\sqrt{T}}_{-\infty} e^{-y^2/2} dy  \quad [\mbox{Changing variables } y = Z-\sigma\sqrt{T}] \\
&= & e^{\sigma^2 T/2} \Phi\left(-d_2-\sigma\sqrt{T}\right) \quad [\mbox{Boom! We're done.}]
\end{eqnarray*}

Returning to~(\ref{eq:BS1}), we have reached the grand finale. The price of our put option is the expected value of its payoff, discounted in time by multiplying by \(e^{-rT}\),
\[e^{-rT}\mathbf{E}\left[ \mbox{payoff}\right] =  K e^{-rT} \Phi(-d_2) - S_0 \Phi(-d_2-\sigma\sqrt{T}),\]
where the last argument is often abbreviated as \(-d_1\), with \(d_1 := d_2 + \sigma\sqrt{T}\).
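Here is a sketch of this final formula in code, together with a Monte Carlo estimate of \(e^{-rT}\mathbf{E}[\mbox{payoff}]\) as a sanity check; all parameter values below are made up.

```python
import numpy as np
from scipy.stats import norm

def bs_put(S0, K, r, sigma, T):
    """Put price K e^{-rT} Phi(-d2) - S0 Phi(-d1), with d1 = d2 + sigma*sqrt(T)."""
    d2 = (np.log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d1 = d2 + sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)

S0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.2, 1.0    # hypothetical inputs

# Monte Carlo: simulate S_T = S0 exp((r - sigma^2/2) T + sigma sqrt(T) Z).
Z = np.random.default_rng(1).standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.maximum(K - ST, 0.0).mean()

print(bs_put(S0, K, r, sigma, T), mc_price)   # the two should be close
```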
All these calculations may have hurt. But now you can price all sorts of options using a completely analogous argument. Naturally, if you work or are looking to work as a Quant, no one will ask you to deduce the price of a Plain Vanilla option. But what happens when some Trader/Salesman/(Front desk Dude) decides, in a frenzy, to sell an option to a client that has a payoff that depends on two strikes \(K_1\) and \(K_2\) with
\[
\mbox{payoff} =
\begin{cases}
K_1 & \mbox{if }S_T \leq K_1,\\
S_T & \mbox{ if } K_1 \leq S_T \leq K_2, \\
0 & \mbox{ if } K_2 \leq S_T.
\end{cases}
\]
Well how much does that cost? Go on, aren't you the Quant, didn't we hire you for this? You can either blankly stare at your boss, jaw open as a sparkle of drool develops in the corner of your mouth, or you can sit down and confidently calculate the expectancy of this payoff. Get to it and post me your solution.
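If you want to check your closed-form answer numerically, here is a Monte Carlo sketch of this two-strike payoff under the same lognormal model; the parameters are made up, and this only estimates the price rather than deriving the closed form asked for.

```python
import numpy as np

S0, K1, K2, r, sigma, T = 100.0, 90.0, 110.0, 0.03, 0.2, 1.0   # hypothetical

Z = np.random.default_rng(2).standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# payoff = K1 if S_T <= K1, S_T if K1 <= S_T <= K2, 0 if S_T >= K2
payoff = np.where(ST <= K1, K1, np.where(ST <= K2, ST, 0.0))

price = np.exp(-r * T) * payoff.mean()
print(price)
```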

This approach works in general for options whose payoff depends on one time frame \(T\), known as European options, e.g., Basket Options, Binary Options, Quanto Options, etc. This approach can also be adapted when there are a handful of time frames. Things are not so simple when the payoff is time dependent, such as for American Options. Then you really need to take the dive into stochastic calculus.

Jumping into Black-Scholes II: Know your contract and Asset


  Know your Contract

The first step in pricing these financial things is to understand the contract in detail. Let \(S_0\) and \(S_T\) be the price today and in \(T\) years of our underlying asset (a cow), respectively. Well, we know \(S_0\), while \(S_T\) must be a random variable [1]. For now, let's play make believe and pretend we know what \(S_T\) is, and figure out the payoff (how much you will pocket in the future) from our Plain Vanilla Option. Say we have the right to sell this asset in the future for the price \(K\), in which case we say it is a put option. If at the maturity date \(T\), \(S_T\) is more than \(K\), then we will not bother exercising our right to sell the asset for \(K\), given that one can sell it for more than that on the market. Now if \(S_T\) is less than \(K\), selling it for \(K\) will earn you \((K-S_T)\) more than you could have sold it for on the market. Thus you earn \((K-S_T)\) in this scenario. Therefore the payoff of our option is the function
\[\mbox{payoff} = \max\left\{K-S_T,0\right\}.\]
But we don't know \(S_T\). What can we do? We can calculate how much you would expect the payoff to be, which in math symbols is \(\mathbf{E}\left[\mbox{payoff}\right]\). Furthermore, this payoff is money you will receive in the future, and money in time makes more money. Thus to know how much this is worth today, one must discount this expected cash quantity in time from the maturity date \(T\) to today using a reference market interest rate \(r\). You can try to understand this \(r\) as the rate such that \(e^{rT}\) is how much one would earn by investing \(1\) in a risk-free investment for \(T\) years (note that it's greater than \(1\)). Using continuous time compounding [2], we can discount the expected payoff through time to give the price today, which is
\begin{equation}\label{BSprice}
\mbox{price} = e^{-rT}\, \mathbf{E}\left[\mbox{payoff}\right].
\end{equation}
To calculate this expectancy, we need to know the distribution of the payoff, which in turn inherits its distribution from the underlying asset \(S_T\). This basic setup is common to all financial contracts that exchange a quantity of money at a fixed future date.

  Guess your asset distribution

Yes, guess it. We don't know the real distribution of the asset [3]; we can only make an educated guess. But the situation is not as dire as it first may appear. We're not going to try and accurately guess the future value \(S_T\); instead, the focus is on how much the value \(S_0\) can disperse through time. If \(S_0\) is so volatile that we have no idea what it will be in the future, then insurance based on this asset will be expensive. On the other hand, if we know, with a certain confidence, that \(S_0\) will change at most \(2\%\) in value from now to \(T\), then such an insurance will be cheap.
Think about the possible future values of an asset. First, it must be positive [4]. This already rules out things like normal distributions, which stretch infinitely into the negative direction; in other words, way off. A simple random variable that only takes on positive values is \(e^X\), where \(X\) is a normal variable. This is a log-normal random variable, and it is completely determined by the mean and variance of the normal variable \(X\).
And it just so happens that, by getting historical data of most asset prices, the lognormal distribution ``seems'' to fit. The x-axis below shows the different prices an unnamed asset had over a year, so roughly between 2 and 14 ching chings. The y-axis shows how frequently each price appeared over the year. The red line is the probability density function of the lognormal variable that was fit to the historical data.


This is the chosen distribution in the Black-Scholes setting. It is in this small step that I want you to take the biggest leap of faith in the whole text. Notoriously, extreme events are more common in assets than the log-normal distribution would say, thus ``real'' distributions tend to have ``fatter tails''. Moving on. Let \(X(t) \sim \mathcal{N}(t\mu, t\sigma^2)\); thus it's a normal variable with mean \(t\mu\) and variance \(t\sigma^2\). Though we have defined \(X(t)\) for every \(t > 0\), ultimately we are only interested in \(X(T)\), so for now, don't worry about how this thing evolves through time, just acknowledge that \(X(T)\), frozen in time \(T\), is a normal variable.
The blatant guessing ends here, for now we can estimate \(\mu\) and \(\sigma\) by either using historical data (how volatile were cow prices this time last year?) or using implicit market data, i.e., how much is this future volatility according to the aggregated opinion of everybody else? Let's say that through one of these methods, we are given \(\sigma\).
All that is missing is the mean \(\mu\). Not to worry, another hypothesis injected by the Black-Scholes model is that there is no free meal, also known as the nonexistence of arbitrage opportunities. This translates into the fact that the expected increase in value of our asset from now until \(T\) will be the same as that of the risk-free investment. In other words, investing \(S_0\) in the risk-free investment, we would earn \(S_0 e^{rT}\). This must also be true of the expected value of buying \(S_0\) of our asset and clinging to it over time; in other words,
\begin{equation}\label{noarb}
\mathbf{E}\left[e^{X(T)}\right] = S_0\, e^{rT}.
\end{equation}
From properties of the lognormal distribution [5], we also know that
\begin{equation}\label{lognormmean}
\mathbf{E}\left[e^{X(T)}\right] = e^{(\mu+\sigma^2/2)T}.
\end{equation}

Thus \eqref{noarb} and \eqref{lognormmean} must be equal, which in turn says that
\[T\mu = \ln(S_0) +(r-\sigma^2/2)T,\]
so we have \(\mu\), and hence the mean \(T\mu\) of \(X(T)\), as a function of other things we know. Finally, there is a convenient way to re-write \(X(T)\) by calling upon a few properties of the normal distribution, namely
\begin{equation}\label{XTrewrite}
X(T) = \ln(S_0)+ (r-\sigma^2/2)T + \sigma\sqrt{T}\, Z,
\end{equation}
where \(Z \sim \mathcal{N}(0,1)\) is the standard normal variable. If this last step made no sense, I suggest you take the expectancy and variance of both sides of~\eqref{XTrewrite} and check that they are equal. Remember, a normal variable is completely determined by its expectancy and variance. This brings us to the conclusion of this section, simply that
\[S_T = e^{X(T)} = S_0\, e^{(r-\sigma^2/2)T+\sigma\sqrt{T} Z}\]
is a reasonable choice to model our asset. Well, sort of. With this the modelling phase comes to an end. All that remains now is to compute the option price~\eqref{BSprice} using some notions of probability.
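As a quick check that the no-arbitrage calibration~\eqref{noarb} really holds for this choice of \(X(T)\), here is a small simulation sketch; the parameter values are made up.

```python
import numpy as np

S0, r, sigma, T = 100.0, 0.03, 0.2, 1.0   # hypothetical parameters

Z = np.random.default_rng(3).standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

print(ST.mean(), S0 * np.exp(r * T))   # should agree up to Monte Carlo error
```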
[1] One of my least favorite names in mathematics, for it is not a variable nor is it random.
[2] Don't know it? See Continuous compounding. Not good enough? Persuade me to write something on this.
[3] This sentence hardly makes sense; is there such a thing as a real distribution?
[4] Well...you could own a megaton of milk that has gone sour. Remembering there is no use in crying over spilt milk, you pay someone to get rid of it, thus it has a ``negative value''. But let's rule out this case.
[5] Don't know it? Wiki it.

Friday 19 April 2013

Jumping into Black-Scholes I: Motivation

Have you vaguely heard of Black-Scholes and want a concise explanation? As in, this name reminds you of pricing options and not, for example, a shoal of color-deficient fish? Do you know your basics of probability? Then you have come to the right place. I'll demonstrate the common idea behind pricing an option using one of the simplest options: the European option. First, let me motivate you.
Let's say you are a farmer and want to sell an asset (possibly a cow) \(T\) years from now on a fixed date (known as the maturity date). Your target cow market is offshore, where it's all settled in a currency foreign to the one you use to buy bread. You know the price changes from day to day, and depending on the price \(T\) years from now, it might be worth keeping the cows for milking instead.
Furthermore, you often open the paper and read about the volatility of foreign exchange and the financial sector as a whole. The stress starts to pile on. You drop your morning read of "Seeds and Hot Tractors" for the "Financial Times". Questions start to emerge in your countryside head: How can I plan my future budget when I don't know how much my cow sale will pull in?

Fortunately, Black-Scholes comes to the rescue. You find out that you can fix this future sell price (known as the strike \(K\)) today, with a particular buyer. Better yet, you can buy the right to sell at the price \(K\), and only exercise it if it's packing moo (worth it, in farmer tongue). How much does this right cost? This right is known as a Plain Vanilla option, and the mystery of how much this option should cost boggled many a mind for many a decade. Black and Scholes come along and, to the horror of many an economist, use some continuous mathematics with probability to give a simple, transparent model to price this thing. For the farmer, it costs a small percentage of the price \(K\).

Finally you can rest at ease. You know exactly how many bucks you could get for your moos \(T\) years from now. You drop the Financial Times and its preposterously pompous, highly adjectivised text, and pick up something far more earthy, like "Big Udders". Your attention turns to further specializing yourself in your field (of knowledge and land) and planning the purchase of expensive machinery to up your productivity. Who said Finance was futile and not fertile?

After germinating purpose for the pricing of an option, let's bring in precision. Next, we carefully deduce the Black-Scholes price for this option. Though this model is no longer in use (instead one typically uses the market price, i.e., the aggregated opinion of everybody else), it still serves a purpose: it teaches us how to formally deduce the prices of such financial stuff. It is also employed in pricing other complicated financial contracts (look up Phoenix rainbow basket options), when more technically sophisticated models become intractable and one must fall back on the Black-Scholes framework.