%pylab inline
lecture = 11
import sys
sys.path.append("lib")
import fmt
import sympy as sp
import pandas as pd
from IPython.display import display
assert sp.__version__ == "0.7.5", "Need sympy version 0.7.5 to render properly"
sp.init_printing(use_latex = True)
Consistency, Stability and Convergence
Concepts of Stiffness and Stability Region
The Runge-Kutta and Multistep methods
Numerical Examples
$$ y' = f(t,y), \;\; y(t_0) = y_0 $$
$$ y(t) = y_0 + \int_{t_0}^t f(s,y(s)) \; ds $$
$$ \renewcommand{bs}{\boldsymbol} \renewcommand{by}{\boldsymbol{y}} \renewcommand{bf}{\boldsymbol{f}} \by' = \bf(t, \by), \;\; \by(t_0) = \by_0 $$
$$ y^{(n)}(t) = f(t,y(t),y'(t), \cdots, y^{(n-1)}(t)) $$
$$ \bs{z}' = \bs{g}(t, \bs{z}) $$
$\hspace{0.3in}$ where
$$ \bs{z} = \left[ \begin{matrix} y(t) \\ y'(t) \\ \vdots \\ y^{(n-1)}(t) \end{matrix} \right], \;\; \mbox{and} \;\; \bs{g}(t, \bs{z}) = \left[ \begin{matrix} y'(t) \\ y''(t) \\ \vdots \\ y^{(n-1)}(t) \\ f(t,y(t),y'(t), \cdots, y^{(n-1)}(t)) \end{matrix} \right] $$
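For instance, a second order equation such as $y'' = -y$ becomes a two dimensional first order system with $\bs{z} = [y, y']^T$. A minimal sketch (the harmonic oscillator and the use of scipy's odeint here are illustrative choices, not part of the lecture):

import numpy as np
from scipy.integrate import odeint

def g(z, t):
    # z = [y, y']; returns [y', y''] with y'' = f(t, y, y') = -y
    return [z[1], -z[0]]

ts = np.linspace(0.0, 10.0, 201)
zs = odeint(g, [1.0, 0.0], ts)                 # columns: y(t) and y'(t)
print(np.max(np.abs(zs[:, 0] - np.cos(ts))))   # tiny: exact solution is cos(t)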
Typical things we want to know after formulating a mathematical problem:
1) Is there a solution to the problem?
2) Is the solution unique?
3) Is the solution insensitive to small perturbations of the input data?
4) And finally, how to solve for the solution? Analytically? Numerically?
Picard's theorem:
If $f(t,y)$ is uniformly Lipschitz continuous in $y$ in a neighborhood of $(t_0, y(t_0))$, then the IVP of the ODE has a unique solution in the neighborhood.
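The integral form above also suggests Picard iteration, $y_{k+1}(t) = y_0 + \int_{t_0}^t f(s, y_k(s)) \, ds$, whose convergence is what the theorem guarantees. A small symbolic sketch (the example $y' = y$, $y(0) = 1$ is my choice, not the lecture's):

import sympy as sp
t, s = sp.symbols('t s')
y = sp.Integer(1)                                  # starting guess y_0(t) = 1
for _ in range(4):
    y = 1 + sp.integrate(y.subs(t, s), (s, 0, t)) # Picard update
print(y)   # 1 + t + t**2/2 + t**3/6 + t**4/24: partial sums of exp(t)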
$$ C_t = \frac{1}{2}\sigma^2S^2C_{SS} + rSC_S - rC $$
Define the Laplace transform of option price function $C(t,S)$ by
$$ \renewcommand{hC}{\hat{C}} \hC(z,\cdot) := \mathcal{L}[C](z) = \int_{0}^{\infty}C(t,\cdot)e^{-zt} \; dt $$
Taking the Laplace transform of the Black-Scholes equation,
$$ z\hC = \frac{1}{2}\sigma^2S^2 \hC_{SS} + rS \hC_S - r \hC + C_0 $$
Here $C_0$ is the time-reversed payoff condition (*).
So one way to solve the Black-Scholes equation is to solve the above ODE and then apply the inverse Laplace transform.
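As a toy illustration of the transform mechanics (a simple exponential rather than the option price itself; sympy's laplace_transform and inverse_laplace_transform are used here just for the demonstration):

import sympy as sp
t, z = sp.symbols('t z', positive=True)
F = sp.laplace_transform(sp.exp(-2*t), t, z)[0]   # gives 1/(z + 2)
print(F, sp.inverse_laplace_transform(F, z, t))   # recovers exp(-2*t) (possibly times Heaviside(t))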
Assume the short rate is affine under the risk-neutral measure
$$ dr_t = \kappa(\theta-r_t)dt + \sqrt{\sigma_1 +\sigma_2 r_t}\;dW_t $$
Then bond prices are solutions to the PDE:
$$ \frac{1}{2}P_{rr}(\sigma_1 +\sigma_2 r) + P_r\kappa(\theta - r)+ P_t -rP = 0 $$
with $P(T,T) = 1$. Looking for a solution of the form
$$ P(r,t,T) = e^{A(T-t) - B(T-t)r}, $$
we find that $A(\cdot), B(\cdot)$ satisfy the following system of ODEs (known as Riccati equations)
\begin{aligned} -B' & = \frac{1}{2}\sigma_2B^2 + \kappa B - 1 \\ A' & = \frac{1}{2}\sigma_1B^2 - \kappa \theta B \end{aligned} with $A(0) = B(0) = 0$.
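These ODEs are easy to integrate numerically in the time-to-maturity variable $\tau = T - t$. A minimal sketch (the parameter values are illustrative, not from the lecture):

import numpy as np
from scipy.integrate import odeint

kappa, theta, sig1, sig2 = 0.5, 0.04, 0.0, 0.02   # hypothetical parameters

def riccati(AB, tau):
    A, B = AB
    dB = 1.0 - kappa*B - 0.5*sig2*B*B   # from -B' = sig2*B^2/2 + kappa*B - 1
    dA = 0.5*sig1*B*B - kappa*theta*B
    return [dA, dB]

taus = np.linspace(0.0, 10.0, 101)       # grid in tau = T - t
AB = odeint(riccati, [0.0, 0.0], taus)   # initial conditions A(0) = B(0) = 0
P = np.exp(AB[:, 0] - AB[:, 1]*0.03)     # bond prices P(r, t, T) at r = 3%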
Employing a finite difference method typically means:
1) Generate a grid of points where we want to find the solutions
2) Substitute the derivatives in ODE/PDE with finite difference schemes, which converts the ODE/PDE into a system of algebraic equations.
3) Solve the system of algebraic equations.
4) Implement and debug the computer code.
5) Perform sanity checks, error analysis, sensitivity analysis, etc., by any available means: intuitively, analytically or numerically.
$$ \mathcal{D}_+y(t) = \frac{y(t+h)-y(t)}{h} = y'(t) + O(h) $$
$$ \frac{y^{n+1} - y^n}{h} = f(t^n, y^n); $$
$$ y^{n+1} = y^n + h f(t^n, y^n). $$
$$ \frac{y^{n+1} - y^n}{h} = f(t^{n+1}, y^{n+1})\; \Longrightarrow \; y^{n+1} = y^n + h f(t^{n+1}, y^{n+1}). $$
$$ \frac{y^{n+1} - y^n}{h} = \frac{1}{2}\left( f(t^n, y^n) + f(t^{n+1}, y^{n+1}) \right)\; \Longrightarrow \; y^{n+1} = y^n + \frac{h}{2}\left( f(t^n, y^n) + f(t^{n+1}, y^{n+1}) \right). $$
$$ y^{n+1} = y^n + \int_{t^n}^{t^{n+1}} f(s,y(s)) \; ds $$
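The three schemes correspond to left endpoint, right endpoint and trapezoidal quadrature of this integral. A quick sketch on the linear test problem $y' = \lambda y$, $y(0) = 1$, where the implicit updates can be solved in closed form:

import numpy as np

lam, h, N = -2.0, 0.1, 50
y_fe = y_be = y_tr = 1.0
for n in range(N):
    y_fe = y_fe + h*lam*y_fe                            # forward Euler
    y_be = y_be/(1.0 - h*lam)                           # backward Euler
    y_tr = y_tr*(1.0 + 0.5*h*lam)/(1.0 - 0.5*h*lam)     # trapezoidal
exact = np.exp(lam*h*N)
print(abs(y_fe - exact), abs(y_be - exact), abs(y_tr - exact))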
For any FDM employed to solve practical problems, we should ask
1) How accurate is the method?
2) Does it converge?
3) What is the best choice of step size?
\begin{aligned} \mathcal{N}_h y(t^n) & = y(t^{n+1}) - y(t^n) - h f(t^n, y(t^n)) \\ & = y(t^{n}+h) - y(t^n) - h f(t^n, y(t^n)) \\ & = \left( y(t^n) + hy'(t^n) + \frac{h^2}{2}y''(t^n)+ \cdots \right) - y(t^n) - h f(t^n, y(t^n)) \\ & = \frac{h^2}{2}y''(t^n) + O(h^3). \end{aligned}
$$ y^{n+1} = y^n + \lambda y^n h = ( 1 + \lambda h) y^n = ( 1 + \lambda h)^2 y^{n-1} = \cdots \;\leadsto\; $$ $$ y^n = ( 1 + \lambda h)^n \quad (\mbox{taking } y^0 = 1) $$
$$ |y^n - z^n| \leq K\left\{ |y^0 - z^0| + \max_{1 \leq j \leq N} | \mathcal{N}_h y(t^j) - \mathcal{N}_h z(t^j) | \right\} $$ for $1 \leq n \leq N$.
$$ |y^n - y(t^n)| = O(h^p), \;\; \forall 0 \leq n \leq T/h. $$
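The order $p$ can be estimated empirically by halving $h$ and watching how fast the error shrinks. A sketch for forward Euler on $y' = -y$, $y(0) = 1$ at $T = 1$ (my test problem, not the lecture's):

import numpy as np

def fe_error(h):
    y, N = 1.0, int(round(1.0/h))
    for _ in range(N):
        y = y - h*y                  # forward Euler step for f(t, y) = -y
    return abs(y - np.exp(-1.0))     # error against the exact solution

for h in [0.1, 0.05, 0.025]:
    print(h, fe_error(h), fe_error(h)/fe_error(h/2))   # ratio near 2 => p = 1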
$$ \mbox{consistency + stability = convergence} $$
$$ |1+\lambda h| \leq 1 \Longrightarrow 0 < h \leq -\frac{2}{\lambda} \quad (\mbox{for real } \lambda < 0). $$
$$ |1+\lambda h| = |1+z| = |z - (-1)| \leq 1, $$
describes the disk of radius 1 centered at $(-1,0)$ in the complex $z = \lambda h$ plane.
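The region is easy to visualize; a sketch shading $|1+z| \leq 1$ in the complex $z = \lambda h$ plane:

import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 1, 400), np.linspace(-2, 2, 400))
inside = np.abs(1 + x + 1j*y) <= 1                 # forward Euler: |1 + z| <= 1
plt.contourf(x, y, inside.astype(float), levels=[0.5, 1.5])
plt.axhline(0, color='k', lw=0.5); plt.axvline(0, color='k', lw=0.5)
plt.xlabel(r'$\Re(\lambda h)$'); plt.ylabel(r'$\Im(\lambda h)$')
plt.title('Forward Euler stability region')
plt.show()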
The method is implicit: if $f(t,y)$ is nonlinear, we would have to solve a nonlinear equation to get $y^{n+1}$.
For the backward Euler method, the local truncation error is
\begin{aligned} \mathcal{N}_h y(t^n) & = y(t^{n+1}) - y(t^n) - h f(t^{n+1}, y(t^{n+1})) \\ & = y(t^{n+1}) - y(t^{n+1}-h) - h f(t^{n+1}, y(t^{n+1})) \\ & = -\frac{h^2}{2}y''(t^{n+1}) + O(h^3). \end{aligned}
$$ y^{n+1} = y^n + h\lambda y^{n+1} \;\Longrightarrow\; y^{n+1} =\frac{1}{1-h\lambda}y^n $$ Since $|1-h\lambda| \geq 1$ for every $h > 0$ whenever $\Re \lambda \leq 0$, the amplification factor never exceeds one in modulus: backward Euler is stable for all step sizes (A-stable).
\begin{aligned} \mathcal{N}_h y(t^n) & = y(t^{n+1}) - y(t^n) - \frac{h}{2}\left( f(t^n, y(t^n)) + f(t^{n+1}, y(t^{n+1})) \right) \\ & = -\frac{h^3}{12}y'''(t^{n+1/2}) + O(h^4). \end{aligned}
$$ y^{n+1} =\frac{1+h\lambda/2}{1-h\lambda/2}\,y^n $$ The modulus of this amplification factor is at most one exactly when $\Re \lambda \leq 0$, so the trapezoidal scheme is A-stable as well.
\begin{aligned} y^* & = y^n + h f(t^n, y^n) \\ y^{n+1} & = y^n + \frac{h}{2}\left( f(t^n, y^n) + f(t^{n+1}, y^*) \right). \end{aligned}
The method is second order, but only conditionally stable.
This is a representative of a powerful class of multi-step methods called predictor-corrector methods: the forward Euler scheme serves as the predictor and Crank-Nicolson (the trapezoidal rule) as the corrector.
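A minimal sketch of one predictor-corrector step as written above (the function name is mine):

def heun_step(f, t, y, h):
    y_star = y + h*f(t, y)                           # predictor: forward Euler
    return y + 0.5*h*(f(t, y) + f(t + h, y_star))    # corrector: trapezoidal rule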
\begin{aligned} K_0 & = f(t^n, y^n) \\ K_1 & = f(t^n + \frac{h}{2}, y^n + \frac{h}{2} K_0) \\ K_2 & = f(t^n + \frac{h}{2}, y^n + \frac{h}{2} K_1) \\ K_3 & = f(t^n + h, y^n + hK_2) \\ y^{n+1} & = y^n + \frac{h}{6}\left( K_0 + 2K_1 + 2K_2 + K_3 \right). \end{aligned}
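And a direct transcription of the classical fourth order Runge-Kutta step tabulated above (again a sketch; the function name is mine):

def rk4_step(f, t, y, h):
    K0 = f(t, y)
    K1 = f(t + 0.5*h, y + 0.5*h*K0)
    K2 = f(t + 0.5*h, y + 0.5*h*K1)
    K3 = f(t + h, y + h*K2)
    return y + (h/6.0)*(K0 + 2*K1 + 2*K2 + K3)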
ODEs with rapidly decaying transients or with widely varying time scales often present serious difficulties for numerical methods that are not A-stable.
Example: $$ y'(t) = -50[y(t) - t] + 1, \;\; y(0) = 1; $$
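The exact solution is $y(t) = t + e^{-50t}$: the fast transient decays almost immediately, yet it dictates the forward Euler step restriction $h < 2/50$. A sketch comparing forward and backward Euler (the backward Euler update is solved in closed form since $f$ is linear here):

import numpy as np

def stiff_demo(h, T=1.0):
    y_fe = y_be = 1.0
    for n in range(int(round(T/h))):
        t = n*h
        y_fe = y_fe + h*(-50.0*(y_fe - t) + 1.0)               # forward Euler
        y_be = (y_be + h*(50.0*(t + h) + 1.0))/(1.0 + 50.0*h)  # backward Euler
    return y_fe, y_be

for h in [0.05, 0.04, 0.02]:
    print(h, stiff_demo(h), 1.0 + np.exp(-50.0))   # exact value y(1) = 1 + e^{-50}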
#restart notebook
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
r = 10  # Van der Pol parameter (the mu in the displayed equation)
def f(y, t):
    # state y = [x, x']; returns [x', x''] with x'' = r*(1 - x^2)*x' - x
    return [y[1], r*(1 - y[0]*y[0])*y[1] - y[0]]
# initial value
y0 = [2.0, 1.]
n = 10000
T = 3.0*r
h = T/n
# print h
ts = np.arange(0.0001,T, h)
ys = odeint(f, y0, ts)
fig = plt.figure(figsize=(16,6))
ax = fig.add_subplot(1, 2, 1)
ax.plot(ts, ys[:, :], '.-')
plt.xlabel('Time')
plt.title('odeint')
ax = fig.add_subplot(122)
ax.set_axis_off()
ax.text(0, .5, r"$\;\; y' = \mu (1-x^2)y - x$ ", size="xx-large");  # raw strings keep the LaTeX backslashes intact
ax.text(0, .6, r"$\;\; x' = y$", size="xx-large");
ax.text(0, .75, "Integrating Van der Pol system", size="x-large");
plt.show()
from scipy.integrate import ode
from mpl_toolkits.mplot3d import Axes3D
def lorenz_sys(t, q):
    x = q[0]
    y = q[1]
    z = q[2]
    # sigma, rho and beta are global.
    f = [sigma * (y - x),
         rho*x - y - x*z,
         x*y - beta*z]
    return f
#ic = [1.0, 2.0, 1.0]
ic = [0.01, 0.01, 0.01]
t0 = 0.0
t1 = 70.0
dt = 0.01
sigma = 10.0
rho = 28.0
beta = 8.0/3.0
#sigma = 28.0
#rho = 46.92
#beta = 4.0
solver = ode(lorenz_sys)
t = []
sol = []
solver.set_initial_value(ic, t0)
#solver.set_integrator('dop853')
solver.set_integrator('dopri5')
while solver.successful() and solver.t < t1:
    solver.integrate(solver.t + dt)
    t.append(solver.t)
    sol.append(solver.y)
t = np.array(t)
sol = np.array(sol)
fig = plt.figure()
ax = fig.add_subplot(121, projection='3d')  # Axes3D(fig) no longer auto-attaches in recent matplotlib
ax.plot(sol[:,0], sol[:,1], sol[:,2])
plt.xlabel('x')
plt.ylabel('y')
ax = fig.add_subplot(122)
ax.set_axis_off()
ax.text(2, .3, r"$\;\; z' = x y - \beta z$", size="xx-large");  # beta, rho, sigma match the code parameters
ax.text(2, .4, r"$\;\; y' = x ( \rho - z ) - y $", size="xx-large");
ax.text(2, .5, r"$\;\; x' = \sigma (y - x) $", size="xx-large");
ax.text(2, .65, "Integrating Lorenz system", size="x-large");
plt.show()