Numerical Methods for Ordinary Differential Equations

<latex> \documentclass[11pt]{article}
\def\inv#1{{1 \over #1}}
\def\ddt{{d \over dt}}
\def\mean#1{\left\langle {#1}\right\rangle}
\def\sigot{\sigma_{12}}
\def\sigto{\sigma_{21}}
\def\eval#1{\big|_{#1}}
\def\tr{\nabla}
\def\dce{\vec\tr\times\vec E}
\def\dcb{\vec\tr\times\vec B}
\def\wz{\omega_0}
\def\ef{\vec E}
\def\ato{{A_{21}}}
\def\bto{{B_{21}}}
\def\bot{{B_{12}}}
\def\bfield{{\vec B}}
\def\ap{{A^\prime}}
\def\xp{{x^{\prime}}}
\def\yp{{y^{\prime}}}
\def\zp{{z^{\prime}}}
\def\tp{{t^{\prime}}}
\def\upx{{u_x^\prime}}
\def\upy{{u_y^\prime}}
\def\e#1{\cdot10^{#1}}
\def\hf{\frac12}
\def\^{\hat }
\def\.{\dot }
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{eufrak}

\begin{document} \section{ Numerical Integration Methods }

Numerical integration is used to solve ODEs that cannot be solved analytically, generally through discretization of the ODE. Since the advent of the modern computer, numerical integration methods have become an essential tool in the physical sciences and beyond. Here we will describe two of many such methods, along with some sample code.

\subsection*{ The Euler Method }

The Euler Method is the simplest numerical integration method, and is most likely the only one you will need to use in C207. Here, we'll walk through how it works and show some examples of how to implement it. Imagine you have some ODE:

$$\frac{dy}{dx} = f(x,y)$$

with some initial condition $y(x_0) = y_0$. Now, let's say we want to determine the value $y_1 = y(x_1)$ at some new point $x_1$ that is close to $x_0$. We can estimate this by rearranging the ODE above into the following form:

$$\frac{y_{1} - y_{0}}{x_{1} - x_{0}} = f(x_0,y_0)$$

which we can write as:

$$y_{1} = y_{0} + (x_{1} - x_{0}) f(x_0,y_0) = y_{0} + h f(x_0,y_0)$$

where $(x_{1} - x_{0}) = h$ is called the ``step size.'' We can generalize this equation with the following recursive relation:

$$ y_{i+1} = y_{i} + h f(x_{i},y_{i}) $$

where the step size is typically set as some constant value. As we will see, smaller step sizes produce more accurate integration results. More precisely, the error is of order $O(h)$, meaning that reducing $h$ by a factor of two makes the Euler method roughly twice as accurate.

\\[12pt]

$\textbf{Example \, 1:}$

Now that we have a simple integration scheme, let's see how we'd execute this in python. Say our ODE is:

$$\frac{dy}{dx} = y$$

and our initial condition is $y(0) = 1$. We can solve this analytically (the solution is $y = e^{x}$), but let's see what we get using the Euler method for different values of $h$.

\begin{verbatim}

import numpy as np
import matplotlib.pyplot as plt

# plot analytic solution
plt.figure(figsize=(10,4))
x_analytic = np.linspace(0,5,100)
y_analytic = np.exp(x_analytic)
plt.plot(x_analytic, y_analytic, 'k--', label='analytic solution')

# define initial condition and values of h that will be used
x0 = 0
y0 = 1
h_values = np.array([0.1,0.5,1.0])

for h in h_values:
    # define arrays that can be used to record x and y
    x = np.zeros([100])
    x[0] = x0
    y = np.zeros([100])
    y[0] = y0
    # execute integration for current value of h
    for i in range(1, len(y)):
        x[i] = x[i-1] + h
        dydx = y[i-1]
        y[i] = y[i-1] + h*dydx
    # plot curve for current value of h
    plt.plot(x, y, label='h = '+str(h))

# remainder of plotting routine
plt.xlabel('x', fontsize=12)
plt.ylabel('y', fontsize=12)
plt.xlim([0,7])
plt.ylim([0,100])
plt.legend(loc=2, fontsize=12)
plt.tight_layout()
plt.show()

\end{verbatim}

\begin{center} Numerical Euler1.png \end{center}

As we can see, the curve obtained using the Euler method converges to the analytic solution as $h \rightarrow 0$. Therefore, you must be careful to use a value of $h$ small enough to produce realistic results when performing numerical integrations.

\\[12pt]

$\textbf{Example \, 2:}$

Since this is an astrophysics course, we will conclude this section with a more relevant example. Let's calculate the trajectory of a non-relativistic electron in the presence of a magnetic field (for relativistic electrons, see: Synchrotron Radiation). Assume the electron begins at the origin and has an initial velocity of $\vec{v} = (10^{5}, 0, 10^{5}) \, \mathrm{cm/s}$. Also assume a magnetic field with a strength of $10^{-4}$ Gauss that points along the x-axis, or $\vec{B} = (10^{-4}, 0, 0) \, \mathrm{Gauss}$. In this case, we need to apply Euler's method twice within each iteration of the integration. First, we calculate the new position of the electron given its instantaneous velocity:

$$\vec{x}_{i+1} = \vec{x}_{i} + h \vec{v}_{i}$$

and then we calculate the new velocity of the electron given its instantaneous acceleration:

$$\vec{v}_{i+1} = \vec{v}_{i} + h \vec{a}_{i} = \vec{v}_{i} + h \frac{e}{m_e} \left( \frac{\vec{v}_{i}}{c} \times \vec{B} \right) .$$

In our code, we will use a step size of $10^{-6}$ seconds and integrate for a total of 0.1 seconds.

\begin{verbatim}

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d

# constants
c = 3e10   # cm/s
m = 1e-27  # g
e = 5e-10  # esu
h = 1e-6   # s

# magnetic field vector (Gauss)
Bx, By, Bz = 1e-4, 0, 0
B = np.array([Bx, By, Bz])

# velocity vector (cm/s)
vx0, vy0, vz0 = 1e5, 0, 1e5
v = np.array([vx0, vy0, vz0])

# position vector and trajectory arrays
x, y, z = np.zeros(100000), np.zeros(100000), np.zeros(100000)
r = np.array([0.0, 0.0, 0.0])

# integration
for i in range(1, len(x)):
    # calculate next position given current velocity
    r += h*v
    x[i], y[i], z[i] = r[0], r[1], r[2]
    # calculate next velocity given current acceleration
    a = (e/m) * np.cross(v/c, B)
    v += h*a

# plotting routine
fig = plt.figure(figsize=(10,4))
ax = plt.axes(projection='3d')
ax.plot3D(x, y, z, 'black')
ax.set_xlabel('x (cm)', fontsize=12, labelpad=10)
ax.set_ylabel('y (cm)', fontsize=12, labelpad=10)
ax.set_zlabel('z (cm)', fontsize=12, labelpad=10)
plt.tight_layout()
plt.show()

\end{verbatim}

\begin{center} Cyc Euler.png \end{center}


\subsection*{ The Runge-Kutta Method }

Runge-Kutta (RK) methods are a family of iterative techniques used to solve ODEs numerically. Explicit RK methods stem from the equation:

$$y_{i+1} = y_{i} + h \sum_{n=1}^{s} b_{n} k_{n}$$

where:

$$ k_{1} = f(x_{i}, y_{i}), $$
$$ k_{2} = f(x_{i} + c_2 h, y_{i} + h(a_{21} k_1)), $$
$$ k_{3} = f(x_{i} + c_3 h, y_{i} + h(a_{31} k_1 + a_{32} k_2)), $$
$$ \vdots $$
$$ k_{s} = f(x_{i} + c_s h, y_{i} + h(a_{s1} k_1 + a_{s2} k_2 + \ldots + a_{s,s-1} k_{s-1})) $$

and where the values of $s$, $a_{n,m}$, $b_n$, and $c_n$ vary based on the specific method being used.

One variation of this equation is the first-order Euler method, for which we set $s = 1$ and $b_1 = 1$. Another popular variation is the fourth-order classical Runge-Kutta method, also known as RK4. The equations for RK4 are the following:

$$ y_{i+1} = y_{i} + \frac{1}{6}(k_1 + 2 k_2 + 2 k_3 + k_4), $$
$$ x_{i+1} = x_{i} + h $$

where:

$$ k_{1} = h f(x_{i}, y_{i}), $$
$$ k_{2} = h f(x_{i} + h/2, y_{i} + k_1/2), $$
$$ k_{3} = h f(x_{i} + h/2, y_{i} + k_2/2), $$
$$ k_{4} = h f(x_{i} + h, y_{i} + k_3). $$

Each of the $k_{n}$ terms samples the derivative at a different point in the $(x,y)$ space, and $y_{i+1}$ is computed from a weighted average of these samples. The benefit of using RK4 over the Euler method is that it is much more accurate for larger step sizes: the error is of order $O(h^4)$, so decreasing the step size by a factor of two makes the integration sixteen times more accurate. This advantage matters for long simulations, where the Euler method would require prohibitively small step sizes to reach the same accuracy. You won't need to use RK4 for C207, but it may be useful in your research down the line.
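
To make this concrete, below is a minimal sketch of RK4 in python, applied to the same ODE as in Example 1 ($dy/dx = y$ with $y(0) = 1$); the helper functions \texttt{f} and \texttt{rk4\_step} are our own illustrative names, not part of any library.

\begin{verbatim}

import numpy as np

def f(x, y):
    # right-hand side of the ODE dy/dx = y
    return y

def rk4_step(f, x, y, h):
    # advance one classical RK4 step from (x, y) to (x + h, y_next)
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return x + h, y + (k1 + 2*k2 + 2*k3 + k4)/6

# integrate from x = 0 to x = 5 with a step size of h = 0.5
x, y = 0.0, 1.0
h = 0.5
for i in range(10):
    x, y = rk4_step(f, x, y, h)

print(y, np.exp(x))  # RK4 estimate vs. the analytic value e^5

\end{verbatim}

Even with the coarse step size $h = 0.5$, which left the Euler method far below the analytic curve in Example 1, RK4 reproduces $e^{5} \approx 148.4$ to within a fraction of a percent.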

\section{ Monte Carlo Methods }

Monte Carlo methods are algorithms that use repeated random sampling to obtain results. In this section, we will discuss how to do this sampling manually in python and touch upon the more complicated subject of Markov Chain Monte Carlo algorithms.

\subsection*{ Monte Carlo Sampling }

Monte Carlo (MC) sampling is generally used when one wants to simulate a population of a continuous random variable that is described by some known probability distribution. Sure, there exist python packages that make this easy (see: Random Sampling with Numpy). But what if you are dealing with a probability distribution that isn't supported by a python package, or if you aren't using python at all? Because of situations like this, it is useful to know how to do the sampling manually.

To do MC sampling manually, you must first compute the cumulative distribution corresponding to your probability distribution. The cumulative distribution function of a continuous random variable $X$ is given by:

$$ F_X(x) = A \int_{-\infty}^x f_X(t) dt $$

where $f_X(t)$ is the probability density function and $A$ is a normalization factor. It is necessary to normalize the probability density function because we want the maximum value of $F_X(x)$ (i.e., the sum of the probabilities of all possible outcomes) to be 1. Once you determine $A$ for your $f_X(t)$, you can rearrange the equation to solve for $x$. Last, you can simulate a population of $X$ by randomly drawing values of $F_X(x)$ between 0 and 1. Let's walk through a few examples to clarify.

\\[12pt]

$\textbf{Example \, 1:}$

First, we will imagine a scenario where our continuous random variable is uniformly distributed between 0 and 100. In this case, the unnormalized probability density function is simply:

$$ f(t) = 1 .$$

Solving for the normalization factor $A$, we get:

$$ A = \frac{1}{\int_{0}^{100} f(t) dt} = \frac{1}{100} .$$

Last, we can solve for $x$ using the equation above:

$$ F(x) = \frac{1}{100} \int_{0}^x f(t) dt = \frac{x}{100} \rightarrow x = 100 \, F(x) .$$

By randomly selecting multiple values of $F(x)$ between 0 and 1, we will obtain a population distributed uniformly between 0 and 100, just like the parent probability distribution.

\begin{verbatim}

import numpy as np
import matplotlib.pyplot as plt

# PDF and CDF
x_dist = np.linspace(0,100,int(1e4))
f_dist = np.full_like(x_dist, 1)
F_dist = np.cumsum(f_dist)
F_dist /= max(F_dist)

# sampling procedure
def sample(F):
    x = 100*F
    return x

N = int(1e4)
F_samples = np.random.rand(N)
x_samples = sample(F_samples)

# plotting routine
plt.figure(figsize=(15,4))
ax1, ax2, ax3 = plt.subplot(131), plt.subplot(132), plt.subplot(133)
ax1.plot(x_dist, f_dist)
ax1.set_xlabel('t', fontsize=12)
ax1.set_ylabel('f(t)', fontsize=12)
ax2.plot(x_dist, F_dist)
ax2.set_xlabel('x', fontsize=12)
ax2.set_ylabel('F(x)', fontsize=12)
ax3.hist(x_samples, bins=np.linspace(0,100,25), edgecolor='black')
ax3.set_xlabel('x', fontsize=12)
ax3.set_ylabel('frequency', fontsize=12)
plt.tight_layout()
plt.show()

\end{verbatim}

\begin{center} MC1.png \end{center}

The plot on the left shows the (uniform) probability distribution. The plot in the center shows the cumulative distribution. The histogram on the right shows the population simulated with our sampling procedure for $10^4$ samples.

\\[12pt]

$\textbf{Example \, 2:}$

Next, let's do a more physical example. Say you want to simulate a cloud of hydrogen atoms by sampling their velocities. If we assume the atoms are collisional, we can say that the probability density function of their velocities is Maxwellian:

$$ f(v) = v^2 e^{-\frac{m v^2}{2 k T}} .$$

The normalization factor is thus:

$$ A = \frac{1}{\int_{0}^{\infty} f(v) dv} = \frac{(\frac{m}{k T})^{3/2}}{(\pi/2)^{1/2}} .$$

And so our sampling equation is:

$$ \int_{0}^{x} f(v) dv = \frac{(\pi/2)^{1/2}}{(\frac{m}{k T})^{3/2}} \, F(x) .$$

We cannot solve for $x$ directly here, but we can use some coding tricks to complete the sampling procedure.

\begin{verbatim}

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erf

# constants
m = 1e-24  # g
k = 1e-16  # erg/K
T = 1e2    # K

# PDF and CDF
v_dist = np.linspace(0,int(5e5),int(5e5))
f_dist = v_dist**2 * np.exp(-(m*v_dist**2)/(2*k*T))
F_dist = np.cumsum(f_dist)
F_dist /= max(F_dist)

# sampling procedure
# calculate LHS of the equation for many possible values of x,
# then select the x that gives LHS = RHS
LHS = (np.pi/2)**0.5 * (k*T/m)**1.5 * erf(v_dist * (m/(2*k*T))**0.5) \
      - (k*T*v_dist/m)*np.exp(-(m*v_dist**2)/(2*k*T))

def sample(F):
    RHS = F * (np.pi/2)**0.5 * (m/(k*T))**(-1.5)
    idx = np.argmin(abs(LHS-RHS))
    x = v_dist[idx]
    return x

N = int(1e4)
F_samples = np.random.rand(N)
x_samples = np.zeros_like(F_samples)
for i in range(len(F_samples)):
    x_samples[i] = sample(F_samples[i])

# plotting routine
plt.figure(figsize=(15,4))
ax1, ax2, ax3 = plt.subplot(131), plt.subplot(132), plt.subplot(133)
ax1.plot(v_dist, f_dist)
ax1.set_xlabel('v (cm/s)', fontsize=12)
ax1.set_ylabel('f(v)', fontsize=12)
ax2.plot(v_dist, F_dist)
ax2.set_xlabel('x (cm/s)', fontsize=12)
ax2.set_ylabel('F(x)', fontsize=12)
ax3.hist(x_samples, bins=np.linspace(0,5e5,25), edgecolor='black')
ax3.set_xlabel('x (cm/s)', fontsize=12)
ax3.set_ylabel('frequency', fontsize=12)
plt.tight_layout()
plt.show()

\end{verbatim}

\begin{center} MC2.png \end{center}

The plot on the left shows the (Maxwellian) probability distribution. The plot in the center shows the cumulative distribution. The histogram on the right shows the population simulated with our sampling procedure for $10^4$ samples.

\subsection*{ Markov Chain Monte Carlo }

Markov Chain Monte Carlo (MCMC) algorithms are used to obtain random samples from probability distributions for which direct random sampling is difficult (e.g., high-dimensional distributions). Briefly, MCMCs work by placing ``walkers'' into the parameter space. With each iteration, these walkers take a random step such that the sampled distribution of each parameter converges to the proposed distribution for that parameter. One common application of MCMCs is parameter estimation, which can be done by considering the mean and variance of each parameter's sample distribution after convergence is achieved. You will not need to use MCMC for C207, but it may come in handy in the future.

If you wish to learn more about how to implement MCMCs in python, check out PyMC3 and emcee.
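
To make the idea concrete, here is a minimal sketch of the Metropolis algorithm, one of the simplest MCMC methods, drawing samples from a one-dimensional Gaussian. The target density and proposal step size below are illustrative choices for this sketch, not tied to PyMC3 or emcee.

\begin{verbatim}

import numpy as np

def log_target(x):
    # log of the (unnormalized) target density: a unit Gaussian
    return -0.5 * x**2

N = int(1e5)   # number of iterations
step = 1.0     # proposal step size (a tuning parameter)
samples = np.zeros(N)
x = 0.0        # starting position of the walker

for i in range(N):
    # propose a random step away from the current position
    x_new = x + step * np.random.randn()
    # accept with probability min(1, p(x_new)/p(x))
    if np.log(np.random.rand()) < log_target(x_new) - log_target(x):
        x = x_new
    samples[i] = x

print(np.mean(samples), np.std(samples))  # should approach 0 and 1

\end{verbatim}

The walker wanders through parameter space, preferentially accepting steps toward higher-probability regions, so the histogram of \texttt{samples} converges to the target distribution.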

\end{document} </latex>