Adams-Bashforth and Adams-Moulton methods

Subject classification: this is a mathematics resource.

Definitions

Linear multistep methods are used for the numerical solution of ordinary differential equations, in particular the initial value problem

 y' = f(t,y)\quad\text{with}\quad y(t_0)=y_0\,.

The Adams-Bashforth methods and Adams-Moulton methods are described on the Linear multistep method page.

Derivation

There are (at least) two ways to derive the Adams-Bashforth methods and Adams-Moulton methods. We will demonstrate both derivations, one using polynomial interpolation and one using Taylor's theorem, for the two-step Adams-Bashforth method.

Derive the two-step Adams-Bashforth method by using polynomial interpolation

From the w:Fundamental theorem of calculus we get

 y(t_{n+1}) = y(t_{n}) + \int_{t_{n}}^{t_{n+1}} y'(t)\,dt\,.\qquad(1)

Set

 A = \int_{t_{n}}^{t_{n+1}} y'(t)\,dt = \int_{t_{n}}^{t_{n+1}} f(t,y(t))\,dt\,.\qquad(2)

To approximate the value of A, we can use an interpolating polynomial P(t) in place of f(t,y(t)). The interpolating polynomial in the Lagrange form is the linear combination

L(x) := \sum_{j=0}^{k} y_j \ell_j(x)

where

\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m}\,.

Then the interpolating polynomial through the points (t_{n-1}, f(t_{n-1},y_{n-1})) and (t_{n}, f(t_{n},y_{n})) is

P(t)= f(t_{n},y_{n})\frac{t-t_{n-1}}{t_{n}-t_{n-1}} + f(t_{n-1},y_{n-1})\frac{t-t_{n}}{t_{n-1}-t_{n}}\,.

Thus, (2) becomes

A = \int_{t_{n}}^{t_{n+1}} f(t,y(t))\,dt \approx \int_{t_{n}}^{t_{n+1}} P(t)\,dt = \int_{t_{n}}^{t_{n+1}} \left[ f(t_{n},y_{n})\frac{t-t_{n-1}}{t_{n}-t_{n-1}} + f(t_{n-1},y_{n-1})\frac{t-t_{n}}{t_{n-1}-t_{n}} \right] dt\,.\qquad(3)

Integrating and simplifying, the right-hand side of equation (3) becomes

\begin{array}{rcl}
&&\frac{1}{2}(t_{n}+t_{n+1})\left(f(t_{n},y_{n})-f(t_{n-1},y_{n-1})\right)-t_{n-1}f(t_{n},y_{n})+ t_{n}f(t_{n-1},y_{n-1})\\ \\
 &=&\frac{1}{2}(t_{n}+t_{n+1}-2t_{n-1})f(t_{n},y_{n}) + \frac{1}{2}(t_{n}-t_{n+1})f(t_{n-1},y_{n-1})\,.
\end{array}

Since t_{n-1}, t_{n} and t_{n+1} are equally spaced, we have t_{n}-t_{n-1} = t_{n+1}-t_{n} = h. Therefore, the value of A is

A = \frac{3}{2}hf(t_{n},y_{n})-\frac{1}{2}hf(t_{n-1},y_{n-1})
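The integration above can be checked symbolically. The following sketch (our own addition, using SymPy) integrates the interpolant P(t) over [t_n, t_{n+1}] and confirms the value of A; the symbols f_n and f_nm1 stand for f(t_n,y_n) and f(t_{n-1},y_{n-1}).

```python
# Symbolic check that integrating P(t) over [t_n, t_{n+1}] gives
# A = (3/2) h f_n - (1/2) h f_{n-1}, with equally spaced nodes.
import sympy as sp

t, h, tn, fn, fnm1 = sp.symbols('t h t_n f_n f_nm1')
tnm1, tnp1 = tn - h, tn + h   # t_{n-1} and t_{n+1}

# Lagrange interpolant through (t_n, f_n) and (t_{n-1}, f_{n-1})
P = fn*(t - tnm1)/(tn - tnm1) + fnm1*(t - tn)/(tnm1 - tn)

A = sp.integrate(P, (t, tn, tnp1))
assert sp.simplify(A - (sp.Rational(3, 2)*h*fn - sp.Rational(1, 2)*h*fnm1)) == 0
```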

Substituting this value into (1) yields

 y(t_{n+1}) \approx y(t_{n}) + \frac{3}{2}hf(t_{n},y_{n})-\frac{1}{2}hf(t_{n-1},y_{n-1})\,.

Thus, the equation  y_{n+1} = y_{n} + \frac{3}{2}hf(t_{n},y_{n})-\frac{1}{2}hf(t_{n-1},y_{n-1}) is the two-step Adams-Bashforth method.
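As a concrete illustration, the method can be applied to a simple test problem. The following Python sketch (our own addition; the test equation y' = -y with y(0) = 1 is our choice, not from the text) uses the exact solution for the second starting value.

```python
import math

def ab2(f, t0, y0, y1, h, n_steps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}).
    Needs two starting values; y1 must come from another method (or the
    exact solution, as here)."""
    t = [t0 + i*h for i in range(n_steps + 1)]
    y = [y0, y1]
    for n in range(1, n_steps):
        y.append(y[n] + h*(1.5*f(t[n], y[n]) - 0.5*f(t[n-1], y[n-1])))
    return t, y

# Test problem: y' = -y, y(0) = 1, exact solution exp(-t)
f = lambda t, y: -y
h = 0.01
t, y = ab2(f, 0.0, 1.0, math.exp(-h), h, 100)
err = abs(y[-1] - math.exp(-1.0))
print(err)  # small, consistent with second-order accuracy
```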

Derive the two-step Adams-Bashforth method by using Taylor's theorem

To simplify, set f_{k}=f(t_{k},y_{k}). Then the general form of the Adams-Bashforth method is

y_{n+r}=y_{n+r-1}+h\sum_{k=1}^r \lambda_{k}f_{n+r-k}\qquad(4)

where \sum_{k=1}^r \lambda_{k}=1. For the two-step Adams-Bashforth method, set \lambda_{1}=1-\lambda and \lambda_{2}=\lambda. Then (4) becomes

\begin{align}y_{n+2}&=y_{n+1}+h((1-\lambda)f_{n+1}+\lambda f_{n})\\
 &=y(t_{n+1})+h((1-\lambda)y'(t_{n+1})+\lambda y'(t_{n}))\,.\end{align}

By using Taylor's theorem, expand y'(t_{n}) about t_{n+1} to get

y_{n+2}=y(t_{n+1})+h\left((1-\lambda)y'(t_{n+1})+\lambda (y'(t_{n+1})-hy''(t_{n+1})+O(h^2))\right)\,.

Thus, the simplified form is

y_{n+2}=y(t_{n+1})+hy'(t_{n+1})- \lambda h^2 y''(t_{n+1})+O(h^3)\qquad(5)

Expanding y(t_{n+2}) about t_{n+1} yields

y(t_{n+2})=y(t_{n+1})+hy'(t_{n+1})+\frac{1}{2}h^2 y''(t_{n+1})+O(h^3)\,.\qquad(6)

Subtracting (5) from (6) and requiring the h^2 terms to cancel gives \lambda=-\frac{1}{2}. The two-step Adams-Bashforth method is then

 y_{n+2} = y_{n+1} + \frac{3}{2}hf(t_{n+1},y_{n+1})-\frac{1}{2}hf(t_{n},y_{n})\,.

Since

y(t_{n+2})-y_{n+2}=O(h^3)\,,

the local truncation error is of order O(h^3), and thus the method is second order. (See w:Linear multistep method#Consistency and order and w:Truncation error.)
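Second-order global accuracy can also be verified empirically: halving h should cut the global error by roughly a factor of 4. A minimal Python sketch (our own addition; the test problem y' = -y, y(0) = 1 is our choice):

```python
import math

def ab2_error(h, T=1.0):
    """Integrate y' = -y, y(0) = 1 with the two-step Adams-Bashforth method,
    using the exact solution for the second starting value, and return the
    absolute error at t = T."""
    n = round(T / h)
    t = [i*h for i in range(n + 1)]
    y = [1.0, math.exp(-h)]
    f = lambda t_, y_: -y_
    for k in range(1, n):
        y.append(y[k] + h*(1.5*f(t[k], y[k]) - 0.5*f(t[k-1], y[k-1])))
    return abs(y[-1] - math.exp(-T))

e1, e2 = ab2_error(0.02), ab2_error(0.01)
print(e1 / e2)  # close to 4, consistent with a second-order method
```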

Exercises

1. Derive the three-step Adams-Bashforth method by using polynomial interpolation.

2. Derive the second-order Adams-Moulton method by using Taylor's theorem.

Predictor–corrector method

To solve an ordinary differential equation (ODE), a w:Predictor–corrector method is an algorithm that can be used in two steps. First, the prediction step calculates a rough approximation of the desired quantity, typically using an explicit method. Second, the corrector step refines the initial approximation using another means, typically an implicit method.

Here we mainly discuss using the Adams-Bashforth and Adams-Moulton methods as a pair to construct a predictor–corrector method.

Example: Adams predictor–corrector method

Let's start from the two-step Adams method. The predictor step uses the two-step Adams-Bashforth method:

 P_{n+1} = y_{n} + \frac{3}{2}hf(t_{n},y_{n})-\frac{1}{2}hf(t_{n-1},y_{n-1})

Then, the corrector step can use the trapezoidal rule (the second-order Adams-Moulton method):

 y_{n+1} = y_n + \tfrac12 h \big( f(t_{n+1},P_{n+1}) + f(t_n,y_n) \big)
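This pair can be sketched in Python as a predict-evaluate-correct-evaluate (PECE) loop. The function name and the test problem y' = -y below are our own choices for illustration:

```python
import math

def adams_pc2(f, t0, y0, y1, h, n_steps):
    """Predict with the two-step Adams-Bashforth method, then correct with
    the trapezoidal (Adams-Moulton) rule, evaluating f at the predicted
    value instead of solving the implicit equation."""
    t = [t0 + i*h for i in range(n_steps + 1)]
    y = [y0, y1]
    for n in range(1, n_steps):
        p = y[n] + h*(1.5*f(t[n], y[n]) - 0.5*f(t[n-1], y[n-1]))  # predictor
        y.append(y[n] + 0.5*h*(f(t[n+1], p) + f(t[n], y[n])))     # corrector
    return t, y

# Example: y' = -y, y(0) = 1, exact solution exp(-t)
f = lambda t, y: -y
h = 0.01
t, y = adams_pc2(f, 0.0, 1.0, math.exp(-h), h, 100)
print(abs(y[-1] - math.exp(-1.0)))  # small second-order error
```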

Also, pairing the four-step Adams-Bashforth method with the fourth-order Adams-Moulton corrector gives the predictor–corrector formula:


\begin{cases}
 P_{n+1}=y_{n}+\frac{h}{24}(55f(t_{n},y_{n})-59f(t_{n-1},y_{n-1})+37f(t_{n-2},y_{n-2})-9f(t_{n-3},y_{n-3})) \\
 y_{n+1}=y_{n}+\frac{h}{24}(9f(t_{n+1},P_{n+1})+19f(t_{n},y_{n})-5f(t_{n-1},y_{n-1})+f(t_{n-2},y_{n-2})) \\
\end{cases}

Note that the four-step Adams-Bashforth method needs four starting values. Another method, for example a Runge-Kutta method, must be used to obtain these values.

MATLAB program

The MATLAB code for the four-step Adams predictor-corrector method is:

function [x,y]=maadams4(dyfun,xspan,y0,h)
% Use the four-step Adams predictor-corrector method to solve the ODE
% y'=f(x,y), y(x0)=y0.
% Inputs:  dyfun -- the function f(x,y)
%          xspan -- the interval [x0,xn]
%          y0    -- the initial value
%          h     -- the step size
% Outputs: x, y  -- the nodes and the approximate values of y
x=xspan(1):h:xspan(2);
% Use a Runge-Kutta method to get the four starting values
[xx,yy]=marunge(dyfun,[x(1),x(4)],y0,h);
y(1:4)=yy(1:4);
for n=4:(length(x)-1)
    % Predictor: four-step Adams-Bashforth
    p=y(n)+h/24*(55*feval(dyfun,x(n),y(n))-59*feval(dyfun,x(n-1),y(n-1))...
        +37*feval(dyfun,x(n-2),y(n-2))-9*feval(dyfun,x(n-3),y(n-3)));
    % Corrector: Adams-Moulton, with f evaluated at the predicted value
    y(n+1)=y(n)+h/24*(9*feval(dyfun,x(n+1),p)+19*feval(dyfun,x(n),y(n))...
        -5*feval(dyfun,x(n-1),y(n-1))+feval(dyfun,x(n-2),y(n-2)));
end
x=x'; y=y';

The MATLAB code for the classical fourth-order Runge-Kutta method is:

function [x,y]=marunge(dyfun,xspan,y0,h)
% Classical fourth-order Runge-Kutta method for y'=f(x,y), y(x0)=y0
x=xspan(1):h:xspan(2);
y(1)=y0;
for n=1:(length(x)-1)
    k1=feval(dyfun,x(n),y(n));
    k2=feval(dyfun,x(n)+h/2,y(n)+h/2*k1);
    k3=feval(dyfun,x(n)+h/2,y(n)+h/2*k2);
    k4=feval(dyfun,x(n+1),y(n)+h*k3);
    y(n+1)=y(n)+h*(k1+2*k2+2*k3+k4)/6;
end
x=x'; y=y';

This article is issued from Wikiversity (version of Friday, March 25, 2016). The text is available under the Creative Commons Attribution/Share-Alike License; additional terms may apply for the media files.