Numerical Analysis/Power iteration examples


The power method is an eigenvalue algorithm that can be used to find the eigenvalue with the largest absolute value. In some exceptional cases, however, it fails to converge numerically to the dominant eigenvalue and the dominant eigenvector. Before working through such examples, we need the definitions of the dominant eigenvalue and eigenvector.

Definitions

Let \lambda_1, \lambda_2, \dots, \lambda_n be the eigenvalues of an n\times n matrix A. \lambda_1 is called the dominant eigenvalue of A if

|\lambda_1| > |\lambda_i|\quad\text{for}\quad i = 2, 3, \dots, n.

The eigenvectors corresponding to \lambda_1 are called dominant eigenvectors of A. Next we show some examples in which the power method does not converge to the dominant eigenpair of the given matrix.
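The normalization scheme used in the examples below (each iterate is divided by its entry of largest magnitude, so that the scaling factors c_k estimate an eigenvalue) can be sketched in Python; the function name and interface here are illustrative, not from the original text:

```python
import numpy as np

def power_method(A, x0, iters=50):
    """Power iteration with max-magnitude scaling: Y_{k+1} = A X_k,
    c_{k+1} = the entry of Y_{k+1} with the largest magnitude (with its sign),
    X_{k+1} = Y_{k+1} / c_{k+1}."""
    x = np.asarray(x0, dtype=float)
    cs = []
    for _ in range(iters):
        y = A @ x
        c = y[np.argmax(np.abs(y))]   # signed entry of largest magnitude
        cs.append(c)
        x = y / c
    return cs, x

# For a matrix with a clear dominant eigenvalue, the scaling factors
# converge to it:
cs, x = power_method(np.array([[2.0, 0.0], [0.0, 1.0]]), [1.0, 1.0])
print(cs[-1])   # -> 2.0
```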

Example: Bad starting guess

Use the power method to find an eigenvalue and its corresponding eigenvector of the matrix

 A=\left[\begin{array}{c c c}1 & 2 & 1 \\-4 & 7 & 1 \\-1 & -2 & -1 \end{array} \right]

starting with the vector

X_0=\left[\begin{array}{c}1 \\1 \\ 1 \\\end{array} \right].

We apply the power method and get

Y_1 =A X_0 = \left[\begin{array}{c}4 \\4 \\ -4 \\\end{array} \right],

with c_1=4, and it implies

X_1=\left[\begin{array}{c}1 \\1 \\ -1 \\\end{array} \right].

Continuing the iteration,

Y_2 =AX_1 =\left[\begin{array}{c}2 \\2 \\ -2 \\\end{array} \right],

so c_2=2, and it implies

X_2 =\left[\begin{array}{c}1 \\1 \\ -1 \\\end{array} \right].

As we can see, the sequence \left( c_{k} \right) converges to 2.

From the characteristic polynomial, the eigenvalues of the matrix A are \lambda_1=0, \lambda_2=2, and \lambda_3=5, so the dominant eigenvalue is 5; our hand calculation converged to the second largest eigenvalue instead.
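These eigenvalues, and the two hand iterations above, can be checked numerically; a small NumPy sketch:

```python
import numpy as np

A = np.array([[ 1.0,  2.0,  1.0],
              [-4.0,  7.0,  1.0],
              [-1.0, -2.0, -1.0]])

# Eigenvalues of A: 0, 2 and 5, so the dominant eigenvalue is 5.
print(sorted(np.real(np.linalg.eigvals(A))))

# The two hand iterations from above:
x0 = np.array([1.0, 1.0, 1.0])
y1 = A @ x0
print(y1)            # -> [ 4.  4. -4.]
x1 = y1 / 4          # c_1 = 4
print(A @ x1)        # -> [ 2.  2. -2.]
```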

The problem is that our initial guess led us exactly to the eigenvector corresponding to \lambda_2=2. If we run the iteration long enough on a computer, roundoff error will eventually introduce a component of the eigenvector for \lambda_3=5, and that component will come to dominate. We can carry this out in MATLAB; the code below runs the iteration and plots the convergence of the eigenvalue estimates.

A = [1 2 1; -4 7 1; -1 -2 -1];   % the matrix from this example
v = [1; 1; 1];                   % starting guess
eest = zeros(100,1);             % eigenvalue estimates
for i = 1:100
    x = A*v;                     % one power-method step
    e = (x'*v)/(v'*v);           % Rayleigh-quotient estimate of the eigenvalue
    v = x/norm(x);               % normalize the iterate
    eest(i) = e;
end
plot(eest);                      % plot the convergence of the estimates

We can see the convergence of the sequence \left( c_{k} \right) in the following figure:

[Figure: convergence of the sequence \left( c_{k} \right)]

We conclude that if the starting guess is very close to an eigenvector of the second largest eigenvalue, the eigenvalue estimates converge first to the second largest eigenvalue and only later, once roundoff error introduces a component along the dominant eigenvector, to the dominant eigenvalue. Choosing a different starting guess avoids the problem.
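To illustrate, here is a NumPy sketch of the same estimate as in the MATLAB code above (the Rayleigh quotient of the normalized iterates), run from two different starting guesses; the helper function name is ours:

```python
import numpy as np

A = np.array([[ 1.0,  2.0,  1.0],
              [-4.0,  7.0,  1.0],
              [-1.0, -2.0, -1.0]])

def eigenvalue_estimates(A, v, iters=100):
    # Same scheme as the MATLAB code above: Rayleigh-quotient estimates
    # of the eigenvalue along normalized power iterates.
    ests = []
    for _ in range(iters):
        x = A @ v
        ests.append((x @ v) / (v @ v))
        v = x / np.linalg.norm(x)
    return ests

# Starting at [1, 1, 1], which leads straight into the eigenvector for
# lambda = 2, the estimates sit at 2 for a long time:
print(eigenvalue_estimates(A, np.array([1.0, 1.0, 1.0]))[:3])

# A different starting guess converges to the dominant eigenvalue 5:
print(eigenvalue_estimates(A, np.array([1.0, 0.0, 0.0]))[-1])   # -> close to 5
```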

Example: Complex eigenvalue

Consider the matrix

A=\left[\begin{array}{c c c}3 & 2 & -2 \\-1 & 1 & 4 \\3 & 2 & -5 \end{array} \right] .

Apply the power method to find the eigenvalue of the matrix with starting guess

X_0 =\left[\begin{array}{c}1 \\1 \\ 1 \\\end{array} \right].

We compute

Y_1 =AX_0 =\left[\begin{array}{c}3 \\4 \\ 0 \\\end{array} \right],

thus c_1=4, and it implies

X_1 = \left[\begin{array}{c}0.75 \\1 \\ 0 \\\end{array} \right].

We continue doing some iterations:

AX_1 =\left[\begin{array}{c}4.25 \\0.25 \\ 4.25 \\\end{array} \right]

so c_2=4.25, and it implies

X_2 =\left[\begin{array}{c}1 \\0.0588 \\ 1 \\\end{array} \right].

AX_2 =\left[\begin{array}{c}1.1176 \\3.0588 \\ -1.8824 \\\end{array} \right]

so c_3=3.0588, and it implies

X_3 =\left[\begin{array}{c}0.3654 \\1 \\ -0.6154 \\\end{array} \right].
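The hand iterations above can be verified numerically, for instance with NumPy:

```python
import numpy as np

A = np.array([[ 3.0, 2.0, -2.0],
              [-1.0, 1.0,  4.0],
              [ 3.0, 2.0, -5.0]])

x0 = np.array([1.0, 1.0, 1.0])
y1 = A @ x0
print(y1)             # -> [3. 4. 0.]
x1 = y1 / 4           # c_1 = 4
y2 = A @ x1
print(y2)             # -> [4.25 0.25 4.25]
x2 = y2 / 4.25        # c_2 = 4.25
print(A @ x2)         # -> approximately [1.1176, 3.0588, -1.8824]
```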

We can see that the sequences \left( c_{k} \right) and \left( X_{k} \right) oscillate. Oscillating iterates often suggest a dominant complex-conjugate pair of eigenvalues, but here the characteristic polynomial is \lambda^3+\lambda^2-17\lambda+15=(\lambda-1)(\lambda-3)(\lambda+5), so the eigenvalues of the matrix are 1, 3, and -5, all real. The dominant eigenvalue is -5; because it is negative, the sign of the scaled iterates flips from step to step, which is why the sequences \left( c_{k} \right) and \left( X_{k} \right) oscillate at first before settling down. The power method does genuinely break down when the dominant eigenvalues of a real matrix are a complex-conjugate pair: with a real starting guess the iterates stay real and cannot converge to a complex eigenvector. We can see the behaviour of the sequence \left( c_{k} \right) in the figure below:

[Figure: the sequence \left( c_{k} \right) for this example]

As we can see, the sequence \left( c_{k} \right) converges to -5, which is in fact the dominant eigenvalue of this matrix, since |-5| > |3| > |1|; only the early iterations oscillate. The situation is different for a real matrix whose dominant eigenvalues really are a complex-conjugate pair: a real starting guess keeps every iterate real, so the method cannot converge to a complex eigenvector, and even a complex starting guess does not help the plain iteration, because the two dominant eigenvalues have the same absolute value. A common remedy is to apply the power method to the shifted matrix A-\sigma I for a complex shift \sigma, which separates the moduli of the two eigenvalues.
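These observations can be checked numerically. The sketch below verifies the eigenvalues of the matrix, runs the Rayleigh-quotient estimate to the dominant eigenvalue -5, and then shows a small matrix of our own with the genuine complex-conjugate pair of eigenvalues 1 \pm i, for which the real power iterates simply cycle:

```python
import numpy as np

A = np.array([[ 3.0, 2.0, -2.0],
              [-1.0, 1.0,  4.0],
              [ 3.0, 2.0, -5.0]])

# The eigenvalues are 1, 3 and -5; the dominant one is -5.
print(sorted(np.real(np.linalg.eigvals(A))))   # -> approximately [-5, 1, 3]

v = np.array([1.0, 1.0, 1.0])
for _ in range(100):
    x = A @ v
    est = (x @ v) / (v @ v)     # Rayleigh-quotient estimate
    v = x / np.linalg.norm(x)
print(est)                      # -> close to -5

# A matrix whose eigenvalues are the complex-conjugate pair 1 + i, 1 - i
# (it is sqrt(2) times a rotation by 45 degrees):
B = np.array([[1.0, -1.0],
              [1.0,  1.0]])
w = np.array([1.0, 0.0])
for _ in range(8):
    w = B @ w / np.linalg.norm(B @ w)
print(w)   # -> back at [1, 0]: the real iterates rotate and never converge
```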


This article is issued from Wikiversity - version of the Saturday, April 02, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.