Nonlinear finite elements/Bubnov Galerkin method


(Bubnov)-Galerkin Method for Problem 2

The Bubnov-Galerkin method is the most widely used of the weighted residual methods. It is the basis of most finite element methods.

The finite-dimensional Galerkin form of the problem statement for our second-order ODE is:

\text{(20)} \qquad 
    \begin{align}
      \text{Find}~ u_h(x) \in \mathcal{H}^n_0 ~&~ \text{such that} \\
      & \int^1_0 \left(\frac{du_h}{dx}\frac{dw_h}{dx} + u_h~w_h - 
        x~w_h\right)~dx = 0 
        \qquad \text{for all}~ w_h(x) \in \mathcal{H}^n_0 \\
      & u_h(0) = 0,~ u_h(1) = 0 ~; \qquad w_h(0) = 0,~ w_h(1) = 0
    \end{align}
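
For orientation, note that (20) is the weak form obtained, via integration by parts, from the strong form of Problem 2:

    -\cfrac{d^2u}{dx^2} + u = x~, \qquad 0 < x < 1~; \qquad u(0) = 0,~ u(1) = 0~.

Since w_h vanishes at both ends of the interval, the boundary term from the integration by parts drops out.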
  

Since the basis functions N_i are known and linearly independent, the approximate solution u_h is completely determined once the coefficients a_i in the expansion below are known.

The Galerkin method provides a systematic way of constructing approximate solutions. But the question is: how do we choose the N_i so that these functions are not only linearly independent but can also approximate the solution to arbitrary accuracy? Since the solution is expressed as a sum of these functions, the accuracy of our result depends strongly on the choice of N_i.

Let the trial solution take the form,


    u_h(x) = \sum^n_{i=1} a_i N_i(x)~.

According to the Bubnov-Galerkin approach, the weighting function is expanded in the same basis as the trial solution,


    w_h(x) = \sum^n_{j=1} b_j N_j(x)~.
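
For example, one admissible choice of basis (not prescribed here, used below purely for illustration) is

    N_i(x) = \sin(i\pi x)~, \qquad i = 1 \dots n~,

since each N_i vanishes at x = 0 and x = 1 and the set is linearly independent.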

Plug these expansions into the weak form to get


    \int^1_0 \left[\left(\sum_{i=1}^n a_i\cfrac{dN_i}{dx}\right)
                   \left(\sum_{j=1}^n b_j\cfrac{dN_j}{dx}\right) + 
                   \left(\sum_{i=1}^n a_i N_i\right)
                   \left(\sum_{j=1}^n b_j N_j \right) - 
                   x \left(\sum_{j=1}^n b_j N_j\right)\right]~dx = 0

or


    \int^1_0 \left[\sum_{j=1}^n b_j
             \left(\cfrac{dN_j}{dx} \sum_{i=1}^n a_i\cfrac{dN_i}{dx} +
                   N_j \sum_{i=1}^n a_i N_i - 
                   x~N_j
             \right)
             \right] ~dx = 0

or


    \int^1_0 \left[\sum_{j=1}^n b_j
             \left(\sum_{i=1}^n \left(a_i\cfrac{dN_j}{dx} \cfrac{dN_i}{dx} +
                   a_i N_j N_i\right) - x~N_j
             \right)
             \right] ~dx = 0 ~.

Taking the sums and constants outside the integrals and rearranging, we get


    \sum_{j=1}^n b_j \left[\sum_{i=1}^n a_i \int^1_0 
       \left(\cfrac{dN_i}{dx} \cfrac{dN_j}{dx} +
          N_i N_j\right)~dx - \int^1_0 x~N_j~dx \right] = 0 ~.

Since the coefficients b_j are arbitrary, the quantity inside the square brackets must be zero. That is,

\text{(21)} \qquad 
    {
    \sum_{i=1}^n a_i \int^1_0
       \left(\cfrac{dN_i}{dx} \cfrac{dN_j}{dx} +
          N_i N_j\right)~dx - \int^1_0 x~N_j~dx  = 0 \qquad j = 1\dots n~.
    }

Let us define

 \text{(22)} \qquad 
    {
    K_{ji} := \int^1_0 \left(\cfrac{dN_i}{dx} \cfrac{dN_j}{dx} + N_i N_j
      \right)~dx  \qquad \text{and} \qquad
    f_j := \int^1_0 x~N_j~dx ~.
    }
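
As a concrete check, with the illustrative sine basis N_i(x) = \sin(i\pi x) these integrals can be evaluated in closed form:

    K_{ji} = \int^1_0 \left(ij\pi^2 \cos(i\pi x)\cos(j\pi x) + 
      \sin(i\pi x)\sin(j\pi x)\right)~dx = \cfrac{j^2\pi^2 + 1}{2}~\delta_{ji}
    \qquad \text{and} \qquad
    f_j = \int^1_0 x~\sin(j\pi x)~dx = \cfrac{(-1)^{j+1}}{j\pi}~,

so that \mathbf{K} is diagonal and the coefficients follow immediately as a_j = 2(-1)^{j+1}/\left[j\pi\left(j^2\pi^2 + 1\right)\right].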

Then we get a set of simultaneous linear equations

 \text{(23)} \qquad 
    {
    \sum_{i=1}^n K_{ji} a_i = f_j~.
    }

In matrix form,


    {
    \mathbf{K} \mathbf{a} = \mathbf{f} ~.
    }
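
The whole procedure can be sketched numerically in a few lines of Python with NumPy/SciPy, again assuming the illustrative sine basis (any linearly independent basis satisfying the boundary conditions would do). The exact solution u(x) = x - \sinh(x)/\sinh(1) of the strong form is used only to check the result:

    import numpy as np
    from scipy.integrate import quad

    # Illustrative basis: N_i(x) = sin(i*pi*x) satisfies N_i(0) = N_i(1) = 0.
    def N(i, x):
        return np.sin(i * np.pi * x)

    def dN(i, x):
        return i * np.pi * np.cos(i * np.pi * x)

    n = 5
    K = np.zeros((n, n))
    f = np.zeros(n)
    for j in range(1, n + 1):
        # f_j = int_0^1 x N_j dx
        f[j - 1] = quad(lambda x: x * N(j, x), 0.0, 1.0)[0]
        for i in range(1, n + 1):
            # K_ji = int_0^1 (N_i' N_j' + N_i N_j) dx
            K[j - 1, i - 1] = quad(
                lambda x: dN(i, x) * dN(j, x) + N(i, x) * N(j, x), 0.0, 1.0)[0]

    a = np.linalg.solve(K, f)  # solve K a = f

    # Compare u_h with the exact solution of -u'' + u = x, u(0) = u(1) = 0.
    x = np.linspace(0.0, 1.0, 11)
    u_h = sum(a[i - 1] * N(i, x) for i in range(1, n + 1))
    u_exact = x - np.sinh(x) / np.sinh(1.0)
    print(np.max(np.abs(u_h - u_exact)))  # already small for modest n

With the sine basis, \mathbf{K} happens to be diagonal, so the solve is trivial; a general basis (for example, piecewise polynomials) would produce a banded system, which the same code handles unchanged.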