The bisection method

 


The bisection method is based on the theorem of existence of roots for continuous functions, which guarantees the existence of at least one root \alpha of the function f in the interval [a, b] if f(a) and f(b) have opposite signs. If in [a,b] the function f is also strictly monotone, for instance f'(x)>0\;\forall x\in [a,b], then the root of the function is unique. Once the existence of the solution is established, the algorithm defines a sequence x_k as the sequence of the mid-points of intervals of decreasing width which satisfy the hypotheses of the roots theorem.

Roots Theorem

The theorem of existence of roots for continuous functions (or Bolzano's theorem) states:

Let f:[a,b] \to \mathbb{R} be a continuous function such that f(a) \cdot f(b)<0.

Then there exists at least one point x in the open interval (a,b) such that f(x)=0.


Bisection algorithm

Given f \in C^0([a,b]) such that the hypotheses of the roots theorem are satisfied, and given a tolerance \epsilon:

  1. x_k=\frac{a+b}{2},\qquad k\geq 1;
  2. if e_k=|b-x_k|\leq \epsilon: stop;
  3. if f(x_k)=0: stop, x_k is a root;
    else if f(a)f(x_k)>0: a=x_k;
    else: b=x_k;
  4. go to step 1.

In the first step we define the new value of the sequence: the new mid-point. In the second step we check the tolerance: if the error is smaller than the given tolerance, we accept x_k as a root of f. The third step consists in evaluating the function at x_k: if f(x_k)=0 we have found the solution; otherwise, since we have divided the interval in two, we need to find out on which side the root lies. To this aim we use the hypothesis of the roots theorem, that is, we seek the new interval such that the function has opposite signs at its endpoints, and we redefine the interval by moving a or b to x_k. Finally, if we have not yet found a good approximation of the solution, we go back to the first step.
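To make these steps concrete, here is a minimal Python sketch of the algorithm; the function name bisect and its signature are our own choices for illustration, not part of the original text.

    import math

    def bisect(f, a, b, eps):
        # Bisection sketch: find a root of f in [a, b], assuming f(a)*f(b) < 0.
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while True:
            x = (a + b) / 2        # step 1: the new mid-point
            if abs(b - x) <= eps:  # step 2: e_k = |b - x_k| <= eps, stop
                return x
            fx = f(x)
            if fx == 0:            # step 3: exact root found
                return x
            if f(a) * fx > 0:      # same sign at a and x_k: the root is in [x_k, b]
                a = x
            else:                  # opposite signs: the root is in [a, x_k]
                b = x

    # For instance, the root of cos(x) in [0, 2]:
    print(bisect(math.cos, 0.0, 2.0, 1e-10))  # approximately pi/2 = 1.5707963...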

Convergence of the bisection method

At each iteration the interval \mathcal{I}_k=[a_k, b_k] is halved, where a_k and b_k denote the endpoints of the interval at iteration k\geq 0. Obviously \mathcal{I}_0=[a,b]. We denote by |\mathcal{I}_k|=\mathrm{meas}(\mathcal{I}_k) the length of the interval \mathcal{I}_k. In particular we have

|\mathcal{I}_k|=\frac{|\mathcal{I}_{k-1}|}{2}=\frac{|\mathcal{I}_{k-2}|}{2^2}=...=\frac{|\mathcal{I}_{0}|}{2^k}=\frac{b-a}{2^k}.

Note that \alpha \in \mathcal{I}_k\;\forall k \geq 0, and since x_k is an endpoint of \mathcal{I}_k, the distance between x_k and \alpha cannot exceed the length of the interval, that is

e_k \leq |\mathcal{I}_k|.

From this we have that \lim_{k\to \infty}e_k=0, since |\mathcal{I}_k|=\frac{b-a}{2^k} and \lim_{k\to \infty} \frac{1}{2^k} = 0. For this reason we obtain

\lim_{k\to \infty} x_k = \alpha,

which proves the global convergence of the method.

The convergence of the bisection method is very slow. Although the error does not, in general, decrease monotonically, the average rate of convergence is 1/2, and so, slightly relaxing the definition of order of convergence, it is possible to say that the method converges linearly with rate 1/2. Do not be confused by the fact that in some books or other references the error is sometimes written as e_k=\frac{b-a}{2^{k+1}}: this is because the sequence is defined for k\geq 0 instead of k\geq 1.
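Since the error satisfies e_k\leq\frac{b-a}{2^k}, the number of iterations needed for a given tolerance can be computed a priori. A minimal Python sketch (the helper name iterations_needed is our own, not from the original text):

    import math

    def iterations_needed(a, b, eps):
        # Smallest integer k with (b - a) / 2**k <= eps, i.e. k >= log2((b - a) / eps).
        return math.ceil(math.log2((b - a) / eps))

    print(iterations_needed(0.0, 3 * math.pi, 1e-10))  # 37, as in the example below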

Example

Consider the function f(x)=\cos x in [0, 3\pi]. In this interval the function has 3 roots: \alpha_1=\frac{\pi}{2}, \alpha_2=\frac{3\pi}{2} and \alpha_3=\frac{5\pi}{2}.

Theoretically the bisection method converges in only one iteration to \alpha_2. In practice, however, the method converges to \alpha_1 or to \alpha_3. In fact, because of the finite representation of real numbers on the computer, x_1 \neq \frac{3\pi}{2}, and depending on the machine's rounding f(x_1) can be positive or negative, but never zero. In this way the bisection algorithm, in this case, automatically excludes the root \alpha_2 at the first iteration, even though the error is still large (e_1=\frac{3\pi}{2}=\alpha_2).
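This floating-point effect can be checked directly. A quick sketch, assuming IEEE double precision (the exact value printed depends on the platform's rounding):

    import math

    x1 = (0 + 3 * math.pi) / 2  # first mid-point; mathematically equal to 3*pi/2
    print(math.cos(x1))         # tiny but nonzero (order 1e-16); its sign selects the sub-interval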

Suppose that the algorithm converges to \alpha_1 and let us see how many iterations are required to satisfy e_k\leq 10^{-10}. In practice, we need to impose

e_k\leq\frac{3\pi}{2^k}\leq 10^{-10},

and so, solving this inequality, we have

k\geq\log_2 ( 3\cdot 10^{10}\pi)\approx 36.46,

and, since k is a natural number, we find k\geq 37.
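As a check, the following standalone loop (our own test harness, mirroring the algorithm above) runs the method on \cos x over [0, 3\pi] and counts the iterations:

    import math

    a, b, eps = 0.0, 3 * math.pi, 1e-10
    k = 0
    while True:
        x = (a + b) / 2
        k += 1
        if abs(b - x) <= eps:   # e_k = |b - x_k| <= eps: stop
            break
        if math.cos(a) * math.cos(x) > 0:
            a = x
        else:
            b = x
    print(k, x)  # k == 37; x is close to pi/2 (or 5*pi/2, depending on rounding)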
