Applied linear operators and spectral methods/Lecture 1

Linear operators can be thought of as infinite dimensional matrices. Hence we can use well-known results from matrix theory when dealing with linear operators. However, we have to be careful. A finite dimensional matrix has an inverse if none of its eigenvalues is zero. For an infinite dimensional matrix, even though all the eigenvalues may be nonzero, we might have a sequence of eigenvalues that tends to zero. There are several other subtleties that we will discuss in the course of this series of lectures.

Let us start off with the basics, i.e., linear vector spaces.

Linear Vector Spaces (\mathcal{S})

Let \mathcal{S} be a linear vector space.

Addition and scalar multiplication

Let us first define addition and scalar multiplication in this space. The addition operation acts entirely within \mathcal{S}, while the scalar multiplication operation may involve multiplication either by a real number (in \mathbb{R}) or by a complex number (in \mathbb{C}). These operations must have the following closure properties:

  1. If \mathbf{x}, \mathbf{y} \in \mathcal{S} then \mathbf{x} + \mathbf{y} \in \mathcal{S}.
  2. If \alpha \in \mathbb{R} (or \mathbb{C}) and \mathbf{x} \in \mathcal{S} then \alpha~\mathbf{x} \in \mathcal{S}.

The following laws must hold for addition:

  1. \mathbf{x} + \mathbf{y} = \mathbf{y} + \mathbf{x} \qquad Commutative law.
  2. \mathbf{x} + (\mathbf{y} + \mathbf{z}) = (\mathbf{x} + \mathbf{y}) + \mathbf{z} \qquad Associative law.
  3. \exists \mathbf{0} \in \mathcal{S} such that \mathbf{0} + \mathbf{x} = \mathbf{x} \quad \forall \mathbf{x} \in \mathcal{S} \qquad Additive identity.
  4. \forall \mathbf{x} \in \mathcal{S} \quad \exists -\mathbf{x} \in \mathcal{S} such that -\mathbf{x} + \mathbf{x} = \mathbf{0} \qquad Additive inverse.

For scalar multiplication we have the following properties:

  1. \alpha~(\beta~\mathbf{x}) = (\alpha~\beta)~\mathbf{x}.
  2. (\alpha + \beta)~\mathbf{x} = \alpha~\mathbf{x} + \beta~\mathbf{x}.
  3. \alpha~(\mathbf{x}+\mathbf{y}) = \alpha~\mathbf{x} + \alpha~\mathbf{y}.
  4. 1~\mathbf{x} = \mathbf{x}.
  5. 0~\mathbf{x} = \mathbf{0}.

Example 1: n-tuples

The n-tuples (x_1, x_2, \dots, x_n) with


  \begin{align}
  (x_1, x_2, \dots, x_n) + (y_1, y_2, \dots, y_n)  & = 
  (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n) \\
  \alpha~(x_1, x_2, \dots, x_n)  & = 
  (\alpha~x_1, \alpha~x_2, \dots, \alpha~x_n) 
  \end{align}

form a linear vector space.
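
As a quick numerical illustration, these componentwise operations are exactly what array libraries implement; the minimal sketch below assumes NumPy and uses arbitrary sample values.

  # n-tuples represented as NumPy arrays; addition and scalar multiplication
  # act componentwise, exactly as defined above.
  import numpy as np

  x = np.array([1.0, 2.0, 3.0])
  y = np.array([4.0, 5.0, 6.0])
  alpha = 2.5

  print(x + y)                       # (x_1 + y_1, ..., x_n + y_n)
  print(alpha * x)                   # (alpha x_1, ..., alpha x_n)
  print(np.allclose(x + y, y + x))   # commutativity of addition: True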

Example 2: Matrices

Another example of a linear vector space is the set of 2 \times 2 matrices, with addition and scalar multiplication defined entrywise as usual; more generally, the set of n\times m matrices forms a linear vector space.


  \alpha\begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} = 
  \begin{bmatrix} \alpha~x_{11} & \alpha~x_{12} \\ \alpha~x_{21} & \alpha~x_{22} 
  \end{bmatrix}

Example 3: Polynomials

The space of polynomials of degree at most n forms a linear vector space.


  p_n(x) = \sum_{j=0}^n \alpha_j~x^j

Example 4: Continuous functions

The space of continuous functions, say in [0, 1], also forms a linear vector space with addition and scalar multiplication defined as usual.

Linear Dependence

A set of vectors \mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n \in \mathcal{S} is said to be linearly dependent if \exists~ \alpha_1, \alpha_2, \dots, \alpha_n, not all zero, such that


  \alpha_1~\mathbf{x}_1 + \alpha_2~\mathbf{x}_2 + \dots + \alpha_n~\mathbf{x}_n = \mathbf{0}

If no such set of constants \alpha_1, \alpha_2, \dots, \alpha_n exists then the vectors are said to be linearly independent.

Example

Consider the matrices


  \boldsymbol{M}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, 
  \boldsymbol{M}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, 
  \boldsymbol{M}_3 = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}

These are linearly dependent since \boldsymbol{M}_1 - \boldsymbol{M}_2 + 2~\boldsymbol{M}_3 = \mathbf{0}.
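
This can be checked numerically; the sketch below assumes NumPy and recovers the coefficients by flattening each matrix into a vector in \mathbb{R}^4 and finding a null vector.

  # Verify M1 - M2 + 2*M3 = 0 and recover the coefficients from the null
  # space of the matrix whose columns are the flattened M_i.
  import numpy as np

  M1 = np.array([[1.0, 0.0], [0.0, 2.0]])
  M2 = np.array([[1.0, 0.0], [0.0, 0.0]])
  M3 = np.array([[0.0, 0.0], [0.0, -1.0]])

  print(np.allclose(M1 - M2 + 2 * M3, 0))   # True: the matrices are dependent

  A = np.column_stack([M.flatten() for M in (M1, M2, M3)])
  _, s, Vt = np.linalg.svd(A)
  alpha = Vt[-1]            # right singular vector for the smallest singular value
  print(s[-1])              # ~0, so A has a nontrivial null space
  print(alpha / alpha[0])   # proportional to (1, -1, 2)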

Span

The span of a set of vectors \boldsymbol{T} = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\} is the set of all vectors that are linear combinations of the vectors \mathbf{x}_i. Thus


  \text{span}(\boldsymbol{T}) = \{\alpha_1~\mathbf{x}_1 + \alpha_2~\mathbf{x}_2 + \dots + \alpha_n~\mathbf{x}_n\}

where the scalars \alpha_1, \alpha_2, \dots, \alpha_n range over all possible values.

Spanning set

If \text{span}(\boldsymbol{T}) = \mathcal{S} then \boldsymbol{T} is said to be a spanning set.

Basis

If \boldsymbol{T} is a spanning set and its elements are linearly independent then we call it a basis for \mathcal{S}. A vector in \mathcal{S} has a unique representation as a linear combination of the basis elements. (Why is it unique?)
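
One way to see the uniqueness: suppose a vector \mathbf{x} had two representations in the basis \{\mathbf{x}_1, \dots, \mathbf{x}_n\},


  \mathbf{x} = \alpha_1~\mathbf{x}_1 + \dots + \alpha_n~\mathbf{x}_n
     = \beta_1~\mathbf{x}_1 + \dots + \beta_n~\mathbf{x}_n

Subtracting the two expressions gives (\alpha_1 - \beta_1)~\mathbf{x}_1 + \dots + (\alpha_n - \beta_n)~\mathbf{x}_n = \mathbf{0}, and linear independence of the basis elements forces \alpha_i = \beta_i for every i.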

Dimension

The dimension of a space \mathcal{S} is the number of elements in a basis. This number is independent of the actual elements that form the basis and is a property of \mathcal{S}.

Example 1: Vectors in \mathbb{R}^2

Any two non-collinear vectors in \mathbb{R}^2 form a basis for \mathbb{R}^2 because any other vector in \mathbb{R}^2 can be expressed as a linear combination of the two vectors.

Example 2: Matrices

A basis for the linear space of 2 \times 2 matrices is


  \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, 
  \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, 
  \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, 
  \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}

Note that there is a lot of nonuniqueness in the choice of bases. One important skill that you should develop is to choose the right basis to solve a particular problem.
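
As a concrete check, the sketch below (assuming NumPy, with an arbitrary sample matrix) verifies that these four matrices are linearly independent and expands a given 2 \times 2 matrix in this basis.

  # Columns of B are the basis matrices flattened into R^4.
  import numpy as np

  basis = [np.array([[1, 0], [0, 0]]),
           np.array([[1, 1], [0, 0]]),
           np.array([[1, 1], [0, 1]]),
           np.array([[1, 3], [1, 1]])]

  B = np.column_stack([M.flatten() for M in basis]).astype(float)
  print(np.linalg.det(B))                   # nonzero, so the matrices are independent

  X = np.array([[2.0, -1.0], [4.0, 3.0]])   # sample matrix to expand
  coeffs = np.linalg.solve(B, X.flatten())  # coordinates of X in this basis
  print(coeffs)
  print(np.allclose(sum(c * M for c, M in zip(coeffs, basis)), X))  # True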

Example 3: Polynomials

The set \{1, x, x^2, \dots, x^n\} is a basis for the polynomials of degree at most n.

Example 4: The natural basis

The natural basis for \mathbb{R}^n is the set \{ \mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\} where the j-th entry of \mathbf{e}_k is


  \delta_{jk} = \begin{cases} 1 & \mbox{for} ~j = k \\
      0 & \mbox{for}~ j \ne k \end{cases}

The quantity \delta_{jk} is also called the Kronecker delta.

Inner Product Spaces

To give more structure to the idea of a vector space we need concepts such as magnitude and angle. The inner product provides that structure.

The inner product generalizes the concept of an angle and is defined as a function


  \langle\bullet,~\bullet\rangle : \mathcal{S}\times\mathcal{S} \rightarrow \mathbb{R}
  \quad (\text{or}~\mathbb{C}~\text{for a complex vector space})

with the properties

  1. \langle\mathbf{x},~\mathbf{y}\rangle = \overline{\langle\mathbf{y},~\mathbf{x}\rangle} \qquad The overbar indicates complex conjugation.
  2. \langle\alpha~\mathbf{x},~\mathbf{y}\rangle = \alpha~\langle\mathbf{x},~\mathbf{y}\rangle \quad Linearity with respect to scalar multiplication.
  3. \langle\mathbf{x}+\mathbf{y},~\mathbf{z}\rangle = \langle\mathbf{x},~\mathbf{z}\rangle + \langle\mathbf{y},~\mathbf{z}\rangle \quad Linearity with respect to addition.
  4. \langle\mathbf{x},~\mathbf{x}\rangle > 0 if \mathbf{x} \ne \mathbf{0}, and \langle\mathbf{x},~\mathbf{x}\rangle = 0 if and only if \mathbf{x} = \mathbf{0}.

A vector space with an inner product is called an inner product space.

Example 1: Scalars in the second argument


  \langle\mathbf{x},~\beta~\mathbf{y}\rangle = \overline{\langle\beta~\mathbf{y},~\mathbf{x}\rangle} 
    = \overline{\beta}~\overline{\langle\mathbf{y},~\mathbf{x}\rangle} 
    = \overline{\beta}\langle\mathbf{x},~\mathbf{y}\rangle

Example 2: Discrete vectors

In \mathbb{R}^n with \mathbf{x} = \{x_1, x_2, \dots, x_n \} and \mathbf{y} = \{y_1, y_2, \dots, y_n \} the Euclidean inner product is given by


  \langle\mathbf{x},~\mathbf{y}\rangle = \sum_{k=1}^n x_k~y_k

With \mathbf{x}, \mathbf{y} \in \mathbb{C}^n the standard inner product is


  \langle\mathbf{x},~\mathbf{y}\rangle = \sum_k x_k~\overline{y_k}
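
A small sketch of this inner product (assuming NumPy, with arbitrary sample vectors):

  import numpy as np

  x = np.array([1 + 2j, 3 - 1j])
  y = np.array([2 - 1j, 0 + 1j])

  ip = np.sum(x * np.conj(y))            # sum_k x_k * conj(y_k), as above
  print(ip)
  print(np.allclose(ip, np.vdot(y, x)))  # np.vdot conjugates its first argument
  print(np.sum(x * np.conj(x)).real)     # <x, x> is real and positive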

Example 3: Continuous functions

For two complex valued continuous functions f(x) and g(x) in [0, 1] we could approximately represent them by their function values at equally spaced points.

Approximate f(x) and g(x) by


  \begin{align}
  F & = \{f(x_1), f(x_2), \dots, f(x_n)\} \qquad \text{with} ~x_k = \cfrac{k}{n}\\
  G & = \{g(x_1), g(x_2), \dots, g(x_n)\} \qquad \text{with} ~x_k = \cfrac{k}{n}
  \end{align}

With that approximation, a natural inner product is


  \langle F,~G\rangle = \cfrac{1}{n}~\sum_{k=1}^n f(x_k)~\overline{g(x_k)}

Taking the limit as n \rightarrow \infty gives (show this)


  \langle f,~g\rangle = \int_0^1 f(x)~\overline{g(x)}~dx
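
The limit can also be checked numerically; the sketch below (assuming NumPy, with sample functions f(x) = x and g(x) = e^{2\pi i x}) compares the discrete sum against the exact integral for increasing n.

  import numpy as np

  f = lambda x: x                        # sample real-valued function
  g = lambda x: np.exp(2j * np.pi * x)   # sample complex-valued function

  def discrete_ip(f, g, n):
      # (1/n) * sum_{k=1}^{n} f(x_k) * conj(g(x_k)) with x_k = k/n
      x = np.arange(1, n + 1) / n
      return np.sum(f(x) * np.conj(g(x))) / n

  exact = 1 / (-2j * np.pi)    # integral of x * e^{-2 pi i x} over [0, 1]
  for n in (10, 100, 1000, 10000):
      print(n, abs(discrete_ip(f, g, n) - exact))   # error shrinks like 1/n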

If we took non-equally spaced yet smoothly distributed points we would get


  \langle f,~g\rangle = \int_0^1 f(x)~\overline{g(x)}~w(x)~dx

where w(x) > 0 is a smooth weighting function (show this).
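
One way to see where the weight comes from: suppose the sample points are x_k = s(k/n) for a smooth, increasing map s with s(0) = 0 and s(1) = 1. Then


  \cfrac{1}{n}~\sum_{k=1}^n f(x_k)~\overline{g(x_k)} \rightarrow
  \int_0^1 f(s(t))~\overline{g(s(t))}~dt =
  \int_0^1 f(x)~\overline{g(x)}~\cfrac{dx}{s'(s^{-1}(x))}

after the substitution x = s(t), so that w(x) = 1/s'(s^{-1}(x)) > 0.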

There are many other inner products possible. For functions that are not only continuous but also differentiable, a useful inner product is


  \langle f,~g\rangle = \int_0^1 \left[f(x)~\overline{g(x)} + 
      f^{'}(x)~\overline{g^{'}(x)}\right]~dx
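
This inner product can also be approximated numerically; the sketch below (assuming NumPy, with sample functions whose derivatives are known in closed form) uses the same equally spaced points as before.

  import numpy as np

  f  = lambda x: np.sin(np.pi * x)          # sample function
  fp = lambda x: np.pi * np.cos(np.pi * x)  # its derivative f'
  g  = lambda x: x**2                       # sample function
  gp = lambda x: 2 * x                      # its derivative g'

  n = 100000
  x = np.arange(1, n + 1) / n
  ip = np.sum(f(x) * g(x) + fp(x) * gp(x)) / n   # approximates the integral above
  print(ip)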

We will continue further explorations into linear vector spaces in the next lecture.
