Introduction to Elasticity/Tensors

Tensors in Solid Mechanics

A sound understanding of tensors and tensor operations is essential if you want to read and understand modern papers on solid mechanics and finite element modeling of complex material behavior. This brief introduction gives you an overview of tensors and tensor notation. For more details you can read A Brief on Tensor Analysis by J. G. Simmonds, the appendix on vector and tensor notation from Dynamics of Polymeric Liquids - Volume 1 by R. B. Bird, R. C. Armstrong, and O. Hassager, and the monograph by R. M. Brannon. An introduction to tensors in continuum mechanics can be found in An Introduction to Continuum Mechanics by M. E. Gurtin. Most of the material on this page is based on these sources.

Notation

The following notation is usually used in the literature:

\begin{align}
    s & = ~\text{scalar (lightface italic small)} \\
    \mathbf{v} & = ~\text{vector (boldface roman small)} \\
    \boldsymbol{\sigma} & = ~\text{second-order tensor (boldface Greek)} \\
    \boldsymbol{A} & = ~\text{second-order tensor (boldface italic capital)} \\
    \boldsymbol{\mathsf{A}}  & = ~\text{fourth-order tensor (sans-serif capital)}
  \end{align}

Motivation

A force \mathbf{f}\, has a magnitude and a direction, can be added to another force, be multiplied by a scalar and so on. These properties make the force \mathbf{f}\, a vector.

Similarly, the displacement \mathbf{u} is a vector because it can be added to other displacements and satisfies the other properties of a vector.

However, a force cannot be added to a displacement to yield a physically meaningful quantity. So the physical spaces that these two quantities lie on must be different.

Recall that a constant force \mathbf{f}\, moving through a displacement \mathbf{u}\, does \mathbf{f}\bullet\mathbf{u} units of work. How do we compute this product when the spaces of \mathbf{f}\, and \mathbf{u}\, are different? If you try to compute the product graphically, you will have to convert both quantities to a common basis and then compute the scalar product.

An alternative way of thinking about the operation \mathbf{f}\bullet\mathbf{u} is to think of \mathbf{f}\, as a linear operator that acts on \mathbf{u} to produce a scalar quantity (work). In the notation of sets we can write


    \mathbf{f}\bullet\mathbf{u} ~~~\equiv~~~\mathbf{f} : \mathbf{u} \rightarrow \mathbb{R}~.

A first order tensor is a linear operator that sends vectors to scalars.

Next, assume that the force \mathbf{f}\, acts at a point \mathbf{x}\,. The moment of the force about the origin is given by \mathbf{x}\times\mathbf{f}\, which is a vector. The vector product can be thought of as a linear operation too. In this case the effect of the operator is to convert a vector into another vector.

A second order tensor is a linear operator that sends vectors to vectors.

According to Simmonds, "the name tensor comes from elasticity theory where in a loaded elastic body the stress tensor acting on a unit vector normal to a plane through a point delivers the tension (i.e., the force per unit area) acting across the plane at that point."

Examples of second order tensors are the stress tensor, the deformation gradient tensor, the velocity gradient tensor, and so on.

Another type of tensor that we encounter frequently in mechanics is the fourth order tensor that takes strains to stresses. In elasticity, this is the stiffness tensor.

A fourth order tensor is a linear operator that sends second order tensors to second order tensors.

Tensor algebra

A tensor \boldsymbol{A}\, is a linear transformation from a vector space \mathcal{V} to \mathcal{V}. Thus, we can write


    \boldsymbol{A} : \mathbf{u} \in \mathcal{V} \rightarrow \mathbf{v} \in \mathcal{V}~.

More often, we use the following notation:


    \mathbf{v} = \boldsymbol{A} \mathbf{u} \equiv \boldsymbol{A}(\mathbf{u}) \equiv \boldsymbol{A}\bullet\mathbf{u}~.

I have used the "dot" notation on this page. None of the above notations is obviously superior to the others and each is used widely.

Addition of tensors

Let \boldsymbol{A}\, and \boldsymbol{B}\, be two tensors. Then the sum (\boldsymbol{A} + \boldsymbol{B})\, is another tensor \boldsymbol{C}\, defined by


    \boldsymbol{C} = \boldsymbol{A} + \boldsymbol{B} \implies \boldsymbol{C}\bullet\mathbf{v} = 
    (\boldsymbol{A} + \boldsymbol{B})\bullet\mathbf{v} = \boldsymbol{A}\bullet\mathbf{v} + \boldsymbol{B}\bullet\mathbf{v} ~.

Multiplication of a tensor by a scalar

Let \boldsymbol{A}\, be a tensor and let \lambda\, be a scalar. Then the product \boldsymbol{C} = \lambda \boldsymbol{A}\, is a tensor defined by


    \boldsymbol{C} = \lambda \boldsymbol{A} \implies \boldsymbol{C}\bullet\mathbf{v} = 
    (\lambda \boldsymbol{A})\bullet\mathbf{v} = \lambda (\boldsymbol{A}\bullet\mathbf{v}) ~.

Zero tensor

The zero tensor \boldsymbol{\mathit{0}}\, is the tensor which maps every vector \mathbf{v}\, into the zero vector.


    \boldsymbol{\mathit{0}}\bullet\mathbf{v} = \mathbf{0} ~.

Identity tensor

The identity tensor \boldsymbol{\mathit{I}}\, takes every vector \mathbf{v}\, into itself.


    \boldsymbol{\mathit{I}}\bullet\mathbf{v} = \mathbf{v} ~.

The identity tensor is also often written as \boldsymbol{\mathit{1}}\,.

Product of two tensors

Let \boldsymbol{A}\, and \boldsymbol{B}\, be two tensors. Then the product \boldsymbol{C} = \boldsymbol{A}\bullet\boldsymbol{B} is the tensor that is defined by


    \boldsymbol{C} = \boldsymbol{A}\bullet\boldsymbol{B} \implies 
    \boldsymbol{C}\bullet\mathbf{v} = (\boldsymbol{A}\bullet\boldsymbol{B})\bullet{\mathbf{v}} =
      \boldsymbol{A}\bullet(\boldsymbol{B}\bullet{\mathbf{v}}) ~.

In general \boldsymbol{A}\bullet\boldsymbol{B} \ne \boldsymbol{B}\bullet\boldsymbol{A}.

Transpose of a tensor

The transpose of a tensor \boldsymbol{A}\, is the unique tensor \boldsymbol{A}^T\, defined by


    (\boldsymbol{A}\bullet\mathbf{u})\bullet\mathbf{v} = \mathbf{u}\bullet(\boldsymbol{A}^T\bullet\mathbf{v})~.

The following identities follow from the above definition:

\begin{align}
    (\boldsymbol{A} + \boldsymbol{B})^T & = \boldsymbol{A}^T + \boldsymbol{B}^T ~, \\
    (\boldsymbol{A}\bullet\boldsymbol{B})^T & = \boldsymbol{B}^T\bullet\boldsymbol{A}^T ~, \\
    (\boldsymbol{A}^T)^T & = \boldsymbol{A} ~.
  \end{align}

Symmetric and skew tensors

A tensor \boldsymbol{A}\, is symmetric if


    \boldsymbol{A} = \boldsymbol{A}^T ~.

A tensor \boldsymbol{A}\, is skew if


    \boldsymbol{A} = -\boldsymbol{A}^T ~.

Every tensor \boldsymbol{A}\, can be expressed uniquely as the sum of a symmetric tensor \boldsymbol{E}\, (the symmetric part of \boldsymbol{A}\,) and a skew tensor \boldsymbol{W}\, (the skew part of \boldsymbol{A}\,).


    \boldsymbol{A} = \boldsymbol{E} + \boldsymbol{W} ~;~~ \boldsymbol{E} = \cfrac{\boldsymbol{A} + \boldsymbol{A}^T}{2} ~;~~
    \boldsymbol{W} = \cfrac{\boldsymbol{A} - \boldsymbol{A}^T}{2} ~.

Tensor product of two vectors

The tensor (or dyadic) product \mathbf{a}\mathbf{b}\, (also written \mathbf{a}\otimes\mathbf{b}\,) of two vectors \mathbf{a}\, and \mathbf{b}\, is a tensor that assigns to each vector \mathbf{v}\, the vector (\mathbf{b}\bullet\mathbf{v})\mathbf{a}.


    (\mathbf{a}\mathbf{b})\bullet\mathbf{v} =  (\mathbf{a}\otimes\mathbf{b})\bullet\mathbf{v} = 
       (\mathbf{b}\bullet\mathbf{v})\mathbf{a} ~.

Notice that all the above operations on tensors are remarkably similar to matrix operations.

Spectral theorem

The spectral theorem for tensors is widely used in mechanics. We will start off by defining eigenvalues and eigenvectors.

Eigenvalues and eigenvectors

Let \boldsymbol{S} be a second order tensor. Let \lambda be a scalar and \mathbf{n} be a vector such that


   \boldsymbol{S}\cdot\mathbf{n} = \lambda~\mathbf{n}

Then \lambda is called an eigenvalue of \boldsymbol{S} and \mathbf{n} is the corresponding eigenvector.

A second order tensor has three eigenvalues and three eigenvectors, since the space is three-dimensional. Some of the eigenvalues might be repeated; the number of times an eigenvalue is repeated is called its multiplicity.

In mechanics, many second order tensors are symmetric and positive definite. Note the following important properties of such tensors:

  1. If \boldsymbol{S} is positive definite, then every eigenvalue satisfies \lambda > 0.
  2. If \boldsymbol{S} is symmetric, eigenvectors corresponding to distinct eigenvalues are mutually orthogonal, and a complete orthonormal set of eigenvectors can always be chosen.

For more on eigenvalues and eigenvectors see Applied linear operators and spectral methods.

Let \boldsymbol{S} be a symmetric second-order tensor. Then

  1. the normalized eigenvectors \mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3 form an orthonormal basis.
  2. if  \lambda_1, \lambda_2, \lambda_3 are the corresponding eigenvalues then  \boldsymbol{S} = \sum_{i=1}^3 \lambda_i \mathbf{n}_i \otimes \mathbf{n}_i .

This relation is called the spectral decomposition of \boldsymbol{S}.
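
The spectral decomposition is easy to verify numerically. The following sketch (assuming NumPy is available; the symmetric tensor S below is an arbitrary example) computes the eigenvalues and eigenvectors and rebuilds \boldsymbol{S} from \sum_i \lambda_i \mathbf{n}_i \otimes \mathbf{n}_i:

    import numpy as np

    # An arbitrary symmetric second order tensor (3x3 matrix of components)
    S = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 1.0]])

    # eigh is appropriate for symmetric matrices; the columns of n are
    # orthonormal eigenvectors
    lam, n = np.linalg.eigh(S)

    # Spectral decomposition: S = sum_i lambda_i n_i (x) n_i
    S_rebuilt = sum(lam[i] * np.outer(n[:, i], n[:, i]) for i in range(3))
    assert np.allclose(S, S_rebuilt)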

Polar decomposition theorem

Let \boldsymbol{F} be a second order tensor with \det\boldsymbol{F} > 0. Then

  1. there exist positive definite, symmetric tensors  \boldsymbol{U} , \boldsymbol{V} and a rotation (orthogonal) tensor  \boldsymbol{R} such that  \boldsymbol{F} = \boldsymbol{R}\cdot \boldsymbol{U} = \boldsymbol{V} \cdot \boldsymbol{R} .
  2. each of these decompositions is unique.
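
The decomposition can be constructed directly from the spectral theorem: \boldsymbol{F}^T\cdot\boldsymbol{F} is symmetric and positive definite, \boldsymbol{U} is its square root, and \boldsymbol{R} = \boldsymbol{F}\cdot\boldsymbol{U}^{-1}. A numerical sketch (assuming NumPy; the tensor F below is an arbitrary example with \det\boldsymbol{F} > 0):

    import numpy as np

    F = np.array([[1.2, 0.3, 0.0],
                  [0.1, 0.9, 0.2],
                  [0.0, 0.1, 1.1]])   # det F > 0

    # Right stretch U = sqrt(F^T F) via the spectral decomposition of C = F^T F
    C = F.T @ F
    lam, n = np.linalg.eigh(C)
    U = sum(np.sqrt(lam[i]) * np.outer(n[:, i], n[:, i]) for i in range(3))

    R = F @ np.linalg.inv(U)   # rotation
    V = F @ R.T                # left stretch: V = R U R^T

    assert np.allclose(R.T @ R, np.eye(3))   # R is orthogonal
    assert np.allclose(F, R @ U) and np.allclose(F, V @ R)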

Principal invariants of a tensor

Let  \boldsymbol{S} be a second order tensor. Then the determinant of  \boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}} can be expressed as


   \det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{I}}) = -\lambda^3 + I_1(\boldsymbol{S})~\lambda^2 - I_2(\boldsymbol{S})~\lambda + I_3(\boldsymbol{S})

The quantities I_1, I_2, I_3\, are called the principal invariants of \boldsymbol{S}. Expressions for the principal invariants are given below.

Principal invariants of \boldsymbol{S}


  \begin{align}
    I_1 & = \text{tr}~ \boldsymbol{S} = \lambda_1 + \lambda_2 + \lambda_3 \\
    I_2 & = \cfrac{1}{2}\left[ (\text{tr}~ \boldsymbol{S})^2 -  \text{tr}(\boldsymbol{S}^2)\right] =  \lambda_1~\lambda_2 + \lambda_2~\lambda_3 + \lambda_3~\lambda_1\\
    I_3 & = \det\boldsymbol{S} = \lambda_1~\lambda_2~\lambda_3
  \end{align}

Note that  \lambda is an eigenvalue of  \boldsymbol{S} if and only if


   \det(\boldsymbol{S} - \lambda~\boldsymbol{\mathit{1}}) = 0

The resulting equation is called the characteristic equation and is usually written in expanded form as


  \lambda^3 - I_1(\boldsymbol{S})~\lambda^2 + I_2(\boldsymbol{S})~\lambda -I_3(\boldsymbol{S}) = 0
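
As a numerical check (a NumPy sketch; the tensor S is an arbitrary example), the invariants computed from the trace and determinant make every eigenvalue a root of the characteristic equation:

    import numpy as np

    S = np.array([[3.0, 1.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])

    I1 = np.trace(S)
    I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
    I3 = np.linalg.det(S)

    # Each eigenvalue satisfies lambda^3 - I1 lambda^2 + I2 lambda - I3 = 0
    for lam in np.linalg.eigvals(S):
        assert np.isclose(lam**3 - I1 * lam**2 + I2 * lam - I3, 0.0)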

Cayley-Hamilton theorem

The Cayley-Hamilton theorem is a very useful result in continuum mechanics. It states that

Cayley-Hamilton theorem

If \boldsymbol{S} is a second order tensor then it satisfies its own characteristic equation


  \boldsymbol{S}^3 - I_1(\boldsymbol{S})~\boldsymbol{S}^2 + I_2(\boldsymbol{S})~\boldsymbol{S} -I_3(\boldsymbol{S})~\boldsymbol{\mathit{1}} = \boldsymbol{\mathit{0}}
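
The theorem is straightforward to verify numerically (a sketch assuming NumPy; any 3x3 matrix of components will do):

    import numpy as np

    S = np.random.rand(3, 3)
    I1 = np.trace(S)
    I2 = 0.5 * (I1**2 - np.trace(S @ S))
    I3 = np.linalg.det(S)

    # S^3 - I1 S^2 + I2 S - I3 1 = 0
    lhs = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * np.eye(3)
    assert np.allclose(lhs, np.zeros((3, 3)))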

Index notation

All the equations so far have made no mention of the coordinate system. When we use vectors and tensors in computations we have to express them in some coordinate system (basis) and use the components of the object in that basis for our computations.

Commonly used bases are the Cartesian coordinate frame, the cylindrical coordinate frame, and the spherical coordinate frame.

A Cartesian coordinate frame consists of an orthonormal basis (\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)\, together with a point \mathbf{o}\, called the origin. Since these vectors are mutually perpendicular, we have the following relations:

\begin{align}\text{(1)} \qquad 
    \mathbf{e}_1\bullet\mathbf{e}_1 & = 1 ~;~~ \mathbf{e}_1\bullet\mathbf{e}_2 = 0 ~;~~
    \mathbf{e}_1\bullet\mathbf{e}_3 = 0 ~;  \\
    \mathbf{e}_2\bullet\mathbf{e}_1 & = 0 ~;~~ \mathbf{e}_2\bullet\mathbf{e}_2 = 1 ~;~~
    \mathbf{e}_2\bullet\mathbf{e}_3 = 0 ~;\\
    \mathbf{e}_3\bullet\mathbf{e}_1 & = 0 ~;~~ \mathbf{e}_3\bullet\mathbf{e}_2 = 0 ~;~~
    \mathbf{e}_3\bullet\mathbf{e}_3 = 1 ~. 
  \end{align}

Kronecker delta

To make the above relations more compact, we introduce the Kronecker delta symbol


    \delta_{ij} = \begin{cases}
                     1 & ~\text{if}~ i = j \\
                     0 & ~\text{if}~ i \ne j ~.
                  \end{cases}

Then, instead of the nine equations in (1) we can write (in index notation)


    \mathbf{e}_i\bullet\mathbf{e}_j = \delta_{ij} ~.

Einstein summation convention

Recall that the vector \mathbf{u}\, can be written as

\text{(2)} \qquad 
    \mathbf{u} = u_1 \mathbf{e}_1 + u_2 \mathbf{e}_2 + u_3 \mathbf{e}_3 = \sum_{i=1}^3 u_i \mathbf{e}_i ~.

In index notation, equation (2) can be written as


    \mathbf{u} = u_i \mathbf{e}_i~.

This convention is called the Einstein summation convention: a repeated index implies a sum over that index.

Components of a vector

We can write the Cartesian components of a vector \mathbf{u}\, in the basis (\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3)\, as


    u_i = \mathbf{e}_i\bullet\mathbf{u}  ~,~~~i = 1, 2, 3~.

Components of a tensor

Similarly, the components A_{ij}\, of a tensor \boldsymbol{A}\, are defined by


    A_{ij} = \mathbf{e}_i\bullet(\boldsymbol{A}\bullet\mathbf{e}_j)~.

Using the definition of the tensor product, we can also write


    \boldsymbol{A} = \sum_{i,j=1}^3 A_{ij} \mathbf{e}_i\mathbf{e}_j
        \equiv \sum_{i,j=1}^3 A_{ij} \mathbf{e}_i\otimes\mathbf{e}_j ~.

Using the summation convention,


    \boldsymbol{A} =  A_{ij} \mathbf{e}_i\mathbf{e}_j \equiv A_{ij} \mathbf{e}_i\otimes\mathbf{e}_j~.

In this case, the tensor \boldsymbol{A} has components A_{ij}\, with respect to the basis \{\mathbf{e}_i\otimes\mathbf{e}_j\}.

Operation of a tensor on a vector

From the definition of the components of tensor \boldsymbol{A}\,, we can also see that (using the summation convention)


    \mathbf{v} = \boldsymbol{A}\bullet\mathbf{u} ~~~\equiv~~~ v_i = A_{ij} u_j~.

Dyadic product

Similarly, the dyadic product can be expressed as


    (\mathbf{a}\mathbf{b})_{ij} \equiv (\mathbf{a}\otimes\mathbf{b})_{ij} = a_i b_j ~.
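
Index expressions like these translate directly into numpy.einsum, which implements exactly the summation convention (a small sketch assuming NumPy; the arrays are arbitrary examples):

    import numpy as np

    A = np.random.rand(3, 3)
    u = np.random.rand(3)
    a, b = np.random.rand(3), np.random.rand(3)

    # v_i = A_ij u_j  (the repeated index j is summed)
    v = np.einsum('ij,j->i', A, u)
    assert np.allclose(v, A @ u)

    # (a (x) b)_ij = a_i b_j, and (a (x) b) . v = (b . v) a
    ab = np.einsum('i,j->ij', a, b)
    assert np.allclose(ab, np.outer(a, b))
    assert np.allclose(ab @ u, np.dot(b, u) * a)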

Matrix notation

We can also write a tensor \boldsymbol{A} in matrix notation as


    \boldsymbol{A} = A_{ij}\mathbf{e}_i\mathbf{e}_j = A_{ij}\mathbf{e}_i\otimes\mathbf{e}_j \implies 
    \mathbf{A} = \begin{bmatrix}
                      A_{11} & A_{12} & A_{13} \\
                      A_{21} & A_{22} & A_{23} \\
                      A_{31} & A_{32} & A_{33} 
                      \end{bmatrix} ~.

Note that the Kronecker delta represents the components of the identity tensor in a Cartesian basis. Therefore, we can write


    \boldsymbol{I} = \delta_{ij}\mathbf{e}_i\mathbf{e}_j = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j \implies 
        \mathbf{I} = \begin{bmatrix}
                      1 & 0 & 0 \\
                      0 & 1 & 0 \\
                      0 & 0 & 1 
                      \end{bmatrix} ~.

Tensor inner product

The inner product \boldsymbol{A} : \boldsymbol{B}\, of two tensors \boldsymbol{A}\, and \boldsymbol{B}\, is an operation that generates a scalar. We define (summation implied)


    \boldsymbol{A} : \boldsymbol{B} = A_{ij} B_{ij} ~.

The inner product can also be expressed in terms of the trace:


    \boldsymbol{A} : \boldsymbol{B} = \text{tr}(\boldsymbol{A}^T \bullet \boldsymbol{B}) ~.

Proof (using the definition of the trace given below):


    \begin{align}
    \text{tr}(\boldsymbol{A}^T \bullet \boldsymbol{B}) & = \boldsymbol{\mathit{I}}:(\boldsymbol{A}^T \bullet \boldsymbol{B}) = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j :(A_{lk}\mathbf{e}_k \otimes \mathbf{e}_l \bullet B_{mn}\mathbf{e}_m\otimes \mathbf{e}_n) \\
    & = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j : (A_{mk}B_{mn}\mathbf{e}_k \otimes \mathbf{e}_n) = A_{mk}~B_{mn}~\delta_{ij}~\delta_{ik}~\delta_{jn} \\
    & = A_{mk}~B_{mn}~\delta_{kn} = A_{mn}~B_{mn} = \boldsymbol{A}:\boldsymbol{B} ~.
    \end{align}

Trace of a tensor

The trace of a tensor \boldsymbol{A}\, is the scalar \text{tr}(\boldsymbol{A}) = \boldsymbol{\mathit{I}}:\boldsymbol{A}. Equivalently, the trace is the unique linear operation that satisfies \text{tr}(\mathbf{a}\otimes\mathbf{b}) = \mathbf{a}\bullet\mathbf{b} for all vectors \mathbf{a}\, and \mathbf{b}\,. In components,


 \text{tr}(\boldsymbol{A}) = \boldsymbol{\mathit{I}}:\boldsymbol{A} = \delta_{ij}\mathbf{e}_i\otimes\mathbf{e}_j:A_{mn}\mathbf{e}_m\otimes\mathbf{e}_n = \delta_{ij}\delta_{im}\delta_{jn}A_{mn}  = A_{ii} ~.

Magnitude of a tensor

The magnitude of a tensor \boldsymbol{A}\, is defined by


    \Vert \boldsymbol{A} \Vert = \sqrt{\boldsymbol{A}:\boldsymbol{A}} \equiv \sqrt{A_{ij}A_{ij}} ~.
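
Both the inner product and the magnitude (the Frobenius norm of the component matrix) are one-liners numerically (a sketch assuming NumPy, with arbitrary example tensors):

    import numpy as np

    A = np.random.rand(3, 3)
    B = np.random.rand(3, 3)

    # A : B = A_ij B_ij = tr(A^T . B)
    inner = np.einsum('ij,ij->', A, B)
    assert np.isclose(inner, np.trace(A.T @ B))

    # ||A|| = sqrt(A : A) is the Frobenius norm
    assert np.isclose(np.sqrt(np.einsum('ij,ij->', A, A)), np.linalg.norm(A))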

Cross product of a tensor with a vector

Another operation that is often seen is the cross product of a tensor with a vector. Let \boldsymbol{A}\, be a tensor and let \mathbf{v}\, be a vector. Then the cross product gives a tensor \boldsymbol{C}\, defined by


    \boldsymbol{C} = \boldsymbol{A}\times\mathbf{v} \implies
       C_{ij} = e_{klj} A_{ik} v_{l}

where e_{klj}\, is the permutation symbol defined below.

Permutation symbol

The permutation symbol e_{ijk}\, is defined as


    e_{ijk} = \begin{cases}
                1 & ~\text{if}~ ijk = 123, 231, ~\text{or}~ 312 \\
                -1 & ~\text{if}~ ijk = 321, 132, ~\text{or}~ 213 \\
                0  & ~\text{if any two indices are alike}
              \end{cases}
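
With the permutation symbol in hand, the component formula C_{ij} = e_{klj} A_{ik} v_l above can be checked against row-by-row cross products (a NumPy sketch with arbitrary example values):

    import numpy as np

    # Build the permutation symbol e_ijk
    e = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        e[i, j, k] = 1.0    # even permutations of 123
        e[k, j, i] = -1.0   # odd permutations

    A = np.random.rand(3, 3)
    v = np.random.rand(3)

    # C_ij = e_klj A_ik v_l : each row of A crossed with v
    C = np.einsum('klj,ik,l->ij', e, A, v)
    assert np.allclose(C, np.cross(A, v))   # np.cross acts on the rows of A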

Identities in tensor algebra

Let \boldsymbol{A}, \boldsymbol{B} and \boldsymbol{C} be three second order tensors. Then


  \boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) = (\boldsymbol{C}\cdot\boldsymbol{A}^T):\boldsymbol{B}^T = (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C}

Proof:

It is easiest to show these relations by using index notation with respect to an orthonormal basis. Then we can write


   \boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) \equiv A_{ij} (B_{ik}~C_{kj}) = C_{kj}~A^T_{ji}~B^T_{ki} \equiv (\boldsymbol{C}\cdot\boldsymbol{A}^T):\boldsymbol{B}^T

Similarly,


   \boldsymbol{A}:(\boldsymbol{B}\cdot\boldsymbol{C}) \equiv A_{ij} (B_{ik}~C_{kj}) = B^T_{ki}~A_{ij}~C_{kj} \equiv (\boldsymbol{B}^T\cdot\boldsymbol{A}):\boldsymbol{C}

Tensor calculus

Recall that the vector differential operator (with respect to a Cartesian basis) is defined as


    \boldsymbol{\nabla}{} = \cfrac{\partial }{\partial x_1}\mathbf{e}_1+\cfrac{\partial }{\partial x_2}\mathbf{e}_2+\cfrac{\partial }{\partial x_3}\mathbf{e}_3 
            \equiv \cfrac{\partial }{\partial x_i}\mathbf{e}_i ~.

In this section we summarize some operations of \boldsymbol{\nabla}{} on vectors and tensors.

The gradient of a vector field

The dyadic product \boldsymbol{\nabla}{\mathbf{v}}\, (or \boldsymbol{\nabla}{}\otimes\mathbf{v}) is called the gradient of the vector field \mathbf{v}\,. Therefore, the quantity \boldsymbol{\nabla}{\mathbf{v}} is a tensor given by


    \boldsymbol{\nabla}{\mathbf{v}} = \sum_i\sum_j \cfrac{\partial v_j}{\partial x_i} \mathbf{e}_i \mathbf{e}_j 
               \equiv v_{j,i} \mathbf{e}_i \mathbf{e}_j ~.

In the alternative dyadic notation,


    \boldsymbol{\nabla}{\mathbf{v}} \equiv
    \boldsymbol{\nabla}{}\otimes\mathbf{v} = \sum_i\sum_j \cfrac{\partial v_j}{\partial x_i} \mathbf{e}_i\otimes\mathbf{e}_j 
               \equiv v_{j,i} \mathbf{e}_i\otimes\mathbf{e}_j ~.

Warning: Some authors define the ij component of \boldsymbol{\nabla}{\mathbf{v}} as \partial v_i/\partial x_j = v_{i,j}.

The divergence of a tensor field

Let \boldsymbol{A}\, be a tensor field. Then the divergence of the tensor field is a vector \boldsymbol{\nabla}\bullet{\boldsymbol{A}} given by


    \boldsymbol{\nabla}\bullet{\boldsymbol{A}} = \sum_j \left[\sum_i \cfrac{\partial A_{ij}}{\partial x_i}\right] \mathbf{e}_j
              \equiv \cfrac{\partial A_{ij}}{\partial x_i} \mathbf{e}_j = A_{ij,i} \mathbf{e}_j~.

To fix the definition of divergence of a general tensor field (possibly of higher order than 2), we use the relation


   (\boldsymbol{\nabla}\bullet{\boldsymbol{A}})\bullet\mathbf{c} = \boldsymbol{\nabla}\bullet(\boldsymbol{A}\bullet\mathbf{c})

where \mathbf{c} is an arbitrary constant vector.

The Laplacian of a vector field

The Laplacian of a vector field is given by


    \nabla^2{\mathbf{v}} = \boldsymbol{\nabla}\bullet{\boldsymbol{\nabla}{\mathbf{v}}} = 
        \sum_j \left[\sum_i \cfrac{\partial^2 v_j}{\partial x_i^2}\right] \mathbf{e}_j \equiv
        v_{j,ii} \mathbf{e}_j ~.
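
These definitions are easy to exercise symbolically. A sketch using SymPy (assumed available; the vector field v and tensor field A below are arbitrary examples):

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    x = [x1, x2, x3]
    v = [x1 * x2, x2**2, x1 * x3]   # an example vector field

    # (grad v)_ij = dv_j / dx_i  (the convention used above)
    grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], x[i]))

    # (div A)_j = dA_ij / dx_i for an example tensor field A_ij = v_i v_j
    A = sp.Matrix(3, 3, lambda i, j: v[i] * v[j])
    div_A = [sum(sp.diff(A[i, j], x[i]) for i in range(3)) for j in range(3)]

    # Laplacian: (lap v)_j = v_{j,ii}
    lap_v = [sum(sp.diff(v[j], xi, 2) for xi in x) for j in range(3)]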

Tensor Identities

Some important identities involving tensors are:

  1. \boldsymbol{\nabla}\bullet{\boldsymbol{\nabla}{\mathbf{v}}} = \boldsymbol{\nabla}{(\boldsymbol{\nabla}\bullet{\mathbf{v}})} - \boldsymbol{\nabla}\times{(\boldsymbol{\nabla}\times{\mathbf{v}})} .
  2. \mathbf{v}\bullet\boldsymbol{\nabla}{\mathbf{v}} = \frac{1}{2}\boldsymbol{\nabla}{(\mathbf{v}\bullet\mathbf{v})} - \mathbf{v}\times(\boldsymbol{\nabla}\times{\mathbf{v}}) .
  3. \boldsymbol{\nabla}\bullet{(\mathbf{v}\otimes\mathbf{w})} = \mathbf{v}\bullet\boldsymbol{\nabla}{\mathbf{w}} + \mathbf{w}(\boldsymbol{\nabla}\bullet{\mathbf{v}}) .
  4. \boldsymbol{\nabla}\bullet{(\varphi\boldsymbol{A})} = \boldsymbol{\nabla}{\varphi}\bullet\boldsymbol{A} + \varphi\boldsymbol{\nabla}\bullet{\boldsymbol{A}} .
  5. \boldsymbol{\nabla}{(\mathbf{v}\bullet\mathbf{w})} = (\boldsymbol{\nabla}{\mathbf{v}})\bullet\mathbf{w} + (\boldsymbol{\nabla}{\mathbf{w}})\bullet\mathbf{v} .
  6. \boldsymbol{\nabla}\bullet{(\boldsymbol{A}\bullet\mathbf{w})} = (\boldsymbol{\nabla}\bullet{\boldsymbol{A}})\bullet\mathbf{w} + \boldsymbol{A}:(\boldsymbol{\nabla}{\mathbf{w}}) .

(Identity 6 is written for the component conventions used on this page; texts that define the gradient with the indices transposed write \boldsymbol{A}^T in the last term.)

Integral theorems

The following integral theorems are useful in continuum mechanics and finite elements.

The Gauss divergence theorem

If \Omega is a region in space enclosed by a surface \Gamma\, and \boldsymbol{A}\, is a tensor field, then


    \int_{\Omega} \boldsymbol{\nabla}\bullet{\boldsymbol{A}} ~dV = \int_{\Gamma} \mathbf{n}\bullet\boldsymbol{A} ~dA 

where \mathbf{n}\, is the unit outward normal to the surface.

The Stokes curl theorem

If \Gamma\, is a surface bounded by a closed curve \mathcal{C}, then


    \int_{\Gamma} \mathbf{n}\bullet(\boldsymbol{\nabla}\times\boldsymbol{A})~dA = \oint_{\mathcal{C}} \mathbf{t}\bullet\boldsymbol{A}~ ds

where \boldsymbol{A}\, is a tensor field, \mathbf{n}\, is the unit normal vector to \Gamma\, in the direction of a right-handed screw motion along \mathcal{C}, and \mathbf{t}\, is a unit tangential vector in the direction of integration along \mathcal{C}.

The Leibniz formula

Let \Omega be a closed moving region of space enclosed by a surface \Gamma\,. Let the velocity of any surface element be \mathbf{v}\,. Then if \boldsymbol{A}(\mathbf{x},t)\, is a tensor function of position and time,

 
    \cfrac{\partial }{\partial t} \int_{\Omega} \boldsymbol{A}~dV = \int_{\Omega} \cfrac{\partial \boldsymbol{A}}{\partial t}~dV
      + \int_{\Gamma} \boldsymbol{A}(\mathbf{v}\bullet\mathbf{n})~dA

where \mathbf{n}\, is the outward unit normal to the surface \Gamma\,.

Directional derivatives

We often have to find the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors. The directional derivative provides a systematic way of finding these derivatives.

The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.

Derivatives of scalar valued functions of vectors

Let f(\mathbf{v}) be a real valued function of the vector \mathbf{v}. Then the derivative of f(\mathbf{v}) with respect to \mathbf{v} (or at \mathbf{v}) is the vector \partial f/\partial \mathbf{v} defined through the directional derivative


  \frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = Df(\mathbf{v})[\mathbf{u}] 
     = \left[\frac{\partial }{\partial \alpha}~f(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}

for all vectors \mathbf{u}.
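
The bracketed expression is just an ordinary derivative in the scalar \alpha, so the definition can be checked by finite differences. A sketch (assuming NumPy) for f(\mathbf{v}) = \mathbf{v}\bullet\mathbf{v}, whose derivative is 2\mathbf{v}:

    import numpy as np

    f = lambda v: np.dot(v, v)   # f(v) = v . v  =>  df/dv = 2v

    v = np.array([1.0, 2.0, 3.0])
    u = np.array([0.5, -1.0, 0.2])

    alpha = 1e-6
    # [d/dalpha f(v + alpha u)] at alpha = 0, approximated by a difference quotient
    Df = (f(v + alpha * u) - f(v)) / alpha
    assert np.isclose(Df, np.dot(2 * v, u), atol=1e-4)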

Properties:

1) If f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v}) then 
   \frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} =  \left(\frac{\partial f_1}{\partial \mathbf{v}} + \frac{\partial f_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}

2) If f(\mathbf{v}) = f_1(\mathbf{v})~ f_2(\mathbf{v}) then 
   \frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} =  \left(\frac{\partial f_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)~f_2(\mathbf{v}) + f_1(\mathbf{v})~\left(\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)

3) If f(\mathbf{v}) = f_1(f_2(\mathbf{v})) then 
   \frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} =  \frac{\partial f_1}{\partial f_2}~\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}

Derivatives of vector valued functions of vectors

Let \mathbf{f}(\mathbf{v}) be a vector valued function of the vector \mathbf{v}. Then the derivative of \mathbf{f}(\mathbf{v}) with respect to \mathbf{v} (or at \mathbf{v}) is the second order tensor \partial \mathbf{f}/\partial \mathbf{v} defined through the directional derivative


  \frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] 
     = \left[\frac{\partial }{\partial \alpha}~\mathbf{f}(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}

for all vectors \mathbf{u}.

Properties:

1) If \mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v}) then 
   \frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} =  \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}} + \frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}

2) If \mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v}) then 
   \frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} =  \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)

3) If \mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v})) then 
   \frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} =  \frac{\partial \mathbf{f}_1}{\partial \mathbf{f}_2}\cdot\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u} \right)

Derivatives of scalar valued functions of tensors

Let f(\boldsymbol{S}) be a real valued function of the second order tensor \boldsymbol{S}. Then the derivative of f(\boldsymbol{S}) with respect to \boldsymbol{S} (or at \boldsymbol{S}) is the second order tensor \partial f/\partial \boldsymbol{S} defined through the directional derivative


  \frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = Df(\boldsymbol{S})[\boldsymbol{T}] 
     = \left[\frac{\partial }{\partial \alpha}~f(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}

for all second order tensors \boldsymbol{T}.

Properties:

1) If f(\boldsymbol{S}) = f_1(\boldsymbol{S}) + f_2(\boldsymbol{S}) then  \frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \left(\frac{\partial f_1}{\partial \boldsymbol{S}} + \frac{\partial f_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}

2) If f(\boldsymbol{S}) = f_1(\boldsymbol{S})~ f_2(\boldsymbol{S}) then  \frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \left(\frac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)~f_2(\boldsymbol{S}) + f_1(\boldsymbol{S})~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)

3) If f(\boldsymbol{S}) = f_1(f_2(\boldsymbol{S})) then  \frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \frac{\partial f_1}{\partial f_2}~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)

Derivatives of tensor valued functions of tensors

Let \boldsymbol{F}(\boldsymbol{S}) be a second order tensor valued function of the second order tensor \boldsymbol{S}. Then the derivative of \boldsymbol{F}(\boldsymbol{S}) with respect to \boldsymbol{S} (or at \boldsymbol{S}) is the fourth order tensor \partial \boldsymbol{F}/\partial \boldsymbol{S} defined through the directional derivative


  \frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] 
     = \left[\frac{\partial }{\partial \alpha}~\boldsymbol{F}(\boldsymbol{S} + \alpha~\boldsymbol{T})\right]_{\alpha = 0}

for all second order tensors \boldsymbol{T}.

Properties:

1) If \boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S}) then  \frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}} + \frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}

2) If \boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S}) then  \frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)

3) If \boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S})) then  \frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)

4) If f(\boldsymbol{S}) = f_1(\boldsymbol{F}_2(\boldsymbol{S})) then  \frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} =  \frac{\partial f_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T} \right)

Derivative of the determinant of a tensor

The derivative of the determinant of a second order tensor \boldsymbol{A} is given by


  \frac{\partial }{\partial \boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.

In an orthonormal basis the components of \boldsymbol{A} can be written as a matrix \mathbf{A}. In that case, the right hand side corresponds to the matrix of cofactors of \mathbf{A}.
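
A quick finite-difference check of this formula (a NumPy sketch; A is shifted by the identity only to keep it well conditioned):

    import numpy as np

    A = np.random.rand(3, 3) + 2.0 * np.eye(3)
    T = np.random.rand(3, 3)

    alpha = 1e-6
    num = (np.linalg.det(A + alpha * T) - np.linalg.det(A)) / alpha

    # det(A) [A^{-1}]^T : T
    exact = np.linalg.det(A) * np.einsum('ij,ij->', np.linalg.inv(A).T, T)
    assert np.isclose(num, exact, rtol=1e-4)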

Proof:

Let \boldsymbol{A} be a second order tensor and let f(\boldsymbol{A}) = \det(\boldsymbol{A}). Then, from the definition of the derivative of a scalar valued function of a tensor, we have


  \begin{align}
    \frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} & = 
        \left.\cfrac{d}{d\alpha} \det(\boldsymbol{A} + \alpha~\boldsymbol{T}) \right|_{\alpha=0} \\
    & = \left.\cfrac{d}{d\alpha} 
          \det\left[\alpha~\boldsymbol{A}\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right)
              \right] \right|_{\alpha=0} \\
    & = \left.\cfrac{d}{d\alpha} \left[\alpha^3~\det(\boldsymbol{A})~
          \det\left(\cfrac{1}{\alpha}~\boldsymbol{\mathit{1}} + \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\right)\right]
               \right|_{\alpha=0} ~.
  \end{align}

Recall that we can expand the determinant of a tensor in the form of a characteristic equation in terms of the invariants I_1,I_2,I_3 using (note the sign of \lambda)


  \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = 
      \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) ~.

Using this expansion we can write


  \begin{align}
    \frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} 
    & = \left.\cfrac{d}{d\alpha} \left[\alpha^3~\det(\boldsymbol{A})~
      \left(\cfrac{1}{\alpha^3} + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha^2} + 
      I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\cfrac{1}{\alpha} + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})\right)
          \right] \right|_{\alpha=0} \\
    & = \left.\det(\boldsymbol{A})~\cfrac{d}{d\alpha} \left[
           1 + I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + 
            I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2 + I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^3
          \right] \right|_{\alpha=0} \\
    & = \left.\det(\boldsymbol{A})~\left[I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) + 
            2~I_2(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha + 3~I_3(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})~\alpha^2
          \right] \right|_{\alpha=0} \\
    & = \det(\boldsymbol{A})~I_1(\boldsymbol{A}^{-1}\cdot\boldsymbol{T}) ~.
  \end{align}

Recall that the invariant I_1 is given by


  I_1(\boldsymbol{A}) = \text{tr}{\boldsymbol{A}} ~.

Hence,


  \frac{\partial f}{\partial \boldsymbol{A}}:\boldsymbol{T} = \det(\boldsymbol{A})~\text{tr}(\boldsymbol{A}^{-1}\cdot\boldsymbol{T})
        = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T : \boldsymbol{T} ~.

Invoking the arbitrariness of \boldsymbol{T} we then have


  \frac{\partial f}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.

Derivatives of the invariants of a tensor

The principal invariants of a second order tensor are


  \begin{align}
    I_1(\boldsymbol{A}) & = \text{tr}{\boldsymbol{A}} \\
    I_2(\boldsymbol{A}) & = \frac{1}{2} \left[ (\text{tr}{\boldsymbol{A}})^2 - \text{tr}{\boldsymbol{A}^2} \right] \\
    I_3(\boldsymbol{A}) & = \det(\boldsymbol{A}) 
  \end{align}

The derivatives of these three invariants with respect to \boldsymbol{A} are


  \begin{align}
    \frac{\partial I_1}{\partial \boldsymbol{A}} & = \boldsymbol{\mathit{1}}  \\
    \frac{\partial I_2}{\partial \boldsymbol{A}} & = I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T \\
    \frac{\partial I_3}{\partial \boldsymbol{A}} & = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T 
                         = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T)
                         = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T 
  \end{align}

Proof:

From the derivative of the determinant we know that


  \frac{\partial I_3}{\partial \boldsymbol{A}} = \det(\boldsymbol{A})~[\boldsymbol{A}^{-1}]^T ~.

For the derivatives of the other two invariants, let us go back to the characteristic equation


    \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) = 
      \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) ~.

Using the same approach as for the determinant of a tensor, we can show that


    \frac{\partial }{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) =  
      \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^{-1}]^T ~.

Now the left hand side can be expanded as


    \begin{align}
    \frac{\partial }{\partial \boldsymbol{A}}\det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A}) & =  
    \frac{\partial }{\partial \boldsymbol{A}}\left[ 
     \lambda^3 + I_1(\boldsymbol{A})~\lambda^2 + I_2(\boldsymbol{A})~\lambda + I_3(\boldsymbol{A}) \right] \\
    & = 
     \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
     \frac{\partial I_3}{\partial \boldsymbol{A}}~.
    \end{align}

Hence


     \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
     \frac{\partial I_3}{\partial \boldsymbol{A}} = 
      \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~[(\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^{-1}]^T

or,


     (\lambda~\boldsymbol{\mathit{1}}+\boldsymbol{A})^T\cdot\left[
     \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
     \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = 
      \det(\lambda~\boldsymbol{\mathit{1}} + \boldsymbol{A})~\boldsymbol{\mathit{1}} ~.

Expanding the right hand side and separating terms on the left hand side gives


     (\lambda~\boldsymbol{\mathit{1}} +\boldsymbol{A}^T)\cdot\left[
     \frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
     \frac{\partial I_3}{\partial \boldsymbol{A}}\right] = 
      \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]
      \boldsymbol{\mathit{1}}

or,


  \begin{align}
     \left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3 \right.& 
     \left.+ \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + 
     \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda\right]\boldsymbol{\mathit{1}} +
        \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + 
        \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
        \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} \\
     & = 
      \left[\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]
      \boldsymbol{\mathit{1}} ~.
  \end{align}

If we define I_0 := 1 and I_4 := 0, we can write the above as


  \begin{align}
     \left[\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^3 \right.&
     \left.+ \frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda^2 + 
     \frac{\partial I_3}{\partial \boldsymbol{A}}~\lambda + \frac{\partial I_4}{\partial \boldsymbol{A}}\right]\boldsymbol{\mathit{1}} +
     \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}}~\lambda^3 + 
     \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}~\lambda^2 + 
     \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}~\lambda + 
     \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} \\ 
    &= 
      \left[I_0~\lambda^3 + I_1~\lambda^2 + I_2~\lambda + I_3\right]
      \boldsymbol{\mathit{1}} ~.
  \end{align}

Collecting terms containing various powers of \lambda, we get


    \begin{align}
    \lambda^3&\left(I_0~\boldsymbol{\mathit{1}} - \frac{\partial I_1}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - 
                   \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}}\right) + 
    \lambda^2\left(I_1~\boldsymbol{\mathit{1}} - \frac{\partial I_2}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - 
                   \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}}\right) + \\
    &\qquad \qquad\lambda\left(I_2~\boldsymbol{\mathit{1}} - \frac{\partial I_3}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - 
                   \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}}\right) + 
    \left(I_3~\boldsymbol{\mathit{1}} - \frac{\partial I_4}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - 
                   \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}}\right)  = 0 ~.
    \end{align}

Then, invoking the arbitrariness of \lambda, we have


    \begin{align}
    I_0~\boldsymbol{\mathit{1}} - \frac{\partial I_1}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_0}{\partial \boldsymbol{A}} & = 0 \\
    I_1~\boldsymbol{\mathit{1}} - \frac{\partial I_2}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_1}{\partial \boldsymbol{A}} & = 0 \\
    I_2~\boldsymbol{\mathit{1}} - \frac{\partial I_3}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_2}{\partial \boldsymbol{A}} & = 0 \\
    I_3~\boldsymbol{\mathit{1}} - \frac{\partial I_4}{\partial \boldsymbol{A}}~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\cdot\frac{\partial I_3}{\partial \boldsymbol{A}} & = 0 ~.
    \end{align}

This implies that


  \begin{align}
    \frac{\partial I_1}{\partial \boldsymbol{A}} &= \boldsymbol{\mathit{1}} \\
    \frac{\partial I_2}{\partial \boldsymbol{A}} & = I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T\\
    \frac{\partial I_3}{\partial \boldsymbol{A}} & = I_2~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T~(I_1~\boldsymbol{\mathit{1}} - \boldsymbol{A}^T) = (\boldsymbol{A}^2 - I_1~\boldsymbol{A} + I_2~\boldsymbol{\mathit{1}})^T 
  \end{align}

Derivative of the identity tensor

Let \boldsymbol{\mathit{1}} be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor \boldsymbol{A} is given by


  \frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathsf{0}}:\boldsymbol{T} = \boldsymbol{\mathit{0}}

This is because \boldsymbol{\mathit{1}} is independent of \boldsymbol{A}.

Derivative of a tensor with respect to itself

Let \boldsymbol{A} be a second order tensor. Then


  \frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \left[\frac{\partial }{\partial \alpha} (\boldsymbol{A} + \alpha~\boldsymbol{T})\right]_{\alpha = 0} = \boldsymbol{T} = \boldsymbol{\mathsf{I}}:\boldsymbol{T}

Therefore,


   \frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}

Here \boldsymbol{\mathsf{I}} is the fourth order identity tensor. In index notation with respect to an orthonormal basis


  \boldsymbol{\mathsf{I}} = \delta_{ik}~\delta_{jl}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l

This result implies that


   \frac{\partial \boldsymbol{A}^T}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathsf{I}}^T:\boldsymbol{T} = \boldsymbol{T}^T

where


  \boldsymbol{\mathsf{I}}^T = \delta_{jk}~\delta_{il}~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l

Therefore, if the tensor \boldsymbol{A} is symmetric, then the derivative is also symmetric and we get


   \frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}} = \boldsymbol{\mathsf{I}}^{(s)}
     = \frac{1}{2}~(\boldsymbol{\mathsf{I}} + \boldsymbol{\mathsf{I}}^T)

where the symmetric fourth order identity tensor is


  \boldsymbol{\mathsf{I}}^{(s)} = \frac{1}{2}~(\delta_{ik}~\delta_{jl} + \delta_{il}~\delta_{jk})
    ~\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l

Derivative of the inverse of a tensor

Let \boldsymbol{A} and \boldsymbol{T} be two second order tensors, then


  \frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}

In index notation with respect to an orthonormal basis


  \frac{\partial A^{-1}_{ij}}{\partial A_{kl}}~T_{kl} = - A^{-1}_{ik}~T_{kl}~A^{-1}_{lj} \implies \frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = - A^{-1}_{ik}~A^{-1}_{lj}

We also have


 \frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-T}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-T}\cdot\boldsymbol{T}^T\cdot\boldsymbol{A}^{-T}

In index notation


  \frac{\partial A^{-1}_{ji}}{\partial A_{kl}}~T_{kl} = - A^{-1}_{jk}~T_{kl}~A^{-1}_{li} \implies \frac{\partial A^{-1}_{ji}}{\partial A_{kl}} = - A^{-1}_{li}~A^{-1}_{jk}

If the tensor \boldsymbol{A} is symmetric then


 \frac{\partial A^{-1}_{ij}}{\partial A_{kl}} = -\cfrac{1}{2}\left(A^{-1}_{ik}~A^{-1}_{jl} + A^{-1}_{il}~A^{-1}_{jk}\right)
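
A finite-difference check of the first formula (a NumPy sketch; A is shifted by the identity only to keep it comfortably invertible):

    import numpy as np

    A = np.random.rand(3, 3) + 2.0 * np.eye(3)
    T = np.random.rand(3, 3)

    alpha = 1e-6
    num = (np.linalg.inv(A + alpha * T) - np.linalg.inv(A)) / alpha
    exact = -np.linalg.inv(A) @ T @ np.linalg.inv(A)
    assert np.allclose(num, exact, atol=1e-4)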

Proof:

Recall that


  \frac{\partial \boldsymbol{\mathit{1}}}{\partial \boldsymbol{A}}:\boldsymbol{T} = \boldsymbol{\mathit{0}}

Since \boldsymbol{A}^{-1}\cdot\boldsymbol{A} = \boldsymbol{\mathit{1}}, we can write


  \frac{\partial }{\partial \boldsymbol{A}}(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}):\boldsymbol{T} = \boldsymbol{\mathit{0}}

Using the product rule for second order tensors


  \frac{\partial }{\partial \boldsymbol{S}}[\boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})]:\boldsymbol{T} = 
  \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2 + 
  \boldsymbol{F}_1\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)

we get


  \frac{\partial }{\partial \boldsymbol{A}}(\boldsymbol{A}^{-1}\cdot\boldsymbol{A}):\boldsymbol{T} = 
  \left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} + 
  \boldsymbol{A}^{-1}\cdot\left(\frac{\partial \boldsymbol{A}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)
  = \boldsymbol{\mathit{0}}

or,


  \left(\frac{\partial \boldsymbol{A}^{-1}}{\partial \boldsymbol{A}}:\boldsymbol{T}\right)\cdot\boldsymbol{A} = - 
  \boldsymbol{A}^{-1}\cdot\boldsymbol{T}

Therefore,


  \frac{\partial }{\partial \boldsymbol{A}} \left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = - \boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}

Remarks

The boldface notation that I've used is called the Gibbs notation. The index notation that I have used is also called Cartesian tensor notation.
