1 Continuum Assumption

Material and particle point

Note

See drawing during class!

In order to predict the motion, deformation and phase transitions of a material, we idealize it as a medium that continuously fills a region in three-dimensional space.

We distinguish between a material point and a particle point:

  • material point: the “exact” position within the medium with its micro-scale properties

  • particle point: a position at an intermediate scale that carries all the continuum-scale properties of the medium

Open question:

The particle point seems to be larger than a material point, yet from a continuum mechanical perspective, a particle point should be infinitesimally small, such that we can translate reality into mathematical models.

How can we formalize this concept?

Intermediate Asymptotics

A material is said to exhibit intermediate asymptotics if there are scales \(X_1\) and \(X_2\) such that an interval \(\mathcal I\) exists, for which:

  • \(X_1 \ll x\), for \(x \in \mathcal I\), hence \(\tfrac{x}{X_1} \to \infty\)
  • \(x \ll X_2\), for \(x \in \mathcal I\), hence \(\tfrac{x}{X_2} \to 0\)

This is formally referred to as \(X_1 \lll X_2\).

Intermediate asymptotics exist for materials in which the characteristic length scale of the microstructure \(X_1\) (often also denoted \(\delta\)) is much smaller than the characteristic length scale of the problem setting \(X_2\) (often also denoted \(L\)), hence \(\delta \lll L\).

In such situations, we can safely assume the continuum assumption to hold at all scales that fall into the intermediate interval \(\mathcal I\).

Examples are:

  • Fluids: The ratio of the molecular mean free path and the characteristic length scale is usually very small. This ratio is also referred to as the Knudsen number. Systems with a Knudsen number \(<0.1\) can be regarded as continuous.

  • Porous media: Porous media can be treated as continuous when the scale of interest is much larger than the characteristic pore size, e.g. in reservoir engineering. This is different when we are interested in processes at the pore scale.
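The Knudsen number criterion above can be sketched numerically. The following is an illustrative Python snippet, not part of the lecture; the mean free path of air (roughly 68 nm at standard conditions) is an assumed input value.

```python
# Illustrative sketch: deciding whether the continuum assumption holds
# via the Knudsen number Kn = lambda / L. The mean free path value for
# air (~68 nm at standard conditions) is an assumed figure.
def knudsen_number(mean_free_path, length_scale):
    """Ratio of the molecular mean free path to the characteristic length scale."""
    return mean_free_path / length_scale

def is_continuum(mean_free_path, length_scale, threshold=0.1):
    """The continuum assumption is taken to hold when Kn < threshold."""
    return knudsen_number(mean_free_path, length_scale) < threshold

lam = 68e-9      # assumed mean free path of air [m]
L_pipe = 1e-2    # macroscopic pipe diameter [m]  -> continuum applies
L_nano = 100e-9  # nano-channel width [m]         -> continuum fails (Kn = 0.68)
print(is_continuum(lam, L_pipe), is_continuum(lam, L_nano))
```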

An observer of a continuous medium provides

  • a reference system, e.g. an inertial reference system

  • a clock that allows us to measure time

  • a system of units sufficient for the phenomena that we consider

2 Tensor algebra in a nutshell

The content of this subsection is a summary of content presented in chapters 1 and 2 in (Gonzalez and Stuart 2001).

Repetition: What is a vector space?

A vector space \(V\) over a field \(F\) exhibits a binary interior operation called

vector addition: \(\mathbf{+}: V \times V \rightarrow V\)

and an operation called

scalar multiplication: \(\mathbf{\cdot}: F \times V \rightarrow V,\)

which fulfill the following properties for each \(\mathbf{u},\mathbf{v},\mathbf{w} \in V\) and \(a,b \in F\):


(V1) Associativity of vector addition: \((\mathbf{u}+\mathbf{v})+\mathbf{w} = \mathbf{u}+(\mathbf{v}+\mathbf{w})\)

(V2) Commutativity of vector addition: \(\mathbf{u}+\mathbf{v} = \mathbf{v}+\mathbf{u}\)

(V3) Identity / neutral element of vector addition \(\mathbf{0}\): \(\mathbf{u}+\mathbf{0} = \mathbf{0}+\mathbf{u} = \mathbf{u}\)

(V4) Inverse element of vector addition \(-\mathbf{u}\): \(\mathbf{u}+(-\mathbf{u}) = (-\mathbf{u})+\mathbf{u} = \mathbf{0}\)

(V5) Compatibility of vector addition and scalar multiplication: \(a \cdot (\underbrace{b \cdot \mathbf{u}}_{\text{scalar multiplication}}) = \underbrace{(a b)}_{\text{in }F} \cdot \mathbf{u}\)

(V6) Identity element of scalar multiplication \(1\): \(1 \cdot \mathbf{u} = \mathbf{u} \cdot 1 =\mathbf{u}\)

(V7) Distributivity of scalar multiplication w.r.t. vector addition: \(a \cdot (\mathbf{u}+\mathbf{v}) = a \cdot \mathbf{u} + a \cdot \mathbf{v}\)

(V8) Distributivity of scalar multiplication w.r.t. addition in \(F\): \((a+b) \cdot \mathbf{u} = a \cdot \mathbf{u} + b \cdot \mathbf{u}\)

Direct consequences of the above properties are the following:

(VC1) Multiplication by the scalar zero yields the zero vector: \(0 \cdot \mathbf u = \mathbf 0\)

(VC2) Scalar multiplication of the zero vector yields the zero vector: \(a \cdot \mathbf 0 = \mathbf 0\)

(VC3) Multiplication by the scalar \(-1\) yields the inverse element of vector addition: \((-1) \cdot \mathbf u = - \mathbf u\)

(VC4) The zero product property is inherited by the scalar multiplication: \(a \cdot \mathbf u = \mathbf 0 \; \Rightarrow \; a=0 \; \text{and/or} \; \mathbf u = \mathbf 0\)
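The axioms (V1)–(V8) and the consequences above can be spot-checked numerically for \(V=\mathbb R^3\) over \(F=\mathbb R\). This is an illustrative numpy sketch (a numerical illustration for sample vectors, not a proof), and not part of the lecture.

```python
# Illustrative sketch: spot-checking the vector-space axioms (V1)-(V8)
# and the consequences (VC1), (VC3) for V = R^3 over F = R with numpy.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
w = np.array([2.0, 0.0, 7.0])
a, b = 2.0, -3.0
zero = np.zeros(3)

assert np.allclose((u + v) + w, u + (v + w))    # (V1) associativity
assert np.allclose(u + v, v + u)                # (V2) commutativity
assert np.allclose(u + zero, u)                 # (V3) neutral element
assert np.allclose(u + (-u), zero)              # (V4) additive inverse
assert np.allclose(a * (b * u), (a * b) * u)    # (V5) compatibility
assert np.allclose(1.0 * u, u)                  # (V6) identity scalar
assert np.allclose(a * (u + v), a * u + a * v)  # (V7) distributivity w.r.t. vector addition
assert np.allclose((a + b) * u, a * u + b * u)  # (V8) distributivity w.r.t. addition in F
assert np.allclose(0.0 * u, zero)               # (VC1)
assert np.allclose((-1.0) * u, -u)              # (VC3)
print("all axioms verified for the sample vectors")
```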

Common examples of vector spaces are

  • Real vector space, for which \(F=\mathbb R\),
  • Complex vector space, for which \(F=\mathbb C\),
  • Euclidean vector space \(V=\mathbb R^3\) over \(F=\mathbb R\).

Remark:

Unless otherwise stated, we will refer to \(V=\mathbb R^3\) when we talk about the vector space. It is important to keep in mind, however, that this is just one example of a vector space!

How can we navigate in a vector space?

Let’s consider vectors \(\mathbf v_i \in V\) and scalars \(\alpha_i \in F\) for \(i=1,...,n\).

  • \(\mathbf w = \sum_{i=1}^{n} \alpha_i \mathbf v_i\) is called a linear combination, hence a weighted superposition of a finite number of elements of \(V\).
  • The vectors \(\mathbf v_i\) are called linearly independent if \(\sum_{i=1}^{n} \alpha_i \mathbf v_i=\mathbf 0\) implies \(\alpha_i = 0\) for all \(i=1,...,n\) (see a linear algebra textbook for equivalent definitions!).
  • A subset \(U \subset V\) is called a linear subspace if it is closed under vector addition and scalar multiplication, hence \(\mathbf u + \mathbf v \in U\) and \(\alpha \mathbf u \in U\) for all \(\mathbf u, \mathbf v \in U\) and \(\alpha \in F\).
  • The linear span, denoted by \(span(\{\mathbf v_i \})\), is the smallest linear subspace that contains \(\{\mathbf v_i \}\), hence it consists of all their linear combinations.
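Linear independence can be tested in practice via an equivalent criterion from linear algebra: the vectors are independent iff the matrix with the vectors as columns has full column rank. An illustrative numpy sketch (not part of the lecture):

```python
# Illustrative sketch: testing linear independence in R^3 via the rank of
# the matrix whose columns are the candidate vectors (an equivalent
# criterion, not the definition via vanishing coefficients above).
import numpy as np

def linearly_independent(vectors):
    """Vectors are independent iff the column matrix has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

e1, e2, e3 = np.eye(3)
print(linearly_independent([e1, e2, e3]))       # True: a basis of R^3
print(linearly_independent([e1, e2, e1 + e2]))  # False: third is a linear combination
```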

A set of vectors \(\{\mathbf b_i\} \subset V\) is called a basis if the vectors are linearly independent and span \(V\).

  • Each vector space has at least one basis.

  • The number of basis vectors denotes the dimension of \(V\).

  • There are finite dimensional and infinite dimensional vector spaces.

Important consequence:

For each element \(\mathbf v \in V\), we have

\[ \mathbf v = \sum_{i=1}^n \alpha_i \mathbf b_i \]

with \(n=\dim(V)\) and coordinates/coefficients \(\alpha_i\) that are unique for a specific set of basis vectors.

Example: The Euclidean vector space \(\mathbb R^3\) has dimension 3.

Vectors in Euclidean space \(\mathbb R^3\)

Unless otherwise stated, the vector space in this course will be \(\mathbb R^3\), such that we can think of a vector as represented by an element of \(\mathbb R^3\) that has a unique coordinate representation.

This implies a magnitude and a direction, hence

\[ \mathbf v = (v_1,v_2,v_3)^T = \underbrace{|\mathbf v|}_{\text{magnitude}} \quad \underbrace{\mathbf e_v}_{\text{direction}}\]

We always have \(|\mathbf v| \geq 0\). A vector \(\mathbf v\) with \(|\mathbf v|=1\) is called a unit vector. Two vectors are the same if they coincide in magnitude and direction.
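The magnitude/direction decomposition can be illustrated with a short numpy sketch (not part of the lecture): compute \(|\mathbf v|\), normalize to get the unit direction \(\mathbf e_v\), and check that \(\mathbf v = |\mathbf v|\,\mathbf e_v\) is recovered.

```python
# Illustrative sketch: decomposing a vector in R^3 into magnitude |v|
# and unit direction e_v, then recovering v = |v| e_v.
import numpy as np

v = np.array([3.0, 0.0, 4.0])
magnitude = np.linalg.norm(v)   # |v| = 5 for this example
direction = v / magnitude       # unit vector e_v with |e_v| = 1

assert np.isclose(np.linalg.norm(direction), 1.0)
assert np.allclose(magnitude * direction, v)
print(magnitude)  # 5.0
```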

Recall your background knowledge on vectors: Going forward, you should be familiar with the scalar product and the vector product.

Vectors - important conclusion

For a basis \(\{\mathbf e_1, \mathbf e_2, \mathbf e_3 \}\) in \(\mathbb R^3\), we can write any vector uniquely in its components:

\[ \mathbf v = v_1 \mathbf e_1 + v_2 \mathbf e_2 + v_3 \mathbf e_3 \quad \left( = v_i \mathbf e_i \right)\]

with \([\mathbf v] = (v_1,v_2,v_3)^T\) being the coordinates of vector \(\mathbf v\).
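The uniqueness of the coordinates can be made concrete: for a (not necessarily orthonormal) basis, they are obtained by solving the linear system \(B\boldsymbol\alpha = \mathbf v\), where \(B\) has the basis vectors as columns. An illustrative numpy sketch with a made-up basis (not from the lecture):

```python
# Illustrative sketch: recovering the unique coordinates of a vector with
# respect to a non-orthonormal basis by solving B alpha = v, where B has
# the basis vectors as columns. The basis below is a made-up example.
import numpy as np

b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])   # invertible since the basis is independent

v = np.array([2.0, 3.0, 4.0])
coords = np.linalg.solve(B, v)      # the unique coordinates of v in this basis

# The linear combination with these coordinates recovers v:
assert np.allclose(coords[0] * b1 + coords[1] * b2 + coords[2] * b3, v)
print(coords)
```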

Second order tensors

Physical quantities such as stress and strain cannot be represented as vectors, but rather by linear transformations between vectors.

A second order tensor on the vector space \(V\) is hence a mapping \(\mathbf T : V \rightarrow V\) that is linear, i.e. for all \(\mathbf u, \mathbf v \in V\) and \(\alpha \in F\)

\[ \mathbf T(\mathbf u + \mathbf v) = \mathbf T(\mathbf u) + \mathbf T(\mathbf v) \quad \mathbf T( \alpha \mathbf u) = \alpha \mathbf T(\mathbf u)\]

For a basis \(\{\mathbf e_1, \mathbf e_2, \mathbf e_3 \}\), we can write any second order tensor in its components according to

\[ S_{ij} = \mathbf e_i \cdot \mathbf S \mathbf e_j\]

The coordinates can hence be represented as a matrix \([\mathbf S] = (S_{ij})\).

The space of second order tensors is itself a vector space with basis \(\{\mathbf e_i \otimes \mathbf e_j\}\), in which the dyadic product is defined as

\[(\mathbf u \otimes \mathbf v) \mathbf a = (\mathbf a \cdot \mathbf v)\mathbf u\]

for all \(\mathbf a \in V\). We then get

\[ [S + T] = [S] + [T] \qquad [\alpha S] = \alpha [S]\]
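Both the defining property of the dyadic product and the basis property of the nine dyads \(\mathbf e_i \otimes \mathbf e_j\) can be checked numerically. An illustrative numpy sketch (not part of the lecture), using `np.outer` as the matrix of \(\mathbf u \otimes \mathbf v\):

```python
# Illustrative sketch: verifying (u ⊗ v) a = (a · v) u with numpy's outer
# product, and recombining a sample tensor from the dyadic basis e_i ⊗ e_j.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, -1.0])
a = np.array([2.0, 5.0, 4.0])

dyad = np.outer(u, v)                       # matrix representation of u ⊗ v
assert np.allclose(dyad @ a, np.dot(a, v) * u)   # defining property

# The nine dyads e_i ⊗ e_j form a basis of the second order tensors:
e = np.eye(3)
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 5.0]])
recombined = sum(S[i, j] * np.outer(e[i], e[j]) for i in range(3) for j in range(3))
assert np.allclose(recombined, S)
print("dyadic product identities hold")
```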

Recall your background knowledge on the properties of second order tensors and their matrix representation. Going forward, you should be familiar with

  • transposition of a matrix / tensor
  • symmetry of a matrix / tensor
  • skew-symmetry of a matrix / tensor

You should know what is meant and how to test a matrix / tensor to be

  • positive definite
  • invertible
  • orthogonal.

A second order tensor can always be decomposed into a symmetric and a skew-symmetric part \[ \mathbf S = \mathbf E + \mathbf W\]

in which \[ \mathbf E = \frac{1}{2} (\mathbf S + \mathbf S^T) \qquad \mathbf W = \frac{1}{2} (\mathbf S - \mathbf S^T)\]

As mappings, \(\mathbf E\) and \(\mathbf W\) are referred to as \(sym(\mathbf S)\) and \(skew(\mathbf S)\).
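The decomposition is straightforward to verify numerically. An illustrative numpy sketch with a made-up sample tensor (not from the lecture):

```python
# Illustrative sketch: the decomposition S = E + W with E = (S + S^T)/2
# symmetric and W = (S - S^T)/2 skew-symmetric, checked for a sample tensor.
import numpy as np

S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

E = 0.5 * (S + S.T)   # sym(S)
W = 0.5 * (S - S.T)   # skew(S)

assert np.allclose(E, E.T)     # E is symmetric
assert np.allclose(W, -W.T)    # W is skew-symmetric
assert np.allclose(E + W, S)   # the decomposition recovers S
print("decomposition verified")
```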

A skew-symmetric tensor can furthermore always be expressed as a vector product, in the sense that there exists a vector \(\mathbf w\), such that

\[\mathbf W \mathbf v = \mathbf w \times \mathbf v\]

\(\mathbf w\) is also denoted by \(vec(\mathbf W)\) and referred to as the axial vector.
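The axial vector identity can be checked componentwise. An illustrative numpy sketch (not part of the lecture); the component layout of \(\mathbf W\) in terms of \(\mathbf w\) used below is the usual convention:

```python
# Illustrative sketch: building a skew-symmetric tensor W from its axial
# vector w via the usual layout W = [[0, -w3, w2], [w3, 0, -w1], [-w2, w1, 0]]
# and checking W v = w × v.
import numpy as np

w = np.array([1.0, -2.0, 3.0])
W = np.array([[0.0,  -w[2],  w[1]],
              [w[2],  0.0,  -w[0]],
              [-w[1], w[0],  0.0]])

v = np.array([4.0, 5.0, 6.0])
assert np.allclose(W @ v, np.cross(w, v))   # W v = w × v

# Recover the axial vector vec(W) back from the components of W:
w_rec = np.array([W[2, 1], W[0, 2], W[1, 0]])
assert np.allclose(w_rec, w)
print("axial vector identity holds")
```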

Recall your background knowledge on the properties of second order tensors and their matrix representation:

  • invariants of a matrix / tensor, such as trace and determinant
  • eigensystem of a matrix / tensor including eigenvalue and eigenvector

3 Scalar, vector and tensor fields

Continuous field

Under the continuum assumption, we can describe the state of the system in terms of continuous fields. The fields are functions of position \(\mathbf x = (x,y,z)^T\) and time \(t\).

Relevant to this lecture will be: \[ \begin{aligned} &\rho(\mathbf x,t) : \text{density field (mass per unit volume)}\\ &\mathbf v (\mathbf x,t) = (v_1(\mathbf x,t),v_2(\mathbf x,t),v_3(\mathbf x,t))^T : \text{velocity field}\\ &p(\mathbf x,t) : \text{pressure field}\\ &T(\mathbf x,t) : \text{temperature field}\\ &\boldsymbol{\sigma}(\mathbf x,t) : \text{stress field}\\ \end{aligned} \]

Remark

“Field” refers to the fact that the variable is defined on some region \(\Omega\). We distinguish

  • scalar fields, e.g. density \(\rho:\Omega \times \mathbb R^+ \rightarrow \mathbb R^+\)

  • vector fields, e.g. velocity \(\mathbf v:\Omega \times \mathbb R^+ \rightarrow \mathbb R^d\)

  • tensor fields, e.g. stress tensor \(\boldsymbol{\sigma}:\Omega \times \mathbb R^+ \rightarrow \mathbb R^{d \times d}\)
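In code, such fields are simply functions of position and time. An illustrative Python sketch (the concrete field expressions below are made up for illustration, not fields from the lecture):

```python
# Illustrative sketch: under the continuum assumption, fields are ordinary
# functions of position x and time t. The expressions are made-up examples.
import numpy as np

def density(x, t):
    """Scalar field rho(x, t): Omega x R+ -> R+ (illustrative expression)."""
    return 1.0 + 0.1 * np.exp(-t) * np.sin(x[0])

def velocity(x, t):
    """Vector field v(x, t): Omega x R+ -> R^3 (illustrative expression)."""
    return np.array([x[1], -x[0], 0.0]) * np.cos(t)

x = np.array([0.0, 1.0, 2.0])
print(density(x, 0.0))    # a scalar value
print(velocity(x, 0.0))   # a vector in R^3
```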

4 Index and vector notation

Element in a vector space in index notation

As an element of a vector space, a vector can always be written in its basis:

\[ \mathbf{v} \left( \mathbf{x},t \right) = u \left( \mathbf{x},t \right) \mathbf{e}_x + v \left( \mathbf{x},t \right)\mathbf{e}_y + w \left( \mathbf{x},t \right) \mathbf{e}_z \]

Alternatively, the basis vectors and vector components can be numbered, which yields

\[ \mathbf{v} = v_1 \mathbf e_1 + v_2 \mathbf e_2 + v_3 \mathbf e_3 = \sum_{i=1}^3 v_i \mathbf e_i \]

If an index appears twice in a term, summation over that index is implied (Einstein summation convention), such that

\[ \mathbf{v} = \sum_{i=1}^3 v_i \mathbf e_i = v_i \mathbf e_i \quad \text{and} \quad \mathbf{v} \cdot \mathbf{w} = v_i w_i \]
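The summation convention maps directly onto numpy's `einsum`, where repeated indices are summed. An illustrative sketch (not part of the lecture) for \(v_i w_i\) and \(S_{ij} v_j\):

```python
# Illustrative sketch: the summation convention via numpy's einsum.
# Repeated indices are summed, e.g. v_i w_i (scalar product) and S_ij v_j.
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
S = np.diag([1.0, 2.0, 3.0])

assert np.isclose(np.einsum('i,i->', v, w), np.dot(v, w))   # v_i w_i
assert np.allclose(np.einsum('ij,j->i', S, v), S @ v)       # S_ij v_j
print(np.einsum('i,i->', v, w))  # 32.0
```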

References

Gonzalez, Oscar, and Andrew M. Stuart. 2001. A First Course in Continuum Mechanics. 1st ed. Cambridge University Press. https://www.cambridge.org/core/product/identifier/9780511619571/type/book.