## Tensor Algebra

After dealing with some niceties involving dual vector spaces, the next thing I need to understand more formally is tensors.

A tensor of rank (m,n) is a scalar valued function of m vectors and n one forms, linear in each argument separately. Linear means the same thing here that it did last time, but to be explicit,

$T(\alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2}, \mathbf{v_3}, \ldots , \mathbf{u_1}, \ldots , \mathbf{u_n}) \\ = \alpha_1*T( \mathbf{v_1} , \mathbf{v_3}, \ldots , \mathbf{u_1}, \ldots , \mathbf{u_n}) + \alpha_2*T(\mathbf{v_2}, \mathbf{v_3}, \ldots , \mathbf{u_1}, \ldots , \mathbf{u_n})$

expresses linearity in the first vector argument of $T$, and this property holds for all the vector and one form “slots” in the tensor.
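As a numerical sketch of this property, here is a made-up rank (1,1) tensor on $\mathbb{R}^3$ built from an arbitrary matrix (the matrix and all names are illustrative, not from the text), checked for linearity in its vector slot:

```python
import numpy as np

# Illustrative (1,1) tensor on R^3: T(v, u) = u . (M v), with M an
# arbitrary example matrix. T is linear in v and in u separately.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

def T(v, u):
    return u @ M @ v

v1, v2, u = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
a1, a2 = 2.0, -0.5

# Linearity in the first (vector) slot:
# T(a1 v1 + a2 v2, u) = a1 T(v1, u) + a2 T(v2, u)
lhs = T(a1 * v1 + a2 * v2, u)
rhs = a1 * T(v1, u) + a2 * T(v2, u)
assert np.isclose(lhs, rhs)
```

The same check works for any slot, which is all that multilinearity asks.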

The set of all (m,n) tensors on a vector space and its dual is itself a vector space, given a couple of definitions. Just let

$\left[S+T\right](\mathbf{v_1}, \ldots, \mathbf{v_m}, \mathbf{u_1}, \ldots, \mathbf{u_n}) \\ = S(\mathbf{v_1}, \ldots, \mathbf{v_m}, \mathbf{u_1}, \ldots, \mathbf{u_n}) + T(\mathbf{v_1}, \ldots, \mathbf{v_m}, \mathbf{u_1}, \ldots, \mathbf{u_n})$

define addition of tensors. Multiplication by a scalar is just

$\left[\alpha T\right](\mathbf{v_1}, \ldots, \mathbf{v_m}, \mathbf{u_1}, \ldots, \mathbf{u_n}) = \alpha*T(\mathbf{v_1}, \ldots, \mathbf{v_m}, \mathbf{u_1}, \ldots, \mathbf{u_n}).$

The zero tensor, which maps everything to zero, serves as the additive identity.
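Representing (1,1) tensors as matrices (one concrete choice among many; everything here is an illustrative example), these pointwise definitions reduce to ordinary matrix addition and scaling:

```python
import numpy as np

# Illustrative (1,1) tensors stored as matrices; eval_ applies a tensor
# to its (vector, one form) arguments.
rng = np.random.default_rng(1)
S_mat, T_mat = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
v, u = rng.standard_normal(3), rng.standard_normal(3)
alpha = 3.0

eval_ = lambda A, vec, form: form @ A @ vec

# [S + T](v, u) = S(v, u) + T(v, u): matrix addition realizes tensor addition
assert np.isclose(eval_(S_mat + T_mat, v, u),
                  eval_(S_mat, v, u) + eval_(T_mat, v, u))
# [alpha T](v, u) = alpha * T(v, u): scalar multiplication is entrywise
assert np.isclose(eval_(alpha * T_mat, v, u), alpha * eval_(T_mat, v, u))
# The zero tensor maps every pair of arguments to zero
assert eval_(np.zeros((3, 3)), v, u) == 0.0
```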

Tensors are a generalization of vectors and one forms, which were also previously defined as scalar valued linear maps. The difference is that a tensor may take many arguments, while a vector or one form takes only one.

### Examples

A (0,0) tensor would take no arguments and give a scalar. That means it is a scalar.

A (1,0) tensor takes one vector and gives a scalar. The space of all (1,0) tensors has already been identified as the space of one forms, so (1,0) tensors are one forms.

Similarly, the set of (0,1) tensors is the original vector space.

It is easy to define a certain (1,1) tensor. It takes a vector and a one form, and returns the one form acting on the vector (or, equivalently, the vector acting on the one form). This is not the only possible (1,1) tensor. It's just the most obvious (other than the zero tensor, perhaps).

If you start with a (1,1) tensor and feed in one vector, it still needs a one form before it can spit out a scalar. That is, after being given a vector, it has become a (0,1) tensor. And as we stated above, a (0,1) tensor is itself a vector. So a (1,1) tensor can be thought of as a function that maps a vector and a one form to a scalar, or as a function that maps a single vector to another, (generally) different vector. So the space of (1,1) tensors is actually the space of linear operators on the vector space, which you may know from linear algebra. (Under this correspondence, the evaluation tensor from the previous paragraph is the identity operator: feed it a vector and you get that same vector back.) In linear algebra calculations, we fix a basis for the vector space and dual space, and this allows us to write these (1,1) tensors as matrices. In the language of matrix algebra, a row vector (i.e. one form) times a matrix (i.e. (1,1) tensor) times a column vector (i.e. vector) yields a scalar.
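The two views of a (1,1) tensor can be checked numerically with an arbitrary example matrix (all names here are illustrative):

```python
import numpy as np

# An illustrative (1,1) tensor as a matrix M, in some fixed basis.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
v = rng.standard_normal(3)   # column vector
u = rng.standard_normal(3)   # row vector, i.e. a one form

# View 1: feed both arguments, get a scalar: u M v.
scalar = u @ M @ v

# View 2: feed only the vector; what remains is itself a vector, M v,
# still waiting for a one form.
w = M @ v
assert np.isclose(scalar, u @ w)   # the two views agree
```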

We’ve established that the set of rank (m,n) tensors forms another vector space, but we have yet to find its dimension. If the dimension of the vector space is d, then the dimension of the space of (m,n) tensors is $d^{m+n}$.

Before proving this, we’ll define the tensor product of two tensors.

Let $S$ and $T$ be tensors of rank (m,n) and (p,q) respectively. The tensor product, $S \otimes T$, is a rank (m+p, n+q) tensor defined by

$S \otimes T(\mathbf{v_1},\ldots,\mathbf{v_m},\mathbf{v_{m+1}},\ldots,\mathbf{v_{m+p}},\mathbf{u_1},\ldots,\mathbf{u_n},\mathbf{u_{n+1}},\ldots,\mathbf{u_{n+q}}) \\ = S(\mathbf{v_1},\ldots,\mathbf{v_m},\mathbf{u_1},\ldots,\mathbf{u_n})*T(\mathbf{v_{m+1}},\ldots,\mathbf{v_{m+p}},\mathbf{u_{n+1}},\ldots,\mathbf{u_{n+q}})$

You can check that this is linear in each argument. The important point is that the tensor product gives us a nice way to form a basis for the space of (m,n) tensors.
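In components, the tensor product is just the outer product of the component arrays. A small sketch with made-up data, taking the tensor product of a (1,0) tensor (a one form) with a (0,1) tensor (a vector) to get a (1,1) tensor:

```python
import numpy as np

# Illustrative components: S is a one form (S(v) = S . v), T is a vector
# (T(u) = u . T), each stored as a length-3 array.
rng = np.random.default_rng(3)
S = rng.standard_normal(3)
T = rng.standard_normal(3)

# The tensor product S ⊗ T, as an array, is the outer product
# (np.tensordot with axes=0); here it is a 3x3 matrix.
ST = np.tensordot(T, S, axes=0)

# Defining property: (S ⊗ T)(v, u) = S(v) * T(u)
v, u = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(u @ ST @ v, (S @ v) * (u @ T))
```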

Any arbitrary rank (m,n) tensor $T$ can be broken down and completely characterized by its action on every possible set of basis vectors and one forms.

$\begin{array}{cl} {} & T(\mathbf{v_1},\ldots,\mathbf{v_m},\mathbf{u_1},\ldots,\mathbf{u_n}) \\ = & T(\sum_{k=1}^d a_{1,k}\hat{\mathbf{v_k}},\ldots,\sum_{k=1}^d a_{m,k}\hat{\mathbf{v_k}},\sum_{k=1}^d b_{1,k}\hat{\mathbf{u_k}},\ldots,\sum_{k=1}^d b_{n,k}\hat{\mathbf{u_k}}) \\ = & \sum_{k_1=1}^d \cdots \sum_{k_m=1}^d \sum_{l_1=1}^d \cdots \sum_{l_n=1}^d a_{1,k_1}\cdots a_{m,k_m}\,b_{1,l_1}\cdots b_{n,l_n}\,T(\hat{\mathbf{v_{k_1}}},\ldots,\hat{\mathbf{v_{k_m}}},\hat{\mathbf{u_{l_1}}},\ldots,\hat{\mathbf{u_{l_n}}})\end{array}$

The final sum runs over $d^{m+n}$ terms: we’re plugging basis vectors and one forms into the arguments of $T$ in every conceivable way. If we look at just the first of these terms, where every index is 1, it’s

$T(\hat{\mathbf{v_1}},\ldots,\hat{\mathbf{v_1}},\hat{\mathbf{u_1}},\ldots,\hat{\mathbf{u_1}}) = \alpha \\= \alpha * \left[\hat{\mathbf{u_1}}\otimes\ldots\otimes\hat{\mathbf{u_1}}\otimes\hat{\mathbf{v_1}}\otimes\ldots\otimes\hat{\mathbf{v_1}}\right](\hat{\mathbf{v_1}},\ldots,\hat{\mathbf{v_1}},\hat{\mathbf{u_1}},\ldots,\hat{\mathbf{u_1}})$

The long tensor product has m one forms and n vectors. That way, when we feed it m vectors and n one forms, we get $m+n$ ones all multiplied together, yielding just one, and the identity holds. The same trick works for every term in the sum. This shows that if we take all the possible tensor products of m basis one forms and n basis vectors, we get a set that spans the space of (m,n) tensors.
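The whole decomposition can be checked numerically for a small case. Here is a sketch with an arbitrary (2,0) tensor on $\mathbb{R}^2$ (two vector slots, stored as a made-up matrix $B$, with the standard basis playing the role of the mutually dual bases):

```python
import numpy as np

# An illustrative (2,0) tensor on R^2: T(v, w) = v . B . w.
rng = np.random.default_rng(4)
d = 2
B = rng.standard_normal((d, d))
T = lambda v, w: v @ B @ w

# Components: T evaluated on every pair of basis vectors.
e = np.eye(d)
comps = np.array([[T(e[i], e[j]) for j in range(d)] for i in range(d)])
assert np.allclose(comps, B)      # the components are exactly the entries of B

# Reconstruction: T = sum_ij comps[i,j] * (dual_i ⊗ dual_j), where the dual
# basis one form dual_i acts by dotting with e[i]. Check on arbitrary vectors:
v, w = rng.standard_normal(d), rng.standard_normal(d)
recon = sum(comps[i, j] * (e[i] @ v) * (e[j] @ w)
            for i in range(d) for j in range(d))
assert np.isclose(recon, T(v, w))
```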

To show we have a basis, we just need them to be independent. I’m getting tired of typing $\LaTeX$, so for this just look back to where we proved the basis of one forms was independent last time, and do basically the same thing. Assume you can find a linear combination of these tensor products, with coefficients not all zero, that yields the zero tensor. Then plug in each of the $d^{m+n}$ possible sets of basis vectors and one forms to both sides of the equation. One by one, this will give you that each of the coefficients in your equation is zero, a contradiction.

So a basis for the (m,n) tensors is the set of tensor products of basis one forms and basis vectors, and the dimension of the space of (m,n) tensors is $d^{m+n}$, with d the mutual dimension of the vector and one form spaces.
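The dimension count has a concrete reading: a tensor is pinned down by one component per way of filling its $m+n$ slots with basis elements, so its component array has one axis of length d per slot. A tiny sketch (the particular d, m, n are arbitrary):

```python
import numpy as np

# An (m, n) tensor on a d-dimensional space is determined by one component
# per choice of basis element in each of its m + n slots.
d, m, n = 3, 1, 2
components = np.zeros((d,) * (m + n))    # one axis of length d per slot
assert components.size == d ** (m + n)   # dimension of the (m, n) tensor space
```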

That felt good. Next time we’ll look at the Levi-Civita tensor, and maybe the metric tensor, as examples of how to use this stuff.