# Change of basis

*Figure: a linear combination of one basis set of vectors (purple) yields new vectors (red); if they are linearly independent, these form a new basis set. The linear combinations relating the first set to the second extend to a linear transformation, called the change of basis.*
*Figure: a vector represented with respect to two different bases (purple and red arrows).*

In linear algebra, a basis for a vector space of dimension n is a set of n vectors {α1, …, αn}, called basis vectors, with the property that every vector in the space can be expressed as a unique linear combination of the basis vectors. The matrix representations of operators are also determined by the chosen basis. Since it is often desirable to work with more than one basis for a vector space, it is of fundamental importance in linear algebra to be able to easily transform coordinate-wise representations of vectors and operators taken with respect to one basis into their equivalent representations with respect to another basis. Such a transformation is called a change of basis.

Although the terminology of vector spaces is used below and the symbol R can be taken to mean the field of real numbers, the results discussed hold whenever R is a commutative ring and vector space is everywhere replaced with free R-module.

## Preliminary notions

### Transformation matrix

The standard basis for ${\displaystyle R^{n}}$ is the ordered sequence ${\displaystyle E_{n}=\{e_{1},\cdots ,e_{n}\}}$, where ${\displaystyle e_{j}}$ is the element of ${\displaystyle R^{n}}$ with ${\displaystyle 1}$ in the ${\displaystyle j^{\text{th}}}$ place and ${\displaystyle 0}$s elsewhere. For example, the standard basis for ${\displaystyle R^{2}}$ would be

${\displaystyle E_{2}=\left\{{\begin{pmatrix}1\\0\end{pmatrix}},{\begin{pmatrix}0\\1\end{pmatrix}}\right\}}$

If ${\displaystyle T:R^{n}\rightarrow R^{m}}$ is a linear transformation, the ${\displaystyle m\times n}$ matrix associated with ${\displaystyle T}$ is the matrix ${\displaystyle M_{T}}$ whose ${\displaystyle j^{\text{th}}}$ column is ${\displaystyle T(e_{j})\in R^{m}}$, for ${\displaystyle j=1,\cdots ,n}$; that is,

${\displaystyle M_{T}={\begin{bmatrix}T(e_{1})\cdots T(e_{j})\cdots T(e_{n})\end{bmatrix}}\in R^{m\times n}}$

In this case we have ${\displaystyle T(x)=M_{T}\cdot x}$, ${\displaystyle \forall x\in R^{n}}$, where we regard ${\displaystyle x}$ as a column vector and the multiplication on the right side is matrix multiplication. It is a basic fact in linear algebra that the vector space Hom(${\displaystyle R^{n},R^{m}}$) of all linear transformations from ${\displaystyle R^{n}}$ to ${\displaystyle R^{m}}$ is naturally isomorphic to the space ${\displaystyle R^{m\times n}}$ of ${\displaystyle m\times n}$ matrices over ${\displaystyle R}$; that is, a linear transformation ${\displaystyle T:R^{n}\rightarrow R^{m}}$ is for all intents and purposes equivalent to its matrix ${\displaystyle M_{T}}$.
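As a concrete illustration, here is a minimal Python/NumPy sketch (the map T below is an arbitrary illustrative choice, not taken from the text) that builds ${\displaystyle M_{T}}$ column by column from the images ${\displaystyle T(e_{j})}$ and checks that ${\displaystyle T(x)=M_{T}\cdot x}$:

```python
import numpy as np

def matrix_of(T, n):
    """Return the m-by-n matrix whose j-th column is T(e_j)."""
    columns = [T(np.eye(n)[:, j]) for j in range(n)]
    return np.column_stack(columns)

# Illustrative linear map T(x, y) = (x + 2y, 3x, y) from R^2 to R^3.
T = lambda v: np.array([v[0] + 2 * v[1], 3 * v[0], v[1]])
M_T = matrix_of(T, 2)

x = np.array([5.0, 7.0])
assert np.allclose(M_T @ x, T(x))   # T(x) = M_T x for every x in R^n
```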

### Uniqueness of linear transformations

We will also make use of the following simple observation.

#### Theorem

Let ${\displaystyle V}$ and ${\displaystyle W}$ be vector spaces, let ${\displaystyle B=\{\alpha _{1},\cdots ,\alpha _{n}\}}$ be a basis for ${\displaystyle V}$, and let ${\displaystyle C=\{\gamma _{1},\cdots ,\gamma _{n}\}}$ be any ${\displaystyle n}$ vectors in ${\displaystyle W}$. Then there exists a unique linear transformation ${\displaystyle T:V\rightarrow W}$ with ${\displaystyle T(\alpha _{j})=\gamma _{j}}$, for ${\displaystyle j=1,\cdots ,n}$.

This unique ${\displaystyle T}$ is defined by

${\displaystyle T(x_{1}\alpha _{1}+\cdots +x_{n}\alpha _{n})=T(x_{1}\alpha _{1})+\cdots +T(x_{n}\alpha _{n})=x_{1}T(\alpha _{1})+\cdots +x_{n}T(\alpha _{n})=x_{1}\gamma _{1}+\cdots +x_{n}\gamma _{n}}$

Of course, if ${\displaystyle C=\{\gamma _{1},\cdots ,\gamma _{n}\}}$ happens to be a basis for ${\displaystyle W}$, then ${\displaystyle T}$ is bijective as well as linear; in other words, ${\displaystyle T}$ is an isomorphism. If in this case we also have ${\displaystyle W=V}$, then ${\displaystyle T}$ is said to be an automorphism.

### Coordinate isomorphism

Now let ${\displaystyle V}$ be a vector space over ${\displaystyle R}$ and suppose ${\displaystyle B=\{\alpha _{1},\cdots ,\alpha _{n}\}}$ is a basis for ${\displaystyle V}$. By definition, if ${\displaystyle \xi }$ is a vector in ${\displaystyle V}$, then ${\displaystyle \xi =x_{1}\alpha _{1}+\cdots +x_{n}\alpha _{n}}$ for a unique choice of scalars ${\displaystyle x_{1},\cdots ,x_{n}\in R}$ called the coordinates of ${\displaystyle \xi }$ relative to the ordered basis ${\displaystyle B}$. The vector ${\displaystyle x=(x_{1},\cdots ,x_{n})^{T}\in R^{n}}$ is called the coordinate tuple of ${\displaystyle \xi }$ relative to ${\displaystyle B}$.

The unique linear map ${\displaystyle \phi :R^{n}\rightarrow V}$ with ${\displaystyle \phi (e_{j})=\alpha _{j}}$ for ${\displaystyle j=1,\cdots ,n}$ is called the coordinate isomorphism for ${\displaystyle V}$ and the basis ${\displaystyle B=\{\alpha _{1},\cdots ,\alpha _{n}\}}$. Thus ${\displaystyle \phi (x)=\xi }$ if and only if ${\displaystyle \xi =x_{1}\alpha _{1}+\cdots +x_{n}\alpha _{n}}$.
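A short NumPy sketch of the coordinate isomorphism, with an arbitrary illustrative basis for ${\displaystyle R^{3}}$ (not from the text): φ sends a coordinate tuple x to ${\displaystyle \xi =x_{1}\alpha _{1}+\cdots +x_{n}\alpha _{n}}$, and φ−1 recovers coordinates by solving a linear system.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns are the basis vectors a_1, a_2, a_3

def phi(x):
    return A @ x                   # phi(x) = x_1 a_1 + ... + x_n a_n

def phi_inv(xi):
    return np.linalg.solve(A, xi)  # coordinates of xi relative to the basis

xi = np.array([2.0, 3.0, 5.0])
x = phi_inv(xi)
assert np.allclose(phi(x), xi)     # phi(x) = xi iff x is the coordinate tuple
```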

### Matrix of a set of vectors

A set of vectors can be represented by a matrix in which each column consists of the components of the corresponding vector of the set. As a basis is a set of vectors, a basis can be given by a matrix of this kind. It will be shown below that the change of basis of any object of the space is related to this matrix. For example, the coordinates of vectors change with its inverse (and vectors are therefore called contravariant objects).

## Change of coordinates of a vector

First we examine the question of how the coordinates of a vector ${\displaystyle \xi }$ in the vector space ${\displaystyle V}$ change when we select another basis.

### Two dimensions

Given a matrix ${\displaystyle M}$ whose columns are the vectors of the new basis of the space (the new basis matrix), the new coordinates of a column vector ${\displaystyle v}$ are given by the matrix product ${\displaystyle M^{-1}v}$. For this reason, it is said that ordinary vectors are contravariant objects.

Any finite set of vectors can be represented by a matrix in which its columns are the coordinates of the given vectors. As an example in dimension 2, consider the pair of vectors obtained by rotating the standard basis counterclockwise by 45°. The matrix whose columns are the coordinates of these vectors is

${\displaystyle M={\begin{bmatrix}{\frac {1}{\sqrt {2}}}&{\frac {-1}{\sqrt {2}}}\\{\frac {1}{\sqrt {2}}}&{\frac {1}{\sqrt {2}}}\end{bmatrix}}}$

If we want to change any vector of the space to this new basis, we only need to left-multiply its components by the inverse of this matrix.[1]
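The following NumPy sketch checks this numerically for the 45° example above:

```python
import numpy as np

s = 1 / np.sqrt(2)
M = np.array([[s, -s],
              [s,  s]])                 # columns: the rotated basis vectors

v = np.array([1.0, 0.0])                # e_1 in standard coordinates
v_new = np.linalg.inv(M) @ v            # coordinates of v in the rotated basis
assert np.allclose(M @ v_new, v)        # reassembling the vector recovers v
print(v_new)                            # approximately [0.7071, -0.7071]
```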

### Three dimensions

For example, let the new basis be given by its Euler angles α, β, γ. The matrix ${\displaystyle \mathbf {R} }$ of the basis will have as columns the components of each basis vector. Therefore, this matrix will be (see the Euler angles article):

${\displaystyle \mathbf {R} ={\begin{bmatrix}\mathrm {c} _{\alpha }\,\mathrm {c} _{\gamma }-\mathrm {s} _{\alpha }\,\mathrm {c} _{\beta }\,\mathrm {s} _{\gamma }&-\mathrm {c} _{\alpha }\,\mathrm {s} _{\gamma }-\mathrm {s} _{\alpha }\,\mathrm {c} _{\beta }\,\mathrm {c} _{\gamma }&\mathrm {s} _{\beta }\,\mathrm {s} _{\alpha }\\\mathrm {s} _{\alpha }\,\mathrm {c} _{\gamma }+\mathrm {c} _{\alpha }\,\mathrm {c} _{\beta }\,\mathrm {s} _{\gamma }&-\mathrm {s} _{\alpha }\,\mathrm {s} _{\gamma }+\mathrm {c} _{\alpha }\,\mathrm {c} _{\beta }\,\mathrm {c} _{\gamma }&-\mathrm {s} _{\beta }\,\mathrm {c} _{\alpha }\\\mathrm {s} _{\beta }\,\mathrm {s} _{\gamma }&\mathrm {s} _{\beta }\,\mathrm {c} _{\gamma }&\mathrm {c} _{\beta }\end{bmatrix}}.}$

Again, any vector of the space can be changed to this new basis by left-multiplying its components by the inverse of this matrix.
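A sketch of this construction in NumPy (the angle values below are arbitrary illustrative choices): since the basis matrix ${\displaystyle \mathbf {R} }$ is a rotation, it is orthogonal, so its inverse is simply its transpose.

```python
import numpy as np

def euler_basis(alpha, beta, gamma):
    """Basis matrix from Euler angles, matching the formula above."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [ca*cg - sa*cb*sg, -ca*sg - sa*cb*cg,  sb*sa],
        [sa*cg + ca*cb*sg, -sa*sg + ca*cb*cg, -sb*ca],
        [sb*sg,             sb*cg,             cb   ],
    ])

R = euler_basis(0.3, 0.5, 0.7)
assert np.allclose(R.T @ R, np.eye(3))   # rotation matrices are orthogonal

v = np.array([1.0, 2.0, 3.0])
v_new = R.T @ v                          # R^{-1} v, using R^{-1} = R^T
```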

### General case

Suppose ${\displaystyle A=\{\alpha _{1},\dots ,\alpha _{n}\}}$ and ${\displaystyle B=\{\beta _{1},\dots ,\beta _{n}\}}$ are two ordered bases for an n-dimensional vector space V over a field K. Let φA and φB be the corresponding coordinate isomorphisms (linear maps) from Kn to V, i.e. ${\displaystyle \phi _{A}(e_{i})=\alpha _{i}}$ and ${\displaystyle \phi _{B}(e_{i})=\beta _{i}}$ for i = 1, …, n, where ei denotes the n-tuple with i th entry equal to 1, and all other entries equal to 0.

If ${\displaystyle x=(x_{1},\dots ,x_{n})}$ is the coordinate n-tuple of a vector v in V with respect to the basis A, so that ${\displaystyle v=\phi _{A}(x)}$, then the coordinate tuple of v with respect to B is the tuple y such that ${\displaystyle \phi _{B}(y)=v}$, i.e. ${\displaystyle y=\phi _{B}^{-1}(v)=\phi _{B}^{-1}(\phi _{A}(x))}$, so that for any vector in V, the map ${\displaystyle \phi _{B}^{-1}\circ \phi _{A}}$ maps its coordinate tuple with respect to A to its coordinate tuple with respect to B. Since this map is an automorphism on Kn, it therefore has an associated square matrix C. Moreover, the i th column of C is ${\displaystyle \phi _{B}^{-1}\circ \phi _{A}(e_{i})=\phi _{B}^{-1}(\alpha _{i})}$, that is, the coordinate tuple of αi with respect to B.

Thus, for any vector v in V, if x is the coordinate tuple of v with respect to A, then the tuple ${\displaystyle y=\phi _{B}^{-1}(\phi _{A}(x))=Cx}$ is the coordinate tuple of v with respect to B. The matrix C is called the transition matrix from A to B.
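A minimal NumPy sketch of the transition matrix, with both bases chosen purely for illustration: C is computed as the matrix of ${\displaystyle \phi _{B}^{-1}\circ \phi _{A}}$, and y = Cx converts A-coordinates into B-coordinates.

```python
import numpy as np

A_mat = np.array([[1.0, 1.0],
                  [0.0, 1.0]])     # columns: basis A
B_mat = np.array([[2.0, 0.0],
                  [1.0, 1.0]])     # columns: basis B

C = np.linalg.solve(B_mat, A_mat)  # i-th column: B-coordinates of alpha_i

x = np.array([3.0, 4.0])           # coordinates of some v relative to A
v = A_mat @ x                      # v itself, in standard coordinates
y = C @ x                          # coordinates of v relative to B
assert np.allclose(B_mat @ y, v)   # both tuples describe the same vector
```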

## The matrix of a linear transformation

Now suppose T : V → W is a linear transformation, {α1, …, αn} is a basis for V and {β1, …, βm} is a basis for W. Let φ and ψ be the coordinate isomorphisms for V and W, respectively, relative to the given bases. Then the map T1 = ψ−1T ∘ φ is a linear transformation from Rn to Rm, and therefore has a matrix t; its jth column is ψ−1(T(αj)) for j = 1, …, n. This matrix is called the matrix of T with respect to the ordered bases {α1, …, αn} and {β1, …, βm}. If η = T(ξ) and y and x are the coordinate tuples of η and ξ, then y = ψ−1(T(φ(x))) = tx. Conversely, if ξ is in V and x = φ−1(ξ) is the coordinate tuple of ξ with respect to {α1, …, αn}, and we set y = tx and η = ψ(y), then η = ψ(T1(x)) = T(ξ). That is, if ξ is in V and η is in W and x and y are their coordinate tuples, then y = tx if and only if η = T(ξ).

**Theorem.** Suppose U, V and W are vector spaces of finite dimension and an ordered basis is chosen for each. If T : U → V and S : V → W are linear transformations with matrices t and s respectively, then the matrix of the linear transformation S ∘ T : U → W (with respect to the given bases) is st.
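The construction of t can be sketched numerically as follows (the bases and the map T are illustrative choices, not from the text): the jth column of t is obtained by solving for the B-coordinates of T(αj), and by the theorem above, composing maps would simply multiply the corresponding matrices.

```python
import numpy as np

A_mat = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                   # columns: basis {alpha_j} for V = R^2
B_mat = np.array([[2.0, 0.0],
                  [1.0, 1.0]])                   # columns: basis {beta_i} for W = R^2

T = lambda v: np.array([v[0] + v[1], 2 * v[0]])  # an illustrative linear map V -> W

# j-th column of t: the B-coordinates of T(alpha_j), i.e. psi^{-1}(T(alpha_j)).
t = np.column_stack([np.linalg.solve(B_mat, T(A_mat[:, j])) for j in range(2)])

x = np.array([1.0, 2.0])                         # coordinate tuple of xi w.r.t. A
xi = A_mat @ x                                   # xi itself, in standard coordinates
y = t @ x                                        # then y = t x ...
assert np.allclose(B_mat @ y, T(xi))             # ... is the B-coordinate tuple of T(xi)
```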

### Change of basis

Now we ask what happens to the matrix of T : V → W when we change bases in V and W. Let {α1, …, αn} and {β1, …, βm} be ordered bases for V and W respectively, and suppose we are given a second pair of bases {α′1, …, α′n} and {β′1, …, β′m}. Let φ1 and φ2 be the coordinate isomorphisms taking the usual basis in Rn to the first and second bases for V, and let ψ1 and ψ2 be the isomorphisms taking the usual basis in Rm to the first and second bases for W.

Let T1 = ψ1−1T ∘ φ1, and T2 = ψ2−1T ∘ φ2 (both maps taking Rn to Rm), and let t1 and t2 be their respective matrices. Let p and q be the matrices of the change-of-coordinates automorphisms φ2−1 ∘ φ1 on Rn and ψ2−1 ∘ ψ1 on Rm.

The relationships of these various maps to one another can be illustrated in a commutative diagram. Since we have T2 = ψ2−1T ∘ φ2 = (ψ2−1 ∘ ψ1) ∘ T1 ∘ (φ1−1 ∘ φ2), and since composition of linear maps corresponds to matrix multiplication, it follows that

t2 = q t1 p−1.

Since the change-of-basis formula involves the change-of-basis matrix once directly and once through its inverse, such objects are said to be 1-co-, 1-contravariant.
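A numeric sanity check of the formula t2 = q t1 p−1, with the standard-coordinate matrix of T and all four bases chosen at random (random matrices are generically invertible, so this is only an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
T_mat = rng.standard_normal((m, n))            # T in standard coordinates

phi1 = rng.standard_normal((n, n))             # first basis for V (columns)
phi2 = rng.standard_normal((n, n))             # second basis for V
psi1 = rng.standard_normal((m, m))             # first basis for W
psi2 = rng.standard_normal((m, m))             # second basis for W

t1 = np.linalg.solve(psi1, T_mat @ phi1)       # matrix of psi1^{-1} o T o phi1
t2 = np.linalg.solve(psi2, T_mat @ phi2)       # matrix of psi2^{-1} o T o phi2
p = np.linalg.solve(phi2, phi1)                # matrix of phi2^{-1} o phi1
q = np.linalg.solve(psi2, psi1)                # matrix of psi2^{-1} o psi1

assert np.allclose(t2, q @ t1 @ np.linalg.inv(p))   # t2 = q t1 p^{-1}
```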

## The matrix of an endomorphism

An important case of the matrix of a linear transformation is that of an endomorphism, that is, a linear map from a vector space V to itself: the case W = V. We can naturally take {β1, …, βn} = {α1, …, αn} and {β′1, …, β′n} = {α′1, …, α′n}. The matrix of the linear map T is necessarily square.

### Change of basis

We apply the same change of basis, so that q = p and the change of basis formula becomes

t2 = p t1 p−1.

In this situation the invertible matrix p is called a change-of-basis matrix for the vector space V, and the equation above says that the matrices t1 and t2 are similar.
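The following sketch verifies the similarity relation and that similar matrices share basis-independent invariants such as the trace and determinant (the endomorphism and both bases are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T_mat = rng.standard_normal((3, 3))            # the endomorphism in standard coordinates
P1 = rng.standard_normal((3, 3))               # first basis (columns)
P2 = rng.standard_normal((3, 3))               # second basis (columns)

t1 = np.linalg.solve(P1, T_mat @ P1)           # matrix of T in the first basis
t2 = np.linalg.solve(P2, T_mat @ P2)           # matrix of T in the second basis
p = np.linalg.solve(P2, P1)                    # change-of-basis matrix

assert np.allclose(t2, p @ t1 @ np.linalg.inv(p))        # t1 and t2 are similar
assert np.isclose(np.trace(t1), np.trace(t2))            # similar => equal trace
assert np.isclose(np.linalg.det(t1), np.linalg.det(t2))  # ... and determinant
```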

## The matrix of a bilinear form

A bilinear form on a vector space V over a field R is a mapping V × VR which is linear in both arguments. That is, B : V × VR is bilinear if the maps

${\displaystyle v\mapsto B(v,w)}$
${\displaystyle v\mapsto B(w,v)}$

are linear for each w in V. This definition applies equally well to modules over a commutative ring with linear maps being module homomorphisms.

The Gram matrix G attached to a basis ${\displaystyle \alpha _{1},\dots ,\alpha _{n}}$ is defined by

${\displaystyle G_{i,j}=B(\alpha _{i},\alpha _{j}).}$

If ${\displaystyle v=\sum _{i}x_{i}\alpha _{i}}$ and ${\displaystyle w=\sum _{i}y_{i}\alpha _{i}}$ are the expressions of vectors v, w with respect to this basis, with coordinate tuples ${\displaystyle x=(x_{1},\dots ,x_{n})^{\mathsf {T}}}$ and ${\displaystyle y=(y_{1},\dots ,y_{n})^{\mathsf {T}}}$, then the bilinear form is given by

${\displaystyle B(v,w)=x^{\mathsf {T}}Gy.}$

The matrix will be symmetric if the bilinear form B is a symmetric bilinear form.
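A short sketch, assuming NumPy and an illustrative symmetric form on R2 (not from the text): the Gram matrix is assembled entrywise as Gij = B(αi, αj), and the identity B(v, w) = xTGy is checked for one pair of vectors.

```python
import numpy as np

def B(v, w):                        # an illustrative symmetric bilinear form on R^2
    return v[0]*w[0] + 2*v[0]*w[1] + 2*v[1]*w[0] + 3*v[1]*w[1]

basis = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
G = np.array([[B(a, b) for b in basis] for a in basis])   # G[i, j] = B(a_i, a_j)

x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])         # coordinate tuples
v = x[0]*basis[0] + x[1]*basis[1]
w = y[0]*basis[0] + y[1]*basis[1]
assert np.isclose(B(v, w), x @ G @ y)                     # B(v, w) = x^T G y
```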

### Change of basis

If P is the invertible matrix representing a change of basis from ${\displaystyle \alpha _{1},\dots ,\alpha _{n}}$ to ${\displaystyle \alpha '_{1},\dots ,\alpha '_{n}}$ then the Gram matrix transforms by the matrix congruence

${\displaystyle G'=P^{\mathsf {T}}GP.}$
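This congruence can be checked numerically; in the sketch below (illustrative G and P), P is taken under the convention that old coordinates are recovered from new ones by x = Px′, so that x′TG′y′ = (Px′)TG(Py′) = xTGy and the value of the form is basis-independent.

```python
import numpy as np

G = np.array([[1.0, 3.0],
              [3.0, 8.0]])          # Gram matrix in the old basis
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # change-of-basis matrix (invertible)

G_new = P.T @ G @ P                 # Gram matrix in the new basis

x_new = np.array([1.0, 1.0])        # coordinates relative to the new basis
y_new = np.array([2.0, -1.0])
x, y = P @ x_new, P @ y_new         # the same vectors in old coordinates
assert np.isclose(x_new @ G_new @ y_new, x @ G @ y)
```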

## Important instances

In abstract vector space theory the change of basis concept is innocuous; it seems to add little to science. Yet there are cases in associative algebras where a change of basis is sufficient to turn a caterpillar into a butterfly, figuratively speaking:

• In the split-complex number plane there is an alternative "diagonal basis". The standard hyperbola x² − y² = 1 becomes xy = 1 after the change of basis. Transformations of the plane that leave the hyperbolae in place correspond to each other, modulo a change of basis. The contextual difference is profound enough to separate Lorentz boost from squeeze mapping. A panoramic view of the literature on these mappings can be taken using the underlying change of basis.
• With the 2 × 2 real matrices one finds the beginning of a catalogue of linear algebras due to Arthur Cayley. His associate James Cockle put forward in 1849 his algebra of coquaternions or split-quaternions, which are the same algebra as the 2 × 2 real matrices, just laid out on a different matrix basis. Once again it is the concept of change of basis that synthesizes Cayley’s matrix algebra and Cockle’s coquaternions.
• A change of basis turns a 2 × 2 complex matrix into a biquaternion.