# Derivation of the Routh array

The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices.

## The Cauchy index

Given the system:

{\displaystyle {\begin{aligned}f(x)&{}=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}&{}\quad (1)\\&{}=(x-r_{1})(x-r_{2})\cdots (x-r_{n})&{}\quad (2)\\\end{aligned}}}

Assuming no roots of ${\displaystyle f(x)=0}$ lie on the imaginary axis, and letting

${\displaystyle N}$ = The number of roots of ${\displaystyle f(x)=0}$ with negative real parts, and
${\displaystyle P}$ = The number of roots of ${\displaystyle f(x)=0}$ with positive real parts

then we have

${\displaystyle N+P=n\quad (3)}$

Expressing ${\displaystyle f(x)}$ in polar form, we have

${\displaystyle f(x)=\rho (x)e^{j\theta (x)}\quad (4)}$

where

${\displaystyle \rho (x)={\sqrt {{\mathfrak {Re}}^{2}[f(x)]+{\mathfrak {Im}}^{2}[f(x)]}}\quad (5)}$

and

${\displaystyle \theta (x)=\tan ^{-1}{\big (}{\mathfrak {Im}}[f(x)]/{\mathfrak {Re}}[f(x)]{\big )}\quad (6)}$
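Relations (4)–(6) can be checked numerically. A minimal Python sketch follows (the polynomial is a hypothetical example; note that `cmath.phase` uses the quadrant-aware atan2(Im, Re), whereas the bare arctangent in (6) is defined only up to its branch):

```python
import cmath

# Hypothetical example: f(x) = x^2 + 2x + 5, evaluated on the imaginary axis
f = lambda x: x * x + 2 * x + 5
z = f(2j)                  # f(2j) = (2j)^2 + 4j + 5 = 1 + 4j
rho = abs(z)               # eq. (5): sqrt(Re^2[f] + Im^2[f])
theta = cmath.phase(z)     # eq. (6), computed as atan2(Im[f], Re[f])
print(rho, theta)
```

Keeping track of the correct branch of the arctangent is exactly the bookkeeping the remainder of this section performs by hand.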

from (2) note that

${\displaystyle \theta (x)=\theta _{r_{1}}(x)+\theta _{r_{2}}(x)+\cdots +\theta _{r_{n}}(x)\quad (7)}$

where

${\displaystyle \theta _{r_{i}}(x)=\angle (x-r_{i})\quad (8)}$

Now if the ${\displaystyle i}$th root of ${\displaystyle f(x)=0}$ has a positive real part, then (using the notation ${\displaystyle y=({\mathfrak {Re}}[y],{\mathfrak {Im}}[y])}$ for a complex point ${\displaystyle y}$)

{\displaystyle {\begin{aligned}\theta _{r_{i}}(x){\big |}_{x=j\infty }&=\angle (x-r_{i}){\big |}_{x=j\infty }\\&=\angle (0-{\mathfrak {Re}}[r_{i}],\infty -{\mathfrak {Im}}[r_{i}])\\&=\angle (-{\mathfrak {Re}}[r_{i}],\infty )\\&=\lim _{\phi \to -\infty }\tan ^{-1}\phi =-{\frac {\pi }{2}}\quad (9)\\\end{aligned}}}

and

${\displaystyle \theta _{r_{i}}(x){\big |}_{x=-j\infty }=\angle (-{\mathfrak {Re}}[r_{i}],-\infty )=\lim _{\phi \to \infty }\tan ^{-1}\phi ={\frac {\pi }{2}}\quad (10)}$

Similarly, if the ith root of ${\displaystyle f(x)=0}$ has a negative real part,

${\displaystyle \theta _{r_{i}}(x){\big |}_{x=j\infty }=\angle (-{\mathfrak {Re}}[r_{i}],\infty )=\lim _{\phi \to \infty }\tan ^{-1}\phi ={\frac {\pi }{2}}\,\quad (11)}$

and

${\displaystyle \theta _{r_{i}}(x){\big |}_{x=-j\infty }=\angle (-{\mathfrak {Re}}[r_{i}],-\infty )=\lim _{\phi \to -\infty }\tan ^{-1}\phi =-{\frac {\pi }{2}}\,\quad (12)}$

Therefore, ${\displaystyle \theta _{r_{i}}(x){\Big |}_{x=-j\infty }^{x=j\infty }=-\pi }$ when the ith root of ${\displaystyle f(x)}$ has a positive real part, and ${\displaystyle \theta _{r_{i}}(x){\Big |}_{x=-j\infty }^{x=j\infty }=\pi }$ when the ith root of ${\displaystyle f(x)}$ has a negative real part. Alternatively,

${\displaystyle \theta (x){\big |}_{x=j\infty }=\angle (x-r_{1}){\big |}_{x=j\infty }+\angle (x-r_{2}){\big |}_{x=j\infty }+\cdots +\angle (x-r_{n}){\big |}_{x=j\infty }={\frac {\pi }{2}}N-{\frac {\pi }{2}}P\quad (13)}$

and

${\displaystyle \theta (x){\big |}_{x=-j\infty }=\angle (x-r_{1}){\big |}_{x=-j\infty }+\angle (x-r_{2}){\big |}_{x=-j\infty }+\cdots +\angle (x-r_{n}){\big |}_{x=-j\infty }=-{\frac {\pi }{2}}N+{\frac {\pi }{2}}P\quad (14)}$

So, if we define

${\displaystyle \Delta ={\frac {1}{\pi }}\theta (x){\Big |}_{-j\infty }^{j\infty }\quad (15)}$

then we have the relationship

${\displaystyle N-P=\Delta \quad (16)}$

and combining (3) and (16) gives us

${\displaystyle N={\frac {n+\Delta }{2}}}$ and ${\displaystyle P={\frac {n-\Delta }{2}}\quad (17)}$

Therefore, given a polynomial ${\displaystyle f(x)}$ of degree ${\displaystyle n}$, we need only evaluate ${\displaystyle \Delta }$ to determine ${\displaystyle N}$, the number of roots with negative real parts, and ${\displaystyle P}$, the number of roots with positive real parts.
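Relations (3), (16), and (17) can be verified directly on a hypothetical example polynomial whose roots are known in advance (a factored form, so ${\displaystyle N}$ and ${\displaystyle P}$ can be counted by inspection):

```python
# Hypothetical example: f(x) = (x+1)(x+2)(x-3-4j)(x-3+4j), a real
# polynomial of degree n = 4 with known roots and none on the imaginary axis.
roots = [-1 + 0j, -2 + 0j, 3 + 4j, 3 - 4j]
n = len(roots)
N = sum(1 for r in roots if r.real < 0)   # roots with negative real parts
P = sum(1 for r in roots if r.real > 0)   # roots with positive real parts
assert N + P == n                          # eq. (3)
delta = N - P                              # eq. (16)
assert N == (n + delta) / 2                # eq. (17)
assert P == (n - delta) / 2                # eq. (17)
print(N, P)  # -> 2 2
```

The point of the derivation, of course, is to recover ${\displaystyle \Delta }$ (and hence ${\displaystyle N}$ and ${\displaystyle P}$) without ever computing the roots.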

*Figure 1: ${\displaystyle \tan(\theta )}$ versus ${\displaystyle \theta }$*

Equations (13) and (14) show that at ${\displaystyle x=\pm j\infty }$, ${\displaystyle \theta (x)}$ is an integer multiple of ${\displaystyle \pi /2}$. Now, in accordance with (6) and Figure 1 (the graph of ${\displaystyle \tan(\theta )}$ versus ${\displaystyle \theta }$), consider varying ${\displaystyle x}$ over an interval ${\displaystyle (a,b)}$ such that ${\displaystyle \theta _{a}=\theta (x)|_{x=ja}}$ and ${\displaystyle \theta _{b}=\theta (x)|_{x=jb}}$ are integer multiples of ${\displaystyle \pi }$. If this variation causes ${\displaystyle \theta (x)}$ to increase by ${\displaystyle \pi }$, then in the course of travelling from point a to point b, ${\displaystyle \tan[\theta (x)]}$ has "jumped" from ${\displaystyle +\infty }$ to ${\displaystyle -\infty }$ one more time than it has jumped from ${\displaystyle -\infty }$ to ${\displaystyle +\infty }$. Similarly, if the variation causes ${\displaystyle \theta (x)}$ to decrease by ${\displaystyle \pi }$, with ${\displaystyle \theta }$ again a multiple of ${\displaystyle \pi }$ at both ${\displaystyle x=ja}$ and ${\displaystyle x=jb}$, then ${\displaystyle \tan \theta (x)={\mathfrak {Im}}[f(x)]/{\mathfrak {Re}}[f(x)]}$ has jumped from ${\displaystyle -\infty }$ to ${\displaystyle +\infty }$ one more time than it has jumped from ${\displaystyle +\infty }$ to ${\displaystyle -\infty }$ as ${\displaystyle x}$ was varied over the said interval.

Thus, ${\displaystyle \theta (x){\Big |}_{-j\infty }^{j\infty }}$ is ${\displaystyle \pi }$ times the difference between the number of points at which ${\displaystyle {\mathfrak {Im}}[f(x)]/{\mathfrak {Re}}[f(x)]}$ jumps from ${\displaystyle -\infty }$ to ${\displaystyle +\infty }$ and the number of points at which it jumps from ${\displaystyle +\infty }$ to ${\displaystyle -\infty }$ as ${\displaystyle x}$ traverses the interval from ${\displaystyle +j\infty }$ to ${\displaystyle -j\infty }$, provided that ${\displaystyle \tan[\theta (x)]}$ is defined at ${\displaystyle x=\pm j\infty }$.

*Figure 2: ${\displaystyle -\cot(\theta )}$ versus ${\displaystyle \theta }$*

In the case where the starting point is on an incongruity (i.e. ${\displaystyle \theta _{a}=\pi /2\pm i\pi }$, i = 0, 1, 2, ...) the ending point will be on an incongruity as well, by equation (16) (since ${\displaystyle N}$ is an integer and ${\displaystyle P}$ is an integer, ${\displaystyle \Delta }$ will be an integer). In this case, we can achieve this same index (difference in positive and negative jumps) by shifting the axes of the tangent function by ${\displaystyle \pi /2}$, through adding ${\displaystyle \pi /2}$ to ${\displaystyle \theta }$. Thus, our index is now fully defined for any combination of coefficients in ${\displaystyle f(x)}$ by evaluating ${\displaystyle \tan[\theta ]={\mathfrak {Im}}[f(x)]/{\mathfrak {Re}}[f(x)]}$ over the interval (a,b) = ${\displaystyle (+j\infty ,-j\infty )}$ when our starting (and thus ending) point is not an incongruity, and by evaluating

${\displaystyle \tan[\theta '(x)]=\tan[\theta +\pi /2]=-\cot[\theta (x)]=-{\mathfrak {Re}}[f(x)]/{\mathfrak {Im}}[f(x)]\quad (18)}$

over said interval when our starting point is at an incongruity.

This difference, ${\displaystyle \Delta }$, between the numbers of negative and positive jumping incongruities encountered while traversing ${\displaystyle x}$ from ${\displaystyle -j\infty }$ to ${\displaystyle +j\infty }$ is called the Cauchy index of the tangent of the phase angle, the phase angle being ${\displaystyle \theta (x)}$ or ${\displaystyle \theta '(x)}$, depending on whether ${\displaystyle \theta _{a}}$ is an integer multiple of ${\displaystyle \pi }$ or not.

## The Routh criterion

To derive Routh's criterion, first we'll use a different notation to differentiate between the even and odd terms of ${\displaystyle f(x)}$:

${\displaystyle f(x)=a_{0}x^{n}+b_{0}x^{n-1}+a_{1}x^{n-2}+b_{1}x^{n-3}+\cdots \quad (19)}$

Now we have:

{\displaystyle {\begin{aligned}f(j\omega )&=a_{0}(j\omega )^{n}+b_{0}(j\omega )^{n-1}+a_{1}(j\omega )^{n-2}+b_{1}(j\omega )^{n-3}+\cdots &{}\quad (20)\\&=a_{0}(j\omega )^{n}+a_{1}(j\omega )^{n-2}+a_{2}(j\omega )^{n-4}+\cdots &{}\quad (21)\\&+b_{0}(j\omega )^{n-1}+b_{1}(j\omega )^{n-3}+b_{2}(j\omega )^{n-5}+\cdots \\\end{aligned}}}

Therefore, if ${\displaystyle n}$ is even,

{\displaystyle {\begin{aligned}f(j\omega )&=(-1)^{n/2}{\big [}a_{0}\omega ^{n}-a_{1}\omega ^{n-2}+a_{2}\omega ^{n-4}-\cdots {\big ]}&{}\quad (22)\\&+j(-1)^{(n/2)-1}{\big [}b_{0}\omega ^{n-1}-b_{1}\omega ^{n-3}+b_{2}\omega ^{n-5}-\cdots {\big ]}&{}\\\end{aligned}}}

and if ${\displaystyle n}$ is odd:

{\displaystyle {\begin{aligned}f(j\omega )&=j(-1)^{(n-1)/2}{\big [}a_{0}\omega ^{n}-a_{1}\omega ^{n-2}+a_{2}\omega ^{n-4}-\cdots {\big ]}&{}\quad (23)\\&+(-1)^{(n-1)/2}{\big [}b_{0}\omega ^{n-1}-b_{1}\omega ^{n-3}+b_{2}\omega ^{n-5}-\cdots {\big ]}&{}\\\end{aligned}}}

Now observe that if ${\displaystyle n}$ is an odd integer, then by (3) ${\displaystyle N+P}$ is odd, and therefore ${\displaystyle N-P}$ is odd as well. Similarly, the same argument shows that when ${\displaystyle n}$ is even, ${\displaystyle N-P}$ will be even. Equation (13) shows that if ${\displaystyle N-P}$ is even, ${\displaystyle \theta }$ is an integer multiple of ${\displaystyle \pi }$ at ${\displaystyle x=\pm j\infty }$. Therefore, ${\displaystyle \tan(\theta )}$ is defined when ${\displaystyle n}$ is even, and is thus the proper index to use in that case, while ${\displaystyle \tan(\theta ')=\tan(\theta +\pi /2)=-\cot(\theta )}$ is defined when ${\displaystyle n}$ is odd, making it the proper index in this latter case.

Thus, from (6) and (22), for ${\displaystyle n}$ even:

${\displaystyle \Delta =I_{-\infty }^{+\infty }{\frac {-{\mathfrak {Im}}[f(x)]}{{\mathfrak {Re}}[f(x)]}}=I_{-\infty }^{+\infty }{\frac {b_{0}\omega ^{n-1}-b_{1}\omega ^{n-3}+\cdots }{a_{0}\omega ^{n}-a_{1}\omega ^{n-2}+\ldots }}\quad (24)}$

and from (18) and (23), for ${\displaystyle n}$ odd:

${\displaystyle \Delta =I_{-\infty }^{+\infty }{\frac {{\mathfrak {Re}}[f(x)]}{{\mathfrak {Im}}[f(x)]}}=I_{-\infty }^{+\infty }{\frac {b_{0}\omega ^{n-1}-b_{1}\omega ^{n-3}+\ldots }{a_{0}\omega ^{n}-a_{1}\omega ^{n-2}+\ldots }}\quad (25)}$

In both cases, then, we are evaluating the same Cauchy index:

${\displaystyle \Delta =I_{-\infty }^{+\infty }{\frac {b_{0}\omega ^{n-1}-b_{1}\omega ^{n-3}+\ldots }{a_{0}\omega ^{n}-a_{1}\omega ^{n-2}+\ldots }}\quad (26)}$
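The index (26) can be evaluated numerically by locating the real poles of the rational function and checking the sign of a sample just to either side of each. A sketch in Python follows (the polynomial is a hypothetical example, and the pole locations are supplied by hand):

```python
import math

def cauchy_index(num, den, poles, eps=1e-6):
    """Cauchy index of num/den over the real line: the number of jumps
    from -inf to +inf minus the number from +inf to -inf, found by
    sampling just to the left and right of each real pole."""
    index = 0
    for p in poles:
        left = num(p - eps) / den(p - eps)
        right = num(p + eps) / den(p + eps)
        if left < 0 < right:
            index += 1   # jump from -inf to +inf
        elif left > 0 > right:
            index -= 1   # jump from +inf to -inf
    return index

# Hypothetical example: f(x) = x^2 + 2x + 5, so a0 = 1, b0 = 2, a1 = 5,
# and (26) reads Delta = I of 2w / (w^2 - 5), with poles at +/- sqrt(5).
num = lambda w: 2 * w
den = lambda w: w * w - 5
delta = cauchy_index(num, den, [-math.sqrt(5), math.sqrt(5)])
print(delta)  # -> 2, so by (17) N = 2 and P = 0: both roots are stable
```

Sturm's theorem, described next, replaces this sampling with a purely algebraic evaluation of the same index.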

## Sturm's theorem

Sturm gives us a method for evaluating ${\displaystyle \Delta =I_{-\infty }^{+\infty }{\frac {f_{2}(x)}{f_{1}(x)}}}$. His theorem is stated as follows:

Given a sequence of polynomials ${\displaystyle f_{1}(x),f_{2}(x),\dots ,f_{m}(x)}$ where:

1) If ${\displaystyle f_{k}(x)=0}$ then ${\displaystyle f_{k-1}(x)\neq 0}$, ${\displaystyle f_{k+1}(x)\neq 0}$, and ${\displaystyle \operatorname {sign} [f_{k-1}(x)]=-\operatorname {sign} [f_{k+1}(x)]}$

2) ${\displaystyle f_{m}(x)\neq 0}$ for ${\displaystyle -\infty <x<+\infty }$

and we define ${\displaystyle V(x)}$ as the number of changes of sign in the sequence ${\displaystyle f_{1}(x),f_{2}(x),\dots ,f_{m}(x)}$ for a fixed value of ${\displaystyle x}$, then:

${\displaystyle \Delta =I_{-\infty }^{+\infty }{\frac {f_{2}(x)}{f_{1}(x)}}=V(-\infty )-V(+\infty )\quad (27)}$

A sequence satisfying these requirements is obtained using the Euclidean algorithm, which is as follows:

Starting with ${\displaystyle f_{1}(x)}$ and ${\displaystyle f_{2}(x)}$, and denoting the remainder of ${\displaystyle f_{1}(x)/f_{2}(x)}$ by ${\displaystyle f_{3}(x)}$ and similarly denoting the remainder of ${\displaystyle f_{2}(x)/f_{3}(x)}$ by ${\displaystyle f_{4}(x)}$, and so on, we obtain the relationships:

{\displaystyle {\begin{aligned}&f_{1}(x)=q_{1}(x)f_{2}(x)-f_{3}(x)\quad (28)\\&f_{2}(x)=q_{2}(x)f_{3}(x)-f_{4}(x)\\&\ldots \\&f_{m-1}(x)=q_{m-1}(x)f_{m}(x)\\\end{aligned}}}

or in general

${\displaystyle f_{k-1}(x)=q_{k-1}(x)f_{k}(x)-f_{k+1}(x)}$

where the last non-zero remainder, ${\displaystyle f_{m}(x)}$ will therefore be the highest common factor of ${\displaystyle f_{1}(x),f_{2}(x),\dots ,f_{m-1}(x)}$. It can be observed that the sequence so constructed will satisfy the conditions of Sturm's theorem, and thus an algorithm for determining the stated index has been developed.
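The chain (28) and the sign-change counts of (27) are straightforward to realize in code. A minimal sketch follows (Python, real regular case only; the input polynomials are a hypothetical pair taken from the example ${\displaystyle f(x)=x^{2}+2x+5}$, for which ${\displaystyle f_{1}=\omega ^{2}-5}$ and ${\displaystyle f_{2}=2\omega }$):

```python
def poly_rem(f, g, tol=1e-12):
    """Remainder of f divided by g; polynomials are coefficient lists,
    highest degree first."""
    f = list(f)
    while len(f) >= len(g):
        c = f[0] / g[0]
        for i in range(len(g)):
            f[i] -= c * g[i]
        f.pop(0)                      # leading term has been eliminated
    while f and abs(f[0]) <= tol:     # strip zeros left by cancellation
        f.pop(0)
    return f

def sturm_chain(f1, f2):
    """Chain f1, f2, f3, ... with f_{k+1} = -(f_{k-1} mod f_k), per (28)."""
    chain = [f1, f2]
    while True:
        r = [-c for c in poly_rem(chain[-2], chain[-1])]
        if not r:
            return chain
        chain.append(r)

def V(chain, at):
    """Sign changes of the chain at omega = +inf or -inf ('at' is +1 or -1):
    the sign of each polynomial there is that of its leading term."""
    leads = [p[0] * at ** (len(p) - 1) for p in chain]
    return sum(1 for a, b in zip(leads, leads[1:]) if a * b < 0)

chain = sturm_chain([1, 0, -5], [2, 0])
delta = V(chain, -1) - V(chain, +1)   # eq. (27)
print(delta)  # -> 2: N = 2, P = 0, so this example f is stable
```

Since each remainder drops the degree by one, the regular chain here is ${\displaystyle \omega ^{2}-5}$, ${\displaystyle 2\omega }$, ${\displaystyle 5}$, ending in a constant.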

It is in applying Sturm's theorem (27) to (26), through the Euclidean algorithm (28) above, that the Routh array is formed.

We get

${\displaystyle f_{3}(\omega )={\frac {a_{0}}{b_{0}}}\omega f_{2}(\omega )-f_{1}(\omega )\quad (29)}$

and identifying the coefficients of this remainder by ${\displaystyle c_{0}}$, ${\displaystyle -c_{1}}$, ${\displaystyle c_{2}}$, ${\displaystyle -c_{3}}$, and so forth, makes our formed remainder

${\displaystyle f_{3}(\omega )=c_{0}\omega ^{n-2}-c_{1}\omega ^{n-4}+c_{2}\omega ^{n-6}-\cdots \quad (30)}$

where

${\displaystyle c_{0}=a_{1}-{\frac {a_{0}}{b_{0}}}b_{1}={\frac {b_{0}a_{1}-a_{0}b_{1}}{b_{0}}};\quad c_{1}=a_{2}-{\frac {a_{0}}{b_{0}}}b_{2}={\frac {b_{0}a_{2}-a_{0}b_{2}}{b_{0}}};\quad \ldots \quad (31)}$

Continuing with the Euclidean algorithm on these new coefficients gives us

${\displaystyle f_{4}(\omega )={\frac {b_{0}}{c_{0}}}\omega f_{3}(\omega )-f_{2}(\omega )\quad (32)}$

where we again denote the coefficients of the remainder ${\displaystyle f_{4}(\omega )}$ by ${\displaystyle d_{0}}$, ${\displaystyle -d_{1}}$, ${\displaystyle d_{2}}$, ${\displaystyle -d_{3}}$,

making our formed remainder

${\displaystyle f_{4}(\omega )=d_{0}\omega ^{n-3}-d_{1}\omega ^{n-5}+d_{2}\omega ^{n-7}-\cdots \quad (33)}$

and giving us

${\displaystyle d_{0}=b_{1}-{\frac {b_{0}}{c_{0}}}c_{1}={\frac {c_{0}b_{1}-b_{0}c_{1}}{c_{0}}};\quad d_{1}=b_{2}-{\frac {b_{0}}{c_{0}}}c_{2}={\frac {c_{0}b_{2}-b_{0}c_{2}}{c_{0}}};\quad \ldots \quad (34)}$

The rows of the Routh array are determined exactly by this algorithm when applied to the coefficients of (19). An observation worthy of note is that in the regular case the polynomials ${\displaystyle f_{1}(\omega )}$ and ${\displaystyle f_{2}(\omega )}$ have as their highest common factor a constant, ${\displaystyle f_{n+1}(\omega )}$, and thus there will be ${\displaystyle n+1}$ polynomials in the chain ${\displaystyle f_{1}(x),f_{2}(x),\dots ,f_{m}(x)}$.
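The row recurrences (31) and (34) can be iterated directly to produce the leading entries ${\displaystyle a_{0},b_{0},c_{0},d_{0},\dots }$. A sketch follows (Python, regular case only: it assumes no leading entry ever vanishes, and the example polynomial is hypothetical):

```python
def routh_first_column(coeffs):
    """Leading entries a0, b0, c0, d0, ... of the Routh scheme for a
    polynomial given by its coefficients, highest degree first.
    Regular case only: no leading entry may become zero."""
    row1 = coeffs[0::2]              # a0, a1, a2, ...
    row2 = coeffs[1::2]              # b0, b1, b2, ...
    col = [row1[0]]
    while row2:
        col.append(row2[0])
        nxt = []
        for k in range(len(row2)):
            a = row1[k + 1] if k + 1 < len(row1) else 0
            b = row2[k + 1] if k + 1 < len(row2) else 0
            # c_k = (b0 * a_{k+1} - a0 * b_{k+1}) / b0, as in (31)
            nxt.append((row2[0] * a - row1[0] * b) / row2[0])
        while nxt and nxt[-1] == 0:  # drop trailing zeros so the loop ends
            nxt.pop()
        row1, row2 = row2, nxt
    return col

# Hypothetical example: (x+1)(x-2)(x-3) = x^3 - 4x^2 + x + 6 has two
# roots in the right half plane.
print(routh_first_column([1, -4, 1, 6]))  # -> [1, -4, 2.5, 6.0]
```

The printed column shows two sign changes (${\displaystyle 1\to -4}$ and ${\displaystyle -4\to 2.5}$), matching the two right-half-plane roots, as Routh's theorem below asserts.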

Note now, that in determining the signs of the members of the sequence of polynomials ${\displaystyle f_{1}(x),f_{2}(x),\dots ,f_{m}(x)}$ that at ${\displaystyle \omega =\pm \infty }$ the dominating power of ${\displaystyle \omega }$ will be the first term of each of these polynomials, and thus only these coefficients corresponding to the highest powers of ${\displaystyle \omega }$ in ${\displaystyle f_{1}(x),f_{2}(x),\dots }$, and ${\displaystyle f_{m}(x)}$, which are ${\displaystyle a_{0}}$, ${\displaystyle b_{0}}$, ${\displaystyle c_{0}}$, ${\displaystyle d_{0}}$, ... determine the signs of ${\displaystyle f_{1}(x)}$, ${\displaystyle f_{2}(x)}$, ..., ${\displaystyle f_{m}(x)}$ at ${\displaystyle \omega =\pm \infty }$.

So we get ${\displaystyle V(+\infty )=V(a_{0},b_{0},c_{0},d_{0},\dots )}$ that is, ${\displaystyle V(+\infty )}$ is the number of changes of sign in the sequence ${\displaystyle a_{0}\infty ^{n}}$, ${\displaystyle b_{0}\infty ^{n-1}}$, ${\displaystyle c_{0}\infty ^{n-2}}$, ... which is the number of sign changes in the sequence ${\displaystyle a_{0}}$, ${\displaystyle b_{0}}$, ${\displaystyle c_{0}}$, ${\displaystyle d_{0}}$, ... and ${\displaystyle V(-\infty )=V(a_{0},-b_{0},c_{0},-d_{0},...)}$; that is ${\displaystyle V(-\infty )}$ is the number of changes of sign in the sequence ${\displaystyle a_{0}(-\infty )^{n}}$, ${\displaystyle b_{0}(-\infty )^{n-1}}$, ${\displaystyle c_{0}(-\infty )^{n-2}}$, ... which is the number of sign changes in the sequence ${\displaystyle a_{0}}$, ${\displaystyle -b_{0}}$, ${\displaystyle c_{0}}$, ${\displaystyle -d_{0}}$, ...

Since our chain ${\displaystyle a_{0}}$, ${\displaystyle b_{0}}$, ${\displaystyle c_{0}}$, ${\displaystyle d_{0}}$, ... will have ${\displaystyle n+1}$ members, and hence ${\displaystyle n}$ transitions between consecutive members, it is clear that ${\displaystyle V(+\infty )+V(-\infty )=n}$: within ${\displaystyle V(a_{0},b_{0},c_{0},d_{0},\dots )}$, if going from ${\displaystyle a_{0}}$ to ${\displaystyle b_{0}}$ a sign change has not occurred, then within ${\displaystyle V(a_{0},-b_{0},c_{0},-d_{0},\dots )}$, going from ${\displaystyle a_{0}}$ to ${\displaystyle -b_{0}}$, one has, and likewise for all ${\displaystyle n}$ transitions (there will be no terms equal to zero), giving us ${\displaystyle n}$ total sign changes.
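This counting argument is easy to check directly. A small sketch follows (Python; the chain of leading coefficients is a hypothetical example):

```python
def sign_changes(seq):
    """Number of sign changes in a sequence of nonzero numbers."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a * b < 0)

# Hypothetical chain a0, b0, c0, d0 of leading coefficients, so n = 3
chain = [1.0, 2.0, 1.0, 4.0]
flipped = [c * (-1) ** k for k, c in enumerate(chain)]  # a0, -b0, c0, -d0
n = len(chain) - 1
print(sign_changes(chain) + sign_changes(flipped))  # -> 3, which equals n
```

Each of the ${\displaystyle n}$ transitions contributes a sign change to exactly one of the two sequences, so the two counts always total ${\displaystyle n}$.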

As ${\displaystyle \Delta =V(-\infty )-V(+\infty )}$ and ${\displaystyle n=V(+\infty )+V(-\infty )}$, and from (17) ${\displaystyle P=(n-\Delta )/2}$, we have that ${\displaystyle P=V(+\infty )=V(a_{0},b_{0},c_{0},d_{0},\dots )}$ and have derived Routh's theorem:

The number of roots of a real polynomial ${\displaystyle f(z)}$ which lie in the right half plane ${\displaystyle {\mathfrak {Re}}(r_{i})>0}$ is equal to the number of changes of sign in the first column of the Routh scheme.

And for the stable case where ${\displaystyle P=0}$ then ${\displaystyle V(a_{0},b_{0},c_{0},d_{0},\dots )=0}$ by which we have Routh's famous criterion:

In order for all the roots of the polynomial ${\displaystyle f(z)}$ to have negative real parts, it is necessary and sufficient that all of the elements in the first column of the Routh scheme be different from zero and of the same sign.
