# A Control Theory Perspective

In this post, we’ll talk about the trace parameterization of nonnegative Hermitian trigonometric polynomials, providing a proof which depends on the Kalman-Yakubovich-Popov lemma. This follows closely along the lines of Chapter 2, Section 2.5 of Dumitrescu’s book “Positive Trigonometric Polynomials and Signal Processing Applications” (Springer, 2007).

Abstract

The trace parameterization of a Hermitian trigonometric polynomial which is nonnegative on the unit circle is an analog of the Riesz-Fejér theorem, which connects the convex set of trigonometric polynomials which are nonnegative on the unit circle to the set of positive semidefinite matrices by means of causal polynomials. For this reason, the trace parameterization is often useful in computational settings, as it allows one to bring the methods of semidefinite programming to bear on a problem involving these polynomials. We sketch an equivalent formulation of the trace parameterization, derived instead from the control-theoretic Kalman-Yakubovich-Popov lemma. This reveals a surprising connection between Hermitian trigonometric polynomials which are nonnegative on the unit circle and positive real transfer functions of linear time-invariant control systems with linear time-invariant feedback.

Basic Harmonic Analysis

Let us first establish some of the relevant Harmonic analysis terminology, notation, and definitions before proceeding. First, let us define the general object with which we will concern ourselves.

Definition (Trigonometric Polynomial)

Let $R(x) \in \mathbb{C}[x]$. We say that $R$ is a trigonometric polynomial if

$\displaystyle R(x) = a_0 + \sum_{k=1}^n a_k \cos(kx) + i\sum_{k=1}^n b_k \sin(kx), \enskip x \in \mathbb{R}.$

Using Euler’s formula, this can equivalently be seen as a polynomial $R(z) \in \mathbb{C}_n[z]$ given by

$\displaystyle R(z) = \sum_{k=-n}^n r_kz^{-k}, \enskip z \in \mathbb{C}.$

A specific subset of trigonometric polynomials will be the primary item on which we focus in this post.

Definition (Hermitian Trigonometric Polynomial)

If $R(z) \in \mathbb{C}_n[z]$, we say that $R$ is a Hermitian trigonometric polynomial if

$\displaystyle R(z) = \sum_{k=-n}^n r_kz^{-k}$

and $r_{-k}=r^*_k$.

Letting $r_k = \rho_ke^{i\theta_k}$, the final condition above then says that $r_{-k} = \rho_ke^{-i\theta_k}$. This allows us to see the symmetry which makes these polynomials particularly interesting, namely:

$R(z) = \rho_0 e^{i\theta_0} + \sum_{k=1}^n \left ( \rho_ke^{i\theta_k} z^{-k}+\rho_ke^{-i\theta_k} z^{k} \right ).$

It can be shown that Hermitian trigonometric polynomials are real-valued as well. For $u_k,v_k \in \mathbb{R}$ with $u_{-k} = u_k$ and $v_{-k} = -v_k$, we may write the complex coefficients of $R(z)$ as $r_k = u_k+iv_k$. This then allows us to see that the following holds for values on the unit circle:

$R(e^{i\theta}) = \sum_{k=-n}^n (u_k+iv_k)e^{-ik\theta} = u_0+2\sum_{k=1}^n u_k \cos(k\theta) + 2 \sum_{k=1}^n v_k\sin(k\theta).$

So, it is indeed the case that Hermitian trigonometric polynomials are real-valued.
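As a quick sanity check, the realness of a Hermitian trigonometric polynomial on the unit circle can be verified numerically. This is a minimal sketch with arbitrary illustrative coefficients (not tied to any particular example in the text):

```python
import numpy as np

# Hypothetical coefficients r_1, ..., r_n; r_0 must be real, and we set r_{-k} = conj(r_k).
rng = np.random.default_rng(0)
n = 3
r_pos = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # r_1, ..., r_n
r0 = 1.7  # r_0 is real for a Hermitian trigonometric polynomial

def R(z):
    """Evaluate R(z) = sum_{k=-n}^{n} r_k z^{-k} with r_{-k} = conj(r_k)."""
    val = r0
    for k in range(1, n + 1):
        val += r_pos[k - 1] * z ** (-k) + np.conj(r_pos[k - 1]) * z ** k
    return val

# On the unit circle z = e^{i theta}, the imaginary part vanishes (up to roundoff).
for theta in np.linspace(0, 2 * np.pi, 7):
    assert abs(R(np.exp(1j * theta)).imag) < 1e-12
```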

A portion of a Hermitian trigonometric polynomial is, itself, a member of another important class of polynomials. In fact, this class allows polynomials such as those we’ve seen above to be written in a very nice way.

Definition (Causal Polynomial)

Let $\displaystyle H(z) \in \mathbb{C}_n[z]$. We say that $H$ is a causal polynomial if

$H(z) = \sum_{k=0}^n h_k z^{-k}.$

That is, causal polynomials are simply polynomials with no $z^k$ terms for $k > 0$.

Notice then that the causal part of $R(z)$ as given above (which we denote by $R_+(z)$) is simply

$R_+(z) = \frac{r_0}{2} + \sum_{k=1}^n r_kz^{-k}.$

As another notational item, for a polynomial $p(z) = \sum_{k=0}^n p_k z^{k} \in \mathbb{C}[z]$, let us write $p^*(z) = \sum_{k=0}^n p^*_k z^{k}$.

Now, with this notation in mind, note that — as $r_0 \in \mathbb{R}$ for $R(z)$ a Hermitian trigonometric polynomial — we may observe

$R(z) = R_+(z) + R_+^*(z^{-1}).$

So, Hermitian trigonometric polynomials are necessarily the sum of two causal polynomials. Later, we will see that the nonnegative ones are also products of causal polynomials, demonstrating that these two classes of polynomials are indeed intimately linked.
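The decomposition into causal parts can likewise be checked numerically; on the unit circle, $R_+^*(z^{-1})$ is simply the complex conjugate of $R_+(z)$. The coefficients here are again hypothetical:

```python
import numpy as np

# Illustrative coefficients (hypothetical): r_0 real, r_1..r_n arbitrary complex.
r = [2.0, 0.5 - 0.3j, -0.1 + 0.8j]  # r_0, r_1, r_2
n = len(r) - 1

def R_plus(z):
    """Causal part: R_+(z) = r_0/2 + sum_{k=1}^n r_k z^{-k}."""
    return r[0] / 2 + sum(r[k] * z ** (-k) for k in range(1, n + 1))

def R(z):
    """Full polynomial, using r_{-k} = conj(r_k)."""
    return r[0] + sum(r[k] * z ** (-k) + np.conj(r[k]) * z ** k for k in range(1, n + 1))

# R(z) = R_+(z) + R_+^*(z^{-1}); on |z| = 1 the second term is conj(R_+(z)).
z = np.exp(1j * 1.2)
assert abs(R(z) - (R_plus(z) + np.conj(R_plus(z)))) < 1e-12
```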

Basic Control Theory

Control theory broadly concerns itself with control-parameterized dynamical systems of ODE’s of the form

$\dot{x}=f(x(t),u(t))$

where $\dot{x}$ denotes $\frac{dx(t)}{dt}$ and $f$ is a sufficiently smooth vector field (which generally depends on the context of the problem). We say that $x(t)$ is the state vector, which is point-wise in $\mathbb{R}^n$, and $u(t)$ is the control vector, which is point-wise in $\mathbb{R}^m$. Both the state vector and control vector can be thought of as bounded, continuous functions of time as well (for our purposes, at least), and we generally consider dynamics which are time-invariant (i.e. autonomous) and finite dimensional.

Considering a fixed initial control $u$ and some initial value problem $\dot{x}=f(x,u), \quad x(0)=x_0$, if the correct assumptions are made (i.e. satisfying Picard-Lindelöf or the like), we can (in theory) find a solution $\Phi_{x_0}(t)$. This is essentially the main problem addressed by the theory of ODE’s. Control theory, very generally, is concerned with the reverse problem: Given a desired trajectory $\Phi_{x_0}(t)$, we seek to find a control vector field $u$ such that $\Phi_{x_0}(t)$ is a solution to the IVP.

Frequently, we will also concern ourselves with systems that have feedback and controls which may depend on that feedback. Intuitively, feedback can be thought of as a measurement of part of the state of the system — think of the cruise control in a car, which accelerates or decelerates in response to the measured speed of the vehicle and the present control being applied. More precisely, let us make the following definitions:

Definition (Open Loop and Closed Loop Systems)

Consider the autonomous dynamical control system governed by ODE’s with feedback

$\dot{x}=f(x(t),u(t,y)), \enskip y=h(x(t)) \quad \textrm{with state vector} \enskip x(t)\in \mathbb{R}^n, \enskip \textrm{control} \enskip u(t,y) \in \mathbb{R}^m, \enskip \textrm{and feedback} \enskip y \in \mathbb{R}^p$

We say such a controller $u(t,y)$ is a state-feedback control, and say that such a system is a closed loop system. A system in which the controller $u(t)$ does not depend on the feedback is called an open loop system.

We will consider a particularly nice class of control systems called linear, time invariant systems, which we will now define.

Definition (Linear Time-Invariant Systems)

An autonomous dynamical control system governed by ODE’s with feedback of the form

$\dot{x} = Ax(t) + Bu(t,y), \qquad y=Cx(t)+Du(t)$

where $A,B,C,D$ are appropriately sized matrices, is said to be a linear time-invariant system.

For these systems, we will be particularly concerned with two concepts that are essentially dual (and are dual in general, not just in the linear case): controllability and observability. Before we see this link, let us first build up to each term’s respective definitions. First, we will need the notion of the reachable set.

Definition (Reachable Set)

Consider the autonomous dynamical control system governed by ODE’s

$\dot{x}=f(x(t),u(t)) \quad \textrm{with state vector} \enskip x(t)\in \mathbb{R}^n, \enskip \textrm{and control} \enskip u(t) \in \mathbb{R}^m.$

Given a fixed $x_0 \in \mathbb{R}^n$ and control $u(t)$ defined for $t \geq 0$, denote by $\Phi_t(x_0,u)$ the solution to the IVP

$\dot{x}=f(x(t),u(t)), \quad x(0)=x_0.$

We define the reachable set at $x_0$ to be

$\mathscr{R}(x_0) = \left \{ x \in \mathbb{R}^n: \exists T \geq 0 \enskip \textrm{and input trajectory} \enskip u(t) \enskip s.t. \enskip \Phi_T(x_0,u)=x \right \}.$

That is, the reachable set at $x_0$ is the set of all states to which the system can be steered, beginning at $x_0$, in a finite amount of time. Of particular interest is the case when we can steer the system anywhere we would like, which leads to the concept of controllability.

Definition (Controllability)

Consider the autonomous dynamical control system governed by ODE’s

$\dot{x}=f(x(t),u(t,y)), \enskip \textrm{with state vector} \enskip x(t)\in \mathbb{R}^n, \enskip \textrm{control} \enskip u(t,y) \in \mathbb{R}^m$

We say the system is controllable at $x_0$ if $\mathscr{R}(x_0)=\mathbb{R}^n$. If the system is controllable at every $x_0 \in \mathbb{R}^n$, we say the system is controllable.

Essentially, the internal state of a controllable system can be moved from any initial state to any final state in finite time — it is both an intuitive notion and a rather strong condition!

The notion of observability can be seen in a relatively concrete fashion as well. Broadly, observability is a measure of how much of the internal state of a system can be accurately inferred from the output. Think of a nice thermostat — if you set the temperature to a certain level, the system will keep the room at that level by turning the heat on and off in response to its internal thermometer, which (barring any strange lacunas in the air flow of the room) precisely yields a current measurement of the internal state of the system. More precisely, let us make the following definition for a simple class of systems with which we will concern ourselves:

Definition (Observability)

Consider the autonomous dynamical control system governed by ODE’s

$\dot{x}=f(x(t),u(t,y)), \enskip y=h(x(t)) \quad \textrm{with state vector} \enskip x(t)\in \mathbb{R}^n, \enskip \textrm{control} \enskip u(t,y) \in \mathbb{R}^m, \enskip \textrm{and feedback} \enskip y \in \mathbb{R}^p$

We say the system is observable if, at any time $T$ and under any sequence of valid control vectors $\left \{u_i(t,y) \right \}_{i=0}^r$ and states $\left \{x_k(t)\right \}_{k=0}^l$ applied to the system during the interval $[0,T]$, the internal state of the system can be recovered in finite time from only the data yielded by $y$.

In more colloquial terms, this says that the behavior of the entire system — no matter what fashion it has been steered in previously — can be determined at any time, and in finite time, by the output of the system alone. The link between observability and controllability can be seen rather naturally in this context: Controllability allows one to steer the internal state of a system anywhere one wishes in a finite amount of time irrespective of the starting point, while observability allows one to determine the internal state of a system in a finite amount of time irrespective of the previous steering.

For the case of linear, time-invariant systems, controllability and observability have particularly nice characterizations which make this duality quite clear. The results can be summarized by the following theorems:

Theorem (Kalman Controllability Criteria)

Consider the $n^{th}$ order linear time-invariant control system with feedback of the form

$\dot{x} = Ax(t) + Bu(t,y), \qquad y=Cx(t)+Du(t).$

Then, the system is controllable if and only if the Kalman matrix (or controllability matrix) given by

$K(A,B) = \begin{bmatrix} B & AB &A^2B &\hdots & A^{n-1}B \end{bmatrix}$

has full row rank (i.e. is of rank $n$). If this holds, we say that $(A,B)$ is a controllable pair.

(Note that feedback was not needed in this definition. We, however, opt to include the feedback to stress that this result does indeed also apply to systems with feedback, which will be our primary interest here.)

This result follows from the fact that the range of the Kalman matrix is the reachable set from $0$, i.e. $\mathscr{R}(0) = \textrm{range}(K(A,B))$. While it can be shown without much work that $\mathscr{R}(0) \subseteq \textrm{range}(K(A,B))$, we must construct the so-called controllability Gramian of $(A,B)$ to achieve the desired equality. This is outside the scope of our interests here, so we will omit the proof at present.
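The Kalman criterion is straightforward to check in code. Below is a minimal sketch using a small, illustrative chain-of-integrators system (the matrices are our own example, not from the text):

```python
import numpy as np

def kalman_matrix(A, B):
    """Controllability matrix K(A,B) = [B, AB, A^2 B, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: a chain of integrators driven from the last state.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

K = kalman_matrix(A, B)
assert np.linalg.matrix_rank(K) == 3  # full row rank: (A, B) is a controllable pair
```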

Now, we will see that a similar condition exists for observability.

Theorem (Observability Criteria)

Consider the $n^{th}$ order linear time-invariant control system with feedback of the form

$\dot{x} = Ax(t) + Bu(t,y), \qquad y=Cx(t)+Du(t).$

Then, the system is observable if and only if the observability matrix given by

$O(A,C) = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}$

has rank $n$ (i.e. full column rank). If this holds, we say that $(A,C)$ is an observable pair.

Why this condition is necessary is perhaps less clear than the controllability criterion, and has to do with placing the eigenvalues of $A-LC$ in the left half of the complex plane by a suitable choice of $L$ in order to make the estimation error of a so-called Luenberger observer go to zero as $t \rightarrow \infty$. We will not delve into this here, as it is somewhat lengthy. The basic gist can be summarized as saying that, if $n$ rows of the observability matrix are linearly independent, then each of the $n$ states of the system can be obtained through linear combinations of the output variable $y$.
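The observability criterion admits the same kind of numerical check; note the duality with the controllability matrix, $O(A,C)^T = K(A^T, C^T)$. The matrices below are again our own illustrative example:

```python
import numpy as np

def observability_matrix(A, C):
    """O(A,C) = [C; CA; CA^2; ...; CA^{n-1}], stacked as row blocks."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])  # we only measure the first state

O = observability_matrix(A, C)
assert np.linalg.matrix_rank(O) == 3  # rank n: (A, C) is an observable pair
```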

Finally, we must define the notion of a transfer function. In our context, a transfer function is simply a representation of the relation between the input and the output of a linear time-invariant system in terms of the temporal frequency. More precisely, in our case, the transfer function $\mathbf{G}(s)$ is the linear mapping of the Laplace transform of the input to the Laplace transform of the output. Let us now demonstrate what we mean.
Let $\mathbf{V}(s) = \mathcal{L}\left \{v(t)\right \} = \int_{0}^\infty v(t)e^{-st}dt$ denote the usual Laplace transform of a function $v(t)$ (where $s$ is used here to represent a complex variable instead of the standard $z$ purely by historical convention).
Taking the Laplace transform of $\dot{x} = Ax(t)+Bu(t)$ yields

$s\mathbf{X}(s) - x(0) = A\mathbf{X}(s)+B\mathbf{U}(s).$

Thus,

$\mathbf{X}(s) = (sI-A)^{-1}x(0)+(sI-A)^{-1}B\mathbf{U}(s).$

Substituting this into the output equation, we obtain

$\mathbf{Y}(s) = C\left ( (sI-A)^{-1}x(0)+(sI-A)^{-1}B\mathbf{U}(s) \right ) + D\mathbf{U}(s).$

Thus, assuming zero initial conditions (i.e. $x(0) = 0$), if $\mathbf{G}(s)$ is the transfer function, it is a linear mapping of $\mathbf{U}(s)$ to $\mathbf{Y}(s)$, and we arrive at the following definition:

Definition (Transfer Function of a Linear, Time-Invariant System)

Consider the $n^{th}$ order linear time-invariant control system with feedback of the form

$\dot{x} = Ax(t) + Bu(t,y), \qquad y=Cx(t)+Du(t).$

Then, the transfer function of the system is the linear mapping of $\mathbf{U}(s)$ to $\mathbf{Y}(s)$ given by

$\mathbf{G}(s) = C(sI-A)^{-1}B+D.$

This, combined with the rest of the terminology given here, turns out (surprisingly) to be enough to give an alternate derivation of the trace parameterization of Hermitian trigonometric polynomials.
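To make the definition concrete, here is a small evaluation of $\mathbf{G}(s)$ for a hypothetical first-order system $\dot{x} = -2x + u$, $y = x$, whose transfer function is $1/(s+2)$:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Illustrative first-order system: x' = -2x + u, y = x  =>  G(s) = 1/(s + 2).
A = np.array([[-2.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

s = 1.0 + 1.0j
assert np.allclose(transfer_function(A, B, C, D, s), 1.0 / (s + 2.0))
```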

Trace Parameterization:  A Harmonic Analysis Perspective

Let us begin with some notation. For shorthand, we will denote the vector containing the canonical basis elements for a polynomial of degree $n$ by

$\psi_n(z) = \begin{bmatrix} 1 &z &z^2 &\hdots &z^n \end{bmatrix}^T.$

Noting that $\psi_n(z) \in \mathbb{C}[z]^{(n+1) \times 1}$, a familiarity with basic linear algebra allows us to see that the action of a vector of dimension $1 \times (n+1)$ (i.e. a linear functional) on $\psi_n(z)$ generates a polynomial. That is, if $b_n(z) = \sum_{k=0}^n b_k q_k(z) \in \mathbb{C}[z]^{1\times (n+1)}$ where $q_k(z) \in \mathbb{C}[z]$ for each $k$, then a new polynomial $p(z)$ is generated by the following action:

$\displaystyle b_n(z)\psi_n(z) = \sum_{i=0}^n b_iq_i(z)z^i = p(z). \qquad\qquad(*)$

The standard proof of the Riesz-Fejér theorem uses this observation precisely to characterize nonnegative Hermitian trigonometric polynomials. The theorem is stated as follows:

Theorem (Riesz-Fejér)

Let $R(z) \in \mathbb{C}[z]$ be a Hermitian trigonometric polynomial. Then, $R$ is nonnegative on the unit circle if and only if there exists a causal polynomial

$\displaystyle H(z) = \sum_{k=0}^n h_kz^{-k}$

such that

$R(z) = H(z)H^*(z^{-1}).$

Then, for a nonnegative Hermitian trigonometric polynomial, it is useful to note here that if $z = e^{i\theta}$ (that is, if $z$ is restricted to the unit circle), the following holds:

$R(e^{i\theta}) = H(e^{i\theta})H^*(e^{-i\theta}) = H(e^{i\theta})\overline{H(e^{i\theta})} = \left | H(e^{i\theta}) \right |^2.$

That is, all nonnegative Hermitian trigonometric polynomials are the squared moduli of causal polynomials.

We may also note that, by expanding the product $H(z)H^*(z^{-1})$ and collecting the coefficient of $z^{-k}$ (with $r_{-k} = r_k^*$ supplying the remaining coefficients), we have that

$\displaystyle r_k = \sum_{i=k}^n h_ih_{i-k}^*.$
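This coefficient formula is just the (conjugate) autocorrelation of the sequence $h_0, \hdots, h_n$, which we can check numerically for a hypothetical causal polynomial:

```python
import numpy as np

# Hypothetical causal polynomial H(z) = sum h_k z^{-k}; its autocorrelation
# r_k = sum_{i=k}^n h_i conj(h_{i-k}) defines R(z) = H(z) H^*(1/z), which is
# nonnegative on the unit circle.
h = np.array([1.0, 0.4 - 0.2j, -0.3 + 0.1j])
n = len(h) - 1

r = np.array([np.sum(h[k:] * np.conj(h[:len(h) - k])) for k in range(n + 1)])

def R(z):
    val = r[0]
    for k in range(1, n + 1):
        val += r[k] * z ** (-k) + np.conj(r[k]) * z ** k
    return val

for theta in np.linspace(0, 2 * np.pi, 9):
    z = np.exp(1j * theta)
    # R(e^{i theta}) = |H(e^{i theta})|^2 >= 0
    H = np.sum(h * z ** (-np.arange(n + 1)))
    assert abs(R(z) - abs(H) ** 2) < 1e-12
    assert R(z).real >= -1e-12
```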

So, a characterization of all nonnegative Hermitian trigonometric polynomials is given by a choice of their respective causal polynomials. However, in some important computational circumstances, this characterization may not encode the coefficients $r_k$ in a particularly optimal way. For those circumstances in which the characterization of $r_k$ is indeed non-optimal, we may wish to consider a parameterization which relies on a particularly nice case of the observation $(*)$ — namely, when $p(z) = \psi_n^T(z^{-1})Q\psi_n(z)$. This leads to a definition:

Definition (Gram Matrix of a Hermitian Trigonometric polynomial)

A Hermitian matrix $Q \in \mathbb{C}^{(n+1) \times (n+1)}$ is called a Gram matrix associated with a Hermitian trigonometric polynomial $R \in \mathbb{C}_n[z]$ if

$R(z) = \psi_n^T(z^{-1})Q\psi_n(z).$

Moreover, we denote the set of all Gram matrices associated with $R(z)$ by $\mathcal{G}(R)$.

As suggested by the notation $\mathcal{G}(R)$, a Gram matrix associated with a given Hermitian trigonometric polynomial $R$ need not be unique.

Following the definition of a Gram matrix, we may now provide a theorem yielding the so-called trace parameterization of a Hermitian trigonometric polynomial $R(z)$, which allows for a different method of encoding the coefficients $r_k$.

Theorem (Trace Parameterization)

Let $R(z) \in \mathbb{C}_n[z]$ be a Hermitian trigonometric polynomial, and let $Q \in \mathcal{G}(R)$ be an associated Gram matrix of $R$. Then, the trace parameterization of $R(z)$ is given by

$\displaystyle r_k = tr[\Theta_k Q] = \sum_{i=\max(0,k)}^{\min(n+k,n)} q_{i,(i-k)} , \quad k \in \left \{-n, -(n-1), \hdots, n\right \}$

where $\Theta_k$ is the elementary Toeplitz matrix with $1$‘s along the $k^{th}$ diagonal and $0$‘s elsewhere.

Proof

First, let us observe that the following holds:

$R(z) = \psi_n^T(z^{-1})Q\psi_n(z) = tr[\psi_n(z)\psi_n^T(z^{-1})Q].$

Note now that

$\psi_n(z)\psi_n^T(z^{-1}) = \begin{bmatrix} 1 &z^{-1} &\hdots &z^{-n} \\ z & 1 &\ddots &z^{-n+1} \\ \vdots &\ddots &\ddots &\vdots \\ z^n &z^{n-1} &\hdots &1 \end{bmatrix} = \sum_{k=-n}^n \Theta_k z^{-k}.$

Combining, this yields:

$R(z) = tr\left [ \sum_{k=-n}^n \Theta_k z^{-k} Q \right ] = \sum_{k=-n}^n tr[\Theta_k Q]z^{-k},$

which does indeed yield the desired form of the coefficients $r_k$. $\blacksquare$
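The trace parameterization can be checked numerically. Below we take the rank-one Gram matrix $Q = hh^H$ arising from a hypothetical causal polynomial (so that $Q$ is automatically positive semidefinite) and confirm that the coefficients recovered via $r_k = tr[\Theta_k Q]$ reproduce $\psi_n^T(z^{-1})Q\psi_n(z)$ on the unit circle:

```python
import numpy as np

def theta(k, m):
    """Elementary Toeplitz matrix with 1's on the k-th diagonal of an m x m matrix."""
    return np.diag(np.ones(m - abs(k)), k)

# A positive semidefinite Gram matrix: Q = h h^H for a causal polynomial H.
h = np.array([1.0, 0.4 - 0.2j, -0.3 + 0.1j])
n = len(h) - 1
Q = np.outer(h, np.conj(h))

# Recover the coefficients r_k = tr[Theta_k Q] and compare against
# evaluating R(z) = psi^T(1/z) Q psi(z) directly on the unit circle.
r = {k: np.trace(theta(k, n + 1) @ Q) for k in range(-n, n + 1)}

z = np.exp(1j * 0.7)
psi = z ** np.arange(n + 1)
R_gram = np.conj(psi) @ Q @ psi          # psi^T(z^{-1}) Q psi(z) for |z| = 1
R_trace = sum(r[k] * z ** (-k) for k in range(-n, n + 1))
assert abs(R_gram - R_trace) < 1e-12
```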

A keen observer might now conjecture that there may be a connection between the non-negativity of $R(z)$ and the properties of a given Gram matrix $Q$ associated with $R(z)$. This is indeed the case, and can be summarized by the following theorem (whose proof we omit, as this is not our central focus):

Theorem (Trace Parameterization of Nonnegative Trigonometric Polynomials)

Let $R(z) \in \mathbb{C}_n[z]$ be a Hermitian trigonometric polynomial. Then, $R$ is nonnegative on the unit circle if and only if there exists a positive semidefinite matrix $Q \in \mathcal{G}(R)$.

In essence, the characterizations of trigonometric polynomials which are nonnegative on the unit circle given by the trace parameterization and the Riesz-Fejér theorem yield a link between those polynomials and positive semidefinite matrices. This is particularly useful in a computational setting, as both of these sets are convex (a fact which can be easily checked, but which we will not demonstrate here). As convexity is essential for many optimization methods (semidefinite programming in particular), this sheds light on the comment that the trace parameterization may be a more useful way to encode these polynomials in some contexts.
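One direction of the theorem is immediate to verify numerically: for $z = e^{i\theta}$, the Gram representation is the quadratic form $\psi_n^H Q \psi_n$, which is nonnegative whenever $Q$ is positive semidefinite. A quick sketch with a randomly generated PSD matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Any positive semidefinite Q yields R(e^{i theta}) = psi^H Q psi >= 0:
# build a random PSD matrix and check nonnegativity on a grid of the circle.
n = 4
M = rng.standard_normal((n + 1, n + 1)) + 1j * rng.standard_normal((n + 1, n + 1))
Q = M @ M.conj().T  # positive semidefinite by construction

for theta in np.linspace(0, 2 * np.pi, 50):
    psi = np.exp(1j * theta) ** np.arange(n + 1)
    R_val = np.conj(psi) @ Q @ psi
    assert R_val.real >= -1e-9 and abs(R_val.imag) < 1e-9
```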

Trace Parameterization:  A Control Theory Perspective

Now, we will show that the trace parameterization of Hermitian trigonometric polynomials can, in fact, be derived from a control theory perspective. This relies on the so-called Kalman-Yakubovich-Popov lemma (or positive real lemma, as it is sometimes called). A specific, nicer case of the lemma — which is important enough that we will think of it as a theorem here — can be stated as follows:

Theorem (Kalman-Yakubovich-Popov Lemma for Controllable and Observable Systems)

Consider the $n^{th}$ order linear time-invariant control system with feedback of the form

$\dot{x} = Ax(t) + Bu(t), \qquad y=Cx(t)+Du(t).$

Then, if $(A,B)$ is a controllable pair and $(A,C)$ is an observable pair, the transfer function $\mathbf{G}(s) = C(sI-A)^{-1}B+D$ is positive real (i.e. $\mathrm{Re}\left ( \mathbf{G}(s) \right ) \geq 0$) if and only if there exists a positive semidefinite matrix $P$ such that the matrix

$Q = \begin{bmatrix} P-A^TPA & C^T-A^TPB \\ C-B^TPA &(D+D^T)-B^TPB \end{bmatrix}$

is positive semidefinite.

To use this, let us note that the causal part of a Hermitian trigonometric polynomial which is nonnegative on the unit circle is positive real, since $R(e^{i\theta}) = 2\,\mathrm{Re}(R_+(e^{i\theta})) \geq 0$. Let $R(z)\in \mathbb{C}_n[z]$ be such a polynomial. We now will place this causal part of $R(z)$ into a linear time-invariant control system with $(A,B)$ a controllable pair and $(A,C)$ an observable pair. That is, we realize a controllable and observable linear time-invariant system such that $\mathbf{G}(z)=R_+(z)$. How this is done is slightly involved (one may utilize the so-called Rosenbrock system matrix to realize the desired transfer function), but for our purposes may be achieved by setting

$A = \Theta_1 = \begin{bmatrix} 0 & 1 & \hdots & 0 \\ \vdots & \ddots &\ddots &\vdots \\ \vdots & \ddots &\ddots &1 \\ 0 & \hdots & \hdots &0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},$

$C= \begin{bmatrix} r_n &\hdots &r_2 &r_1 \end{bmatrix}, \quad D = \frac{r_0}{2}.$
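That this choice of $(A, B, C, D)$ realizes $R_+(z)$ as its transfer function can be verified numerically; the coefficients $r_k$ below are hypothetical illustration values:

```python
import numpy as np

# Realization of the causal part R_+(z) as a transfer function: an illustrative
# check of the choice A = Theta_1, B = e_n, C = [r_n, ..., r_1], D = r_0 / 2.
r = np.array([2.0, 0.5 - 0.3j, -0.1 + 0.8j, 0.2 + 0.1j])  # r_0, ..., r_n (hypothetical)
n = len(r) - 1

A = np.diag(np.ones(n - 1), 1)            # shift matrix Theta_1 (n x n)
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = r[:0:-1].reshape(1, -1)               # [r_n, ..., r_1]
D = r[0] / 2

def G(z):
    """Transfer function C (zI - A)^{-1} B + D evaluated at z."""
    return (C @ np.linalg.solve(z * np.eye(n) - A, B) + D).item()

def R_plus(z):
    return r[0] / 2 + sum(r[k] * z ** (-k) for k in range(1, n + 1))

z = 1.3 * np.exp(1j * 0.4)
assert abs(G(z) - R_plus(z)) < 1e-12
```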

Then, for some positive semidefinite matrix $P$, the matrix $Q$ in the Kalman-Yakubovich-Popov lemma for controllable and observable systems is positive semidefinite and becomes

$Q = \begin{bmatrix} P-\Theta_1^TP\Theta_1 & C^T-\Theta_1^TPB \\ C-B^TP\Theta_1 & r_0-B^TPB \end{bmatrix}.$

In fact, this turns out to be a Gram matrix of $R(z)$, which can be easily verified by simply computing $\psi_n^T(z^{-1})Q\psi_n(z)$.

Now, we may also note that

$r_k = tr[\Theta_k Q]$

for $k \in \left \{0, 1, \hdots, n\right \}$ (and $r_{-k} = tr[\Theta_{-k} Q] = r_k^*$ for the remaining coefficients). But this then yields that

$R(z) = \sum_{k=-n}^n tr[\Theta_k Q]z^{-k},$

which is precisely the trace parameterization of Hermitian trigonometric polynomials we had previously derived!

References

1. B. Dumitrescu, Positive Trigonometric Polynomials and Signal Processing Applications. Springer (2007).
2. R. Lozano, B. Brogliato, O. Egeland, B. Maschke, Dissipative Systems Analysis and Control: Theory and Applications. Springer (2000).
3. F. Jafari, Topics in Harmonic Analysis. University of Wyoming, Lecture Notes, Spring 2017 (currently unpublished).
4. D. Meyer, Nonlinear Trajectory Generation and Control: Course Notes from an Introduction. University of Wyoming, Lecture notes Spring 2017 (currently unpublished).