# Linear time-invariant systems

An LTI system $$S$$ fulfills two properties:

• Linearity: $$S(\alpha x_1 + \beta x_2) = \alpha S(x_1) + \beta S(x_2)$$
• Time invariance: if $$S(x(t)) = y(t)$$, then $$S(x(t - \tau)) = y(t - \tau)$$ for any shift $$\tau$$

The output $$y$$ of an LTI system is defined as the linear convolution of the impulse response and the input signal.

For a discrete system,

\begin{equation} S(x[n]) = \sum_{i = -\infty}^{\infty} x[i] h[n-i] = (x * h)[n] \end{equation}
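As a sanity check, the sum above can be evaluated directly. The sketch below is illustrative: the signals and the `conv` helper are made up here, and NumPy's `np.convolve` is used only to cross-check the result.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # illustrative input signal
h = np.array([0.5, 0.25])       # illustrative impulse response

def conv(x, h):
    """Direct evaluation of y[n] = sum_i x[i] h[n - i] for finite signals."""
    N = len(x) + len(h) - 1
    y = np.zeros(N)
    for n in range(N):
        for i in range(len(x)):
            if 0 <= n - i < len(h):   # h is zero outside its support
                y[n] += x[i] * h[n - i]
    return y

# Agrees with NumPy's built-in linear convolution
assert np.allclose(conv(x, h), np.convolve(x, h))
```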

## 1. Exponentials as eigenfunctions for the continuous case

We show that the exponential functions $$z(t) = e^{st}$$, where $$s \in \mathbb{C}$$, are eigenfunctions of any continuous-time LTI system.

Consider a system $$S$$ mapping a signal $$x(t) = z(t)$$ to $$y(t)$$. Then

\begin{align*} y(t) &= (x * h)(t) \\ &= \int_{\tau = -\infty}^{\infty} z(t - \tau) h(\tau) \, d\tau \\ &= \int_{\tau = -\infty}^{\infty} e^{s (t - \tau)} h(\tau) \, d\tau \\ &= \int_{\tau = -\infty}^{\infty} e^{st} e^{-s\tau} h(\tau) \, d\tau && \text{Exponent laws} \\ &= e^{s t} \int_{\tau = -\infty}^{\infty} e^{-s\tau} h(\tau) \, d\tau && \text{Term does not depend on } \tau \\ &= z(t) H(s) && \text{Substitute } z \text{ back} \\ \end{align*}

where $$H(s) = \int_{\tau = -\infty}^{\infty} e^{-s\tau} h(\tau) \, d\tau$$.

The integral on the right is a constant scaling factor. This means that after putting an exponential into the system, we get back the same exponential, scaled in amplitude and phase but never shifted in frequency.

Note that $$H(s)$$ is, in general, the two-sided Laplace transform of the impulse response.

This also implies that if $$s = j\omega$$, i.e. $$z(t)$$ represents a sinusoid, then the eigenvalue is given by the Fourier transform of the impulse response.

In either case, $$H(s)$$ is known as the transfer function.
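The eigenfunction property can be checked numerically in discrete time. The sketch below drives an FIR system with $$x[n] = e^{j\omega n}$$ and confirms that, away from the edges, the output is the input scaled by $$H(e^{j\omega}) = \sum_k h[k] e^{-j\omega k}$$. The particular `h` and `w` are arbitrary illustrative values.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])   # illustrative FIR impulse response
w = 0.7                         # illustrative frequency, rad/sample
n = np.arange(64)
x = np.exp(1j * w * n)          # complex exponential input

y = np.convolve(x, h)           # LTI system output

# Transfer function evaluated at w (DTFT of h)
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

# In the full-overlap region, the output is the same exponential scaled by H
assert np.allclose(y[len(h) - 1:len(n)], H * x[len(h) - 1:len(n)])
```

The slice skips the first `len(h) - 1` samples, where the finite-length convolution has not yet reached full overlap; an ideal (doubly infinite) input would satisfy the relation everywhere.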

## 2. Roots of unity as eigenvectors in discrete time

From the above, it might appear that every vector given by any exponential would be an eigenvector, i.e. would satisfy $$Cx = \lambda x$$ for a circulant matrix $$C$$.

This isn't the case in the discrete scenario. Because a circulant matrix is diagonalizable in the Fourier basis (a result stemming from the convolution theorem), we can only have $$N$$ eigenvectors for an $$N$$-point DFT,

where specifically, the eigenvectors are exactly

\begin{equation} v_i = [1, \omega^i, \omega^{2i}, \dots, \omega^{(N-1)i}] \end{equation}

for all indices $$0 \leq i \leq N-1$$.

Here, from the Fourier matrix, $$\omega = e^{j \frac{2\pi}{N}}$$, so $$\omega^{2i} = e^{j\frac{2\pi}{N} 2i}$$. Each eigenvector is a row of the DFT matrix, though calling it a row rather than a column is purely conventional, because the matrix is symmetric.

We can interpret each row of the DFT matrix (with entries of the form $$e^{j \frac{2\pi}{N} km}$$, where $$k$$ is the row index and $$m$$ is the column index) as really being a sequence of complex phasors rotating at frequency $$k$$. This is why, for an $$N$$-point DFT, there are exactly $$N$$ complex exponentials with different frequencies that are eigenvectors.
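A quick numerical check of this claim, sketched below with an arbitrary random `h`: each DFT-basis vector is an eigenvector of the circulant matrix built from `h` (the row-shift layout assumed here matches the $$3 \times 3$$ example in the proof outline below).

```python
import numpy as np

N = 8
h = np.random.default_rng(0).standard_normal(N)   # arbitrary impulse response

# Circulant matrix whose k-th row is h cyclically shifted right by k
C = np.array([np.roll(h, k) for k in range(N)])

w = np.exp(2j * np.pi / N)              # primitive N-th root of unity
for i in range(N):
    v = w ** (i * np.arange(N))         # i-th DFT-basis vector
    lam = (C @ v)[0] / v[0]             # candidate eigenvalue
    assert np.allclose(C @ v, lam * v)  # v really is an eigenvector of C
    # With this sign convention, the eigenvalue equals N * ifft(h)[i]
    assert np.allclose(lam, N * np.fft.ifft(h)[i])
```

Any vector `w ** (s * np.arange(N))` with non-integer `s` fails the assertion, illustrating that only these $$N$$ exponentials are eigenvectors.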

The larger the DFT matrix, the more exponentials we can fit into the basis set of the eigenspace. Note, however, that since roots of unity are modular, the exponentials can't keep growing to infinity: there is only a fixed space ($$2\pi$$ radians) around the complex plane at a fixed magnitude.

As we approach an ∞-point DFT, it turns out from the above result that all complex exponentials (every rotating phasor, regardless of frequency) are eigenvectors.

### Proof outline

Consider a simple $$3 \times 3$$ circulant matrix:

\begin{equation} \begin{bmatrix} h(0) & h(1) & h(2) \\ h(2) & h(0) & h(1) \\ h(1) & h(2) & h(0) \\ \end{bmatrix} \begin{bmatrix} \omega^0 \\ \omega^1 \\ \omega^2 \end{bmatrix} \end{equation}

The product is then

\begin{align*} \begin{bmatrix} \omega^0 h(0) + \omega^1 h(1) + \omega^2 h(2) \\ \omega^0 h(2) + \omega^1 h(0) + \omega^2 h(1) \\ \dots \\ \end{bmatrix} &= \begin{bmatrix} \omega^0 h(0) + \omega^1 h(1) + \omega^2 h(2) \\ \omega (\omega^{-1} h(2) + \omega^0 h(0) + \omega h(1)) \\ \dots \\ \end{bmatrix} && \text{Factor out root}\\ &= \begin{bmatrix} \omega^0 h(0) + \omega^1 h(1) + \omega^2 h(2) \\ \omega (\omega^2 h(2) + \omega^0 h(0) + \omega h(1)) \\ \dots \\ \end{bmatrix} && \text{3rd root of unity modular symmetry} \\ &= \begin{bmatrix} \omega^0 h(0) + \omega^1 h(1) + \omega^2 h(2) \\ \omega (\omega^0 h(0) + \omega^1 h(1) + \omega^2 h(2)) \\ \dots \\ \end{bmatrix} && \text{Rearranging terms} \\ &= \left(h(0) + \omega h(1) + \omega^2 h(2)\right) \cdot \begin{bmatrix} \omega^0 \\ \omega^1 \\ \omega^2 \end{bmatrix} \end{align*}

From this it is clear that the second entry of the product is $$\omega$$ times the first, and the same pattern holds down every row, so the vector is an eigenvector with eigenvalue $$h(0) + \omega h(1) + \omega^2 h(2)$$.
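The $$3 \times 3$$ case can also be verified directly; the values of $$h$$ below are illustrative.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                  # illustrative values
w = np.exp(2j * np.pi / 3)                     # primitive 3rd root of unity

# The circulant matrix from the proof outline
C = np.array([[h[0], h[1], h[2]],
              [h[2], h[0], h[1]],
              [h[1], h[2], h[0]]])

v = np.array([w**0, w**1, w**2])               # candidate eigenvector
lam = h[0] + w * h[1] + w**2 * h[2]            # the factored-out scalar

assert np.allclose(C @ v, lam * v)             # v is an eigenvector of C
```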