
## High resolution tools for spectral analysis

Research supported by AFOSR and NSF. Maintained by L. Ning and T.T. Georgiou.
## Introduction

Consider a discrete-time zero-mean stationary process $y(k)$, $k \in \mathbb{Z}$, and let $d\mu(\theta)$, $\theta \in (-\pi, \pi]$, represent its power distribution over frequency. Under suitable smoothness conditions, $d\mu(\theta) = f(\theta)\,d\theta$ with $f$ the power spectral density of the process. Spectral estimation refers to estimating $d\mu$ (or $f$) when only a finite-sized observation record of the time series is available. Different schools of thought have evolved over the years based on varying assumptions and formalisms. Classical methods came first, based on Fourier transform techniques and the periodogram, followed by the so-called modern spectral estimation methods such as the Burg method, MUSIC, ESPRIT, etc. The underlying mathematics relates to the theory of the Szegő orthogonal polynomials and the structure of Toeplitz covariances. In contrast, recently, the analysis of state covariance matrices, see e.g. [1, 2], suggested a general framework which allows even higher resolution. A series of generalized spectral estimation tools have been developed generalizing Burg, Capon, MUSIC, ESPRIT, etc. In the following, we overview some of these high resolution methods and the relevant computer software. Their use is shown via a representative academic example. We want to resolve two sinusoids with frequencies $\theta_1 = 1.3$ and $\theta_2 = 1.35$ based on noisy measurements
$$y(k) = a_0 w(k) + a_1 e^{i(\theta_1 k + \phi_1)} + a_2 e^{i(\theta_2 k + \phi_2)}, \quad k = 1, \ldots, N,$$
where $w(k)$ is unit-variance white noise, $\phi_1, \phi_2$ are random phases, and $N = 100$. The two frequencies are closer than the theoretical resolution limit $2\pi/N \approx 0.063$ of Fourier methods.

```
% SIGNAL = sinusoids + noise
% Setting up the signal parameters and time history
N=100; a0=1.8; a1=1.5; theta1=1.3; a2=2; theta2=1.35;
k=1:N; k=k(:);
y=a0*randn(N,1)+a1*exp(1i*(theta1*k+2*pi*rand))+a2*exp(1i*(theta2*k+2*pi*rand));
```
## FFT-based methods

We begin with two classical formulae.
The *correlogram* is
$$\hat f(\theta) = \sum_{k=-(N-1)}^{N-1} \hat r(k)\, e^{-ik\theta},$$
where $\hat r(k)$ denotes an estimate of the covariance lag $r(k) = E\{y(l+k)\overline{y(l)}\}$, whereas the *periodogram* is
$$\hat f(\theta) = \frac{1}{N}\Big|\sum_{k=1}^{N} y(k)\, e^{-ik\theta}\Big|^2.$$
The two coincide when the correlation lag is estimated via the (biased) sample average
$$\hat r(k) = \frac{1}{N}\sum_{l=1}^{N-k} y(l+k)\overline{y(l)}, \quad 0 \le k \le N-1,$$
where $\hat r(-k) = \overline{\hat r(k)}$. The periodogram can be efficiently computed using the fast Fourier transform (FFT).
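As a quick numerical sanity check, independent of the MATLAB toolbox used below, the following Python sketch verifies that the two formulae agree when the biased lag estimates are used (indexing from zero instead of one, which does not affect the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Biased covariance-lag estimates r_hat(k) = (1/N) sum_l y(l+k) conj(y(l))
r = np.array([np.sum(y[k:] * np.conj(y[:N - k])) / N for k in range(N)])

theta = 0.7  # an arbitrary frequency in [0, 2*pi)

# Correlogram: sum over lags -(N-1)..(N-1), using r_hat(-k) = conj(r_hat(k))
corr = r[0] + 2 * np.real(np.sum(r[1:] * np.exp(-1j * theta * np.arange(1, N))))

# Periodogram: |sum_k y(k) e^{-i k theta}|^2 / N
per = np.abs(np.sum(y * np.exp(-1j * theta * np.arange(N)))) ** 2 / N

assert np.isclose(corr, per)
```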
There is a variety of refinements of these methods (e.g., windowing and averaging). Using the above example, we pad the data with zeros up to length 2048. The corresponding
FFT-based spectrum is shown in the following figures.
Rudimentary code is displayed as a demonstration. A file to reproduce the following results can be downloaded. MATLAB also provides built-in routines for periodograms (e.g. `periodogram` and `pwelch` in the Signal Processing Toolbox).

```
% the fft-based spectra
NN=2048; th=2*pi*(0:NN-1)/NN;  % frequency grid aligned with the FFT bins
Y =abs(fft(y,NN))/sqrt(N);
Y = Y.^2;
plot(th,Y);
```
## Input-to-state filter and state covariance

The first step in explaining the high resolution spectral analysis tools is to consider the input-to-state filter
$$x(k+1) = A x(k) + B y(k),$$
where $A$ is an $n \times n$ stability matrix and $(A, B)$ a controllable pair. The filter transfer function is
$$G(z) = (zI - A)^{-1} B.$$
A positive semi-definite matrix $R$ is the state covariance of the above filter, i.e. $R = E\{x(k)x(k)^*\}$ for some stationary input
process $y$, if and only if
$$R - A R A^* = B H + H^* B^*$$
for some row-vector $H$. Starting from a finite number of samples, we denote by $\hat R$ the sample state covariance matrix. If $A$ is the shift matrix (ones on the subdiagonal, zeros elsewhere) and $B = [1, 0, \ldots, 0]'$, the filter is a tapped delay line: the state stacks the most recent input samples, and the corresponding state covariance is a Toeplitz matrix. Thus the state-covariance formalism subsumes the theory of Toeplitz matrices. The routine `cjordan` constructs a filter pair $(A,B)$ with poles concentrated at a selected point of the disc, `dlsim_complex` computes the sample state covariance from the data, and `Rsigma` evaluates the singular values of the input-to-state frequency response over the grid, indicating the selectivity of the filter.

```
% setting up filter parameters and the svd of the input-to-state response
thetamid=1.325;
[A,B]=cjordan(5,0.88*exp(thetamid*1i));
% obtaining state statistics
R=dlsim_complex(A,B,y');
sv=Rsigma(A,B,th);
plot(th(:),sv(:));
```
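The tapped-delay-line special case can be illustrated with a short standalone Python sketch (illustrative only, not part of the toolbox): with the shift matrix $A$ and $B = e_1$, the state simply holds the last $n$ input samples, so the sample state covariance is (Hermitian and approximately) Toeplitz:

```python
import numpy as np

n, M = 5, 40
rng = np.random.default_rng(1)
y = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Tapped delay line: A is the down-shift matrix, B = e_1
A = np.diag(np.ones(n - 1), -1).astype(complex)
B = np.zeros((n, 1), dtype=complex); B[0, 0] = 1.0

# Run the input-to-state filter x(k+1) = A x(k) + B y(k)
x = np.zeros((n, 1), dtype=complex)
X = []
for k in range(M):
    x = A @ x + B * y[k]
    X.append(x.copy())
X = np.hstack(X)

# After n steps the state holds the last n input samples (newest first)
assert np.allclose(X[:, -1].ravel(), y[::-1][:n])

# Sample state covariance R_hat = (1/M) sum_k x(k) x(k)^*
R = (X @ X.conj().T) / M
assert np.allclose(R, R.conj().T)  # Hermitian
```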
## Maximum entropy spectrum

The entropy
$$H(p) = -\int p(x) \log p(x)\, dx$$
of a probability density $p$ quantifies the uncertainty of the corresponding random variable. When $p$ is an $n$-dimensional Gaussian distribution with zero mean and covariance matrix $\Sigma$, the entropy becomes
$$H = \tfrac{1}{2}\log\det(2\pi e\, \Sigma).$$
For a zero-mean stationary Gaussian process $y(k)$, corresponding to an infinite-sized Toeplitz-structured covariance, the entropy "diverges". Thus, one uses instead the "entropy rate"
$$h = \lim_{n \to \infty} \big(H(\Sigma_n) - H(\Sigma_{n-1})\big),$$
where $\Sigma_n$ is the $n$-dimensional principal sub-matrix of the covariance matrix. The ratio $\det\Sigma_n/\det\Sigma_{n-1}$ converges to the optimal one-step prediction error variance, which, by the Szegő-Kolmogorov formula, is determined by $\frac{1}{2\pi}\int_{-\pi}^{\pi}\log f(\theta)\,d\theta$. Given state statistics $R$, the maximum entropy power spectrum solves
$$\max\Big\{ \int_{-\pi}^{\pi} \log f(\theta)\, d\theta \;:\; \frac{1}{2\pi}\int_{-\pi}^{\pi} G(e^{i\theta}) f(\theta) G(e^{i\theta})^*\, d\theta = R \Big\}.$$
The solution to this problem (see [2]) is
$$f_{\rm ME}(\theta) = \frac{B^* R^{-1} B}{\big|B^* R^{-1} G(e^{i\theta})\big|^2},$$
which in the classical Toeplitz case reduces to the Burg/autoregressive spectrum. The maximum entropy spectrum is obtained using the routine `me`:

```
% maximum entropy spectrum and Burg spectrum
me_spectrum=me(R,A,B,th);
plot(th,me_spectrum);
me_burg=pburg(y,5,th);
hold on;
plot(th,me_burg);
```
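The closed-form solution above can be checked independently of the toolbox. The Python sketch below evaluates it in the classical Toeplitz case (shift $A$, $B = e_1$, so $G(e^{i\theta}) = (e^{-i\theta}, \ldots, e^{-in\theta})'$), where the maximum entropy spectrum reproduces the exact spectrum of an AR(1) process; all names here are illustrative:

```python
import numpy as np

a, sw2 = 0.6, 1.0   # AR(1): y(k) = a*y(k-1) + w(k), var(w) = sw2
n = 4               # number of covariance lags used

# Exact Toeplitz covariance of the AR(1) process: r(m) = sw2 a^|m| / (1 - a^2)
r = sw2 / (1 - a**2) * a ** np.arange(n)
R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)], dtype=complex)

B = np.zeros((n, 1), dtype=complex); B[0, 0] = 1.0
Rinv = np.linalg.inv(R)

def G(theta):
    """Input-to-state transfer (zI - A)^{-1} B for the tapped delay line."""
    z = np.exp(1j * theta)
    return z ** -np.arange(1, n + 1).reshape(-1, 1)

def f_me(theta):
    num = (B.conj().T @ Rinv @ B).real.item()
    den = np.abs((B.conj().T @ Rinv @ G(theta)).item()) ** 2
    return num / den

# The ME spectrum matches the true AR(1) spectrum sw2 / |1 - a e^{-i theta}|^2
for theta in np.linspace(0, 2 * np.pi, 7):
    true_f = sw2 / np.abs(1 - a * np.exp(-1j * theta)) ** 2
    assert np.isclose(f_me(theta), true_f)
```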
## Subspace methods

We start with the Carathéodory-Fejér decomposition of a Toeplitz covariance
$$T = \sum_{k=1}^{r} a_k^2\, v(\theta_k) v(\theta_k)^* + \sigma^2 I,$$
where $v(\theta) = [1, e^{i\theta}, \ldots, e^{i(n-1)\theta}]'$. Accordingly, the power spectrum decomposes as
$$f(\theta) = \sum_{k=1}^{r} a_k^2\, \delta(\theta - \theta_k) + \sigma^2,$$
where $\delta$ is the Dirac function. The decomposition corresponds to a set of sinusoids superimposed on white noise. See MA decomposition for a decomposition corresponding to moving-average (MA) noise. The above result can be generalized to the case of state covariances [1]. More specifically, let $P$ be the unique solution to the Lyapunov equation
$$P = A P A^* + B B^*.$$
The matrix $P$ is the state covariance when the input is pure (unit-variance) white noise. Let now $R$ be an arbitrary state covariance, let $\sigma^2$ be the smallest eigenvalue of the matrix pencil $(R, P)$, and assume that the rank of $R - \sigma^2 P$ is $r$; then
$$R = \sum_{k=1}^{r} a_k^2\, G(e^{i\theta_k}) G(e^{i\theta_k})^* + \sigma^2 P,$$
where $G(e^{i\theta})$ is the generalization of $v(\theta)$. The subspace spectral analysis methods rely on the singular value decomposition
$$R - \sigma^2 P = U \Lambda U^*,$$
where $U$ is a unitary matrix and $\Lambda = {\rm diag}(\lambda_1, \ldots, \lambda_n)$ with $\lambda_1 \ge \cdots \ge \lambda_n$. Partition $U = [E_s,\ E_n]$, where $E_s$ and $E_n$ are the first $r$ and the last $n - r$ columns of $U$, respectively. Based on this decomposition, there are two ways we can proceed, generalizing the MUSIC and ESPRIT methods, respectively [P. Stoica, R.L. Moses, 1997].

## Noise subspace analysis

The columns of $E_n$ span the null space of $R - \sigma^2 P$, while each signal vector $G(e^{i\theta_k})$, $k = 1, \ldots, r$, is in the span of the columns of $E_s$. So the nonnegative trigonometric expression
$$\Pi(\theta) = G(e^{i\theta})^* E_n E_n^* G(e^{i\theta})$$
has roots at $\theta_k$, $k = 1, \ldots, r$. Given a sample state covariance matrix $\hat R$ and an estimate of the number of signals $r$, we let $\hat E_n$ be the matrix of singular vectors of the $n - r$ smallest singular values, and $\hat\Pi(\theta)$ the corresponding expression. Two possible generalizations of the MUSIC method are as follows. Spectral MUSIC: identify $\theta_k$, for $k = 1, \ldots, r$, as the values of $\theta$ where $1/\hat\Pi(\theta)$ achieves the $r$ highest local maxima. Root MUSIC: identify $\theta_k$, for $k = 1, \ldots, r$, as the angles of the $r$ roots of $\hat\Pi$ which have amplitude $\leq 1$ and are closest to the unit circle.
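The noise-subspace idea can be demonstrated in the classical Toeplitz setting with a small standalone Python sketch (illustrative only): given the exact covariance of two closely spaced spectral lines in white noise, the steering vectors at the true frequencies are orthogonal to the noise subspace, so the MUSIC pseudospectrum peaks exactly there:

```python
import numpy as np

n, th1, th2 = 8, 1.3, 1.35
a1, a2, s2 = 1.5, 2.0, 0.5

def v(theta):  # Vandermonde "steering" vector
    return np.exp(1j * theta * np.arange(n)).reshape(-1, 1)

# Exact covariance: two spectral lines plus white noise
R = (a1**2 * v(th1) @ v(th1).conj().T
     + a2**2 * v(th2) @ v(th2).conj().T
     + s2 * np.eye(n))

# Noise subspace: eigenvectors of the n - 2 smallest eigenvalues
w, U = np.linalg.eigh(R)   # eigh sorts eigenvalues in ascending order
En = U[:, :n - 2]

def pseudospectrum(theta):
    return 1.0 / np.linalg.norm(En.conj().T @ v(theta)) ** 2

# v(th1), v(th2) are orthogonal to the noise subspace, so the
# pseudospectrum blows up at the true frequencies
assert np.linalg.norm(En.conj().T @ v(th1)) < 1e-8
assert np.linalg.norm(En.conj().T @ v(th2)) < 1e-8
assert pseudospectrum(1.325) < pseudospectrum(th1 + 1e-6)
```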
## Signal subspace analysis

A signal subspace method relies on the fact that, for the pair $(A, B)$ and the matrix $E_s$ given above, there is a unique solution $(c, \Phi)$ to the matrix equation
$$A E_s = E_s \Phi + B c,$$
where $c$ is a $1 \times r$ row vector and $\Phi$ an $r \times r$ matrix. The eigenvalues of $\Phi$ are precisely $e^{i\theta_k}$ for $k = 1, \ldots, r$. If $R$ is a Toeplitz matrix and $(A, B)$ is given as above with $A$ a companion matrix, then we recover the ESPRIT result [P. Stoica, R.L. Moses, 1997]. The signal subspace estimate is computed using the routine `sm`:

```
% subspace method
[thetas,residues]=sm(R,A,B,2);
arrowb(thetas,residues); hold on;
Ac=compan(eye(1,6));
Bc=eye(5,1);
That=dlsim_complex(Ac,Bc,y');
[th_esprit,r_esprit]=sm(That,Ac,Bc,2); % ESPRIT spectral lines
arrowg(th_esprit,r_esprit);
```
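In the classical Toeplitz/companion case the signal-subspace computation reduces to the familiar rotational-invariance argument of ESPRIT: shifting a steering vector by one sample multiplies it by $e^{i\theta_k}$. The standalone Python sketch below (illustrative only, not the `sm` routine) recovers the two frequencies from the exact covariance:

```python
import numpy as np

n, r = 8, 2
th_true = np.array([1.3, 1.35])
amps2 = np.array([1.5**2, 2.0**2])

def v(theta):
    return np.exp(1j * theta * np.arange(n)).reshape(-1, 1)

# Exact covariance of two spectral lines in white noise
R = sum(a * v(t) @ v(t).conj().T for a, t in zip(amps2, th_true)) \
    + 0.5 * np.eye(n)

# Signal subspace: eigenvectors of the r largest eigenvalues
w, U = np.linalg.eigh(R)
Es = U[:, -r:]

# Rotational invariance: Es[1:] = Es[:-1] @ Phi (least squares), and
# the eigenvalues of Phi are e^{i theta_k}
Phi, *_ = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)
th_est = np.sort(np.angle(np.linalg.eigvals(Phi)))

assert np.allclose(th_est, np.sort(th_true), atol=1e-8)
```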
## Spectral envelope

Let $d\mu$ denote a spectral measure of the stochastic process $y$. The envelope of maximal spectral power is defined as
$$\rho(\theta) = \max\Big\{ \mu(\{\theta\}) \;:\; \frac{1}{2\pi}\int_{-\pi}^{\pi} G\, d\mu\, G^* = R \Big\},$$
where $R$ represents the state covariance and $G(e^{i\theta})$ is the input-to-state filter transfer function. In other words, $\rho(\theta)$ represents the maximal spectral "mass" that can be located at $\theta$ while remaining consistent with the covariance matrix. It can also be shown that
$$\rho(\theta) = \max\big\{ \lambda \;:\; R - \lambda\, G(e^{i\theta}) G(e^{i\theta})^* \geq 0 \big\},$$
where $G(e^{i\theta}) G(e^{i\theta})^*$ represents the state covariance for the sinusoidal input $e^{i\theta k}$. The optimal solution is
$$\rho(\theta) = \frac{1}{G(e^{i\theta})^* R^{-1} G(e^{i\theta})},$$
and is implemented in the routine `envlp`. For real-valued processes, whose spectra are symmetric with respect to the origin, a symmetrized version of the spectral envelope [1] can be similarly obtained:

```
% spectral envelope
rhohalf = envlp(R,A,B,th);
rho = rhohalf.^2;
plot(th,rho);
```
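The closed-form envelope is easy to check numerically in the Toeplitz case. The standalone Python sketch below (illustrative only, not the `envlp` routine) verifies that removing mass $\rho(\theta)$ at a spectral line keeps the covariance positive semi-definite, and that the envelope captures at least the power of a line actually present in the data:

```python
import numpy as np

n, th0 = 6, 1.3
def v(theta):
    return np.exp(1j * theta * np.arange(n)).reshape(-1, 1)

# Covariance of one spectral line of power p at th0 plus white noise
p, s2 = 2.0, 0.5
R = p * v(th0) @ v(th0).conj().T + s2 * np.eye(n)

def envelope(theta):
    """Maximal mass at theta consistent with R: 1 / (v^* R^{-1} v)."""
    g = v(theta)
    return 1.0 / (g.conj().T @ np.linalg.solve(R, g)).real.item()

# R - rho(th0) v v^* remains positive semi-definite (smallest eigenvalue
# is zero), and the envelope at th0 exceeds the line power p
rho = envelope(th0)
eigs = np.linalg.eigvalsh(R - rho * v(th0) @ v(th0).conj().T)
assert eigs.min() > -1e-9
assert rho > p
```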
## References

[1] T.T. Georgiou, "Spectral Estimation via Selective Harmonic Amplification," IEEE Transactions on Automatic Control, 2001.

[2] T.T. Georgiou, "Spectral analysis based on the state covariance: the maximum entropy spectrum and linear fractional parameterization," IEEE Transactions on Automatic Control, 2002.