Unit - 5
Applications of Digital Signal Processing
Q1) Explain the properties of the auto-correlation function.
A1)

| Property | Random | Periodic | Aperiodic |
|---|---|---|---|
| Symmetry | γxx(−τ) = γxx(τ) (even) | even | even |
| Value at origin | γxx(0) = average power | γxx(0) = average power | rxx(0) = total energy |
| Bound | \|γxx(τ)\| ≤ γxx(0) | \|γxx(τ)\| ≤ γxx(0) | \|rxx(τ)\| ≤ rxx(0) |
| Periodicity | — | γxx(τ) is periodic with the same period Tp | — |
Q2) Explain the properties of power spectra.
A2)

| Random | Periodic | Aperiodic |
|---|---|---|
| Continuous, real spectrum, Γxx(F) ≥ 0 | Real discrete (line) spectrum at the harmonics F = k/Tp, with nonnegative values | Continuous, real, nonnegative spectrum |
Q3) Explain the properties of the cross-correlation function.
A3)

| Random | Periodic | Aperiodic |
|---|---|---|
| γxy(τ) = 0 for all τ if x and y are uncorrelated (zero mean) | γxy(τ) = 0 if x and y have no common frequency components | rxy(τ) = 0 if x and y are uncorrelated |

In all three cases γxy(−τ) = γyx(τ).
Q4) Explain the properties of cross-power spectra.
A4)

| Random | Periodic | Aperiodic |
|---|---|---|
| Complex, continuous spectrum; zero if x and y have no common frequencies | Complex discrete (line) spectrum at the common harmonics; zero if there are no common frequencies | Complex, continuous spectrum |

In all three cases the cross-power spectrum satisfies Γxy(F) = Γ*yx(F).
Q5) Explain the auto-correlation function of a WSS process.
A5) Let X(t) be a WSS process. Relabel RX(t1, t2) as RX(τ), where τ = t1 − t2.
RX(τ) is real and even, i.e., RX(τ) = RX(−τ) for every τ.
|RX(τ)| ≤ RX(0) = E[X²(t)], the "average power" of X(t), since
(RX(τ))² = [E(X(t) X(t + τ))]²
≤ E[X²(t)] E[X²(t + τ)]   (by the Cauchy–Schwarz inequality)
= (RX(0))²   (by stationarity)
If RX(T) = RX(0) for some T ≠ 0, then RX(τ) is periodic with period T, and so is X(t):
RX(τ) = RX(τ + T) and X(τ) = X(τ + T) w.p. 1, for every τ
The necessary and sufficient condition for a function to be the autocorrelation function of a WSS process is that it be real, even, and nonnegative definite. By nonnegative definite we mean that for any n, any t1, t2, …, tn, and any real vector a = (a1, …, an),
∑_{i=1}^{n} ∑_{j=1}^{n} ai aj R(ti − tj) ≥ 0
To see why this is necessary, recall that the correlation matrix for a random vector must be nonnegative definite, so if we take a set of n samples from the WSS random process, their correlation matrix must be nonnegative definite. The condition is sufficient since such an R(τ) can specify a zero mean stationary Gaussian random process. The nonnegative definite condition may be difficult to verify directly. It turns out, however, to be equivalent to the condition that the Fourier transform of RX(τ), which is called the power spectral density SX(f), is nonnegative for all frequencies f.
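These properties can be checked numerically. The sketch below, which assumes NumPy and uses an illustrative AR(1) signal (the coefficient 0.8 and all sizes are hypothetical choices, not from the text), verifies the bound |R(τ)| ≤ R(0) and the nonnegative definiteness of the biased sample autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(1) process (asymptotically WSS); 0.8 is an arbitrary choice.
N = 4096
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + w[n]

# Biased sample autocorrelation R(k) = (1/N) * sum_n x(n) x(n + k)
def autocorr(x, k):
    return np.dot(x[: len(x) - k], x[k:]) / len(x)

m = 20
R = np.array([autocorr(x, k) for k in range(m)])

# Property: |R(tau)| <= R(0), the average power
bounded = np.all(np.abs(R) <= R[0])

# Nonnegative definiteness: the Toeplitz matrix built from the biased
# estimates is positive semidefinite, so its eigenvalues are >= 0.
T = R[np.abs(np.arange(m)[:, None] - np.arange(m)[None, :])]
min_eig = np.linalg.eigvalsh(T).min()

print(bounded, min_eig >= -1e-10)   # prints: True True
```

The biased estimator is used deliberately: it is guaranteed to produce a nonnegative definite sequence, whereas the unbiased estimator is not.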
Q6) What is a stationary ergodic random process?
A6) Ergodicity refers to certain time averages of random processes converging to the corresponding ensemble (statistical) averages. We focus only on mean ergodicity of WSS processes. Let Xn, n = 1, 2, …, be a discrete-time WSS process with mean µ and autocorrelation function RX(n). To estimate the mean of Xn, we form the sample mean
µ̂n = (1/n) ∑_{i=1}^{n} Xi
The process Xn is said to be mean ergodic if µ̂n → µ in mean square, that is, if
lim_{n→∞} E[(µ̂n − µ)²] = 0
Since E[µ̂n] = µ, this is equivalent to requiring that Var(µ̂n) → 0 as n → ∞.
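Mean ergodicity can be illustrated by simulation. In this sketch (assuming NumPy/SciPy; the mean µ = 2, coefficient a = 0.9, and sample sizes are all illustrative), the mean-square error of the sample mean of an AR(1) process shrinks as the averaging length grows:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)

# Illustrative AR(1) process with |a| < 1, which is mean ergodic: the
# sample mean converges to the ensemble mean mu in mean square.
mu, a = 2.0, 0.9

def sample_mean(n):
    w = rng.standard_normal(n)
    x = mu + lfilter([1.0], [1.0, -a], w)   # x(n) - mu = a(x(n-1) - mu) + w(n)
    return x.mean()

# Estimate E[(sample_mean - mu)^2] over independent realizations
mse = {n: np.mean([(sample_mean(n) - mu) ** 2 for _ in range(200)])
       for n in (100, 10_000)}
print(mse)   # the MSE at n = 10,000 is far smaller than at n = 100
```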
Q7) Explain optimal filtering using the ARMA model.
A7) The all-pole lattice provides the basic building block for lattice-type structures that implement IIR systems having both poles and zeros. The system function of the IIR filter is
H(z) = Cq(z) / Ap(z) = [∑_{k=0}^{q} cq(k) z⁻ᵏ] / [1 + ∑_{k=1}^{p} ap(k) z⁻ᵏ]
The difference equations for the system are
v(n) = −∑_{k=1}^{p} ap(k) v(n − k) + x(n)
y(n) = ∑_{k=0}^{q} cq(k) v(n − k)
The output y(n) is simply a linear combination of delayed outputs from the all-pole system. Since zeros result from forming a linear combination of previous outputs, we can carry this observation over to construct a pole-zero system using the all-pole lattice structure as the basic building block. The gm(n) in the all-pole lattice can be expressed as a linear combination of present and past outputs. The system is
Hb(z) ≡ Gm(z) / Y(z) = Bm(z)
Take the all-pole lattice filter with coefficients Km and add a ladder part by forming the output as a weighted linear combination of {gm(n)}. The result is a pole-zero filter with the lattice-ladder structure shown below, whose output is
y(n) = ∑_{k=0}^{q} βk gk(n)
H(z) = Y(z) / X(z) = ∑_{k=0}^{q} βk Gk(z) / X(z)
Fig 1 Lattice-Ladder structure for pole-zero system
Since X(z) = Fp(z) and F0(z) = G0(z),
H(z) = [∑_{k=0}^{q} βk Gk(z) / G0(z)] · [F0(z) / Fp(z)]
= (1/Ap(z)) ∑_{k=0}^{q} βk Bk(z)
Cq(z) = ∑_{k=0}^{q} βk Bk(z)
Cm(z) = ∑_{k=0}^{m−1} βk Bk(z) + βm Bm(z)
= Cm−1(z) + βm Bm(z)
When excited by a white noise sequence, this lattice-ladder filter generates an ARMA(p, q) process with power spectral density
Γxx(f) = σw² |Cq(f)|² / |Ap(f)|²
where σw² is the variance of the white-noise excitation.
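This relation can be verified by simulation. A minimal sketch assuming NumPy/SciPy (the ARMA(2,2) coefficients below are illustrative, not from the text): white noise is filtered through Cq(z)/Ap(z), and the Welch estimate of the PSD is compared with σw²|Cq(f)|²/|Ap(f)|²:

```python
import numpy as np
from scipy.signal import lfilter, freqz, welch

rng = np.random.default_rng(2)

# Illustrative ARMA(2,2) model coefficients.
c = [1.0, 0.5, 0.2]        # numerator C_q(z): c_q(k)
a = [1.0, -0.9, 0.3]       # denominator A_p(z): 1 + sum_k a_p(k) z^{-k}
sigma2 = 1.0               # variance of the white-noise excitation

# Generate the ARMA process by filtering white noise through C_q(z)/A_p(z).
w = np.sqrt(sigma2) * rng.standard_normal(200_000)
x = lfilter(c, a, w)

# Two-sided Welch PSD estimate vs. theory sigma2 * |C_q(f)|^2 / |A_p(f)|^2.
f, P = welch(x, fs=1.0, nperseg=1024, return_onesided=False)
pos = f >= 0
_, H = freqz(c, a, worN=2 * np.pi * f[pos])   # evaluate at the same frequencies
theory = sigma2 * np.abs(H) ** 2

rel_err = np.abs(P[pos] - theory) / theory
print(np.median(rel_err))   # small: the estimate tracks the ARMA spectrum
```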
Q8) Explain linear mean-square estimation by the Wiener filter.
A8) The normal equations for the optimal filter coefficients are
∑_{k=0}^{M−1} h(k) γxx(l − k) = γdx(l),   l = 0, 1, …, M − 1
These equations can be obtained directly by applying the orthogonality principle of linear mean-square estimation: the mean-square error
EM = E|e(n)|²
= E|d(n) − ∑_{k=0}^{M−1} h(k) x(n − k)|²
is minimum if the filter coefficients {h(k)} are selected so that the error is orthogonal to each of the data points used in the estimate,
E[e(n) x*(n − l)] = 0,   l = 0, 1, …, M − 1
where
e(n) = d(n) − ∑_{k=0}^{M−1} h(k) x(n − k)
Geometrically, the output of the filter is the estimate
d̂(n) = ∑_{k=0}^{M−1} h(k) x(n − k)
and the error e(n) is the vector from d̂(n) to d(n), as shown in the figure below.
Fig 2 Geometric representation of linear MSE
The orthogonality principle states that the mean-square error
EM = E|e(n)|²
is minimum when e(n) is perpendicular to the data subspace. In this case the estimate d̂(n) can be expressed as a linear combination of a reduced set of linearly independent data points, equal in number to the rank of ΓM. With the filter coefficients chosen to satisfy the orthogonality principle, the residual minimum MSE is
MMSEM = E[e(n)d*(n)]
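The normal equations can be solved directly with a Toeplitz solver. A minimal sketch assuming NumPy/SciPy, with an illustrative noisy-AR(1) setup (the coefficient a, noise variance sv2, and filter length M are all hypothetical choices): the desired signal d(n) is AR(1) and the observation is x(n) = d(n) + v(n) with white noise v(n) uncorrelated with d(n):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Illustrative parameters: AR(1) desired signal, additive white noise.
a, sv2, M = 0.8, 0.5, 8
l = np.arange(M)

g_dd = a ** l / (1.0 - a * a)   # gamma_dd(l) of AR(1) with unit innovations
g_xx = g_dd.copy()
g_xx[0] += sv2                  # gamma_xx(l) = gamma_dd(l) + sv2 * delta(l)
g_dx = g_dd                     # gamma_dx(l) = gamma_dd(l): v uncorrelated with d

# Solve the normal equations sum_k h(k) gamma_xx(l - k) = gamma_dx(l);
# gamma_xx is even, so the system matrix is symmetric Toeplitz.
h = solve_toeplitz(g_xx, g_dx)

# Residual minimum MSE: MMSE_M = gamma_dd(0) - sum_k h(k) gamma_dx(k)
mmse = g_dd[0] - h @ g_dx
print(mmse)   # below both the signal power g_dd[0] and the noise power sv2
```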
Q9) Define the power spectrum of a signal.
A9) The power spectrum of a signal gives the distribution of the signal power among various frequencies. The power spectrum is the Fourier transform of the correlation function, and reveals information on the correlation structure of the signal. The strength of the Fourier transform in signal analysis and pattern recognition is its ability to reveal spectral structures that may be used to characterise a signal.
In general, the more correlated or predictable a signal, the more concentrated its power spectrum, and conversely the more random or unpredictable a signal, the more spread its power spectrum. Therefore, the power spectrum of a signal can be used to deduce the existence of repetitive structures or correlated patterns in the signal process. Such information is crucial in detection, decision making and estimation problems, and in systems analysis.
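The concentration claim can be illustrated numerically via spectral flatness (geometric mean of the PSD over its arithmetic mean), which is near 1 for a flat spectrum and near 0 for a concentrated one. A sketch assuming NumPy/SciPy; the resonant AR(2) coefficients below are illustrative choices, not from the text:

```python
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(3)

# Spectral flatness: exp(mean log PSD) / mean PSD, in (0, 1].
def flatness(x):
    _, P = welch(x, nperseg=256)
    P = P[1:-1]                  # drop DC and Nyquist edge bins
    return np.exp(np.mean(np.log(P))) / np.mean(P)

w = rng.standard_normal(50_000)               # unpredictable: white noise
x = lfilter([1.0], [1.0, -1.6, 0.81], w)      # predictable: resonant AR(2)

fw, fx = flatness(w), flatness(x)
print(fw, fx)   # white noise near 1; correlated AR(2) signal much lower
```

The more predictable AR(2) signal concentrates its power near the resonance, so its flatness is far below that of the white noise.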
Q10) Write the correlation and power-spectrum relations for analog and discrete-time random signals.
A10)
- Analog Random Signal
γxy(τ) = E[x(t) y(t − τ)] = lim_{Tp→∞} (1/Tp) ∫_{−Tp/2}^{Tp/2} x(t) y(t − τ) dt
Γxy(F) = ∫_{−∞}^{∞} γxy(τ) e^{−j2πFτ} dτ
γxy(τ) = ∫_{−∞}^{∞} Γxy(F) e^{j2πFτ} dF
- Discrete-time Random Signal
γxy(m) = E[x(n) y(n − m)] = lim_{N→∞} (1/N) ∑_{n=0}^{N−1} x(n) y(n − m)
Γxy(f) = ∑_{m=−∞}^{∞} γxy(m) e^{−j2πfm}
γxy(m) = ∫_{−1/2}^{1/2} Γxy(f) e^{j2πfm} df
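The discrete-time definition can be checked by estimating γxy(m) = E[x(n) y(n − m)] with the time average (1/N) ∑ x(n) y(n − m). In this sketch (assuming NumPy; the delay D and signal length are illustrative), y is x delayed by D samples, so the estimate peaks at lag m = −D, where x(n) and y(n − m) = x(n) line up:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative setup: y(n) = x(n - D) is a pure delay of white noise x(n).
N, D = 20_000, 7
x = rng.standard_normal(N)
y = np.concatenate([np.zeros(D), x[: N - D]])   # y(n) = x(n - D)

def gxy(m):
    # time average of x(n) y(n - m) over the valid range of n
    if m >= 0:
        return np.dot(x[m:], y[: N - m]) / N
    return np.dot(x[:m], y[-m:]) / N

lags = np.arange(-12, 13)
g = np.array([gxy(m) for m in lags])
print(lags[np.argmax(g)])   # prints -7, i.e. m = -D
```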