       Np
y(k) = SUM p(i) x(k-i) ,
       i=1
where x(k) is the input signal. The prediction error is
e(k) = x(k) - y(k) .
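As an illustration, the following minimal C sketch computes the predicted value and the prediction error directly from these definitions, using the 0-offset coefficient array pc[] whose indexing is described at the end of this section. The function name and signature are hypothetical, not part of this routine's interface.

  /* Hypothetical helper: prediction error e(k) for k >= Np.
     pc[i-1] holds the predictor coefficient p(i) for lag i. */
  static double
  PredError (const double x[], int k, const double pc[], int Np)
  {
    double y;
    int i;

    /* y(k) = SUM p(i) x(k-i), i = 1, ..., Np */
    y = 0.0;
    for (i = 1; i <= Np; ++i)
      y += pc[i-1] * x[k-i];

    /* e(k) = x(k) - y(k) */
    return x[k] - y;
  }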
To minimize the mean-square prediction error, solve
R p = r,
where R is a symmetric positive definite covariance matrix, p is a vector of predictor coefficients, and r is a vector of correlation values. The matrix R and the vector r are defined as follows.
R(i,j) = E[x(k-i) x(k-j)] ,   for 1 <= i,j <= Np ,
r(i)   = E[x(k) x(k-i)] ,     for 1 <= i <= Np .
The resulting mean-square prediction error can be expressed as
perr = Ex - 2 p'r + p'R p = Ex - p'r ,
where the second equality follows from substituting R p = r, and Ex is the mean-square value of the input signal,
Ex = E[x(k)^2].
For this routine, the matrix R must be symmetric and Toeplitz. Then
R(i,j) = rxx(|i-j|) ,
r(i)   = rxx(i) ,
where rxx(i) = E[x(k) x(k-i)] is the autocorrelation function of the input signal.
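In practice, the expectations are replaced by time averages over a frame of samples. The following is a minimal sketch of a biased autocorrelation estimate; the function name, the frame length Nx, and the choice of a biased (divide by Nx) estimate are assumptions for illustration.

  /* Hypothetical helper: biased autocorrelation estimate,
     rxx[i] approximates E[x(k) x(k-i)] for i = 0, ..., Np.
     Note that rxx[0] estimates the mean-square value Ex. */
  static void
  AutoCorr (const double x[], int Nx, double rxx[], int Np)
  {
    int i, k;

    for (i = 0; i <= Np; ++i) {
      rxx[i] = 0.0;
      for (k = i; k < Nx; ++k)
        rxx[i] += x[k] * x[k-i];
      rxx[i] /= Nx;   /* biased estimate: divide by Nx, not Nx-i */
    }
  }

The biased estimate guarantees that the resulting Toeplitz matrix is at least positive semi-definite.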
If the correlation matrix is not numerically positive definite, or if the prediction error energy becomes negative at some stage in the calculation, the remaining predictor coefficients are set to zero. This is equivalent to truncating the autocorrelation coefficient vector at the point at which it still corresponds to a positive definite matrix.
This subroutine solves for the predictor coefficients using Durbin's recursion. This algorithm requires
Np divides, Np*Np multiplies, and Np*Np adds.
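The following is a minimal C sketch of Durbin's recursion under the conventions above, including the zero-fill behaviour described earlier for a matrix that is not numerically positive definite. The function name and interface are assumptions; this is an illustration of the algorithm, not this routine's actual code.

  /* Hypothetical sketch of Durbin's recursion.
     Input:  rxx[0..Np]  autocorrelation values, rxx[0] = Ex
     Output: pc[0..Np-1] predictor coefficients, pc[i-1] <==> p(i)
     Returns the resulting mean-square prediction error perr. */
  static double
  DurbinRec (const double rxx[], double pc[], int Np)
  {
    double perr, k, t;
    int m, j, half;

    perr = rxx[0];
    for (m = 0; m < Np; ++m) {

      /* Error energy no longer positive: the matrix is not
         numerically positive definite; zero the remaining
         coefficients and clamp the error to zero */
      if (perr <= 0.0) {
        for (j = m; j < Np; ++j)
          pc[j] = 0.0;
        return 0.0;
      }

      /* Reflection coefficient for stage m+1 */
      k = rxx[m+1];
      for (j = 0; j < m; ++j)
        k -= pc[j] * rxx[m-j];
      k /= perr;

      /* In-place update of the first m coefficients, pairwise
         from both ends: pc[j] and pc[m-1-j] update together */
      half = m / 2;
      for (j = 0; j < half; ++j) {
        t = pc[j] - k * pc[m-1-j];
        pc[m-1-j] -= k * pc[j];
        pc[j] = t;
      }
      if (m % 2 == 1)              /* middle element when m is odd */
        pc[m/2] -= k * pc[m/2];

      pc[m] = k;                   /* new coefficient p(m+1) */
      perr *= (1.0 - k * k);       /* update the error energy */
    }
    return perr;
  }

Each stage m uses about 2m multiplies and one divide, giving the Np divides and roughly Np*Np multiplies and adds quoted above.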
Predictor coefficients are usually expressed algebraically as vectors with 1-offset indexing. The correspondence to the 0-offset C arrays is as follows.
  p(1) <==> pc[0]      predictor coefficient corresponding to lag 1
  p(i) <==> pc[i-1]    1 <= i <= Np
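Putting the pieces together, the following illustrative example drives the hypothetical helpers sketched above (AutoCorr and DurbinRec) on a synthetic frame; the order and frame length are arbitrary.

  #include <stdio.h>

  #define NP 10         /* predictor order (illustrative value) */
  #define NX 160        /* frame length (illustrative value) */

  int
  main (void)
  {
    double x[NX], rxx[NP+1], pc[NP], perr;
    int k;

    /* Synthetic input frame; stands in for real signal data */
    for (k = 0; k < NX; ++k)
      x[k] = (k % 7) - 3.0;

    AutoCorr (x, NX, rxx, NP);          /* rxx(0), ..., rxx(Np) */
    perr = DurbinRec (rxx, pc, NP);     /* pc[0..Np-1] and perr */

    printf ("perr = %g, p(1) = pc[0] = %g\n", perr, pc[0]);
    return 0;
  }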