Recursive Least Squares (RLS) Algorithm

Resource Overview

The Recursive Least Squares (RLS) algorithm minimizes, at each time instant, a weighted sum of squared errors over all input samples received so far, which makes it well suited to non-stationary signals. Unlike LMS algorithms, which are derived from ensemble (statistical) averages and behave consistently in stationary conditions, RLS is based on time averaging, so the resulting optimal filter depends on the number of samples included in the average. The implementation recursively updates the inverse correlation matrix using the matrix inversion lemma and typically converges faster than LMS variants, at the cost of higher computational complexity.

Detailed Documentation

The fundamental concept of the Recursive Least Squares (RLS) algorithm is to minimize the weighted sum of squared errors for all previously received input signals at each time instant. This weighting mechanism, often implemented through a forgetting factor (0 < λ ≤ 1) that discounts older data, enables better adaptation to non-stationary signals. In code implementations, the RLS algorithm maintains and recursively updates an inverse correlation matrix using the matrix inversion lemma, avoiding direct matrix inversion at each step.
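As a point of reference, the standard exponentially weighted cost function this description corresponds to is sketched below. The notation (d(i) for the desired signal, x(i) for the tap-input vector, w(n) for the weight vector at time n, and λ for the forgetting factor) follows common textbook convention rather than any particular implementation.

```latex
J(n) = \sum_{i=1}^{n} \lambda^{\,n-i} \left| d(i) - \mathbf{w}^{\mathsf{T}}(n)\,\mathbf{x}(i) \right|^{2}
```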

Compared to the LMS algorithm, RLS utilizes time averaging, so the optimal filter coefficients depend on the number of samples used for averaging. The computational procedure involves initializing the weight vector w(0) = 0 and the inverse correlation matrix P(0) = δ⁻¹I (where δ is a small positive regularization constant, so that P(0) is large), then iteratively updating the gain vector, filter weights, and inverse correlation matrix for each new data point. In contrast, LMS and NLMS algorithms rely on ensemble averaging and, owing to their statistical averaging properties, deliver consistent performance in stationary environments.
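A minimal NumPy sketch of this procedure is shown below, assuming a real-valued transversal filter. The function name rls_filter and the parameter names lam and delta are illustrative, not taken from any specific library.

```python
import numpy as np

def rls_filter(x, d, n_taps, lam=0.99, delta=1e-3):
    """Exponentially weighted RLS adaptive filter (illustrative sketch).

    x      : input signal (1-D array)
    d      : desired signal (same length as x)
    n_taps : filter length N
    lam    : forgetting factor, 0 < lam <= 1
    delta  : small positive regularization constant
    Returns the final weight vector and the a priori error signal.
    """
    n_samples = len(x)
    w = np.zeros(n_taps)            # w(0) = 0
    P = np.eye(n_taps) / delta      # P(0) = delta^{-1} I (inverse correlation matrix)
    e = np.zeros(n_samples)

    for n in range(n_taps - 1, n_samples):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-input vector [x(n), ..., x(n-N+1)]
        Pu = P @ u
        k = Pu / (lam + u @ Pu)             # gain vector
        e[n] = d[n] - w @ u                 # a priori (prediction) error
        w = w + k * e[n]                    # weight update
        P = (P - np.outer(k, Pu)) / lam     # matrix-inversion-lemma update of P
    return w, e
```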

One significant advantage of the RLS algorithm is its superior tracking of non-stationary signals. By recursively minimizing the weighted squared error, the algorithm dynamically adjusts the filter coefficients to follow changes in the signal statistics. Each iteration calculates the a priori prediction error (innovation), computes the gain vector (analogous to the Kalman gain), updates the filter weights, and recursively adjusts the inverse correlation matrix. The cost is O(N²) operations per iteration (where N is the filter length), compared with O(N) for LMS.
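For illustration, the sketch above can be driven through a simple system-identification experiment; the 4-tap channel h_true and the noise level below are invented for the example. With a forgetting factor close to 1, the estimated weights should settle near h_true within a few hundred samples.

```python
# Identify a hypothetical unknown 4-tap FIR channel from noisy observations.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])            # unknown system (made up)
x = rng.standard_normal(2000)                        # white excitation
d = np.convolve(x, h_true)[:len(x)]                  # channel output
d = d + 0.01 * rng.standard_normal(len(x))           # measurement noise

w_est, e = rls_filter(x, d, n_taps=4, lam=0.995, delta=1e-2)
print("estimated taps:", np.round(w_est, 3))         # expected to approach h_true
```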

In summary, the Recursive Least Squares algorithm provides enhanced adaptation to non-stationary signals through its time-averaging approach and recursive error minimization. The implementation requires careful attention to numerical stability, typically through regularization, when updating the inverse correlation matrix. Although RLS converges faster than LMS algorithms, engineers must weigh its computational demands and choose a forgetting factor appropriate to the application.