Robust Locally Weighted Regression Algorithm

Resource Overview

The Robust Locally Weighted Regression algorithm (often abbreviated LOWESS), introduced by Cleveland [7], fits a weighted polynomial to the observations in a local neighbourhood of each target point and estimates the coefficients by weighted least squares. The algorithm combines traditional local polynomial fitting and locally weighted regression with a robust fitting procedure that gives it strong resistance to outliers.

Detailed Documentation

This section explains the working principle of the Robust Locally Weighted Regression algorithm in more detail. Proposed by Cleveland [7], the algorithm performs weighted polynomial fitting at each target point using only the observations in its local neighbourhood. For a given target point, it selects the data points that fall within a specified bandwidth around that point and assigns each of them a weight that decreases with its distance from the target point: points farther away receive smaller weights. In implementations, these distance-based weights are typically computed with a kernel function such as the tricube or Gaussian kernel.

The algorithm then fits a low-degree polynomial to the weighted neighbourhood and estimates its coefficients by weighted least squares. Concretely, this amounts to solving the weighted least-squares equations in which the design matrix contains the polynomial terms of the local coordinates and the weight matrix contains the kernel-derived weights; the fitted value at the target point is read off from the resulting local polynomial.

Robustness is obtained through an iterative reweighting procedure. After each pass, the residuals are used to compute robustness weights, commonly with the bisquare weight function, that down-weight points with large residuals; the local fits are then repeated with the combined kernel and robustness weights. By integrating local polynomial fitting, locally weighted regression, and this iterative robust procedure, the method produces fits that are considerably less sensitive to outliers than standard regression. Both steps are sketched in the examples below.
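
To illustrate the neighbourhood weighting and the local weighted least-squares fit, the following Python sketch uses a tricube kernel with a fixed bandwidth. The function names (tricube_weights, local_fit), the fixed-bandwidth simplification, and the default polynomial degree are illustrative assumptions; Cleveland's original formulation instead selects the neighbourhood as a fraction of nearest neighbours.

    import numpy as np

    def tricube_weights(x, x0, bandwidth):
        """Tricube kernel weights for observations x around the target point x0."""
        u = np.abs(x - x0) / bandwidth          # scaled distances from the target point
        return np.where(u < 1.0, (1.0 - u**3)**3, 0.0)

    def local_fit(x, y, x0, bandwidth, degree=1, robust_w=None):
        """Weighted least-squares polynomial fit centred on x0 (illustrative sketch).

        The design matrix contains polynomial terms of the local coordinate (x - x0);
        the weight vector combines the kernel weights with any robustness weights
        carried over from a previous iteration.
        """
        w = tricube_weights(x, x0, bandwidth)
        if robust_w is not None:
            w = w * robust_w
        # Design matrix: columns 1, (x - x0), (x - x0)^2, ...
        X = np.vander(x - x0, degree + 1, increasing=True)
        # Solve the weighted normal equations (X^T W X) beta = (X^T W) y.
        WX = X * w[:, None]
        beta, *_ = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)
        # Because powers of (x - x0) are used, the fitted value at x0 is the intercept.
        return beta[0]

Solving the weighted normal equations directly keeps the sketch short; a production implementation would more likely use a numerically stable QR-based weighted least-squares solver.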
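
The robust reweighting step can then be layered on top of the local fit above. The sketch below uses bisquare robustness weights computed from the median absolute residual, as described in the text; the function name robust_lowess, the number of iterations, and the synthetic data in the usage example are hypothetical choices for illustration.

    def robust_lowess(x, y, bandwidth, degree=1, iterations=3):
        """Robust locally weighted regression sketch built on local_fit above.

        After each pass, residuals are turned into bisquare robustness weights that
        down-weight outliers; the local fits are repeated with the combined weights.
        """
        n = len(x)
        robust_w = np.ones(n)
        fitted = np.empty(n)
        for _ in range(iterations):
            for i in range(n):
                fitted[i] = local_fit(x, y, x[i], bandwidth, degree, robust_w)
            residuals = y - fitted
            s = np.median(np.abs(residuals))    # robust scale: median absolute residual
            if s == 0:
                break
            u = residuals / (6.0 * s)
            robust_w = np.where(np.abs(u) < 1.0, (1.0 - u**2)**2, 0.0)
        return fitted

    # Illustrative usage: noisy sine curve with a few injected outliers.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + 0.2 * rng.standard_normal(200)
    y[::25] += 3.0                              # inject outliers
    smoothed = robust_lowess(x, y, bandwidth=1.5)

Points with large residuals receive robustness weights near zero, so the injected outliers have little influence on the fits in later iterations, which is the behaviour the robust procedure is designed to achieve.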