Radial Basis Function Neural Network MATLAB Source Code
Resource Overview
Application Background
When BP networks are used for function approximation, weights are adjusted by the negative gradient descent method, which suffers from slow convergence and susceptibility to local minima. Radial Basis Function (RBF) networks outperform BP networks in approximation capability, classification performance, and learning speed.
MATLAB provides four radial basis function-related functions that create two-layer neural networks, with the first layer being a radial basis layer and the second either a linear or a competitive layer. The primary differences lie in how weights and thresholds are calculated and in whether thresholds are present.
Key Technology
RBF networks can approximate arbitrary nonlinear functions, handle complex system patterns that are difficult to analyze analytically, generalize well, and converge rapidly during learning.
Detailed Documentation
Application Background
It is well known that when BP networks are used for function approximation, weight adjustment employs the negative gradient descent method. However, this weight adjustment approach has certain limitations, including relatively slow convergence and susceptibility to local minima. To overcome these issues, we can utilize Radial Basis Function (RBF) networks for function approximation. Compared to BP networks, RBF networks demonstrate superior performance in approximation capability, classification ability, and learning speed.
In MATLAB, four radial basis function-related functions are provided, all of which create two-layer neural networks. These networks always use a radial basis layer as the first layer, while the second layer is either a linear layer or a competitive layer. The main differences among these functions lie in how weights and thresholds are calculated, and in whether thresholds are used at all. In code, these functions typically require parameters such as a spread constant and a maximum neuron count when the network is configured, as sketched below.
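As a concrete illustration, here is a minimal sketch of function approximation with newrb (in the classic Neural Network Toolbox, the four functions referred to above are presumably newrb, newrbe, newgrnn, and newpnn). The sample data, error goal, spread constant, and neuron limit are all assumed values chosen for demonstration:

```matlab
% Minimal sketch: approximating a 1-D function with newrb.
% All data and parameter values below are illustrative assumptions.
P = -1:0.1:1;                        % input samples
T = sin(2*pi*P);                     % target values to approximate
goal   = 0.01;                       % mean squared error goal
spread = 0.5;                        % spread constant of the radial basis functions
MN     = 20;                         % maximum number of hidden neurons
net = newrb(P, T, goal, spread, MN); % adds neurons one at a time until goal or MN is reached
Y = sim(net, P);                     % evaluate the trained network on the inputs
```

newrb trades training time for a compact network: it stops adding hidden neurons once the error goal is met, whereas newrbe creates one neuron per training sample and fits the training data exactly.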
Key Technology
RBF networks can approximate arbitrary nonlinear functions and handle complex patterns within systems that are difficult to analyze analytically. They not only exhibit excellent generalization capability but also converge rapidly during learning. Consequently, RBF networks have been successfully applied in numerous fields, including nonlinear function approximation, time series analysis, data classification, pattern recognition, information processing, image processing, system modeling, control systems, and fault diagnosis.
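For the classification use case mentioned above, a probabilistic RBF network (newpnn, whose second layer is competitive) gives a compact sketch; the toy inputs and class labels below are assumed for illustration:

```matlab
% Minimal classification sketch with a probabilistic neural network (PNN).
% Inputs and class indices are illustrative assumptions.
P  = [1 2 3 4 5 6 7];        % one-dimensional input samples
Tc = [1 2 3 2 2 3 1];        % class index of each sample
T  = ind2vec(Tc);            % convert indices to one-of-N target vectors
net = newpnn(P, T);          % radial basis layer + competitive second layer
Yc  = vec2ind(sim(net, P));  % classify; should reproduce Tc on the training set
```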
Let us briefly explain why RBF networks learn quickly. When one or more adjustable parameters (such as weights or thresholds) of a network affect every output, we call it a global approximation network. Global approximation networks typically learn slowly because every weight in the network must be adjusted for each input. In contrast, in a local approximation network only a few connection weights influence any given output. Common local approximation networks include RBF networks, Cerebellar Model Articulation Controller (CMAC) networks, and B-spline networks. In MATLAB, the hidden layer uses a Gaussian transfer function (radbas); the training routine selects center positions from the training inputs, while the width of each basis function is controlled by the user-supplied spread constant.
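To make the locality argument concrete, the following sketch (assumed values throughout) evaluates a single Gaussian hidden neuron by hand, using the same convention as radbas, where the bias 0.8326/spread makes the response fall to 0.5 at a distance equal to the spread:

```matlab
% Minimal sketch: response of a single Gaussian RBF neuron.
% Center, spread, and input grid are illustrative assumptions.
spread = 1.0;
b = 0.8326 / spread;            % bias: output falls to 0.5 when |p - c| == spread
c = 0.5;                        % neuron center (row of the input weight matrix)
p = linspace(-3, 3, 121);       % input values
a = exp(-(b * abs(p - c)).^2);  % equivalent to radbas(b * abs(p - c))
plot(p, a); xlabel('input p'); ylabel('neuron output a');
```

Because the output decays to nearly zero away from the center, only the few neurons whose centers lie near a given input contribute to the network output, which is exactly the local approximation property described above.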