Extreme Learning Machine (ELM): A Fast Neural Network Algorithm for Single-Hidden Layer Feedforward Networks
Extreme Learning Machine (ELM) is a simple, efficient learning algorithm for Single-Hidden Layer Feedforward Networks (SLFNs), proposed in 2006 by Guang-Bin Huang, then an Associate Professor at Nanyang Technological University. Unlike traditional training algorithms such as backpropagation, ELM requires almost no parameter tuning: only the number of hidden nodes must be specified. The input weights and hidden biases are assigned randomly and never adjusted iteratively, so training avoids the local minima and slow convergence of gradient-based methods. The output weights are then computed analytically via the Moore-Penrose pseudoinverse, β = H†T, where H is the hidden-layer output matrix and T the target matrix; this is the unique minimum-norm least-squares solution of Hβ = T, and it yields very fast training with strong generalization. A typical implementation therefore has three steps: random weight initialization, computation of the hidden-layer activations (e.g., sigmoid), and a linear solve for the output weights.
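The following is a minimal NumPy sketch of those three steps, not a reference implementation; the class name ELM, the variable names (W, b, beta), and the synthetic data in the usage example are illustrative assumptions rather than anything from the original paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELM:
    """Minimal ELM sketch: random hidden layer + pseudoinverse output weights."""

    def __init__(self, n_hidden, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Hidden-layer output matrix H: fixed random weights W and biases b,
        # passed through a sigmoid activation.
        return sigmoid(X @ self.W + self.b)

    def fit(self, X, T):
        n_features = X.shape[1]
        # Step 1: randomly assign input weights and hidden biases (never retrained).
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        # Step 2: compute the hidden-layer activations H.
        H = self._hidden(X)
        # Step 3: solve for output weights beta = pinv(H) @ T,
        # the minimum-norm least-squares solution of H @ beta = T.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    # Usage sketch on synthetic regression data (illustrative only).
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))
    T = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]
    model = ELM(n_hidden=50).fit(X, T)
    mse = np.mean((model.predict(X) - T) ** 2)
    print(f"training MSE: {mse:.4f}")
```

Because only the final linear solve depends on the data, the entire training procedure is a single pseudoinverse computation, which is what gives ELM its speed advantage over iterative gradient-based training.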