SVM Multi-Class Classification Using One-vs-One Strategy
Resource Overview
Detailed Documentation
In this approach, we implement the "One-vs-One" strategy, in which each binary classifier is trained on samples from one class as positive examples and samples from a single other class as negative examples. Because every classifier sees only two classes, this design avoids the severe class imbalance that the "One-vs-Rest" approach introduces by pitting one class against all the others. During training, we construct multiple classifiers: the first distinguishes Class 1 from Class 2, the second Class 1 from Class 3, and so forth. For k classes, k(k-1)/2 binary classifiers are required (6 classifiers for 4 classes). Although the number of classifiers grows quadratically with k, each one trains on only two classes' data, so the per-classifier training cost is substantially lower than in the "One-vs-Rest" approach.
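The pairwise training scheme above can be sketched as follows. This is a minimal illustration, not the resource's actual code: the four-blob dataset is hypothetical, and a linear-kernel scikit-learn `SVC` stands in for whatever SVM implementation the resource uses. Note how each binary classifier is fitted only on the samples of its two classes.

```python
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

# Hypothetical 4-class toy dataset: four Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
centers = [(0, 0), (3, 0), (0, 3), (3, 3)]
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 2)) for c in centers])
y = np.repeat([0, 1, 2, 3], 20)

# One binary SVM per unordered class pair: k(k-1)/2 = 6 classifiers for k = 4.
classifiers = {}
for a, b in combinations(np.unique(y), 2):
    mask = (y == a) | (y == b)              # keep only the two classes' samples
    clf = SVC(kernel="linear").fit(X[mask], y[mask])
    classifiers[(a, b)] = clf

print(len(classifiers))  # 6
```

scikit-learn also ships this strategy ready-made (`sklearn.multiclass.OneVsOneClassifier`, and `SVC` itself uses One-vs-One internally for multi-class problems), so the manual loop is purely didactic.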
During classification inference, an input sample is passed through all binary classifiers. Each classifier votes for one of its two designated classes - the first votes for either Class 1 or Class 2, the second for Class 1 or Class 3, and so on. The class receiving the most votes is selected as the final prediction. Unlike "One-vs-Rest", every sample is guaranteed to receive votes, so unclassified cases cannot occur; however, two or more classes may tie on the vote count, in which case a tie-breaking rule (for example, comparing the classifiers' decision-function values) is needed.
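The voting step can be sketched like this. Again a hypothetical self-contained example (same toy four-blob dataset and linear scikit-learn `SVC` as above, neither taken from the resource): every pairwise classifier casts one vote, and the majority class wins. The simple `Counter.most_common` tie-break shown here is arbitrary; production code typically falls back on decision-function values.

```python
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

# Hypothetical 4-class toy dataset (four Gaussian blobs in 2-D).
rng = np.random.default_rng(0)
centers = [(0, 0), (3, 0), (0, 3), (3, 3)]
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 2)) for c in centers])
y = np.repeat([0, 1, 2, 3], 20)

# Train the k(k-1)/2 pairwise classifiers.
classifiers = {}
for a, b in combinations(np.unique(y), 2):
    mask = (y == a) | (y == b)
    classifiers[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])

def predict_ovo(classifiers, x):
    """Majority vote over all pairwise classifiers for a single sample x."""
    votes = Counter()
    for clf in classifiers.values():
        winner = int(clf.predict(x.reshape(1, -1))[0])  # one of the pair's two classes
        votes[winner] += 1
    # most_common breaks exact ties arbitrarily; a real implementation
    # would compare decision_function margins instead.
    return votes.most_common(1)[0][0]

print(predict_ovo(classifiers, np.array([0.0, 0.0])))  # 0
```

A sample near the Class 0 blob collects one vote from each of the three classifiers involving Class 0, while the remaining three classifiers can give at most two votes to any other class, so Class 0 wins the vote.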