Support Vector Machine Lab
Compare linear max-margin classification and kernel-based nonlinear separation on interactive 2D data.
Linear SVM learns a hyperplane \(f(x)=w^Tx+b=0\) that separates the classes while maximizing the margin.
$$\min_{w,b}\ \frac{1}{2}\|w\|^2 + C\sum_i \max(0, 1-y_i(w^Tx_i+b))$$
In this simulator, the linear model uses Pegasos-style stochastic subgradient updates to minimize the regularized hinge loss above.
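As a minimal sketch of what a Pegasos-style epoch can look like (the function name, the unregularized bias update, and the mapping of the lab's lambda/lr knobs onto `lam` and the decaying step size are assumptions, not the simulator's actual code):

```python
import numpy as np

def pegasos_epoch(w, b, X, y, lam=0.01, t0=1, rng=None):
    """One epoch of Pegasos-style stochastic subgradient updates.

    Assumes labels y are in {-1, +1}. `lam` plays the role of the
    lab's lambda parameter; the step size eta decays as 1/(lam*t)
    (the lab's lr would rescale it). The bias update is a common
    unregularized heuristic, not part of the original Pegasos paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    t = t0
    for i in rng.permutation(len(X)):
        eta = 1.0 / (lam * t)              # decaying step size
        margin = y[i] * (X[i] @ w + b)
        w = (1 - eta * lam) * w            # shrink weights (regularization)
        if margin < 1:                     # hinge-loss subgradient is active
            w = w + eta * y[i] * X[i]
            b = b + eta * y[i]
        t += 1
    return w, b, t
```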
When the data are not linearly separable, kernel methods compare points in a transformed feature space.
Try the RBF kernel for smooth local boundaries or the Polynomial kernel for curved global boundaries.
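The two kernels follow the standard formulas \(k(x,z)=\exp(-\gamma\|x-z\|^2)\) and \(k(x,z)=(x^Tz+c_0)^d\); a minimal sketch, assuming the simulator uses these conventional forms:

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Larger gamma -> similarity decays faster with distance,
    # giving tighter, more local boundaries.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def poly_kernel(x, z, degree=3, c0=1.0):
    # Higher degree -> more curved global boundaries;
    # c0 is the usual constant offset term.
    return (x @ z + c0) ** degree
```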
Support vectors, the samples that define the decision surface, are highlighted in the plot.
- lambda: regularization strength for linear Pegasos updates.
- lr: step size controlling update speed and stability.
- gamma (RBF): larger values create tighter, more local boundaries.
- degree (Poly): higher degree increases boundary complexity.
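A minimal kernel perceptron sketch, assuming the usual dual formulation in which every misclassified point is appended to the support set with its label as a signed weight; this is why the support vector count can grow as training continues:

```python
def kernel_perceptron_epoch(sv_x, sv_y, X, y, kernel):
    """One epoch of kernel perceptron training.

    sv_x / sv_y accumulate support vectors and their labels;
    each mistake appends a new support vector (duplicates act
    as weight increments in the dual sum).
    """
    mistakes = 0
    for xi, yi in zip(X, y):
        score = sum(sy * kernel(sx, xi) for sx, sy in zip(sv_x, sv_y))
        if yi * score <= 0:            # misclassified: store as support vector
            sv_x.append(xi)
            sv_y.append(yi)
            mistakes += 1
    return mistakes
```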
- Load demo data and train Linear SVM for several epochs.
- Inspect margin lines and training accuracy trend.
- Switch to Kernel Perceptron and compare nonlinear region behavior.
- Tune kernel parameters and observe support vector growth (a combined offline sketch follows this list).
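Putting the pieces together, a hypothetical offline run of the same workflow, reusing the helper sketches above (the toy data stands in for clicked samples and is not the simulator's demo set):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-class data standing in for clicked samples (Class A = +1, Class B = -1).
X = rng.normal(size=(60, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

# Linear SVM: train for several epochs and watch the accuracy trend.
w, b, t = np.zeros(2), 0.0, 1
for epoch in range(10):
    w, b, t = pegasos_epoch(w, b, X, y, lam=0.01, t0=t, rng=rng)
    acc = np.mean(np.sign(X @ w + b) == y)
    print(f"epoch {epoch + 1}: training accuracy {acc:.2f}")

# Kernel perceptron comparison: watch the support set grow.
sv_x, sv_y = [], []
for epoch in range(5):
    kernel_perceptron_epoch(sv_x, sv_y, X, y,
                            lambda a, c: rbf_kernel(a, c, gamma=2.0))
print("support vectors:", len(sv_x))
```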
Click to place samples. Use Class A/B toggle first, then train and inspect boundaries.
Status
Points: 0 A | 0 B
Epoch: 0
Training Accuracy: -