Logistic Regression Visual Lab
Binary classification simulation with sigmoid boundary learning.
Logistic regression predicts the probability of class membership for binary targets.
$$ \hat{y} = \sigma(z), \quad z = mx + b, \quad \sigma(z)=\frac{1}{1+e^{-z}} $$
The model outputs a probability between 0 and 1. A threshold such as 0.5 converts that probability into a class label.
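A minimal sketch of the sigmoid prediction and thresholding step; the parameter values `m`, `b` and the input `x` are illustrative assumptions, not values from the lab:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned parameters and a sample input
m, b = 2.0, -1.0
x = 0.5

p = sigmoid(m * x + b)        # here z = 2.0 * 0.5 - 1.0 = 0.0, so p = 0.5
label = 1 if p >= 0.5 else 0  # threshold converts probability into a class label
```

Any input with `m*x + b > 0` maps to a probability above 0.5 and is labeled class 1.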
The training objective minimizes negative log likelihood:
$$ L(m,b)=-\frac{1}{n}\sum_{i=1}^{n}\left[y_i\log(\hat{y}_i)+(1-y_i)\log(1-\hat{y}_i)\right] $$
This loss heavily penalizes confident wrong predictions, which encourages well-calibrated probability outputs.
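A small sketch of the negative log-likelihood computation, showing how one confident miss dominates the average (the `bce_loss` helper and its `eps` clipping are assumptions for numerical safety, not part of the lab's code):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    """Mean negative log-likelihood (binary cross-entropy) over n examples."""
    n = len(y_true)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / n

mild = bce_loss([1, 0], [0.9, 0.1])        # both predictions near correct
confident_miss = bce_loss([1, 0], [0.01, 0.1])  # first prediction confidently wrong
```

The second loss is far larger than the first even though only one prediction changed, which is exactly the penalty that pushes the model toward calibrated probabilities.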
For one-feature logistic regression, batch gradients are:
$$ \frac{\partial L}{\partial m}=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)x_i,\quad \frac{\partial L}{\partial b}=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i) $$
$$ m \leftarrow m-\eta\frac{\partial L}{\partial m},\quad b \leftarrow b-\eta\frac{\partial L}{\partial b} $$
Lower learning rates improve stability, while larger rates can converge faster but may oscillate or diverge.
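The batch gradients and update rule above can be combined into a training loop. This is a sketch under assumptions: the toy data, learning rate, and epoch count are illustrative, not the lab's defaults:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=2000):
    """Batch gradient descent on (m, b) for one-feature logistic regression."""
    m, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [sigmoid(m * x + b) for x in xs]
        # Batch gradients: mean of (y_hat - y) weighted by x for m, unweighted for b
        grad_m = sum((p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        grad_b = sum(p - y for p, y in zip(preds, ys)) / n
        # Gradient descent update with learning rate eta = lr
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b

# Toy data: class 0 clusters at low x, class 1 at high x
xs = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0]
ys = [0,   0,   0,   1,   1,   1]
m, b = train(xs, ys)
# The decision boundary sits where m*x + b = 0, i.e. x = -b/m
```

After training, inputs on the class-0 side yield probabilities below 0.5 and inputs on the class-1 side yield probabilities above it, mirroring the S-curve transition seen on the canvas.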
- Add class points manually (lower band for class 0, upper band for class 1) or load random data.
- Train with Auto Train and monitor loss decay.
- Use Step to inspect per-iteration behavior.
- Enable Test Mode and click to inspect predicted probability at any input position.
Click canvas to add points. Points near y=0 represent class 0 and points near y=1 represent class 1.
Loss Curve
Interpretation Guide
- The yellow S-curve is the learned probability function.
- Red points are class 1, cyan points are class 0.
- The center transition zone approximates the decision boundary.
- Decreasing loss indicates improving classification confidence.