Nonlinear Neural Regression Lab
Fit nonlinear curves with a multilayer perceptron, then observe how train and validation loss diverge under overtraining.
Unlike linear regression \(y=ax+b\), this lab uses hidden layers to learn nonlinear mappings \(x \mapsto y\).
$$\hat{y}=W_L\,\phi(W_{L-1}\phi(\cdots\phi(W_1x+b_1)\cdots)+b_{L-1})+b_L$$
Choose depth, width, and activation to control model capacity.
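The forward pass above can be sketched in plain NumPy. The layer sizes, the tanh activation, and the random initialization below are illustrative assumptions, not the lab's actual settings; only the structure (linear maps with an activation between them, linear output layer) follows the formula.

```python
import numpy as np

def phi(z):
    # hidden-layer activation; tanh is assumed here for illustration
    return np.tanh(z)

def mlp_forward(x, weights, biases):
    # y_hat = W_L phi(... phi(W_1 x + b_1) ...) + b_L
    # every hidden layer applies phi; the output layer stays linear
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = phi(W @ a + b)
    return weights[-1] @ a + biases[-1]

# Example: a 1 -> 8 -> 8 -> 1 network with small random weights
rng = np.random.default_rng(0)
sizes = [1, 8, 8, 1]
weights = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((m, 1)) for m in sizes[1:]]

y_hat = mlp_forward(np.array([[0.3]]), weights, biases)
print(y_hat.shape)  # (1, 1): one scalar prediction for one scalar input
```

Adding layers (depth) or units per layer (width) increases the number of weights and hence the family of curves the network can represent.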
The objective is mean squared error on the training subset:
$$\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$$
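As a concrete check of the formula, a minimal implementation of the training objective:

```python
import numpy as np

def mse(y, y_hat):
    # mean squared error: average of (y_i - y_hat_i)^2 over n samples
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.mean((y - y_hat) ** 2))

# Worked example: errors are 0, 0, and 2, so MSE = (0 + 0 + 4) / 3
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # 1.333...
```

Squaring makes large residuals dominate the loss, which is why a single badly fit point can noticeably move the training curve.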
Overfitting appears when train loss keeps decreasing while validation loss flattens or rises.
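A common way to act on this signal is patience-based early stopping: halt once validation loss has stopped improving for a fixed number of epochs. The sketch below is a generic illustration, not part of this lab's training loop; `patience` is an assumed parameter.

```python
def should_stop(val_losses, patience=10):
    # Stop when the best validation loss of the last `patience` epochs
    # is no better than the best loss seen before that window.
    if len(val_losses) <= patience:
        return False
    best_earlier = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_earlier

# Validation loss improves to 0.8, then plateaus at 0.85 -> stop
print(should_stop([1.0, 0.9, 0.8] + [0.85] * 10))  # True
```

Because training loss typically keeps falling, the stopping decision is made on the validation curve alone.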
Click to add samples. In Test Mode, click an x-position to inspect predicted y and residual.
Training Status
Points: 0
Train / Val: 0 / 0
Epoch: 0
Train Loss: -
Val Loss: -
Interpretation
- Blue points: training subset; orange points: validation subset.
- Yellow marker in Test Mode: predicted output at clicked x.
- If train loss drops but val loss rises, model capacity is too high or training has run too long.
- Try stronger L2 regularization or smaller hidden layers to reduce overfitting.
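L2 regularization works by adding a penalty proportional to the sum of squared weights, whose gradient shrinks every weight at each update step. A minimal sketch, assuming a penalty coefficient `lam` and plain SGD (the lab's actual optimizer and hyperparameters may differ):

```python
import numpy as np

def l2_penalized_loss(data_loss, weights, lam):
    # total loss = MSE + lam * sum of squared weights (biases usually excluded)
    return data_loss + lam * sum(float(np.sum(W ** 2)) for W in weights)

def sgd_step_with_l2(W, grad_W, lr, lam):
    # d/dW of lam * ||W||^2 is 2 * lam * W, so each update pulls W toward zero
    return W - lr * (grad_W + 2.0 * lam * W)

W = np.array([[1.0, -2.0]])
print(l2_penalized_loss(0.5, [W], lam=0.01))  # 0.5 + 0.01 * (1 + 4) = 0.55

# Even with a zero data gradient, the weights shrink slightly each step
print(sgd_step_with_l2(W, grad_W=np.zeros_like(W), lr=0.1, lam=0.01))
```

Larger `lam` forces smaller weights and thus smoother fitted curves, trading some training accuracy for better validation behavior; shrinking the hidden layers attacks the same problem by reducing the number of weights outright.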