XOR Neural Network Lab

Forward/Backward pass visualization for a tiny multi-layer perceptron.

XOR cannot be solved by a single linear separator. A model must learn a non-linear decision surface.

This makes XOR the standard toy problem for demonstrating hidden layers and non-linear activations.
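The non-separability claim can be checked directly. The sketch below (an illustration, not part of the simulator) sweeps a coarse grid of weights and biases for a single linear threshold unit and confirms that none of them classifies all four XOR cases correctly:

```python
import itertools

# The four XOR cases: ((x1, x2), target).
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Coarse grid for w1, w2, b in [-4, 4] with step 0.5 (arbitrary choice).
grid = [i / 2 for i in range(-8, 9)]

def fits(w1, w2, b):
    """True if sign(w1*x1 + w2*x2 + b) matches the target on every case."""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(t) for (x1, x2), t in cases)

solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if fits(w1, w2, b)
]
print(len(solutions))  # 0 -- no linear separator exists, on this grid or any other
```

The grid is only illustrative; the result holds in general because the four inequalities a linear separator would have to satisfy are mutually contradictory.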

The simulator uses a compact MLP:

Input(2) -> Hidden(4) -> Hidden(2) -> Output(1)

The output activation is sigmoid, yielding a binary-class probability. The hidden activation can be tanh or ReLU.
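The architecture above can be sketched in a few lines of numpy. This is a minimal illustration, not the simulator's own code: tanh is chosen for the hidden layers (ReLU is the other option in the text), and the initialization scale is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: Input(2) -> Hidden(4) -> Hidden(2) -> Output(1).
sizes = [2, 4, 2, 1]
# W[l] has shape (fan_in, fan_out) so that a @ W[l] matches z = a W + b.
W = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """z^(l) = a^(l-1) W^(l) + b^(l); tanh on hidden layers, sigmoid on output."""
    a = np.asarray(x, dtype=float)
    pre, act = [], [a]
    for l in range(len(W)):
        z = a @ W[l] + b[l]
        a = sigmoid(z) if l == len(W) - 1 else np.tanh(z)
        pre.append(z)
        act.append(a)
    return pre, act

_, acts = forward([1, 0])
print(acts[-1].shape)  # (1,)
```

Keeping the pre-activations `z` and activations `a` per layer is what lets the calculation panel replay the forward pass and what backpropagation needs later.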

Each step samples one XOR case, performs a forward pass, computes loss, then applies backpropagation updates.

$$ z^{(l)} = a^{(l-1)}W^{(l)} + b^{(l)}, \quad a^{(l)} = f(z^{(l)}) $$

$$ W \leftarrow W - \eta \nabla_W L, \quad b \leftarrow b - \eta \nabla_b L $$
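The per-step procedure and the two update rules above can be combined into one training sketch. Assumptions not stated in the text: binary cross-entropy as the loss (a natural pairing with the sigmoid output, where the output delta simplifies to p - y), tanh hidden layers, and a learning rate of 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 4, 2, 1]                     # architecture from the text
W = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]
eta = 0.5                                # learning rate (assumed value)
cases = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, y):
    """One forward pass, loss, and backprop update on a single XOR case."""
    # Forward: z = a W + b, a = f(z); cache activations for the backward pass.
    a = np.asarray(x, dtype=float)
    acts = [a]
    for l in range(len(W)):
        z = a @ W[l] + b[l]
        a = sigmoid(z) if l == len(W) - 1 else np.tanh(z)
        acts.append(a)
    p = float(np.clip(acts[-1][0], 1e-7, 1 - 1e-7))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # binary cross-entropy
    # Backward: with sigmoid + cross-entropy, the output delta is (p - y).
    delta = acts[-1] - y
    for l in reversed(range(len(W))):
        gW = np.outer(acts[l], delta)    # dL/dW for this layer
        gb = delta                       # dL/db for this layer
        if l > 0:                        # through tanh: f'(z) = 1 - tanh(z)^2
            delta = (delta @ W[l].T) * (1 - acts[l] ** 2)
        W[l] -= eta * gW                 # W <- W - eta * grad_W L
        b[l] -= eta * gb                 # b <- b - eta * grad_b L
    return loss

losses = []
for t in range(5000):                    # each step samples one XOR case
    x, y = cases[rng.integers(4)]
    losses.append(step(x, y))
```

Note the ordering inside the backward loop: the delta for the next-lower layer is computed with the layer's old weights before those weights are overwritten.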

  • Loss chart shows convergence trend over training steps.
  • Prediction panel displays output confidence for all four XOR inputs.
  • Calculation panel logs the latest forward/backward values for inspection.

Training Controls

Forward/Backward Trace


Prediction Snapshot

Larger circles indicate an output probability closer to class 1.

XOR Targets

  • (0,0) -> 0
  • (0,1) -> 1
  • (1,0) -> 1
  • (1,1) -> 0