Perceptron
This is a simple binary perceptron demo. Choose a classification color by clicking on the appropriate button, and click on the screen to add a new point. See the instructions below for help.
Hint: Bad results? Try clicking the "Train" button several times!
Instructions:
The Blue and Red buttons select which color class new points belong to. Click the other color to switch classes. Click Reset to remove all points and reset the perceptron.
As you add points, the perceptron learns to separate the two color classes based on each point's position. The line is drawn along the perceptron's current decision boundary, where it believes the two classes are divided. Adding more points may improve the classification.
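For the curious, here is a small Python sketch (illustrative only, not the demo's actual code) of how that line follows from a perceptron's parameters. The names w1, w2, and b are assumed stand-ins for the two learned weights and the bias; the boundary is simply the set of points where the weighted sum equals zero.

```python
# Illustrative sketch: the decision boundary is where w1*x + w2*y + b == 0.
# Solving for y gives the straight line that gets drawn on the canvas.

def boundary_y(x, w1, w2, b):
    """y-coordinate of the decision boundary at horizontal position x."""
    if w2 == 0:
        raise ValueError("boundary is the vertical line x = -b / w1")
    return -(w1 * x + b) / w2

# Two points are enough to draw the line (weights here are made up):
print(boundary_y(0.0, w1=0.8, w2=-1.2, b=0.3))  # y-intercept of the line
print(boundary_y(1.0, w1=0.8, w2=-1.2, b=0.3))  # a second point on the line
```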
Each time you add a point, the perceptron's raw output value for it is displayed. The predicted class depends on which side of 0.5 this output falls; values close to 0.5 imply a less certain prediction. The number is colored to match the predicted class of the point.
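Assuming a sigmoid-style activation (which is consistent with outputs near 0.5 meaning "uncertain"), the displayed value can be thought of roughly as follows. The weights and the Red/Blue threshold mapping below are illustrative assumptions, not the demo's actual code:

```python
import math

def raw_output(x, y, w1, w2, b):
    """Sigmoid of the weighted sum; values near 0.5 mean the point sits near the boundary."""
    z = w1 * x + w2 * y + b
    return 1.0 / (1.0 + math.exp(-z))

out = raw_output(0.4, 0.6, w1=0.8, w2=-1.2, b=0.3)   # made-up weights
predicted = "Red" if out >= 0.5 else "Blue"          # assumed class/threshold mapping
print(round(out, 3), predicted)
```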
The perceptron is trained in real time: its weights are updated as each point is added. You can also click the Train button to run one more training pass (an extra epoch) over all the points on the screen, which may improve classification accuracy.
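In spirit, each update nudges the weights in proportion to the prediction error, scaled by the learning rate, and a Train click repeats that for every point on screen. A minimal sketch, assuming the sigmoid output above and a 1.0/0.0 label encoding for the two colors (not the demo's actual code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_on_point(weights, bias, point, target, learning_rate):
    """One online update; target is 1.0 for one class and 0.0 for the other (assumed encoding)."""
    x, y = point
    output = sigmoid(weights[0] * x + weights[1] * y + bias)
    error = target - output                    # how far the raw output is from the label
    weights[0] += learning_rate * error * x    # nudge each weight to reduce the error
    weights[1] += learning_rate * error * y
    bias += learning_rate * error
    return weights, bias

def train_pass(weights, bias, points, targets, learning_rate):
    """Roughly what a Train click does: one more pass over every point on screen."""
    for point, target in zip(points, targets):
        weights, bias = train_on_point(weights, bias, point, target, learning_rate)
    return weights, bias

# Tiny usage example with made-up points (1.0 = Red, 0.0 = Blue, assumed):
weights, bias = train_pass([0.0, 0.0], 0.0, [(0.2, 0.9), (0.7, 0.1)], [1.0, 0.0], learning_rate=0.5)
print(weights, bias)
```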
Alternatively, you can click Retrain. This clears the perceptron's learned weights and trains it again from scratch on the points currently on the screen. Keep in mind that more training epochs (training several times) usually improve results, so a fresh retrain may need a few extra Train clicks to catch up.
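Continuing the sketch above (reusing the hypothetical train_pass helper), a retrain roughly amounts to forgetting the weights and training again from scratch, optionally for several epochs:

```python
import random

def retrain(points, targets, learning_rate, epochs=1):
    """Roughly what a Retrain click does: fresh (assumed random) weights, then train from scratch."""
    weights, bias = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0
    for _ in range(epochs):                     # more epochs usually give a better boundary
        weights, bias = train_pass(weights, bias, points, targets, learning_rate)
    return weights, bias
```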
Finally, you can click Classify, which draws a circle around each point indicating the class the perceptron currently predicts for it. This visualization can give insight into why the line is positioned where it is.
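Conceptually, a Classify click just asks the perceptron for its current prediction on every point. A self-contained sketch (the weights, points, and Red/Blue mapping are made up):

```python
import math

def classify_all(points, weights, bias):
    """Current prediction for every point on screen."""
    predictions = []
    for x, y in points:
        output = 1.0 / (1.0 + math.exp(-(weights[0] * x + weights[1] * y + bias)))
        predictions.append("Red" if output >= 0.5 else "Blue")  # assumed class/threshold mapping
    return predictions

print(classify_all([(0.2, 0.9), (0.7, 0.1)], weights=[0.8, -1.2], bias=0.3))
```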
The Learning Rate box allows you to set a learning rate value between 0 and 1 (other values will be ignored). The learning rate controls how much the weights change in each training iteration, so a higher learning rate can speed up training. This value does not matter much for a single perceptron, but in more complex neural networks, too high a learning rate can make the algorithm oscillate and diverge.
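As a concrete (made-up) example of the learning-rate term in the update rule sketched earlier, the same prediction error moves a weight further when the rate is larger:

```python
error, x = 0.4, 1.5                      # hypothetical prediction error and input coordinate
for learning_rate in (0.1, 0.9):
    delta_w = learning_rate * error * x  # the weight-change term from the update rule
    print(f"learning_rate={learning_rate}: weight changes by {delta_w:.3f}")
# prints 0.060 for a rate of 0.1 and 0.540 for a rate of 0.9
```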