Simple Perceptron
The perceptron is the fundamental building block of neural networks. Watch it learn to classify data points by adjusting its weights and bias.
The Perceptron
A perceptron is the simplest type of artificial neuron. It takes inputs, multiplies them by weights, adds a bias, and produces an output through an activation function.
output = activate(w₁·x + w₂·y + b)
The perceptron learns by adjusting weights and bias to minimize prediction errors.
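For concreteness, here is a minimal sketch of that computation in Python; the weights, bias, and test point below are illustrative values, not ones taken from the demo above.

def step(z):
    # Step activation: output 1 when the weighted sum is non-negative, else 0.
    return 1 if z >= 0 else 0

def predict(x, y, w1, w2, b):
    # Weighted sum of the two inputs plus the bias, passed through the activation.
    z = w1 * x + w2 * y + b
    return step(z)

# Illustrative parameters: classify the point (1.0, 2.0).
print(predict(1.0, 2.0, w1=0.5, w2=-0.25, b=0.1))  # z = 0.5 - 0.5 + 0.1 = 0.1, so output 1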
(Interactive demo panels: Visualization, Parameters, Training Statistics, How the Perceptron Learns)
Understanding the Perceptron
What is a Perceptron?
A perceptron is a simple artificial neuron that takes multiple inputs, applies weights to them, adds a bias, and produces a binary output (0 or 1). It was invented by Frank Rosenblatt in 1957 and is the foundation of modern neural networks.
How Does It Learn?
The perceptron uses supervised learning. For each training example:
- Makes a prediction using the current weights and bias
- Compares the prediction with the actual label
- Calculates the error (actual label minus predicted label)
- Updates the weights and bias in proportion to the error, scaled by the learning rate (sketched in code below)
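A minimal sketch of this training loop in Python, assuming a tiny hand-written dataset of (x, y, label) tuples and an illustrative learning rate of 0.1:

# Toy dataset: points above the line y = x are labeled 1, points below are labeled 0.
data = [(0.0, 1.0, 1), (1.0, 2.0, 1), (1.0, 0.0, 0), (2.0, 1.0, 0)]

w1, w2, b = 0.0, 0.0, 0.0   # start with all parameters at zero
alpha = 0.1                 # learning rate (illustrative value)

for epoch in range(20):
    for x, y, label in data:
        # 1. Predict with the current weights and bias (step activation).
        z = w1 * x + w2 * y + b
        prediction = 1 if z >= 0 else 0
        # 2-3. Compare with the actual label and compute the error.
        error = label - prediction
        # 4. Update weights and bias in proportion to the error.
        w1 += alpha * error * x
        w2 += alpha * error * y
        b  += alpha * error

print(w1, w2, b)

On this small, linearly separable toy set the loop stops making updates after a few epochs, ending on parameters whose boundary lies along y = x.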
Key Components
Weights (w₁, w₂): Control how much each input influences the output
Bias (b): Shifts the decision boundary away from the origin, so the line does not have to pass through (0, 0) (see the sketch after this list)
Activation Function: Converts the weighted sum to a binary output (step function)
Learning Rate: Controls how much weights change with each update
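Because the output flips exactly where w₁·x + w₂·y + b = 0, the weights and bias together define a straight decision line in the input plane. Here is a small sketch of how that line can be recovered from the parameters, assuming w₂ ≠ 0 (the example values are made up):

def decision_boundary(w1, w2, b):
    # Solve w1*x + w2*y + b = 0 for y: the boundary is y = slope * x + intercept.
    slope = -w1 / w2
    intercept = -b / w2
    return slope, intercept

# Changing the bias moves the line without changing its slope.
print(decision_boundary(w1=1.0, w2=-1.0, b=0.0))  # (1.0, 0.0) -> y = x
print(decision_boundary(w1=1.0, w2=-1.0, b=0.5))  # (1.0, 0.5) -> y = x + 0.5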
Mathematical Formulation
Forward Pass
z = w₁ × x + w₂ × y + b
output = activate(z) = {1 if z ≥ 0, 0 otherwise}
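A short worked example with made-up values: take w₁ = 0.5, w₂ = -0.25, b = 0.1 and the point (x, y) = (2, 2):

z = 0.5 × 2 + (-0.25) × 2 + 0.1 = 1.0 - 0.5 + 0.1 = 0.6
output = activate(0.6) = 1   (since 0.6 ≥ 0)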
Weight Update Rule
error = actual_label - predicted_label
w₁ = w₁ + α × error × x
w₂ = w₂ + α × error × y
b = b + α × error
(where α is the learning rate)
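For example, with α = 0.1, suppose the point (x, y) = (2, 1) has actual label 0 but the perceptron predicted 1. Then error = 0 - 1 = -1 and the update is:

w₁ = w₁ + 0.1 × (-1) × 2 = w₁ - 0.2
w₂ = w₂ + 0.1 × (-1) × 1 = w₂ - 0.1
b = b + 0.1 × (-1) = b - 0.1

The boundary shifts away from the misclassified point; a correctly classified point gives error = 0 and leaves all parameters unchanged.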
Limitations of the Perceptron
A single perceptron can only draw a straight-line decision boundary, so it can only separate data that is linearly separable; it famously cannot learn the XOR function. Overcoming this limitation requires combining perceptrons into multi-layer networks.
Historical Significance
1957: Frank Rosenblatt invented the perceptron at the Cornell Aeronautical Laboratory.
1958: The Mark I Perceptron became one of the first machines to recognize simple shapes and patterns, sparking excitement about artificial intelligence.
1969: Minsky and Papert's book "Perceptrons" highlighted the single-layer perceptron's limitations, such as its inability to learn XOR, contributing to the first "AI Winter."
1980s: The development of backpropagation and multi-layer networks overcame these limitations, leading to modern deep learning.
Experiment Ideas
Try different learning rates and observe how quickly the perceptron converges
Generate new data and see how the perceptron adapts to different distributions
Manually adjust weights to understand how they affect the decision boundary
Watch how the bias parameter shifts the decision line up or down