Neural Network From Scratch (NumPy • XOR Classifier)

This project implements a simple feed-forward neural network from scratch using NumPy, without using any machine-learning frameworks. The network learns the XOR logic function, demonstrating forward propagation, loss computation, backpropagation, and gradient-descent weight updates.

It’s designed as an educational reference for understanding how neural networks work internally.


🔹 Features

  • Fully manual neural-network implementation
  • Single hidden-layer architecture
  • Sigmoid activation function
  • Mean Squared Error (MSE) loss
  • Gradient Descent optimization
  • Clean and minimal NumPy code
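
The sigmoid activation listed above is also the only non-linearity that backpropagation has to differentiate. A minimal sketch of the function and its derivative (the function names here are illustrative; main.py may define them differently):

import numpy as np

def sigmoid(z):
    # Squashes any real-valued input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(s):
    # Derivative written in terms of the sigmoid output s = sigmoid(z),
    # which is the form used during backpropagation
    return s * (1.0 - s)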

🧠 Problem: XOR Logic

Input A   Input B   Output
   0         0        0
   0         1        1
   1         0        1
   1         1        0

The XOR problem cannot be solved by a linear model because its classes are not linearly separable, which makes it a classic benchmark for neural networks with at least one hidden layer.
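
For reference, the entire training set is just these four rows, which fit in two small NumPy arrays (variable names here are illustrative, not necessarily the ones used in main.py):

import numpy as np

# All four input pairs, one per row
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])

# XOR target for each row
y = np.array([[0], [1], [1], [0]])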


📦 Requirements

  • Python 3.8+
  • NumPy

Install NumPy if needed:

pip install numpy

▶️ How To Run

python main.py

You should see the loss decreasing during training and final predictions close to:

Input: [0 0] → 0
Input: [0 1] → 1
Input: [1 0] → 1
Input: [1 1] → 0

🏗️ Network Architecture

  • Input layer: 2 neurons
  • Hidden layer: 4 neurons
  • Output layer: 1 neuron
  • Activation: Sigmoid
  • Loss: Mean Squared Error
  • Optimizer: Gradient Descent
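
With a 2-4-1 layout, the whole model is just two weight matrices and two bias vectors. A sketch of the parameter shapes (the random initialization scheme here is an assumption, not necessarily the one used in main.py):

import numpy as np

rng = np.random.default_rng(0)

W1 = rng.standard_normal((2, 4))   # input layer (2) -> hidden layer (4)
b1 = np.zeros((1, 4))              # one bias per hidden neuron
W2 = rng.standard_normal((4, 1))   # hidden layer (4) -> output layer (1)
b2 = np.zeros((1, 1))              # single output bias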

📘 How It Works (Summary)

  1. Forward Pass

    • Inputs → Hidden Layer → Output Layer
    • Sigmoid squashes each output into the range (0, 1)
  2. Loss Calculation

    • Mean Squared Error compares predictions with expected outputs
  3. Backpropagation

    • Gradients are computed using the chain rule
    • Each weight is updated based on its contribution to the error
  4. Gradient Descent Update

    • Weights and biases are adjusted repeatedly to reduce the loss (all four steps are sketched in the code below)
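
Putting the four steps together, a compact training loop looks roughly like the sketch below. The learning rate, epoch count, random seed, and variable names are assumptions for illustration; main.py may choose different values:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 4)), np.zeros((1, 4))
W2, b2 = rng.standard_normal((4, 1)), np.zeros((1, 1))
lr = 0.5  # assumed learning rate

for epoch in range(10000):
    # 1. Forward pass: input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # 2. Loss: mean squared error over the four samples
    loss = np.mean((y_hat - y) ** 2)

    # 3. Backpropagation: chain rule through the output and hidden layers
    d_out = 2 * (y_hat - y) / len(X) * y_hat * (1 - y_hat)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0, keepdims=True)

    # 4. Gradient descent: step every parameter against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if epoch % 2000 == 0:
        print(f"Epoch {epoch:<6} Loss: {loss:.4f}")

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))

With a setup like this the loss typically falls toward zero and the rounded predictions match the XOR table, mirroring the example output below.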

🧪 Example Output

Epoch 0      Loss: 0.25
Epoch 2000   Loss: 0.12
Epoch 4000   Loss: 0.05
Epoch 6000   Loss: 0.02
Epoch 8000   Loss: 0.01

Predictions:
[[0.02]
 [0.97]
 [0.97]
 [0.03]]

🎯 Learning Goals

This project helps you understand:

✔️ What forward propagation really does
✔️ How gradients are computed manually
✔️ How neural networks actually “learn”
✔️ Why activation functions matter


📁 Project Structure

.
├── main.py     # Neural network implementation
└── README.md   # Documentation

🤝 Contributing

Feel free to open issues or submit pull requests to improve or extend the project.


📜 License

This project is open-source. Use it for learning, demos, or further development.
