A Jupyter notebook that walks through an implementation of a single-hidden-layer MLP.

colonialjelly/multilayer-perceptron


Multilayer Perceptron

This repository contains the implementation for a blog post I wrote about Multilayer Perceptrons. It attempts to show what an MLP is doing during learning and the intuition behind its formulation.

To motivate the problem of going beyond linear classifiers, I use a non-linear synthetic dataset: two concentric circles (the data generation code is in the notebook).
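The actual generation code lives in the notebook; a minimal sketch of one way to build such a dataset is below. The function name, radii, and noise level here are illustrative, not taken from the notebook:

```python
import numpy as np

def make_circles(n_per_class=200, inner_radius=1.0, outer_radius=3.0,
                 noise=0.1, seed=0):
    """Generate two noisy concentric circles of 2D points with binary labels."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2 * np.pi, size=2 * n_per_class)
    radii = np.concatenate([
        np.full(n_per_class, inner_radius),   # class 0: inner circle
        np.full(n_per_class, outer_radius),   # class 1: outer circle
    ])
    X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    X += rng.normal(scale=noise, size=X.shape)  # jitter points off the circles
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X, y = make_circles()
```

No linear classifier can separate these two classes, since any line through the plane cuts both circles.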

The notebook also contains visualizations of the learned units (pre-activation), the learned hidden transformation that makes the dataset linearly separable, and finally the learned decision boundary in the original space.
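The notebook implements and trains the network itself; as a rough sketch of the kind of model involved, the following trains a 2-3-1 network with tanh hidden units and a sigmoid output by full-batch gradient descent on binary cross-entropy. The initialization scale, learning rate, and epoch count are illustrative choices, not the notebook's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the notebook's dataset: two noisy concentric circles.
n = 200
theta = rng.uniform(0, 2 * np.pi, 2 * n)
radius = np.concatenate([np.full(n, 1.0), np.full(n, 3.0)])
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
X += rng.normal(scale=0.1, size=X.shape)
y = np.concatenate([np.zeros(n), np.ones(n)])[:, None]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters of a 2 -> 3 -> 1 network.
W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)

lr, losses = 0.5, []
for epoch in range(2000):
    # Forward pass.
    Z1 = X @ W1 + b1            # pre-activations: the three "learned units"
    H = np.tanh(Z1)             # 3D hidden representation
    p = sigmoid(H @ W2 + b2)    # predicted probability of the outer class
    losses.append(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

    # Backward pass: gradients of mean binary cross-entropy.
    dZ2 = (p - y) / len(X)
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1, db1_g = X.T @ dZ1, dZ1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1_g
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

The hidden representation `H` is exactly what the 3D visualization below plots, and `Z1` holds the pre-activation values shown as lines in the learned-units plot.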

Visualizations

Decision Boundary

The final decision boundary of the MLP in the original space.
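The notebook's boundary plot comes from the trained model; one common way to draw such a boundary is to evaluate the network on a dense grid and contour the 0.5 probability level. The sketch below uses randomly initialized (untrained) weights of the same 2-3-1 shape purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters with the notebook's shapes (2 inputs -> 3 hidden -> 1 output).
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def predict(X):
    H = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

# Evaluate the network on a dense grid covering the data.
xs = np.linspace(-4, 4, 200)
xx, yy = np.meshgrid(xs, xs)
grid = np.column_stack([xx.ravel(), yy.ravel()])
probs = predict(grid).reshape(xx.shape)
# The decision boundary is the 0.5 contour of probs,
# e.g. plt.contour(xx, yy, probs, levels=[0.5]) with matplotlib.
```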

Learned Transformation

A 3D visualization of the dataset after applying the hidden layer.
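The 3D coordinates are simply the hidden-layer outputs for each data point: with three hidden units, every 2D input maps to a 3D point. A sketch of computing those coordinates (the inputs and weights here are random placeholders, not the learned ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs and first-layer parameters (2 inputs -> 3 hidden units).
X = rng.normal(size=(400, 2))
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)

# The hidden layer maps each 2D point to a 3D point; these are the
# coordinates shown in the 3D scatter plot.
H = np.tanh(X @ W1 + b1)
# e.g. with matplotlib: ax.scatter(H[:, 0], H[:, 1], H[:, 2]),
# colored by class label, on a 3D axes.
```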

Learned Units

The three lines correspond to the three learned hidden neurons, visualized before the activation function is applied.
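Each hidden unit's pre-activation is an affine function of the input, z_j(x) = w_j · x + b_j, so its zero level set is a line in the input plane; those are the three lines in the plot. A sketch of recovering such a line from first-layer weights (the numbers below are made-up placeholders, not the learned values):

```python
import numpy as np

# Hypothetical first-layer parameters: 3 hidden units over 2D inputs.
W1 = np.array([[ 1.0, -0.5,  0.3],
               [ 0.8,  1.2, -1.0]])   # shape (2, 3): column j is unit j's weights
b1 = np.array([-0.2, 0.4, 0.1])

def unit_line(j, x1):
    """Solve w1j*x1 + w2j*x2 + b_j = 0 for x2, assuming w2j != 0."""
    w1j, w2j = W1[0, j], W1[1, j]
    return -(w1j * x1 + b1[j]) / w2j

x1 = np.linspace(-4, 4, 50)
lines = [unit_line(j, x1) for j in range(W1.shape[1])]
```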
