This repository hosts a Multi-Layer Perceptron (MLP) implemented from scratch in C++. It was developed as a class project for a Master's level "Programming for Engineers" course.
A Multi-Layer Perceptron (MLP) is a type of artificial neural network composed of multiple layers of interconnected nodes, or "neurons". It extends the simple perceptron, which consists of a single layer of neurons, and can approximate non-linear functions, making it versatile for a wide range of tasks. An MLP is typically organized into three kinds of layers (a minimal forward-pass sketch follows the list):
Input Layer: Receives the raw input features and passes them on to the network.
Hidden Layers: Intermediate layers where calculations take place.
Output Layer: Produces the network's final prediction.
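Each neuron computes a weighted sum of its inputs and passes the result through a non-linear activation function. The sketch below illustrates that idea in isolation; the `Layer` struct, its field names, and the example weights are hypothetical and are not taken from this repository, and a sigmoid activation is assumed.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// A hypothetical fully connected layer; weights[j][i] connects input i to neuron j.
struct Layer {
    std::vector<std::vector<double>> weights; // one row of weights per neuron
    std::vector<double> biases;               // one bias per neuron

    // Forward pass: weighted sum of the inputs, squashed by a sigmoid activation.
    std::vector<double> forward(const std::vector<double>& in) const {
        std::vector<double> out(biases.size());
        for (std::size_t j = 0; j < biases.size(); ++j) {
            double z = biases[j];
            for (std::size_t i = 0; i < in.size(); ++i)
                z += weights[j][i] * in[i];
            out[j] = 1.0 / (1.0 + std::exp(-z)); // sigmoid maps z into (0, 1)
        }
        return out;
    }
};

int main() {
    // Two inputs -> a hidden layer of two neurons -> one output neuron.
    Layer hidden{{{0.5, -0.3}, {0.8, 0.2}}, {0.1, -0.1}};
    Layer output{{{1.0, -1.0}}, {0.0}};
    std::vector<double> y = output.forward(hidden.forward({1.0, 0.5}));
    std::printf("prediction: %f\n", y[0]);
    return 0;
}
```

Stacking such layers is what gives an MLP the capacity to represent non-linear functions that a single perceptron cannot.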
Through a process called "backpropagation", the network adjusts its weights based on the error of its predictions: the error is propagated backward from the output layer, and each weight is nudged in the direction that reduces the loss (gradient descent).
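As a concrete, deliberately tiny illustration of this update rule, the sketch below trains a single sigmoid neuron on one example with a squared-error loss. All names and values are hypothetical and not taken from this repository; in a full MLP, the same `delta` error signal would also be propagated backward to update earlier layers.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // One sigmoid neuron trained on a single example (all values hypothetical).
    std::vector<double> x = {0.5, -1.0};     // inputs
    std::vector<double> w = {0.1, 0.4};      // weights
    double b = 0.0, target = 1.0, lr = 0.5;  // bias, label, learning rate

    for (int step = 0; step < 100; ++step) {
        double z = b;
        for (std::size_t i = 0; i < x.size(); ++i) z += w[i] * x[i];
        double y = 1.0 / (1.0 + std::exp(-z)); // forward pass: sigmoid activation

        // Squared-error loss L = (y - target)^2 / 2; the chain rule gives
        // dL/dz = (y - target) * y * (1 - y) -- the "delta" of backpropagation.
        double delta = (y - target) * y * (1.0 - y);

        // Gradient descent: move each weight against its loss gradient.
        for (std::size_t i = 0; i < x.size(); ++i) w[i] -= lr * delta * x[i];
        b -= lr * delta;

        if (step == 0 || step == 99)
            std::printf("step %d: prediction = %f\n", step, y);
    }
    return 0;
}
```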
MLPs are foundational to deep learning, which has driven many recent advances in AI. Several architectures build directly on the concepts behind MLPs:
Deep Neural Networks (DNNs): An MLP with several hidden layers is itself a simple DNN; modern DNNs extend the idea to many more hidden layers, which is what makes them "deep".
Convolutional Neural Networks (CNNs): Used primarily in image processing, CNNs build on the layered structure of MLPs but add specialized convolutional and pooling layers for feature extraction.
Recurrent Neural Networks (RNNs): Applied mainly to sequence data such as time series or natural language, RNNs are influenced by MLPs but add recurrent connections that let the network retain information across time steps.
I embarked on this project to deepen my understanding of neural networks and the optimization techniques that underlie them.