From 8b1216c08b466a01e5345a1308117a7f4b5c2c9e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E2=80=9CDorsaRoh=E2=80=9D?= <“dorsa.rohani@deepgenomics.com”>
Date: Thu, 1 Aug 2024 12:25:34 -0400
Subject: [PATCH] initial commit

---
 MNIST.ipynb                 | 54 +++++++++++++++++++++++++++++++++++++++++++
 Neural Networks/notes.ipynb | 47 ++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)
 create mode 100644 MNIST.ipynb
 create mode 100644 Neural Networks/notes.ipynb

diff --git a/MNIST.ipynb b/MNIST.ipynb
new file mode 100644
index 0000000..f090e72
--- /dev/null
+++ b/MNIST.ipynb
@@ -0,0 +1,54 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# MNIST Digit Classification (from scratch!)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Background: MNIST is a dataset of 28x28 grayscale images of handwritten single digits (0-9).\n",
+    "\n",
+    "Here, you will find the code for digit classification of the MNIST dataset (digits 0-9) **from scratch**.\n",
+    "\n",
+    "My goal is to investigate the differences and similarities between **multilayer perceptrons (a type of artificial neural network) and the human brain (biological neural networks)**."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Key Notes\n",
+    "\n",
+    "- Each successive layer in the network corresponds to increasingly abstract features: an early layer detects small edges, and the next layer combines those edges into overall shapes (e.g. a loop)\n",
+    "  - pixels -> edges -> patterns -> digits\n",
+    "- "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "language_info": {
+   "name": "python"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Neural Networks/notes.ipynb b/Neural Networks/notes.ipynb
new file mode 100644
index 0000000..3d3111d
--- /dev/null
+++ b/Neural Networks/notes.ipynb
@@ -0,0 +1,47 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Neural Networks"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Different types\n",
+    "\n",
+    "- Multilayer perceptron (plain vanilla neural network)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Key Vocabulary\n",
+    "\n",
+    "- **Neuron**: a unit that holds a number\n",
+    "  - The number held is called the neuron's **activation**\n",
+    "\n",
+    "- **Hidden Layers**: layers between the input & output layers\n",
+    "  - The activations from one layer determine the activations of the next layer"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "language_info": {
+   "name": "python"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
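The code cells in both notebooks are empty in this patch. As a rough illustration of the pipeline the notes describe (pixels -> edges -> patterns -> digits, with the activations of one layer determining the activations of the next), here is a minimal NumPy sketch of a multilayer-perceptron forward pass. The layer sizes, the sigmoid nonlinearity, and the random stand-in weights are assumptions for illustration only; they are not taken from the commit.

```python
# Minimal sketch (not part of this commit): each neuron holds an activation, and
# the next layer's activations are a weighted sum of the previous layer's
# activations plus a bias, passed through a nonlinearity.
# Assumed layer sizes 784 -> 16 -> 16 -> 10 and sigmoid activation.

import numpy as np

def sigmoid(z):
    """Squash a weighted sum into the (0, 1) range used as an activation."""
    return 1.0 / (1.0 + np.exp(-z))

layer_sizes = [784, 16, 16, 10]  # pixels -> edges -> patterns -> digits (0-9)

# Random weights and zero biases stand in for trained parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_out, n_in)) * 0.1
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(pixels):
    """Propagate activations layer by layer: a_next = sigmoid(W @ a + b)."""
    a = pixels
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a  # 10 output activations, one per digit

# Stand-in for one flattened 28x28 MNIST image (real data loading is out of scope here).
image = rng.random(784)
output = forward(image)
print("predicted digit:", int(np.argmax(output)))
```

With trained weights and biases in place of the random ones, the index of the largest output activation is the network's predicted digit.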