Code for the paper *Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning*, currently available on arXiv:
```bibtex
@article{bhargava2021gradient,
  title={Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning},
  author={Bhargava, Aman and Rezaei, Mohammad R and Lankarany, Milad},
  journal={arXiv preprint arXiv:2105.14383},
  year={2021}
}
```
- Policy file: `src/golden_pol2cy.jld2` contains the tabular policy matrix generated in `src/04 Multilayer Perceptron.ipynb` (a sketch for loading it follows this list).
- Simulated decision boundary experiments: `src/04 Multilayer Perceptron.ipynb` contains the code for reproducing the neural network results trained on simulated decision boundaries.
- Synaptic Reinforcement Learning Library: `src/SynRLv6.jl` is the final set of library functions invoked to train and validate neural networks using the proposed methodology.
- OCR Experiment Script: `src/OCR_01.jl` performs an OCR classification experiment using the SynRL library with the hyperparameters passed in through the command line; results are cached according to those command-line arguments.
- Experiment Orchestration Script: `src/orch_02.py` is a Python script that repeatedly invokes `OCR_01.jl` with different hyperparameters to run a large set of experiments (an illustrative invocation sketch follows this list).
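To inspect the tabular policy stored in `src/golden_pol2cy.jld2`, something like the following should work with JLD2.jl. This is a minimal sketch, not part of the repo; the key name(s) stored inside the file are not documented here, so the snippet discovers them rather than assuming one.

```julia
using JLD2   # JLD2.jl provides load/save for .jld2 files

# Load every variable stored in the policy file into a Dict{String,Any}.
stored = load("src/golden_pol2cy.jld2")
@show keys(stored)                      # discover the actual key name(s)

# Grab the first stored variable -- replace with the key reported above.
policy = stored[first(keys(stored))]
@show size(policy)                      # the tabular policy matrix
```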
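The actual orchestration lives in `src/orch_02.py` (Python), and the exact command-line interface of `src/OCR_01.jl` is not documented here. The Julia sketch below only illustrates the pattern of sweeping a hyperparameter grid and launching one run per setting; the argument names and values are assumptions, not the script's real interface.

```julia
# Hypothetical hyperparameter grid -- the real names/values live in src/orch_02.py.
learning_params = [0.01, 0.05, 0.1]
hidden_sizes    = [16, 32, 64]

for lp in learning_params, h in hidden_sizes
    # Each invocation is assumed to cache its own results keyed by its arguments,
    # as described above, so repeated runs with the same settings can be skipped.
    run(`julia src/OCR_01.jl $lp $h`)
end
```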
- Single Layer Perceptron: `src/A1.1 TF Optimized SLP.ipynb` contains the code used to train and validate the single-layer perceptron with gradient descent on the notMNIST dataset.
- Multilayer Perceptron: `src/A2 MLP Gradient Descent.ipynb` contains the code used to train and validate the multi-layer perceptron with gradient descent on the notMNIST dataset (a minimal sketch of this baseline appears below).
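The two baseline notebooks above use TensorFlow. Purely as an illustration of the gradient-descent baseline they implement, here is a minimal Flux.jl sketch of a multi-layer perceptron trained with plain gradient descent on placeholder data; the notMNIST loading, layer sizes, and hyperparameters are assumptions and will differ from the notebooks.

```julia
using Flux

# Placeholder data standing in for notMNIST (28x28 grayscale images, 10 classes 'A'-'J').
X = rand(Float32, 28*28, 1_000)
Y = Flux.onehotbatch(rand(1:10, 1_000), 1:10)

# Assumed architecture: a single hidden layer; the notebooks' actual sizes may differ.
model = Chain(Dense(28*28 => 128, relu), Dense(128 => 10))
loss(m, x, y) = Flux.logitcrossentropy(m(x), y)

opt_state = Flux.setup(Descent(0.1), model)        # plain (non-adaptive) gradient descent
data = Flux.DataLoader((X, Y), batchsize=64, shuffle=true)

for epoch in 1:5
    Flux.train!(loss, model, data, opt_state)
end
```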