KCNet: An Insect-Inspired Single-Hidden-Layer Neural Network with Randomized Binary Weights for Prediction and Classification Tasks
This repository is the official implementation of KCNet. All source code was implemented in Python 3.7.
Fruit flies are established model systems for studying olfactory learning, as they readily learn to associate odors with either electric shock or sugar rewards. The mechanisms of the insect brain apparently responsible for odor learning form a relatively shallow neuronal architecture. Olfactory inputs are received by the antennal lobe (AL) of the brain, which produces an encoding of each odor mixture across ~50 sub-units known as glomeruli. Each of these glomeruli then projects its component of this feature vector to several of the ~2000 so-called Kenyon Cells (KCs) in a region of the brain known as the mushroom body (MB). Fly responses to odors are generated by small downstream neuropils that decode the higher-order representation from the MB. Research has shown that there is no recognizable pattern in the glomeruli-KC connections (and thus in the particular higher-order representations); they are akin to fingerprints: even isogenic flies have different projections. Leveraging insights from this architecture, we propose KCNet, a single-hidden-layer neural network with sparse, randomized, binary weights between the input layer and the hidden layer and analytically learned weights between the hidden layer and the output layer. We also propose a dynamic optimization algorithm (DOA) that enables KCNet to improve performance beyond its structural limits by searching for a more efficient set of inputs. For odorant-perception tasks that predict perceptual properties of an odorant, we show that KCNet outperforms existing data-driven approaches such as XGBoost. For image-classification tasks, KCNet achieves reasonable performance on benchmark datasets (MNIST, Fashion-MNIST, and EMNIST) without any data augmentation or convolutional layers, and it runs particularly fast. Thus, neural networks inspired by the insect brain can be economical and still perform well.
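To make the recipe concrete, here is a minimal sketch of this ELM-style construction: a fixed sparse, random, binary input-to-hidden projection (playing the role of the Kenyon cells) and an analytically solved hidden-to-output readout. The sparsity level, ReLU nonlinearity, and ridge regularization below are illustrative assumptions, not the repository's exact implementation.

```python
# Minimal sketch of the KCNet idea: sparse random binary projection +
# analytic readout. Details (7 connections per hidden unit, ReLU,
# ridge regularization) are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def random_binary_projection(n_inputs, n_hidden, n_connections=7):
    """Each hidden unit samples a few random inputs with binary weights."""
    W = np.zeros((n_inputs, n_hidden))
    for j in range(n_hidden):
        idx = rng.choice(n_inputs, size=n_connections, replace=False)
        W[idx, j] = 1.0
    return W

def fit_readout(H, Y, ridge=1e-3):
    """Solve the hidden-to-output weights analytically via ridge regression."""
    A = H.T @ H + ridge * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ Y)

# Toy usage: ~50 glomerulus-like features, 2000 hidden units, 2 classes.
X = rng.random((256, 50))
Y = np.eye(2)[rng.integers(0, 2, size=256)]   # one-hot labels
W_in = random_binary_projection(50, 2000)
H = np.maximum(X @ W_in, 0.0)                 # hidden activations
W_out = fit_readout(H, Y)
pred = (H @ W_out).argmax(axis=1)             # predicted classes
```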
To install requirements:
`pip install -r requirements.txt`
All details of the best hyperparameters for KCNet and its variants are described in the paper.
- To train and test KCNet, run this command:
`python odor_perception/run_KCNet.py --data=sweet --hsize=1300 --data_path=DATA_PATH`
- To train and test KCNet with DOA, run this command:
`python odor_perception/run_KCNet_DOA.py --epoch=100 --data=sweet --data_path=DATA_PATH --hsize=1300 --lr=0.5 --stop_metric=0.87 --show_images=false`
- To train and test KCNet with Ensemble DOA, run this command:
`python odor_perception/run_KCNet_DOA.py --n_submodels=13 --epoch=100 --data=sweet --data_path=DATA_PATH --hsize=100 --lr=0.5 --stop_metric=0.90 --show_images=false`
- To train and test KCNet on MNIST, run this command:
`python image_classification/mnist_KCNet.py --hsize=6500`
- To train and test KCNet with DOA on Fashion-MNIST, run this command:
`python image_classification/fashion-mnist_KCNet_DOA.py --epoch=5 --hsize=7000 --lr=1e-5 --stop_metric=0.90`
- To train and test KCNet with Ensemble DOA on EMNIST-Balanced, run this command (a sketch of how an ensemble's outputs might be combined follows this list):
`python image_classification/emnist-balanced_KCNet_EnsembleDOA.py --n_submodels=10 --epoch=30 --hsize=450 --lr=1e-2 --stop_metric=0.85`
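The Ensemble DOA variants above train several small KCNet submodels and combine them. Below is a hedged sketch of one plausible combination rule, simple score averaging; whether the repository averages scores, majority-votes, or uses another rule is an assumption here.

```python
# Hedged sketch: combine submodel class scores by averaging.
# The combination rule is an assumption, not the repository's code.
import numpy as np

def ensemble_predict(submodel_scores):
    """submodel_scores: list of (n_samples, n_classes) score arrays."""
    mean_scores = np.mean(np.stack(submodel_scores, axis=0), axis=0)
    return mean_scores.argmax(axis=1)

# Toy usage with three submodels, 4 samples, and 2 classes:
scores = [np.random.rand(4, 2) for _ in range(3)]
print(ensemble_predict(scores))
```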
Our models achieve the following performance:
The weighted F1 score was used as the evaluation metric; a reference computation of this metric follows the table. Entries are reported as MEAN (# of hidden units) for "KCNet" and "KCNet w/ DOA", and as MEAN (# of submodels X # of hidden units per submodel) for "KCNet w/ Ensemble DOA".
Model name | Sweet/Non-sweet | Musky/Non-musky |
---|---|---|
Adaboost | 0.7444 | 0.6436 |
Gradient Boosting Machine | 0.7538 | 0.7186 |
XGBoost | 0.7361 | 0.6627 |
Random Forest | 0.7427 | 0.6826 |
K-Nearest Neighbors | 0.7839 | 0.6538 |
Support Vector Machine | 0.7980 | 0.6629 |
KCNet | 0.8134 (1,300) | 0.7434 (700) |
KCNet w/ DOA | 0.8156 (1,300) | 0.7004 (700) |
KCNet w/ Ensemble DOA | 0.8193 (13 X 100) | 0.7247 (10 X 70) |
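For reference, the weighted F1 score reported in this table is the standard scikit-learn metric and can be reproduced with a generic call like the one below (toy data, not repository code):

```python
# Weighted F1 score as computed by scikit-learn; the labels here are
# toy examples (e.g., 1 = sweet, 0 = non-sweet), not the paper's data.
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(f1_score(y_true, y_pred, average="weighted"))
```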
Accuracy was used as the evaluation metric. Entries are reported as MEAN (# of hidden units) for "ELM", "KCNet", and "KCNet w/ DOA"; as MEAN (# of submodels X # of hidden units per submodel) for "KCNet w/ Ensemble DOA"; and as MEAN (# of learnable parameters) for "FSHN". A sanity check of the parameter-count notation follows the table.
Model name | MNIST | Fashion-MNIST | EMNIST-Balanced |
---|---|---|---|
Extreme Learning Machine (ELM) | 0.9658 (6,500) | 0.8813 (7,000) | 0.7747 (4,500) |
Fully-Trained Single-Hidden-Layer Neural Network (FSHN) | 0.975 (65,200) | 0.8671 (70,765) | 0.8179 (211,375) |
KCNet | 0.9735 (6,500) | 0.8849 (7,000) | 0.7762 (4,500) |
KCNet w/ DOA | 0.9731 (6,500) | 0.8845 (7,000) | 0.7761 (4,500) |
KCNet w/ Ensemble DOA | 0.9776 (10 X 650) | 0.886 (10 X 700) | 0.8058 (10 X 450) |
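As a sanity check on the "# of learnable parameters" notation for FSHN, the parameter count of a fully trained single-hidden-layer network can be recomputed directly, assuming biases in both layers (whether the paper counts biases this way is an assumption here):

```python
# Hedged sanity check for the FSHN parameter counts above, assuming
# a single hidden layer with biases in both layers.
def fshn_params(n_in, n_hidden, n_out):
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

# MNIST: 784 inputs, 10 classes. A hidden size of 82 yields
# 784*82 + 82 + 82*10 + 10 = 65,200, matching the table entry.
print(fshn_params(784, 82, 10))   # 65200
```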
All content in this repository is licensed under the MIT license.
You can cite our work as follows:
@article{hong2021kcnet,
title={{KCNet}: An insect-inspired single-hidden-layer neural network with randomized binary weights for prediction and classification tasks},
author={Hong, Jinyung and Pavlic, Theodore P.},
journal={arXiv preprint arXiv:2108.07554},
year={2021}
}