Documentation: https://analogvnn.readthedocs.io/
    # Current stable release for CPU and GPU
    pip install analogvnn

    # For additional optional features
    pip install analogvnn[full]
- Sample code with AnalogVNN: `sample_code.py`
- Sample code without AnalogVNN: `sample_code_non_analog.py`
- Sample code with AnalogVNN and logs: `sample_code_with_logs.py`
- Jupyter Notebook: `AnalogVNN_Demo.ipynb`
AnalogVNN is a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to 9 layers and ~1.7 million parameters, while gaining insights into how normalization, activation functions, reduced precision, and noise influence accuracy in analog photonic neural networks. Because AnalogVNN follows the same layer-structure design as PyTorch, it lets users convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
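Because the analog stages (normalization, reduced precision, noise) are themselves layers, the conversion amounts to interleaving them with the digital layers. The sketch below is modeled on `sample_code.py`; the import paths, the `add_sequence` call, and the constructor arguments (`precision`, `leakage`) follow that file, but treat them as assumptions and verify them against the documentation for your installed version:

```python
import torch.nn as nn

# Import paths below follow the AnalogVNN sample code (sample_code.py);
# check them against the release you have installed.
from analogvnn.nn.Linear import Linear
from analogvnn.nn.activation.Gaussian import GeLU
from analogvnn.nn.module.FullSequential import FullSequential
from analogvnn.nn.noise.GaussianNoise import GaussianNoise
from analogvnn.nn.normalize.Clamp import Clamp
from analogvnn.nn.precision.ReducePrecision import ReducePrecision


class AnalogLinearModel(FullSequential):
    """A small MNIST-sized network in which every Linear layer is wrapped
    with normalization, reduced-precision, and noise stages."""

    def __init__(self, precision=2 ** 4, leakage=0.5):
        super().__init__()
        all_layers = [nn.Flatten(start_dim=1)]
        for in_f, out_f in [(28 * 28, 256), (256, 128), (128, 10)]:
            all_layers += [
                Clamp(),                               # signal normalization
                ReducePrecision(precision=precision),  # limited-precision conversion
                GaussianNoise(leakage=leakage,         # optoelectronic noise
                              precision=precision),
                Linear(in_features=in_f, out_features=out_f),
                GeLU(),
            ]
        # add_sequence() registers the layers so AnalogVNN can build both the
        # forward pass and its customizable backward pass through them.
        self.add_sequence(*all_layers)


model = AnalogLinearModel(precision=2 ** 4, leakage=0.5)
```

The digital version of the same network would keep only the `Flatten`, `Linear`, and activation layers; everything else is the analog wrapping, which is why the conversion takes only a few added lines.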
AnalogVNN Paper: https://doi.org/10.1063/5.0134156
If you use AnalogVNN in your work, we would appreciate a citation of the following paper in your publications:
@article{shah2023analogvnn,
  title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={APL Machine Learning},
  volume={1},
  number={2},
  year={2023},
  publisher={AIP Publishing},
  doi={10.1063/5.0134156}
}
Or in textual form:
Vivswan Shah and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling
and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156