This repository provides a comparison between:
- HSPMN (Hierarchical Shallow Predictive Matter Networks)
- Transformer (standard multi-head self-attention)
For the main HSPMN repository, see: https://github.com/NetBr3ak/HSPMN
- `compare.py`: trains both models on identical data and reports MSE, FLOPs, parameters, and latency.
- `verify.py`: verification suite that checks fairness, FLOPs counting, parameter counts, gradient flow, and reproducibility.
If using a POSIX shell (bash/zsh):
source venv/bin/activate
pip install -r requirements.txt
python compare.py
python verify.py

If using Windows PowerShell:
./venv/Scripts/Activate.ps1
pip install -r requirements.txt
python .\compare.py
python .\verify.py

Both scripts emit summary plots (loss_curves.png, flops_comparison.png, mse_comparison.png, latency_comparison.png) and the latest metrics JSON under artifacts/.
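To inspect the results programmatically, a minimal sketch for loading the most recently written metrics JSON from artifacts/ (the file naming and JSON schema are assumptions here, not the repository's documented format; adjust to what compare.py actually writes):

```python
import glob
import json
import os

def load_latest_metrics(artifacts_dir="artifacts"):
    """Return the contents of the most recently modified JSON file.

    NOTE: the artifacts/ layout and key names are assumptions; check the
    actual files written by compare.py.
    """
    candidates = glob.glob(os.path.join(artifacts_dir, "*.json"))
    if not candidates:
        raise FileNotFoundError(f"no JSON files under {artifacts_dir}")
    latest = max(candidates, key=os.path.getmtime)
    with open(latest) as f:
        return json.load(f)
```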
- Identical data, optimizer (Adam, lr=0.001), epochs (100), batch size (128), loss (MSE), seed (42)
- Metrics: MSE (prediction error), FLOPs (compute cost), Parameters (model size), Latency (inference time)
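Of these metrics, latency is the easiest to get wrong. A generic sketch of how per-inference wall-clock latency can be timed with warmup and a median over repeated runs (this is an illustration of the technique, not necessarily the exact methodology used by compare.py):

```python
import statistics
import time

def measure_latency_ms(fn, warmup=10, iters=100):
    """Median wall-clock latency of fn() in milliseconds.

    Warmup iterations let caches, JITs, and allocators settle; the median
    is more robust to scheduler noise than the mean.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)
```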
Multi-scale time series forecasting (sum of three sinusoids + noise). Predict at horizons t+1, t+5, t+20 using input length 64.
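The benchmark signal described above can be sketched as follows. The specific frequencies, amplitudes, and noise scale below are illustrative assumptions; the actual generator lives in compare.py:

```python
import numpy as np

def make_dataset(n_samples=5000, input_len=64, horizons=(1, 5, 20), seed=42):
    """Sum of three sinusoids at different time scales plus Gaussian noise.

    Frequencies, amplitudes, and noise level are illustrative assumptions.
    Returns (X, Y): X has shape (n_samples, input_len); each row of Y holds
    the series values at t+1, t+5, t+20 relative to the last input step.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples + input_len + max(horizons))
    series = (
        np.sin(0.02 * t)          # slow component
        + 0.5 * np.sin(0.1 * t)   # medium component
        + 0.25 * np.sin(0.5 * t)  # fast component
        + 0.05 * rng.standard_normal(t.shape)
    )
    X = np.stack([series[i : i + input_len] for i in range(n_samples)])
    Y = np.stack([[series[i + input_len - 1 + h] for h in horizons]
                  for i in range(n_samples)])
    return X, Y
```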
- HSPMN: MSE=0.00322, Params=0.41M, FLOPs=0.41M, Latency≈0.58ms
- Transformer: MSE=0.00423, Params=0.62M, FLOPs=40.92M, Latency≈0.23ms
Interpretation: on this task HSPMN achieves lower MSE with roughly 100× fewer FLOPs (0.41M vs 40.92M), but higher wall-clock latency due to limited parallelism in the current implementation. Results are reproducible and verified by verify.py.
- HSPMN implements: hierarchical predictive modules, parallel shallow processing, non-reciprocal routing, and oscillatory synchronization (additive modulation)
- Dynamic connectivity weights are learned via backprop in this implementation; explicit active-matter update rules are planned as future work
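As a rough illustration of what "additive modulation" means in the oscillatory-synchronization bullet above, a toy sketch (this is not the repository's actual code; the function name and all parameter values are invented for illustration):

```python
import numpy as np

def additive_oscillatory_modulation(h, step, freq=0.1, amp=0.1, phase=0.0):
    """Add a shared sinusoidal offset to a module's hidden activations.

    Toy sketch: every unit receives the same oscillatory term, which can
    loosely synchronize module activity over time steps. All parameter
    values here are illustrative assumptions.
    """
    return h + amp * np.sin(2 * np.pi * freq * step + phase)
```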
MIT License.



