Darkon: Performance hacking for your deep learning models
Darkon is an open-source toolkit for improving and debugging deep learning models. Deep neural networks are often treated as black boxes: feed a learning algorithm a large enough dataset and expect it to return a well-performing model. In practice, however, trained models often fail in real-world use, and such failures are hard to fix precisely because of that black-box nature. We are developing Darkon to reduce the effort needed to improve the performance of deep learning models.
This first release provides influence score calculation [1] that is easily applicable to existing Tensorflow models (other frameworks will be supported later). Influence scores can be used to filter out bad training samples that hurt test performance, to prioritize potentially mislabeled examples for correction, and to debug distribution mismatch between training and test samples.
Darkon will gradually provide performance-hacking methods, easily applicable to existing projects, based on the following technologies:
- Dataset inspection/filtering/management
- Continual learning
- Meta/transfer learning
- Interpretable ML
- Hyperparameter optimization
- Network architecture search
More features will be released soon. Feedback and feature requests are always welcome and help us manage priorities. Please keep an eye on Darkon.
Darkon requires:
- Tensorflow>=1.3.0

Install it from PyPI:
pip install darkon
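The quick-start snippet below refers to a few symbols from your own training code: input and label placeholders, a training loss op, a test loss op, and a data feeder (YourDataFeeder) implementing Darkon's feeder interface. As a rough sketch only, with a toy softmax classifier standing in for your model and illustrative shapes, those symbols might be defined like this:

import tensorflow as tf

# Input and label placeholders of your existing model (shapes are illustrative).
x_placeholder = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_placeholder = tf.placeholder(tf.int64, shape=[None], name='y')

# Toy model: a single dense layer producing class logits.
logits = tf.layers.dense(x_placeholder, units=10)

# Cross-entropy loss; identical for train and test here, although in a real
# project the training loss may also include regularization terms.
loss_op_test = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_placeholder,
                                                    logits=logits))
loss_op_train = loss_op_test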
import darkon

# Create an influence inspector for your trained graph; workspace_path is a
# directory where Darkon stores intermediate results.
inspector = darkon.Influence(workspace_path,
                             YourDataFeeder(),
                             loss_op_train,
                             loss_op_test,
                             x_placeholder,
                             y_placeholder)

# Compute influence scores of mini-batched training samples on the selected
# test samples; approx_params holds the hyperparameters of the
# inverse-Hessian-vector-product approximation.
scores = inspector.upweighting_influence_batch(sess,
                                               test_indices,
                                               test_batch_size,
                                               approx_params,
                                               train_batch_size,
                                               train_iterations)
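The returned scores hold one influence value per training sample drawn by the feeder during the run. As a minimal sketch of how they can support the filtering and mislabel-hunting use cases described above (the exact sign convention and the mapping back to dataset indices depend on your feeder and the Darkon release you use):

import numpy as np

# One influence value per sampled training example, in feeding order.
scores = np.asarray(scores)
ranking = np.argsort(scores)

# Samples at the two ends of the ranking affect the chosen test samples the
# most; inspect them first for label noise or distribution mismatch.
num_to_inspect = 20  # arbitrary review budget
print('most negative influence:', ranking[:num_to_inspect])
print('most positive influence:', ranking[-num_to_inspect:])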
Communication
- Issues: report bugs and request new features
- Pull requests
- Discuss: Gitter
- Email: darkon@neosapience.com
Darkon is released under the Apache License 2.0.
Reference
[1] Pang Wei Koh and Percy Liang, "Understanding Black-box Predictions via Influence Functions," ICML 2017.