Autotuning hyperparameters for Keras. This repository implements autotuning of deep learning hyperparameters for Keras models using various optimization libraries. The best-known tuning methods are scikit-learn's GridSearchCV and RandomizedSearchCV, but there are other approaches that are considerably more sophisticated. This repository implements hyperparameter tuning with several of these methods.
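As a baseline for comparison, the two scikit-learn methods mentioned above can be sketched as follows. This is a minimal, self-contained example using a small built-in dataset and a decision tree rather than the Keras MNIST model from this repository; the estimator and parameter grid are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Small bundled dataset standing in for MNIST in this sketch.
X, y = load_digits(return_X_y=True)

# Illustrative grid; the repository tunes a different set of parameters.
param_grid = {"max_depth": [4, 8, 12], "min_samples_leaf": [1, 5]}

# GridSearchCV exhaustively tries every combination with cross-validation;
# RandomizedSearchCV (same module) instead samples a fixed number of combinations.
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

RandomizedSearchCV has an almost identical interface: swap the class, pass `param_distributions` instead of `param_grid`, and set `n_iter` to cap the number of sampled configurations.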
The network to be tuned is an MNIST classifier with one input layer, two hidden layers, and one output layer. The tuners explore and exploit the best values for the following parameters:
- number of neurons in hidden layer 1
- number of neurons in hidden layer 2
- dropout rate of hidden layer 1
- dropout rate of hidden layer 2
- whether to apply batch normalization after hidden layer 1
- whether to apply batch normalization after hidden layer 2
- batch size
- number of epochs
- validation split ratio
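The list above can be expressed as a search space that a tuner samples from. The sketch below shows one random-search-style draw; the parameter names and candidate values are illustrative assumptions, not the repository's actual configuration.

```python
import random

# Hypothetical search space mirroring the parameters listed above.
# Names and value ranges are illustrative, not the repo's actual config.
SEARCH_SPACE = {
    "units_hidden1": [64, 128, 256, 512],
    "units_hidden2": [32, 64, 128, 256],
    "dropout_hidden1": [0.0, 0.2, 0.4],
    "dropout_hidden2": [0.0, 0.2, 0.4],
    "batchnorm_hidden1": [True, False],
    "batchnorm_hidden2": [True, False],
    "batch_size": [32, 64, 128],
    "epochs": [5, 10, 20],
    "validation_split": [0.1, 0.2, 0.3],
}

def sample_config(rng=random):
    """Draw one random configuration (random-search style)."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

config = sample_config()
print(config)
```

A grid search would instead enumerate the Cartesian product of all candidate values, while the more sophisticated tuners choose the next configuration based on the scores of previous trials.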
These are the results: