This is the accompanying repo for the Bdx hyperparameter optimization meetup. It also contains the notebook and dataset for this Qucit blog post and for the hyperparameter optimization webinar.
To install the project dependencies, run:
pip install -r requirements.txt
If you are familiar with Conda, I would suggest creating a virtual environment and installing the dependencies as follows:
conda create --name hyperparameters-optimization --file requirements.txt
Then activate the environment with the following command:
source activate hyperparameters-optimization
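Once the environment is active, a quick sanity check is to import the main scientific Python libraries. This is only a sketch and assumes that pandas and scikit-learn are among the pinned requirements, which this README does not spell out:

```python
# Quick sanity check, to run inside the activated environment.
# Assumes pandas and scikit-learn are part of the requirements.
import pandas as pd
import sklearn

print("pandas", pd.__version__)
print("scikit-learn", sklearn.__version__)
```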
In this talk, we analyze an IMDB dataset that you can find here. Notice that the dataset is also available in the data folder (as "movies_metadata.csv").
To get the processed dataset, run the following code (don't forget to activate your virtual env):
python scripts/imdb_data_processing.py
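If you just want to peek at the raw data before running the processing script, here is a minimal sketch with pandas, assuming the file lives at data/movies_metadata.csv; the actual preprocessing is defined in scripts/imdb_data_processing.py:

```python
# Minimal sketch: load the raw IMDB data shipped in the data folder.
# The processing script (scripts/imdb_data_processing.py) remains the
# reference for how the dataset is actually prepared.
import pandas as pd

raw_movies = pd.read_csv("data/movies_metadata.csv")
print(raw_movies.shape)
print(raw_movies.head())
```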
In the blog post, we analyze the Airlines Delay dataset that you can find here. Notice that the dataset is also available in the data folder (as "DelayedFlights.csv.zip").
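The zipped flights data can be read directly with pandas, without unzipping first. This is a sketch that assumes the archive sits at data/DelayedFlights.csv.zip and contains a single CSV file:

```python
# Minimal sketch: pandas can read a zipped CSV directly,
# provided the archive contains exactly one CSV file.
import pandas as pd

flights = pd.read_csv("data/DelayedFlights.csv.zip", compression="zip")
print(flights.shape)
```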
The live demo results are stored here.
The talk slides are available here.
The webinar slides are available here.
You can find the different notebooks used during the talk and the live demo in the notebooks folder. It also contains the accompanying notebook for the blog post.
The /notebooks/webinar folder contains the webinar notebook.
- You can find a complete analysis from the dataset owner here.
The MIT License (MIT)