Running the same experiments repeatedly is boring, and Kubernetes is hard to install.
For anyone who needs an easy-to-use multi-experiment environment, here is experiment-scheduler.
Experiment-Scheduler is an open-source tool for automating repeated experiments.
In environments where k8s is not supported and all you can use are SSH servers with Python, repeatedly running the same experiment with different parameters is annoying and tedious.
With minimal settings and minimal effort, we provide a distributed multi-experiment environment without affecting your existing server setup.
Our goal is to let you concentrate only on your experiments by providing an experiment tool that is easy and fast to set up.
```shell
pip3 install experiment-scheduler
```
```yaml
# sample.yaml
name: sample
tasks:
  - cmd: torch train --lr 0.01
    condition:
      gpu: 1
    name: hpo_1
  - cmd: torch train --lr 0.02
    condition:
      gpu: 1
    name: hpo_2
```
```shell
exs init_master
exs init_task_manager
exs run -f sample.yaml
```
Currently we support only a few reserved words. You can refer to all of them in the example below.

- name : the experiment name
- tasks : list of tasks
  - cmd : shell command you want to run
  - name : task name
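The reserved words above describe a simple schema. As a minimal sketch, the following hypothetical validator (not part of experiment-scheduler) checks a parsed experiment description for the required keys; the dict mirrors sample.yaml from the quick-start section.

```python
# Hypothetical helper: validate a parsed experiment description against
# the reserved words listed above. Not part of experiment-scheduler.

def check_experiment(spec: dict) -> list:
    """Return a list of problems found in an experiment description."""
    problems = []
    if "name" not in spec:
        problems.append("missing experiment name")
    tasks = spec.get("tasks", [])
    if not tasks:
        problems.append("no tasks defined")
    for i, task in enumerate(tasks):
        if "cmd" not in task:
            problems.append(f"task {i}: missing cmd")
        if "name" not in task:
            problems.append(f"task {i}: missing task name")
    return problems

# This dict mirrors sample.yaml from the quick-start section.
sample = {
    "name": "sample",
    "tasks": [
        {"cmd": "torch train --lr 0.01", "condition": {"gpu": 1}, "name": "hpo_1"},
        {"cmd": "torch train --lr 0.02", "condition": {"gpu": 1}, "name": "hpo_2"},
    ],
}
print(check_experiment(sample))  # an empty list means the spec looks well-formed
```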
- exs execute -f(--file) : Request experiments to run. You should execute it with the -f(--file) argument, which is the yaml file describing the experiments.
- exs delete -t(--task) : Delete a single task. It needs the -t(--task) argument.
- exs list : List all experiments. To list a specific experiment, use the -e(--experiment) argument with an experiment id. Id values are truncated by default; for non-truncated values, use the -v(--verbose) argument.
- exs status : Get the status of tasks. It needs the -t(--task) argument with a task id.
- exs init_master : Run the master server. When executing the command, master server logs are printed continuously. To run it as a daemon, use the -d(--daemon) argument.
- exs init_task_manager : Run a task manager server. If there is more than one server, you need to execute it on each of them. As with the master, task manager server logs are printed by default. To run it as a daemon, use the -d(--daemon) argument.
Each server needs an address to communicate with the other servers. Although default settings exist, you can modify them. Currently, two elements are available:
- master_address : "IP:port"
- task_manager_address : ["IP:port", "IP:port", ...]
Experiment-scheduler uses ConfigParser, so you should write [default] at the head of the file. task_manager_address should be wrapped in square brackets even if you use a single node.
Below is the default setting.

```ini
[default]
master_address = "localhost:50052"
task_manager_address = ["localhost:50051"]
```
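As a minimal sketch of how such a file can be read with Python's ConfigParser: the quote-stripping and literal_eval steps below are assumptions made to match the quoted and bracketed values shown above, not experiment-scheduler's actual parsing code.

```python
# Sketch: reading the [default] section with ConfigParser.
# Quote stripping and ast.literal_eval are assumptions to match the
# value style shown in the default setting above.
import ast
import configparser

CONFIG_TEXT = """
[default]
master_address = "localhost:50052"
task_manager_address = ["localhost:50051"]
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

# Values come back as raw strings, so the quotes and brackets must be handled.
master_address = parser["default"]["master_address"].strip('"')
task_manager_address = ast.literal_eval(parser["default"]["task_manager_address"])

print(master_address)        # localhost:50052
print(task_manager_address)  # ['localhost:50051']
```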
What we are going to work on from v0.3 over the next few months:
- RUD on experiment (Start: Oct 3 2022, End: Dec 31 2022)
- Register on PyPI (Start: Oct 3 2022, End: Dec 31 2022)
- Comment all code for future docs (Start: Oct 3 2022, End: Dec 31 2022)
- Detailed README.md (Feb 6 2023)
- Detailed --help command (Start: Feb 6 2023)
- Set local DB for master (Start: Feb 6 2023)
- Refined GPU selection algorithm (Start: Feb 6 2023)
- Additional yaml file syntax (Start: Feb 6 2023)
- Support multiple gRPC versions (Start: Feb 6 2023)
- Single execution for exs execute (Start: Feb 6 2023)
- Specify package versions (Start: Feb 6 2023)
- Support multi node environment (v0.4)
- Improve test code coverage (v0.4)
- Detailed error log (v0.4)
Web Page for Experiment Tracking
- Create a web page running on localhost to check the current status of the master and task managers.
- Home, Logs, Status, and Experiments pages will be served.
Automated testing for further development, and dockerization