Description
🚀 The feature
Motivation
To install TorchServe today, users first have to run python ./ts_scripts/install_dependencies.py with an optional CUDA argument (e.g. --cuda=cu102) and then install the TorchServe binaries on top.
Example:
python ./ts_scripts/install_dependencies.py --cuda=cu102
pip install torchserve torch-model-archiver torch-workflow-archiver
This extra dependency-installation step is confusing, and it is left to the user to remember to run it.
We want to automate this so that a single pip install or conda install command is sufficient to install TorchServe, including its CUDA dependencies.
Current TorchServe Installation
PyPI
python ./ts_scripts/install_dependencies.py --cuda=cu102
pip install torchserve torch-model-archiver torch-workflow-archiver
Conda
python ./ts_scripts/install_dependencies.py --cuda=cu102
conda install -c pytorch torchserve torch-model-archiver torch-workflow-archiver
Proposed TorchServe Installation
PyPI
Specify dependencies such as the CUDA version and dev requirements as extras on the pip install command:
pip install torchserve[cu102] torch-model-archiver[cu102] torch-workflow-archiver[cu102]
pip install torchserve[cu102,dev] torch-model-archiver[cu102,dev] torch-workflow-archiver[cu102,dev]
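A minimal sketch of how these extras could be declared in setup.py; the extra names and version pins below are illustrative assumptions, not the final mapping:

```python
# setup.py (sketch): map each extra to the matching CUDA-enabled torch build.
# The extra names and pinned versions here are assumptions for illustration.
from setuptools import find_packages, setup

setup(
    name="torchserve",
    packages=find_packages(),
    install_requires=["Pillow", "psutil", "requests"],
    extras_require={
        # Each CUDA extra pulls the torch wheel built against that toolkit.
        "cu102": ["torch==1.11.0+cu102"],
        "cu111": ["torch==1.11.0+cu111"],
        # Dev tooling only needed by contributors.
        "dev": ["pytest", "pylint", "pre-commit"],
    },
)
```

One open question with this approach: the CUDA-specific torch wheels live on the PyTorch package index rather than PyPI, so the extras would either require users to pass the corresponding --extra-index-url or require suitable packages to be published to PyPI; that is part of what the build tasks below have to settle.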
Conda
conda install -c pytorch torchserve torch-model-archiver torch-workflow-archiver cudatoolkit=10.2
Test
PyPI
Conda
Alternatives
No response
Additional context
The following tasks need to be completed
- Common Tasks
- Modify/clean up requirements.txt (see the sketch after this list)
- PyPI
- Set up a GitHub Action for the build process on a CUDA-enabled machine; the workflow needs to switch between different CUDA versions and build the binaries for each
- Update Release scripts
- Update Nightly scripts
- Update the README to document the existing method for older binaries and the new method for newer ones
- For existing binaries / older versions of TorchServe, we would need to continue to support the existing installation method
- Conda
- Set up the mutex package as described here
- Set up a GitHub Action for the build process on a CUDA-enabled machine; the workflow needs to switch between different CUDA versions and build the binaries for each
- Update Release scripts
- Update Nightly scripts
- Update the README to document the existing method for older binaries and the new method for newer ones
- For existing binaries / older versions of TorchServe, we would need to continue to support the existing installation method
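For the requirements.txt cleanup called out above, one possible shape is to split the requirements into per-target files that setup.py reads when building install_requires and extras_require; the file names below (requirements/base.txt, requirements/cu102.txt, requirements/dev.txt) are assumed for illustration:

```python
# setup.py helper (sketch): assemble install_requires / extras_require from
# split requirements files. The requirements/ layout is an assumption.
from pathlib import Path


def read_requirements(name):
    """Return the non-empty, non-comment lines of requirements/<name>.txt."""
    text = (Path("requirements") / f"{name}.txt").read_text()
    return [
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]


install_requires = read_requirements("base")
extras_require = {
    "cu102": read_requirements("cu102"),
    "cu111": read_requirements("cu111"),
    "dev": read_requirements("dev"),
}
```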