
warp CTC build fails #35

Closed
sw005320 opened this issue Nov 30, 2020 · 8 comments · Fixed by espnet/espnet#2999
Labels: bug (Something isn't working)

Comments

sw005320 (Author) commented Nov 30, 2020

@ysk24ok, the warp CTC build fails (possibly due to the PEP 440 enforcement introduced in pip 20.3?).
https://github.com/espnet/espnet/runs/1473939967
Could you check this?

sw005320 added the bug label on Nov 30, 2020
Fhrozen (Member) commented Dec 2, 2020

@ysk24ok Could you please check this for the CPU tag as well?
I am having trouble building the CPU-based container because of the same problem:
warpctc-pytorch==0.2.1+torch14.cpu from ... has different version in metadata:0.2.1

ysk24ok (Collaborator) commented Dec 3, 2020

It seems pip 20.3 no longer allows a mismatch between the version recorded in METADATA (0.2.1) and the version embedded in the wheel filename (0.2.1+torchXX.cudaYY).
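
As a side note, here is a minimal sketch (mine, not from the pip source) using the packaging library, which pip relies on internally, to illustrate the check: the version parsed from the wheel filename must equal the Version field in METADATA, and a local version label makes the two unequal.

from packaging.version import Version

filename_version = Version("0.2.1+torch16.cuda102")  # parsed from the wheel filename
metadata_version = Version("0.2.1")                  # parsed from METADATA

print(filename_version.public)               # 0.2.1 (local label stripped)
print(filename_version.local)                # torch16.cuda102
print(filename_version == metadata_version)  # False -> pip >= 20.3 rejects the wheel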

When I apply the following patch to make the two versions match and try to upload the wheel to test.pypi.org,

diff --git a/pytorch_binding/wheel/rename_wheels.py b/pytorch_binding/wheel/rename_wheels.py
index d6072a5..a08a810 100644
:
     return out.decode('utf-8').split()[-2][:-1].replace('.', '')


+def get_torch_version():
+    major_ver, minor_ver = torch.__version__.split('.')[:2]
+    return major_ver + minor_ver
+
+
+def get_local_version_identifier(enable_gpu):
+    local_version_identifier = '+torch{}'.format(get_torch_version())
+    if enable_gpu:
+        local_version_identifier += ".cuda{}".format(get_cuda_version())
+    else:
+        local_version_identifier += ".cpu"
+    return local_version_identifier
+
+
 if torch.cuda.is_available() or "CUDA_HOME" in os.environ:
     enable_gpu = True
     # For CUDA10.1, libcublas-10-2 is installed
@@ -75,9 +89,10 @@ ext_modules = [
     )
 ]

+base_version = "88.77.66"
 setup(
     name="warpctc_pytorch",
-    version="0.2.1",
+    version=base_version + get_local_version_identifier(enable_gpu),
     description="Pytorch Bindings for warp-ctc maintained by ESPnet",
     url="https://github.com/espnet/warp-ctc",
     author=','.join([

the following error occurs.

$ twine upload -r testpypi dist/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl
Uploading distributions to https://test.pypi.org/legacy/
Enter your username: espnet
/opt/pyenv/versions/3.8.5/lib/python3.8/site-packages/twine/auth.py:72: UserWarning: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details.
  warnings.warn(str(exc))
Enter your password:
Uploading warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.00M/3.00M [00:02<00:00, 1.55MB/s]
NOTE: Try --verbose to see response content.
HTTPError: 400 Bad Request from https://test.pypi.org/legacy/
'88.77.66+torch16.cuda102' is an invalid value for Version. Error: Can't use PEP 440 local versions. See https://packaging.python.org/specifications/core-metadata for more information.

The error is raised here in Warehouse, the code that runs PyPI.
We can't upload wheels that use a PEP 440 local version identifier (so far we have managed to avoid this error only because of the version mismatch: the version in METADATA carried no local label).

The possible solutions are:

  1. Use pip<20.3, so that the wheels can still be downloaded from PyPI.
  2. Serve the wheels outside of PyPI. PyTorch serves wheels that use PEP 440 local versions from its own index (download.pytorch.org/whl).
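
For context, a small sketch (again using the packaging library; the version values are just illustrative) of how PEP 440 == matching treats local version labels. This is why, when wheels are served via -f, pinning a plain public version can still pick up a +torchXX.cudaYY build:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

candidate = Version("0.2.1+torch14.cuda100")

print(candidate in SpecifierSet("==0.2.1"))                  # True: local label ignored by a public ==
print(candidate in SpecifierSet("==0.2.1+torch14.cuda100"))  # True: exact local match
print(candidate in SpecifierSet("==0.2.1+torch14.cuda101"))  # False: local labels differ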

@ysk24ok
Copy link
Collaborator

ysk24ok commented Dec 3, 2020

Oh, I found another solution:

  3. Stop using PEP 440 local versions altogether, but this leads to lots of wheel packages with names such as warpctc_pytorchXX_cudaYY (see the sketch below).
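
Roughly, a hypothetical setup() call for that option would encode the variant in the package name instead of the version (values are hard-coded here so the sketch stands alone; in the real setup.py they would come from get_torch_version()/get_cuda_version()):

from setuptools import setup

# Hypothetical sketch: encode the build variant in the package name
# instead of a PEP 440 local version label.
torch_ver, cuda_ver, enable_gpu = "16", "102", True

name = "warpctc_pytorch{}".format(torch_ver)
if enable_gpu:
    name += "_cuda{}".format(cuda_ver)

setup(
    name=name,        # e.g. warpctc_pytorch16_cuda102
    version="0.2.1",  # plain public version, acceptable to PyPI
)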

ysk24ok (Collaborator) commented Dec 15, 2020

I have uploaded a warpctc_pytorch wheel to https://github.com/ysk24ok/wheel_serving_test and found that we can download the wheel as follows even with pip >= 20.3.

[root@545fd607d18e pytorch_binding]# pip3 --version
pip 20.3.1 from /opt/pyenv/versions/3.8.5/lib/python3.8/site-packages/pip (python 3.8)
[root@545fd607d18e pytorch_binding]# pip3 install warpctc_pytorch==88.77.66+torch16.cuda102 -f https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true
Looking in links: https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true
WARNING: Skipping page https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true because the HEAD request got Content-Type: application/octet-stream.The only supported Content-Type is text/html
Collecting warpctc_pytorch==88.77.66+torch16.cuda102
  Using cached https://github.com/ysk24ok/wheel_serving_test/blob/main/warpctc_pytorch-88.77.66+torch16.cuda102-cp38-cp38-manylinux1_x86_64.whl?raw=true (3.1 MB)
Installing collected packages: warpctc-pytorch
Successfully installed warpctc-pytorch-88.77.66+torch16.cuda102

It's a hassle for users to specify the full URL with the -f option, but this way we don't need a server to host the wheels.
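
Just to illustrate, a hypothetical helper (not part of any PR) could assemble that URL from the installed torch build, assuming the GitHub raw-file layout above and the local-version scheme from the patch:

import torch

def wheel_url(base_version="88.77.66",
              repo="https://github.com/ysk24ok/wheel_serving_test",
              py_tag="cp38"):
    # Build the local version label the same way the patched setup.py does.
    major, minor = torch.__version__.split(".")[:2]
    local = "torch{}{}".format(major, minor)
    if torch.cuda.is_available() and torch.version.cuda is not None:
        local += ".cuda{}".format(torch.version.cuda.replace(".", ""))
    else:
        local += ".cpu"
    wheel = "warpctc_pytorch-{}+{}-{}-{}-manylinux1_x86_64.whl".format(
        base_version, local, py_tag, py_tag)
    return "{}/blob/main/{}?raw=true".format(repo, wheel)

# e.g. pass the result to: pip3 install warpctc_pytorch==<version> -f <url>
print(wheel_url())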

If there is no objection, I'll proceed in this way.

sw005320 (Author)

That sounds like the best solution for now.
Please go ahead.

chintu619 commented Jan 16, 2021

Until pip 21 is released, we can use the old resolver by adding --use-deprecated=legacy-resolver when installing warpctc-pytorch. This completes the installation as needed. Reference.

user@ip-xx-xx-xx-xx:~/projects/espnet/tools$ pip install --use-deprecated=legacy-resolver warpctc-pytorch==0.2.1+torch14.cuda100
Collecting warpctc-pytorch==0.2.1+torch14.cuda100
  Using cached warpctc_pytorch-0.2.1%2Btorch14.cuda100-cp38-cp38-manylinux1_x86_64.whl (3.0 MB)
Installing collected packages: warpctc-pytorch
Successfully installed warpctc-pytorch-0.2.1

sw005320 (Author)

That sounds good. Thanks, @chintu619!
@ysk24ok, how about this solution?

ysk24ok (Collaborator) commented Jan 19, 2021

@chintu619 Thanks for sharing.
But I think using the old resolver via --use-deprecated=legacy-resolver is only a short-term solution.
Sooner or later we should move our warpctc_pytorch wheels off PyPI so that the new resolver can fetch them.
