File Limit Request: cutensor-cu13 - 400 MiB #6631

Open
@ChrisPsa

Description

Project URL

https://pypi.org/project/cutensor-cu13/

Does this project already exist?

  • Yes

New Limit

400 MiB

Update issue title

  • I have updated the title.

Which indexes

PyPI

About the project

This is a follow-up to #5643. NVIDIA has released CUDA 13, and this project provides cuTENSOR for CUDA 13 users. As with cutensor-cu12, we're requesting the same size limit for cutensor-cu13. Separate CUDA 12/13 wheels are necessary because CUDA guarantees compatibility only across minor versions within a major version, not across major versions; a usage sketch follows below. We will drop support for cutensor-cu11 moving forward.
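For illustration, users are expected to pick the wheel matching their CUDA major version. This is a minimal sketch based on the existing cutensor-cu12 package and the cutensor-cu13 project linked above, not official installation documentation:

```
# Install the cuTENSOR wheel that matches the installed CUDA major version
pip install cutensor-cu12   # CUDA 12.x environments
pip install cutensor-cu13   # CUDA 13.x environments
```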

Reasons for the request

Similar to cutensor-cu12, adding support for the recently released NVIDIA Blackwell architecture takes us over the 200 MB limit. We currently expect wheels to be around 250 MB, and 400 MB would give us breathing room to support our Python users while we also investigate ways to reduce binary size.

Code of Conduct

  • I agree to follow the PSF Code of Conduct
