Hello,
When I compile flashinfer directly from the repo, running begin_forward of BatchPrefillWithPagedKVCacheWrapper crashes with the error:
*** stack smashing detected ***: terminated
Running the same code with the version installed from pip works without a problem. Any suggestions as to what the problem could be?
Thank you.
@mkrima Please provide more environment info, such as your OS, CUDA, and torch versions.
I believe the pip wheels are compiled via this CI script for CUDA 11.8/12.1 and torch 2.1/2.2:
https://github.com/flashinfer-ai/flashinfer/blob/main/scripts/run-ci-build-wheel.sh
I would recommend not deviating too far from the minimum CUDA 11.8 and torch 2.1 requirements if you are doing a custom compile.
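For reference, a quick way to collect the environment details requested above (these are standard commands, not part of flashinfer; the fallbacks guard against a tool not being on `PATH`):

```shell
# OS and kernel
uname -srm

# CUDA toolkit version used for compilation (if nvcc is installed)
nvcc --version 2>/dev/null || echo "nvcc not on PATH"

# torch version and the CUDA version it was built against (if torch is installed)
python -c "import torch; print(torch.__version__, torch.version.cuda)" 2>/dev/null \
  || echo "torch not importable"
```

A stack-smashing abort in a custom build often points to an ABI or toolchain mismatch, so comparing these versions against the ones the CI wheel script uses is a reasonable first step.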