v0.4.3 Release Tracker #4895
Hi, is it possible to include the following PRs?
Thanks for bringing these up @sasha0552! #4167 is unlikely to be finished in time. We are committed to a biweekly release cadence, so don't worry, many of these will get in soon enough!
re: #4409 — I did not have any issues running an fp16 model on a P40 when I installed from source.
I am going to try to get these in.
These probably will not make it, but I'm tracking them for v0.4.4:
Yeah, +1 on that PR @njhill
Hi @robertgshaw2-neuralmagic — was this without the patch? I couldn't get a source build to run on P100s without the patch from #4409. With the patch, like you, I'm running fp16 models (Mistral 7B, for example) with no issues.
Not only fp16; AQLM works well too (#5058).
P40 requires building with the patch. |
Is there any particular PR that we're waiting for before cutting the release? |
Model support for Phi and Deepseek.
Excuse me, when will vLLM support embedding input?
Hi, is Deepseek v2 supported now? |
ETA May 30 (due to some blockers and a US holiday).
Blockers
Nice to have