Conversation

@Blaizzy (Owner) commented Jan 11, 2025

Closes #176
Closes #173

@Blaizzy merged commit 21fc1b2 into main on Jan 11, 2025 (1 check passed)
@qinxuye commented Jan 13, 2025

Any plans to release a new version? mlx-lm 0.21.0 has pinned mlx>=0.22.0.

Once mlx-vlm releases a new version, we can loosen the pin of mlx<0.22.0 in Xinference.

@Blaizzy (Owner, Author) commented Jan 13, 2025

Yes, there will be a new release this week :)

@Blaizzy (Owner, Author) commented Jan 13, 2025

> Once mlx-vlm releases a new version, we can loosen the pin of mlx<0.22.0 in Xinference

Not sure I follow. Could you elaborate?

@Blaizzy (Owner, Author) commented Jan 13, 2025

I will pin mlx v0.22.0 as well. There are some recent models and changes that require the latest MLX release.

@qinxuye commented Jan 13, 2025

> > Once mlx-vlm releases a new version, we can loosen the pin of mlx<0.22.0 in Xinference
>
> Not sure I follow. Could you elaborate?

Which part would you like me to elaborate on?

@Blaizzy (Owner, Author) commented Jan 13, 2025

I need more context...

For instance, what is Xinference?

@qinxuye commented Jan 13, 2025

> I need more context...
>
> For instance, what is Xinference?

Oh, I misunderstood. https://github.com/xorbitsai/inference is our project, and we have adopted your project as the Mac VLM engine.

@qinxuye commented Jan 13, 2025

https://github.com/xorbitsai/inference/blob/7c6249a3383b2841a9e96243c6a900ce19a7f1d7/setup.cfg#L101

We have pinned the mlx version there, but the better option would be for mlx-vlm to be compatible with mlx>=0.22.0.
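For context, here is a minimal sketch of what such a pin looks like in a setup.cfg. Only the mlx<0.22.0 bound comes from this thread; the section name and the neighboring entries are illustrative assumptions, not copied from Xinference's actual file:

```ini
# Hypothetical excerpt from Xinference's setup.cfg.
# Only the "<0.22.0" upper bound is taken from this thread;
# the extras section layout and other entries are illustrative.
[options.extras_require]
mlx =
    mlx<0.22.0
    mlx-lm
    mlx-vlm
```

Once mlx-vlm works with mlx 0.22, the `<0.22.0` upper bound can simply be dropped.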

@Blaizzy (Owner, Author) commented Jan 13, 2025

> Oh, I misunderstood. https://github.com/xorbitsai/inference is our project, and we have adopted your project as the Mac VLM engine.

No worries!

So awesome, really happy to see this! 🔥

Sure thing, I will update the dependency to use mlx>=0.22.0 in the next release.
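For illustration, the change on the mlx-vlm side would be a one-line bump of the version specifier. The file path and the old lower bound in this sketch are assumptions; only the new >=0.22.0 bound comes from this thread:

```diff
 # hypothetical: mlx-vlm dependency file; exact path and old bound assumed
-mlx>=0.21.0
+mlx>=0.22.0
```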

@qinxuye commented Jan 18, 2025

@Blaizzy when will the next release come? Our CI is broken, and we would really appreciate a new release as soon as possible.

@Blaizzy (Owner, Author) commented Jan 18, 2025

Hey,

It's done, v0.1.11 is out :)

@Blaizzy deleted the pc/bug-fixes branch on February 2, 2025.

Garry-TI pushed a commit to Garry-TI/mlx-vlm that referenced this pull request on Sep 23, 2025:

* fix arange (qwen2-vl)
* fix trainer prepare inputs error

Successfully merging this pull request may close these issues:

* LoRa fine-tuning is not working with Llama 3.2 vision model
* QVQ-72B-Preview-8bit: arange(): incompatible function arguments