On the model page, I cannot run the server. I have tried restarting the app. There are no settings that show how the MLX server is run, nor can I find any logs.
I have tried running the MLX server manually with the same host and port, and it works:
% python -m mlx_lm.server \
    --host localhost \
    --port 21001 \
    --model ./mlx_model
UserWarning: mlx_lm.server is not recommended for production as it only implements basic security checks.
2024-08-07 11:08:16,215 - INFO - Starting httpd at localhost on port 21001...
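For reference, the manually started server also answers a basic chat request. This is just a sketch, assuming the OpenAI-compatible /v1/chat/completions endpoint that mlx_lm.server exposes, with the same host and port as above:

curl http://localhost:21001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 16}'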
Please see the attached screenshot.
Sorry for the unhelpful message. Just to make sure I understand: you converted the model on the command line first and then imported it into TransformerLab, using the Import functionality? Just want to try to reproduce.
Have you run other models successfully? The error seems to be a problem hitting the FastChat server behind the scenes. I'm not sure if that's a general error or something caused specifically by this model.
One possible path forward might be to try downloading mistralai/Mistral-Nemo-Instruct-2407 using the download field at the bottom of the Model Zoo page and converting it to MLX using the Export tab in TransformerLab. Or, actually, it looks like mlx-community also has versions of the model posted with different quantizations.
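If the UI route is awkward, the equivalent command-line conversion should look roughly like the following sketch (the --mlx-path output directory is just an example name, and -q applies mlx_lm's default quantization):

python -m mlx_lm.convert \
    --hf-path mistralai/Mistral-Nemo-Instruct-2407 \
    --mlx-path ./mlx_model \
    -q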
Regardless, I'd still love to understand what's causing the error.
Running Transformer Lab v0.4.0 on a MacBook Pro M1.