Right now we use OpenAI's SDK, so only models that expose an OpenAI-compatible API can be used (you can set the OPENAI_API_BASE environment variable to point to wherever your other model is running; see our README for more information). However, once PR #450, which adds litellm, is merged, you should be able to use any model!
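For reference, a minimal sketch of that setup; the URL is a placeholder for wherever your OpenAI-compatible server is actually listening (the `/v1` suffix matches the OpenAI SDK's default base path, though your server may differ):

```sh
# Point the OpenAI SDK at your own OpenAI-compatible endpoint
# (placeholder address; substitute your server's host and port).
export OPENAI_API_BASE="http://localhost:8000/v1"
mentat
```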
Edit: due to a few issues with litellm, we decided not to integrate it into Mentat after all. However, I've just updated the README with an example of how to use a litellm proxy server with Mentat. I haven't used Mixtral myself, but I assume it should work with litellm fairly easily.
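In case it helps, a rough sketch of the litellm proxy route: the `ollama/mixtral` model string assumes you're serving Mixtral locally through Ollama, and the port is litellm's default, so adjust both to your setup.

```sh
# Install litellm with its proxy extras.
pip install 'litellm[proxy]'

# Start an OpenAI-compatible proxy in front of Mixtral
# (assumes Mixtral is being served locally via Ollama).
litellm --model ollama/mixtral

# In another shell, point Mentat at the proxy.
export OPENAI_API_BASE="http://localhost:4000"
mentat
```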
I tried Mistral with litellm as per the instructions and added OPENAI_API_BASE to my .env file, but when I start Mentat I still get the message "No openai API key detected".
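That message likely comes from the API-key check rather than the base URL: OPENAI_API_BASE only changes the endpoint, and Mentat still expects OPENAI_API_KEY to be set. A hedged .env sketch (the key value is an arbitrary placeholder; a local proxy typically doesn't validate it):

```sh
# .env — placeholder values, adjust to your setup
OPENAI_API_KEY=placeholder-key         # must be present even when using a proxy
OPENAI_API_BASE=http://localhost:4000  # your litellm proxy address
```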
Hi, is it possible to configure Mentat to use Mixtral models?