
Add MiniMax as LLM provider via OpenAI-compatible API #73

Open
octo-patch wants to merge 1 commit into run-llama:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as a new LLM provider using the `minimax:<model_name>` prefix format, consistent with the existing providers (OpenAI, Anthropic, Replicate)
  • Leverage LlamaIndex's `OpenAI` class with a custom `api_base` pointing to MiniMax's OpenAI-compatible endpoint (https://api.minimax.io/v1)
  • Support two models: `MiniMax-M2.7` (default) and `MiniMax-M2.7-highspeed` (faster variant)

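The routing described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: the helper name `resolve_llm_config`, its signature, and the returned dict are assumptions, while the `minimax:` prefix, the `https://api.minimax.io/v1` endpoint, the default model, and the `temperature=1.0` default come from the PR description.

```python
# Hypothetical sketch of the "minimax" branch added to _resolve_llm() in
# core/utils.py; names and return shape are illustrative, not the real code.
MINIMAX_API_BASE = "https://api.minimax.io/v1"

def resolve_llm_config(llm_spec: str, api_key: str) -> dict:
    """Split a "provider:model" spec and build provider-specific LLM kwargs."""
    provider, _, model = llm_spec.partition(":")
    if provider == "minimax":
        # MiniMax is served through LlamaIndex's OpenAI-compatible path,
        # so only the api_base and key differ from the OpenAI branch.
        return {
            "model": model or "MiniMax-M2.7",  # default model per the PR
            "api_base": MINIMAX_API_BASE,
            "api_key": api_key,
            "temperature": 1.0,  # default stated in the Changes table
        }
    raise ValueError(f"Unknown provider: {provider}")
```

In practice the returned kwargs would be passed straight to LlamaIndex's `OpenAI` constructor, which is what lets an OpenAI-compatible endpoint be swapped in without a dedicated MiniMax client.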
Changes

| File | Change |
| --- | --- |
| `core/utils.py` | Add `minimax` case to `_resolve_llm()` with a `temperature=1.0` default |
| `README.md` | Add MiniMax to the supported LLMs list and API key setup instructions |
| `tests/test_minimax_provider.py` | 14 unit tests covering provider routing, parameters, and regressions |
| `tests/test_minimax_integration.py` | 3 integration tests (chat completion, highspeed model, streaming) |

Usage

  1. Add `minimax_key = "<your_key>"` to `.streamlit/secrets.toml`
  2. Set the LLM to `minimax:MiniMax-M2.7` or `minimax:MiniMax-M2.7-highspeed` in the RAG Config page
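The secrets entry from step 1 is a single TOML key; assuming the key name from the PR, the file would look like:

```toml
# .streamlit/secrets.toml
minimax_key = "<your_key>"  # replace with your MiniMax API key
```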

Test plan

  • 14 unit tests pass (pytest tests/test_minimax_provider.py)
  • 3 integration tests pass with real MiniMax API key
  • Existing provider tests unaffected (OpenAI, Anthropic, Replicate regression tests included)

Add MiniMax AI (MiniMax-M2.7, MiniMax-M2.7-highspeed) as a new LLM provider
using LlamaIndex's OpenAI class with a custom api_base pointing to
https://api.minimax.io/v1. Users can select MiniMax via the prefix format
"minimax:<model_name>" in the RAG config UI.

- Add "minimax" case to _resolve_llm() in core/utils.py
- Update README with MiniMax model IDs and API key setup
- Add 14 unit tests and 3 integration tests
