feat(mcp): add configurable small_model support for OpenAI provider #1156
## Summary

This PR adds support for configuring the `small_model` parameter in the MCP server's `config.yaml` file.

## Problem
Previously, the small model was hardcoded to either `gpt-5-nano` or `gpt-4.1-mini`, depending on the main model type (see `factories.py`, lines 122-135). This made it impossible to use local LLMs such as Ollama, which do not provide these OpenAI-specific models. When using Ollama with the Graphiti MCP Server, requests for the hardcoded small model would fail with errors, since those models do not exist on the local endpoint.
## Solution

- Add a `small_model` field to `LLMConfig` in `schema.py`
- Use `small_model` in `factories.py` if set; otherwise auto-detect as before
- Pass `base_url` from the provider config to `CoreLLMConfig` (it was missing, which caused issues with OpenAI-compatible endpoints)
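A minimal sketch of the resulting logic, with names taken from this description rather than the actual diff (`LLMConfig`, the default model names, and the helper functions here are illustrative):

```python
# Sketch only: mirrors the behavior described above, not the exact source.
from pydantic import BaseModel


class LLMConfig(BaseModel):
    model: str = "gpt-4.1"          # illustrative default
    small_model: str | None = None  # new: optional override for the small model
    base_url: str | None = None     # OpenAI-compatible endpoint, e.g. Ollama


def resolve_small_model(config: LLMConfig) -> str:
    """Prefer the configured small_model; otherwise auto-detect as before."""
    if config.small_model:
        return config.small_model
    # Previous hardcoded behavior: pick an OpenAI small model by model family.
    return "gpt-5-nano" if config.model.startswith("gpt-5") else "gpt-4.1-mini"


def build_core_llm_kwargs(config: LLMConfig) -> dict:
    # base_url is now forwarded, so OpenAI-compatible endpoints work.
    return {
        "model": config.model,
        "small_model": resolve_small_model(config),
        "base_url": config.base_url,
    }
```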
## Changes

- `mcp_server/src/config/schema.py`: add a `small_model: str | None` field
- `mcp_server/src/services/factories.py`: use the configured `small_model` when set, and pass `base_url` through
- `mcp_server/config/*.yaml`: update the sample configs
- `mcp_server/README.md`: document the new option

### Example config for Ollama
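A config along these lines should work (key names are illustrative; check the shipped sample configs for the exact schema):

```yaml
llm:
  provider: openai               # Ollama exposes an OpenAI-compatible API
  model: llama3.1:8b
  small_model: llama3.1:8b       # reuse the main model; no gpt-* fallback needed
  base_url: http://localhost:11434/v1
```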
## Test plan

- Tested with `small_model` not set (auto-detection)
- Tested setting `small_model` to the same model as the main model
- Verified that `base_url` is correctly passed to the OpenAI client

Fixes #1155
🤖 Generated with Claude Code