feat: propagate api_base property for local LLM support#239
ahmednabiled wants to merge 3 commits into mesa:main
Conversation
- Add `api_base` parameter through LLMAgent, Memory, and ModuleLLM layers
- Support custom API endpoints for self-hosted LLMs (e.g. remote Ollama)
- Fix hardcoded openai/gpt-4o-mini in example agent memory modules
- Update documentation in module_llm.md, llm_agent.md, and memory.md
- Remove deprecated api_key references from documentation
IlamaranMagesh left a comment:
This seems related to the earlier discussion in #172, sharing it here for context.
@jackiekazil @colinfrisch @IlamaranMagesh I'd appreciate any feedback on these changes: are they useful, or off the mark? I did this because when I first ran the examples, it was confusing trying to run them with local models; this was my starting point.
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main     #239      +/-   ##
==========================================
+ Coverage   90.67%   90.78%   +0.10%
==========================================
  Files          19       19
  Lines        1555     1573      +18
==========================================
+ Hits         1410     1428      +18
  Misses        145      145
==========================================
```

View full report in Codecov by Sentry.
Thanks for the improvement. I've moved the new
Description
This PR propagates `api_base` across the Mesa-LLM stack so users can configure local or self-hosted LLM endpoints (e.g., remote Ollama, vLLM, LM Studio) from application-level APIs.

Previously, `api_base` was supported in `ModuleLLM` but not consistently exposed through higher-level abstractions such as `LLMAgent` and memory modules, which made custom endpoint configuration incomplete in end-to-end usage.

This PR also removes hardcoded model usage in example memory initialization paths, so summarization uses the configured `llm_model`.
- Propagate `api_base` through `LLMAgent`, `Memory`, `STLTMemory`, `EpisodicMemory`, and `LongTermMemory`.
- Pass `api_base` from top-level abstractions into `ModuleLLM`.
- Thread `api_base` from app.py → model.py → agents.py.
- Update documentation for `ModuleLLM`, `LLMAgent`, and `Memory` with `api_base` usage examples.
- Remove deprecated `api_key` references from documentation.

Checklist
- I have read the `CONTRIBUTING` document.