feat: support for codex and claude-code as llm #276
Merged
nicoloboschi merged 6 commits into main, Feb 2, 2026
Conversation
- Add Anthropic models (Sonnet, Opus, Haiku) to MODEL_MATRIX
- Remove the separate test_anthropic_provider.py file
- All Anthropic models are now tested with the standard memory operations
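A minimal sketch of what a MODEL_MATRIX-driven test might look like. The pytest parametrization, function names, and matrix entries below are assumptions based on the commit message (the model IDs are taken from this PR's discussion), not the actual test suite:

```python
import pytest

# Hypothetical excerpt of the matrix; the real MODEL_MATRIX lives
# in the project's test suite and covers more providers.
MODEL_MATRIX = [
    ("anthropic", "claude-sonnet-4-20250514"),
    ("anthropic", "claude-haiku-4-5-20251001"),
]

@pytest.mark.parametrize("provider,model", MODEL_MATRIX)
def test_standard_memory_operations(provider, model):
    # In the real suite this would exercise store/retrieve operations
    # through the configured provider/model pair.
    assert provider == "anthropic"
    assert model.startswith("claude-")
```

Collapsing per-provider test files into one parametrized matrix means every new model ID added to MODEL_MATRIX is automatically exercised by the same operations.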
Each LLM provider now has a sensible default model that is used when HINDSIGHT_API_LLM_MODEL is not explicitly set. This simplifies configuration: users can specify just the provider and an API key.

Changes:
- Add a PROVIDER_DEFAULT_MODELS mapping in config.py
- Update the config logic to use provider defaults for both global and per-operation LLM configs
- Add comprehensive tests for provider default model selection
- Document provider defaults in models.md

Example usage:

    export HINDSIGHT_API_LLM_PROVIDER=anthropic
    export HINDSIGHT_API_LLM_API_KEY=sk-ant-xxx
    # Automatically uses claude-sonnet-4-20250514

Provider defaults:
- openai: gpt-5-mini
- anthropic: claude-sonnet-4-20250514
- gemini: gemini-2.5-flash
- groq: openai/gpt-oss-120b
- ollama: gemma3:12b
- lmstudio: local-model
- vertexai: gemini-2.0-flash-001
- openai-codex: o3-mini
- claude-code: claude-sonnet-4-20250514
- mock: mock-model
Update provider default models:
- openai: gpt-5-mini -> o3-mini
- anthropic: claude-sonnet-4-20250514 -> claude-haiku-4-5-20251001
- openai-codex: o3-mini -> gpt-5.2-codex
- claude-code: claude-sonnet-4-20250514 -> claude-sonnet-4-5-20250929

Updated tests and documentation to reflect the new defaults.
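Under the new defaults, a provider-only configuration picks up the updated model, while an explicitly set model still applies. A sketch of both cases (the variable names come from this PR; the precedence is assumed from its description):

```shell
# Provider-only configuration: falls back to the provider default
# (claude-haiku-4-5-20251001 for anthropic after this change).
export HINDSIGHT_API_LLM_PROVIDER=anthropic
export HINDSIGHT_API_LLM_API_KEY=sk-ant-xxx

# Setting the model explicitly bypasses the provider default.
export HINDSIGHT_API_LLM_MODEL=claude-sonnet-4-5-20250929
```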
Moved detailed setup instructions for OpenAI Codex and Claude Code from configuration.md to models.md, where they fit better with the model-specific documentation.

Changes:
- Move the "OpenAI Codex Setup" section from configuration.md to models.md
- Move the "Claude Code Setup" section from configuration.md to models.md
- Add a cross-reference tip in configuration.md pointing to models.md
- Update the default model in the Claude Code example to claude-sonnet-4-5-20250929
- Keep basic provider examples in configuration.md for quick reference

This keeps configuration.md focused on environment variables, while models.md contains the provider-specific setup details.
slayoffer added a commit to slayoffer/hindsight that referenced this pull request on Feb 3, 2026
Merged 7 upstream commits:
- Fix: load operation validator extension in worker process (vectorize-io#280)
- fix: custom pg schema is not reliable (vectorize-io#278)
- feat(embed): add hindsight-embed profiles (vectorize-io#277)
- feat: support for codex and claude-code as llm (vectorize-io#276)
- feat: print version during startup (vectorize-io#275)
- feat(openclaw): add llmProvider/llmModel plugin config options (vectorize-io#274)
- Propagate request context through async task payloads (vectorize-io#273)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>