PR: Harden Ollama Provider Config Parsing, Add Robust Logging & Diagnostics #46
Summary
This PR fixes brittle JSON parsing for Ollama provider settings, adds structured logging and clearer errors across the provider/registry/endpoint layers, and ships diagnostic scripts to investigate encryption/JSON issues in `settings_instances.value`. The net effect is safer initialization, clearer failures, easier debugging, and zero changes to external API contracts.

Problem / Motivation

`ollama_servers_settings` could fail to parse (e.g., double-encoded JSON, undeciphered ciphertext), causing provider initialization and chat requests to break without actionable logs, even when `ENCRYPTION_MASTER_KEY` was present and valid.

What's Changed
1) Provider: `backend/app/ai_providers/ollama.py`

- Add initialization logs covering server URL resolution and defaults.
- Add request logs for chat calls (non-secret metadata only).
- Centralize endpoint construction (`api_url`) and debug-log the outgoing URL.
- Add granular error handling:
  - `httpx.ConnectError` → clear "server not reachable" message.
  - `httpx.HTTPStatusError` with status 404 → "model not found" message with server details.
- Keep the return schema unchanged (`choices[0].message.content`, `finish_reason`, etc.).

2) Registry: `backend/app/ai_providers/registry.py`

3) API Endpoint Helper: `backend/app/api/v1/endpoints/ai_providers.py`

4) Utilities: New `backend/app/utils/json_parsing.py`

- `safe_encrypted_json_parse(...)`: multi-strategy parser that handles plain, double-encoded, and `ENCRYPTION_MASTER_KEY`-encrypted values.
- `validate_ollama_settings_format(...)`: verifies the expected `servers[]` schema.
- `create_default_ollama_settings()`: returns a minimal safe default (`{"servers": []}`).
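A minimal sketch of the multi-strategy idea behind `safe_encrypted_json_parse` (the function name comes from this PR; the implementation and the `decrypt` hook are illustrative assumptions, not the shipped code):

```python
import json

def safe_encrypted_json_parse(value, decrypt=None):
    """Sketch: try plain JSON, then double-encoded JSON, then
    decryption (e.g., via ENCRYPTION_MASTER_KEY) followed by JSON."""
    # Strategy 1: plain JSON.
    try:
        parsed = json.loads(value)
        # Strategy 2: double-encoded JSON, i.e. a JSON string
        # whose contents are themselves JSON.
        if isinstance(parsed, str):
            parsed = json.loads(parsed)
        return parsed
    except (json.JSONDecodeError, TypeError):
        pass
    # Strategy 3: treat the value as ciphertext and parse the plaintext.
    if decrypt is not None:
        return json.loads(decrypt(value))
    # Fall back to a minimal safe default rather than crashing init.
    return {"servers": []}
```

The key design point is that every strategy failure degrades to the next one, so a mangled settings row yields an empty-but-valid config instead of breaking provider initialization.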
- `backend/scripts/inspect_settings_encryption.py`
- `backend/scripts/run_inspection.py`
- `backend/scripts/test_ollama_fix.py`
- Two example saved reports under `backend/scripts/*.json` from local runs.

User-Visible Behavior
- When an Ollama server is missing or mis-IDed, clients now receive a `404` with readable guidance and a list of available servers (if any), or a `404` with "No Ollama servers are configured" if none exist.
- When the server is down or unreachable: a clear "server not reachable" message.
- When the model is unknown: a `404` with an explicit "model not found on server" message.

No API shapes changed; only messages are clearer.
Configuration & Ops Notes
Ensure `ENCRYPTION_MASTER_KEY` is set in the backend environment if settings encryption is enabled. Example (dev):
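One way to mint a dev-only value is sketched below (an illustrative assumption — this PR does not specify the key format the app actually expects, so check the project's encryption utilities before relying on it):

```python
# Dev only: generate a random string suitable for pasting into the
# ENCRYPTION_MASTER_KEY environment variable. Whether the app requires
# a specific key format (e.g., a Fernet key) is an assumption to verify.
import secrets

key = secrets.token_urlsafe(32)
print(f"ENCRYPTION_MASTER_KEY={key}")
```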
Backward Compatibility
Security Considerations
Performance Impact