fix: updated logger and model caching #release #895
# Optimize Model List Fetching and Improve Provider Management
## Overview

This PR optimizes LLM provider management by lazily loading and caching dynamic model lists. Instead of fetching models from every registered provider on each LLM call, we now fetch models only from the selected provider. We also add a caching mechanism for model lists and improve logging throughout the system.
## Key Changes

### 1. Optimized Model List Fetching

Dynamic models are now fetched lazily and only for the currently selected provider, rather than from all providers on every call, and the results are cached so repeated calls do not hit the provider APIs again.

### 2. Universal Logging System

Logging is improved throughout the system with a single logger that behaves consistently across platforms.
## Technical Details

### Model Fetching Optimization

Key implementation changes (simplified):
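Below is a minimal sketch of the shape of the change, assuming a provider registry keyed by name; `ProviderInfo`, `getDynamicModels`, and `getModelList` are illustrative names rather than the exact identifiers in this diff:

```typescript
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
}

interface ProviderInfo {
  name: string;
  staticModels: ModelInfo[];
  // Optional hook for providers whose model list is fetched at runtime.
  getDynamicModels?: () => Promise<ModelInfo[]>;
}

// Hypothetical registry of configured providers.
const providers = new Map<string, ProviderInfo>();

// Before: every provider's getDynamicModels() ran on each LLM call.
// After: only the selected provider is queried.
async function getModelList(selectedProvider: string): Promise<ModelInfo[]> {
  const provider = providers.get(selectedProvider);

  if (!provider) {
    throw new Error(`Unknown provider: ${selectedProvider}`);
  }

  const dynamicModels = provider.getDynamicModels ? await provider.getDynamicModels() : [];

  return [...provider.staticModels, ...dynamicModels];
}
```

The important difference is the single-provider lookup in place of a loop over the whole registry, which eliminates per-call network round-trips to every unselected provider.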
### Caching Implementation
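As a minimal sketch of one way to cache model lists, the example below uses an in-memory `Map` with a time-to-live; the `CACHE_TTL_MS` value and the `getCachedModels` helper are assumptions for illustration, not the PR's exact implementation:

```typescript
type ModelList = { name: string; label: string }[];

interface CacheEntry {
  models: ModelList;
  expiresAt: number;
}

const CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute TTL
const modelListCache = new Map<string, CacheEntry>();

async function getCachedModels(
  providerName: string,
  fetchModels: (provider: string) => Promise<ModelList>,
): Promise<ModelList> {
  const entry = modelListCache.get(providerName);

  if (entry && entry.expiresAt > Date.now()) {
    return entry.models; // cache hit: skip the provider API call
  }

  const models = await fetchModels(providerName); // cache miss: fetch once

  modelListCache.set(providerName, {
    models,
    expiresAt: Date.now() + CACHE_TTL_MS,
  });

  return models;
}
```

An in-memory map keeps the change dependency-free, while the TTL bounds how stale a list can get when a provider adds or removes models.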
### Enhanced Cross-Platform Logging
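A minimal sketch of a cross-platform, scope-aware logger follows; the `createScopedLogger` factory and its API shape are assumptions for illustration, not necessarily this codebase's exact interface. It relies only on `console.*`, which behaves the same in Node and in the browser:

```typescript
type LogLevel = 'debug' | 'info' | 'warn' | 'error';

const LEVEL_ORDER: Record<LogLevel, number> = { debug: 0, info: 1, warn: 2, error: 3 };

let currentLevel: LogLevel = 'info';

export function setLogLevel(level: LogLevel): void {
  currentLevel = level;
}

export function createScopedLogger(scope: string) {
  const log = (level: LogLevel, ...args: unknown[]) => {
    if (LEVEL_ORDER[level] < LEVEL_ORDER[currentLevel]) {
      return; // below the configured threshold; drop the message
    }

    // console.debug/info/warn/error exist in both Node and browsers,
    // so no platform-specific branching is needed.
    console[level](`[${scope}]`, ...args);
  };

  return {
    debug: (...args: unknown[]) => log('debug', ...args),
    info: (...args: unknown[]) => log('info', ...args),
    warn: (...args: unknown[]) => log('warn', ...args),
    error: (...args: unknown[]) => log('error', ...args),
  };
}
```

A module can then create its own scope, e.g. `const logger = createScopedLogger('LLMManager')`, and `logger.info('model list loaded')` prints `[LLMManager] model list loaded` on any platform, subject to the configured level.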
## Migration Impact

## Testing

## Future Improvements