fix: provide fallback context window values for Ollama and LM Studio models #7676

roomote[bot] wants to merge 1 commit into main
Conversation
fix: provide fallback context window values for Ollama and LM Studio models

- Add fallback ModelInfo when routerModels.ollama or lmStudioModels return undefined
- Fixes context window display showing "used/1" instead of actual max tokens
- Ensures proper context window management for Ollama and LM Studio providers

Fixes #7674
```ts
	contextWindow: 8192,
	supportsImages: false,
	supportsPromptCache: true,
}
```
The fallback logic here is duplicated with the LM Studio case below. Could we extract this into a shared helper function to reduce duplication? Something like:
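(A sketch, assuming the fallback shape shown in this diff; the helper name and import path are illustrative, not existing code.)

```ts
import type { ModelInfo } from "@roo-code/types" // import path assumed

// Shared fallback for local providers (Ollama, LM Studio). Returns undefined
// when no model id is selected, so callers can still distinguish "no model
// selected" from "model without cached metadata".
function getLocalModelInfoWithFallback(id: string, info: ModelInfo | undefined): ModelInfo | undefined {
	return (
		info ||
		(id
			? {
					maxTokens: 8192,
					contextWindow: 8192,
					supportsImages: false,
					supportsPromptCache: true,
				}
			: undefined)
	)
}
```

Both the `ollama` and `lmstudio` cases could then reduce to a single `getLocalModelInfoWithFallback(id, info)` call.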
```ts
info ||
	(id
		? {
				maxTokens: 8192,
```
Is 8192 the right default for all Ollama models? Some models support much larger context windows. Could we consider making this configurable or perhaps use a more generous default like 32768?
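For illustration, a sketch of the configurable-default idea (the constant name is hypothetical, and whether 32768 is safe for all local models would need verifying):

```ts
// Hypothetical: a single named constant makes the default easy to raise or to
// wire up to a user setting later, instead of repeating 8192 per provider case.
const DEFAULT_LOCAL_CONTEXT_WINDOW = 32768

const fallbackInfo: ModelInfo = {
	maxTokens: DEFAULT_LOCAL_CONTEXT_WINDOW,
	contextWindow: DEFAULT_LOCAL_CONTEXT_WINDOW,
	supportsImages: false,
	supportsPromptCache: true,
}
```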
```ts
}
case "lmstudio": {
	const id = apiConfiguration.lmStudioModelId ?? ""
	const info = lmStudioModels && lmStudioModels[apiConfiguration.lmStudioModelId!]
```
The non-null assertion here could be avoided with better type checking. Consider:
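(A sketch reusing the `id` already derived on the line above:)

```ts
const id = apiConfiguration.lmStudioModelId ?? ""
// Index with the normalized id instead of asserting non-null on the raw
// config value; an empty id simply misses the lookup and yields undefined.
const info = lmStudioModels ? lmStudioModels[id] : undefined
```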
| case "ollama": { | ||
| const id = apiConfiguration.ollamaModelId ?? "" | ||
| const info = routerModels.ollama && routerModels.ollama[id] | ||
| // Provide fallback values when info is undefined to fix context window display |
Should we add test coverage for these fallback scenarios? I noticed there are no tests for Ollama or LM Studio providers in the test file. This would help ensure the fallback behavior works correctly and prevent regressions.
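A rough shape such a test could take; the hook's props and the suite's render helpers are assumptions here, so a real test would need to follow the existing setup in the file:

```ts
import { renderHook } from "@testing-library/react"
import { useSelectedModel } from "../useSelectedModel" // path assumed

it("falls back to a default context window for an unknown Ollama model", () => {
	// Hypothetical props: the real hook may take its configuration differently
	// and may require wrappers (e.g. a query-client provider) around renderHook.
	const { result } = renderHook(() =>
		useSelectedModel({ apiProvider: "ollama", ollamaModelId: "model-not-in-cache" }),
	)
	expect(result.current.info?.contextWindow).toBe(8192)
})
```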
Closing in favor of #7679, which provides a more comprehensive solution by fixing the root cause rather than using hardcoded fallback values.
This PR attempts to address Issue #7674. Feedback and guidance are welcome.
Problem
The task header was displaying an incorrect max context window for Ollama (showing "used/1" instead of the actual max tokens). The issue was that `info` was always undefined at line 257 in `useSelectedModel.ts` when `routerModels.ollama` was an empty object or the specific model wasn't found.

Solution
Changes
Updated `webview-ui/src/components/ui/hooks/useSelectedModel.ts` to provide fallback values when model info is undefined.

Testing
Fixes #7674
Important
Fixes incorrect context window display for Ollama and LM Studio models by providing fallback values in `useSelectedModel.ts`.

- Updates `useSelectedModel.ts` by providing fallback values when model info is undefined.

This description was created automatically for 5b55f6e.