Labels
enhancement (New feature or functionality)
Description
Problem
Mux currently uses hardcoded or provider-derived values for API request parameters like `max_output_tokens`. Users have no way to override these on a per-model basis. For example, in streamManager.ts:

```ts
// Use the runtime model's max_output_tokens if available and caller didn't
// specify. This must be the runtime model (not the mapped metadata model)
// because max_output_tokens is a request parameter sent to the provider —
// a custom model's provider may not support the mapped model's output cap.
```

This is limiting for users who want to:
- Increase `max_output_tokens` beyond the default for long-form generation
- Set custom `temperature`, `top_p`, or other sampling parameters per model
- Pass provider-specific parameters (e.g. OpenRouter's `transforms`, Anthropic's `top_k`)
- Tune parameters differently for different use cases (e.g. coding vs. creative writing)
Proposal
Allow users to define custom API parameters on a per-model or per-provider basis in ~/.mux/providers.jsonc (or a dedicated config section). These would be merged into the API request payload when calling that model/provider.
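A minimal sketch of how the lookup could work, assuming a hypothetical config shape (`ProviderConfig` and `resolveModelParameters` are illustrative names, not existing Mux APIs), with `*` acting as a provider-wide fallback that exact model entries override:

```typescript
// Hypothetical shape of the proposed modelParameters section in
// ~/.mux/providers.jsonc (names are assumptions for illustration).
interface ProviderConfig {
  apiKey?: string;
  // Keyed by model ID, or "*" for a provider-wide default.
  modelParameters?: Record<string, Record<string, unknown>>;
}

// Resolve custom parameters for one model: wildcard entries apply first,
// then exact model matches override them key by key.
function resolveModelParameters(
  provider: ProviderConfig,
  modelId: string,
): Record<string, unknown> {
  const params = provider.modelParameters ?? {};
  return { ...params["*"], ...params[modelId] };
}
```

With a config like the OpenAI example below, a model with no dedicated entry would simply inherit the wildcard values.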
Example config

```jsonc
{
  "anthropic": {
    "apiKey": "sk-ant-...",
    "modelParameters": {
      "claude-sonnet-4-20250514": { "max_output_tokens": 16384 },
      "claude-opus-4-20250414": { "max_output_tokens": 32768, "temperature": 0.7 }
    }
  },
  "openai": {
    "apiKey": "sk-...",
    "modelParameters": {
      "*": { "max_output_tokens": 8192 }
    }
  }
}
```

Precedence (highest first)
1. Explicit per-request parameters (e.g. from thinking level, tool use)
2. Per-model custom parameters from config
3. Per-provider wildcard (`*`) parameters from config
4. Runtime model metadata defaults (current behavior)

Related
- Depends on "Allow Chat with Mux to configure models, providers, and other config" #2572 (JSON Schema for config files) for safe agent-driven configuration

Created on behalf of @ibetitsmike
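The precedence order above can be sketched as a plain object-spread merge (a hypothetical helper, not existing Mux code): sources are spread from lowest to highest precedence, so later spreads win on key conflicts.

```typescript
type Params = Record<string, unknown>;

// Merge request parameters in ascending precedence:
// runtime metadata defaults < provider wildcard < per-model < per-request.
function mergeRequestParams(
  runtimeDefaults: Params,
  providerWildcard: Params,
  perModel: Params,
  perRequest: Params,
): Params {
  return { ...runtimeDefaults, ...providerWildcard, ...perModel, ...perRequest };
}

// Example: the per-model max_output_tokens overrides both lower tiers,
// and the explicit per-request temperature overrides the config value.
const merged = mergeRequestParams(
  { max_output_tokens: 4096 },                       // runtime default
  { max_output_tokens: 8192 },                       // provider "*" config
  { max_output_tokens: 16384, temperature: 0.7 },    // per-model config
  { temperature: 1.0 },                              // explicit per-request
);
// merged: { max_output_tokens: 16384, temperature: 1.0 }
```

Keeping the merge as a flat, last-writer-wins spread keeps the rules easy to reason about; nested or additive merging would make it much harder to predict what a given request actually sends.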