
Conversation

@RoacherM commented Sep 26, 2025

  • When you choose a provider, Codex should talk about that provider’s models. Now it does.
  • /model shows built‑in GPT‑5 presets only when provider=openai. For every other provider (including openrouter), it shows only your config‑defined models.
  • No protocol/core changes. TUI-only, zero breakage, dramatically better signal for non‑OpenAI users.

Motivation

  • The core of this change is to make the user experience more seamless and intuitive. Currently, when a user selects a specific model_provider (e.g., openrouter) or launches a session with a --profile, the /model command continues to display the default built-in GPT-5 presets.

  • This creates a confusing and inelegant user experience. The UI should intelligently adapt to the user's chosen context. If I've explicitly configured my session for a specific provider, the model list should present me with relevant options for that context, rather than forcing me to navigate a list of models that are not applicable to my current session.

  • This PR aligns the TUI's behavior with the user's configuration, ensuring the interface is always relevant, focused, and "silky smooth."

What Changed

  • Built‑in GPT‑5 presets are shown only when provider=openai.
  • For all other providers (including openrouter), the /model popup shows only config‑derived models from ~/.codex/config.toml, filtered to the current provider.
  • When launched with --profile, /model always shows config‑derived models only (no built‑ins), regardless of provider.
  • Selecting an item still updates only the model and reasoning effort, persisting to the active profile (or top level); the provider is not changed.

Scope and Impact

  • TUI‑only. No protocol or core changes.
  • openai users: built‑in behavior is unchanged; models you’ve defined in config are appended when provider=openai.
  • Third‑party provider users: /model lists only your relevant models. Cleaner and more accurate.

Implementation Notes

  • Code path: codex-rs/tui/src/chatwidget.rs:1553
  • Reads config via codex_core::config::load_config_as_toml, filters by the current provider, and deduplicates by (model, effort).
  • Logic (see the sketch after this list):
    • Without a profile: provider=openai → show built‑ins; otherwise → show config‑derived models only.
    • With a profile: always show config‑derived models only.
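For illustration, here is a minimal standalone sketch of that filter/dedup step. It is not the actual PR code: it assumes a toml::Value already parsed from ~/.codex/config.toml (e.g. via the toml crate), and the ConfigPreset type plus the model_provider / model / model_reasoning_effort profile keys are assumptions made for the example:

```rust
use std::collections::HashSet;

/// Illustrative stand-in for a /model popup entry (not the real codex-rs type).
#[derive(Debug)]
struct ConfigPreset {
    model: String,
    effort: Option<String>,
}

/// Collect config-derived presets for the current provider, deduplicated
/// by (model, effort), mirroring the logic described above.
fn presets_for_provider(config: &toml::Value, current_provider: &str) -> Vec<ConfigPreset> {
    let mut seen = HashSet::new();
    let mut presets = Vec::new();

    // Assumption: profiles live under [profiles.<name>] with optional
    // model_provider / model / model_reasoning_effort keys.
    let Some(profiles) = config.get("profiles").and_then(toml::Value::as_table) else {
        return presets;
    };

    for (_name, profile) in profiles {
        // Profiles without an explicit provider are treated as openai here.
        let provider = profile
            .get("model_provider")
            .and_then(toml::Value::as_str)
            .unwrap_or("openai");
        if provider != current_provider {
            continue; // skip profiles bound to other providers
        }
        let Some(model) = profile.get("model").and_then(toml::Value::as_str) else {
            continue; // a preset needs at least a model slug
        };
        let effort = profile
            .get("model_reasoning_effort")
            .and_then(toml::Value::as_str)
            .map(str::to_owned);
        // Deduplicate by (model, effort) so identical entries appear once.
        if seen.insert((model.to_owned(), effort.clone())) {
            presets.push(ConfigPreset {
                model: model.to_owned(),
                effort,
            });
        }
    }
    presets
}
```

With a profile active, the same filter is applied using the profile’s provider, and the built‑in presets are skipped entirely.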

Configure (standard approach)

  • Keep the default provider openai so a bare launch shows the official GPT‑5 presets:
    • model_provider = "openai"
    • model = "gpt-5-codex"
  • Put third‑party models in profiles so provider‑specific sessions show only those (a sample config follows this list):
    • OpenRouter (namespaced slugs): openai/gpt-5, google/gemini-2.5-pro, deepseek/deepseek-chat
    • DeepSeek (direct, bare slugs): deepseek-chat, deepseek-reasoner
    • Moonshot: kimi-k2-turbo-preview (or your official slug)
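For concreteness, a ~/.codex/config.toml along these lines matches the setup above. The [model_providers.*] fields follow codex’s config conventions, but treat the base URLs, wire_api values, and profile names as illustrative assumptions to adapt:

```toml
# Default: a bare launch uses openai and shows the official GPT-5 presets
model_provider = "openai"
model = "gpt-5-codex"

# Third-party providers (assumed [model_providers.*] layout)
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
wire_api = "chat"

[model_providers.deepseek]
name = "DeepSeek"
base_url = "https://api.deepseek.com"
env_key = "DEEPSEEK_API_KEY"
wire_api = "chat"

# Profiles: provider-specific sessions list only these models in /model
[profiles.or-gpt5]
model_provider = "openrouter"
model = "openai/gpt-5"

[profiles.or-gemini]
model_provider = "openrouter"
model = "google/gemini-2.5-pro"

[profiles.deepseek-chat]
model_provider = "deepseek"
model = "deepseek-chat"

# A Moonshot profile ("kimi") would follow the same pattern.
```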

Build

  • From codex-rs:
    • cargo build --release
  • Optional: create a separate launcher name to avoid npm global conflicts:
    • ln -sf "$(pwd)/target/release/codex" ~/.local/bin/codexx
    • Ensure ~/.local/bin is on your PATH

Launch

  • Bare (official presets only, provider=openai):
    • codexx tui
  • Provider‑specific (config‑derived models only):
    • OpenRouter: codexx tui --profile or-gpt5 (or or-gemini, or-deepseek-chat)
    • DeepSeek direct: codexx tui --profile deepseek-chat
    • Moonshot: codexx tui --profile kimi

Environment

  • export OPENAI_API_KEY=…
  • export OPENROUTER_API_KEY=…
  • export DEEPSEEK_API_KEY=…
  • export MOONSHOT_API_KEY=…

Validation

  • Tests: cargo test -p codex-tui (passed locally)
  • Manual:
    • DeepSeek session: codexx tui --profile deepseek-chat → /model lists only the DeepSeek config models
    • OpenRouter (OpenAI model): codexx tui --profile or-gpt5 → /model lists only the OpenRouter config models (no built‑ins)
    • OpenAI default: codexx tui → /model lists the GPT‑5 presets

Why It Matters

  • Simplicity: Don’t show me models I can’t (or won’t) use in this session.
  • Accuracy: The UI aligns with the provider I actually chose.
  • Inclusion: Codex isn’t just one ecosystem; it should feel native across all of them.

Follow‑ups (optional)

  • Add --provider as a shorthand for -c model_provider=…
  • Group entries in /model by “Built‑in” vs “From config” for clarity.
  • Discuss in‑session provider switching (would require protocol changes).

github-actions bot commented Sep 26, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@RoacherM force-pushed the feat/tui-config-presets branch from c172c48 to 366d0db on September 26, 2025 09:27
@RoacherM (Author) commented

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Sep 26, 2025
@RoacherM changed the title from “derive model presets from config in /model popup; hide built-ins for non‑OpenAI providers” to “Refine /model list to be context-aware of the active provider and profile” on Sep 26, 2025
…rrent provider; hide built-in presets for non-OpenAI providers; fix unused import
…r and others as third-party; config-derived presets shown when profile or third-party
@RoacherM force-pushed the feat/tui-config-presets branch from b0603b5 to bb0ddfe on September 28, 2025 02:21