Conversation

@Junyi-99 Junyi-99 commented Jan 28, 2026

This pull request introduces significant improvements to model selection, streaming response handling, and tool registration in the chat system, along with prompt updates and minor bug fixes. The most important changes are grouped below by theme:

Model Selection and Configuration:

  • Refactored model listing logic in ListSupportedModels to use a unified modelConfig struct and a single source of truth (allModels), supporting more models and dynamic disabling based on user API key presence. Models requiring a user-provided API key are now marked as disabled if the user hasn’t configured one. (internal/api/chat/list_supported_models_v2.go)
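
For illustration, a minimal sketch of this pattern. Only modelConfig, allModels, ListSupportedModels, and the disabled/disabled_reason fields come from this PR; the concrete field names, the example entries, and the plain SupportedModel struct standing in for the generated proto message are assumptions:

```go
package chat

// Sketch only: modelConfig, allModels, ListSupportedModels, and the
// disabled/disabled_reason fields are named in the PR; the field names,
// example entries, and this plain SupportedModel struct (standing in for
// the generated proto message) are illustrative assumptions.
type modelConfig struct {
	ID              string
	DisplayName     string
	RequiresUserKey bool // usable only with a user-provided API key
}

type SupportedModel struct {
	ID             string
	Name           string
	Disabled       bool
	DisabledReason string
}

// allModels is the single source of truth for supported models.
var allModels = []modelConfig{
	{ID: "gpt-5.2", DisplayName: "GPT-5.2"},
	{ID: "byok-example", DisplayName: "BYOK Example", RequiresUserKey: true},
}

// ListSupportedModels marks key-gated models as disabled (rather than
// hiding them) when the user has not configured an API key.
func ListSupportedModels(userHasAPIKey bool) []SupportedModel {
	out := make([]SupportedModel, 0, len(allModels))
	for _, m := range allModels {
		sm := SupportedModel{ID: m.ID, Name: m.DisplayName}
		if m.RequiresUserKey && !userHasAPIKey {
			sm.Disabled = true
			sm.DisabledReason = "Configure your API key to use this model"
		}
		out = append(out, sm)
	}
	return out
}
```

Disabling instead of hiding lets the client still show the model in the picker with an explanatory reason.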

Streaming Response Handling:

  • Improved the streaming response logic so that StreamPartBegin is sent before any assistant content (including reasoning), and strengthened handling of reasoning content for models that emit it before regular content. The handler now supports both the reasoning_content and reasoning fields, and passes both the answer and the reasoning to the text-done handler. (internal/services/toolkit/client/completion_v2.go)
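
A rough sketch of how such a handler might be wired. Only the handler method names and the reasoning_content/reasoning fields are taken from the description above; the Handler interface, the delta struct, and the wiring are assumptions:

```go
package client

import "strings"

// Sketch only: HandleAssistantPartBegin, HandleReasoningDelta, and
// HandleTextDoneItem are named in the PR; the Handler interface, the
// delta struct, and this wiring are illustrative assumptions.
type Handler interface {
	HandleAssistantPartBegin()                   // emits StreamPartBegin to the frontend
	HandleReasoningDelta(chunk string)           // streams a reasoning chunk
	HandleTextDoneItem(answer, reasoning string) // final message with both parts
}

// streamDelta models a single chunk; some providers use reasoning_content,
// others use reasoning.
type streamDelta struct {
	Content          string `json:"content"`
	ReasoningContent string `json:"reasoning_content"`
	Reasoning        string `json:"reasoning"`
}

type streamState struct {
	partBegun bool
	answer    strings.Builder
	reasoning strings.Builder
}

func (s *streamState) handleDelta(h Handler, d streamDelta) {
	// Ensure StreamPartBegin reaches the frontend before any assistant
	// content, including reasoning that arrives before regular content.
	if !s.partBegun {
		h.HandleAssistantPartBegin()
		s.partBegun = true
	}
	chunk := d.ReasoningContent
	if chunk == "" {
		chunk = d.Reasoning
	}
	if chunk != "" {
		s.reasoning.WriteString(chunk)
		h.HandleReasoningDelta(chunk)
	}
	if d.Content != "" {
		s.answer.WriteString(d.Content)
	}
}

func (s *streamState) finish(h Handler) {
	// Pass both the answer and the accumulated reasoning to the done handler.
	h.HandleTextDoneItem(s.answer.String(), s.reasoning.String())
}
```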

Tool Registration and Toolkit Initialization:

  • Enabled additional file and LaTeX tools by activating their previously commented-out initialization, making them available in the toolkit. This covers reading files, listing folders, searching files and strings, and LaTeX document-structure operations. (internal/services/toolkit/client/utils_v2.go)
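
Purely as a sketch of the registration pattern: the Tool and Toolkit shapes and the tool names below are assumptions, not the project's actual API.

```go
package client

// Sketch only: the Tool and Toolkit shapes and the tool names below are
// illustrative assumptions; the PR itself activates previously
// commented-out tool initialization in utils_v2.go.
type Tool interface{ Name() string }

type Toolkit struct{ tools map[string]Tool }

func (tk *Toolkit) Register(t Tool) {
	if tk.tools == nil {
		tk.tools = make(map[string]Tool)
	}
	tk.tools[t.Name()] = t
}

type stubTool struct{ name string }

func (s stubTool) Name() string { return s.name }

// initTools registers the tool families described above: file reading,
// folder listing, file/string search, and LaTeX structure operations.
func initTools(tk *Toolkit) {
	for _, name := range []string{
		"read_file", "list_folder", "search_files",
		"search_string", "latex_structure",
	} {
		tk.Register(stubTool{name: name})
	}
}
```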

Prompt and Instruction Updates:

  • Updated the system prompts to clarify tool call limits and tighten the text formatting requirements, including a new <PaperDebugger> tag for revised text and a stricter separation of explanations from the revised text. (internal/services/system_prompt_debug.tmpl, internal/services/system_prompt_default.tmpl)

Bug Fixes and Minor Improvements:

  • Standardized message IDs by removing the "openai_" prefix so that OpenAI and in-app chat histories and streaming responses stay consistent; a sketch follows after this list. (internal/services/toolkit/client/utils.go, internal/services/toolkit/client/utils_v2.go, internal/services/toolkit/handler/stream.go)
  • Minor fix to ensure project instructions are handled properly in debug mode. (internal/api/chat/create_conversation_message_stream_v2.go)
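
A minimal sketch of the ID standardization mentioned in the first item above; the helper and its package are hypothetical, since the PR makes the change inline in the listed files:

```go
package handler

import "strings"

// normalizeMessageID is a hypothetical helper illustrating the change; the
// PR applies the same prefix removal inline in utils.go, utils_v2.go, and
// stream.go rather than through a single helper.
func normalizeMessageID(id string) string {
	// Use the bare ID everywhere so OpenAI-side and in-app chat histories
	// and streaming responses refer to the same message.
	return strings.TrimPrefix(id, "openai_")
}
```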

- Introduced a new modelConfig struct to encapsulate model details including API key requirements.
- Expanded the list of supported models with various configurations, including pricing and context limits.
- Updated ListSupportedModels method to dynamically generate model responses based on user API key availability, marking models as disabled when necessary.
- Added disabled and disabled_reason fields to the SupportedModel message in proto definitions for better client handling.
- Added HandleAssistantPartBegin to ensure the frontend prepares for assistant messages before receiving content, addressing models that send reasoning before content.
- Updated HandleTextDoneItem to include reasoning content when sending the final message.
- Introduced HandleReasoningDelta to manage reasoning chunks separately during streaming.
- Introduced a new rule for @typescript-eslint/no-unused-vars to enforce error reporting on unused variables, allowing exceptions for variables and arguments that start with an underscore.
- Added new styles for chat message components and tool cards.
- Integrated Streamdown for enhanced markdown rendering, replacing the previous Markdown component.
- Improved responsiveness and visual consistency across message boxes and actions.

- Implemented auto-collapse functionality for reasoning content based on message state.
- Updated GeneralToolCard to support external collapse state and auto-scroll behavior.
- Refactored ErrorToolCard to utilize GeneralToolCard for consistent error display.
- Improved message rendering logic for better user experience.
- Updated MessageCard component to utilize the new DisplayMessage type for improved message handling.
- Refactored useSendMessageStream hook to streamline message streaming logic and reduce dependencies.
- Introduced a unified message store to consolidate message state management, enhancing the overall architecture.
- Added message converters for bidirectional transformation between API and internal message types.
- Removed deprecated streaming message store and related handlers to simplify the codebase.
- Enhanced chat components to leverage the new message store and improve rendering efficiency.
- Moved the @typescript-eslint/no-unused-vars rule to a new position in the ESLint configuration for better organization.
- Maintained the existing settings that allow exceptions for variables and arguments starting with an underscore.
@Junyi-99 Junyi-99 changed the base branch from main to staging January 28, 2026 08:48
@Junyi-99 Junyi-99 merged commit 50a5f0b into staging Jan 28, 2026
3 checks passed
@Junyi-99 Junyi-99 deleted the feat/gpt-5.2 branch January 28, 2026 08:48