How about adding CoT to the prompts of Program structure and displaying it? #367

Description

@FunMelon

I am refactoring the user prompt and trying to analyze how the LLM actually references each fragment of my current prompt.
Some models, such as Qwen3, can output the CoT and the response separately when thinking mode is enabled, and I think saving the CoT would be helpful for analyzing evolution:

  • For someone like me who wants to refactor the user prompt, the CoT shows how the LLM references each part of the current prompt.
  • For others who want to improve evolution results in specific scenarios, the CoT helps them analyze and improve the artifacts and system prompts.

However, because thinking mode is passed through the extra_body of the OpenAI library and different API providers implement it differently, this may not be easy.
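For example, here is a minimal sketch of what I have in mind, assuming a Qwen3-style provider that takes an `enable_thinking` flag via `extra_body` and returns the CoT in a `reasoning_content` field (both names, as well as the endpoint and model name, are provider-specific assumptions):

```python
from openai import OpenAI

# Hypothetical provider endpoint and model name, for illustration only.
client = OpenAI(base_url="https://example-provider/v1", api_key="...")

response = client.chat.completions.create(
    model="qwen3-32b",
    messages=[{"role": "user", "content": "Rewrite the program to ..."}],
    # Provider-specific switches are forwarded verbatim through extra_body;
    # the exact key ("enable_thinking" here) differs between providers.
    extra_body={"enable_thinking": True},
)

message = response.choices[0].message
answer = message.content
# Some providers return the CoT in a separate field; fall back to None if absent.
cot = getattr(message, "reasoning_content", None)
```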

What I want to do is:
1. Add better support in the config for the thinking modes of different API providers, while staying compatible with existing tasks (a rough config sketch follows the figure below).
2. Store the CoT for each individual evolution response generated with thinking mode enabled, and display it on the visualization page (add an item in the figure below).

[Image: visualization page]
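As a rough illustration of point 1, the config could grow an optional per-model block like the sketch below. The key names (reasoning, enable_thinking, cot_field) are placeholders I made up for discussion, not the existing config schema:

```yaml
llm:
  models:
    - name: "qwen3-32b"
      api_base: "https://example-provider/v1"
      # Optional block; providers without thinking mode simply omit it,
      # so existing task configs keep working unchanged.
      reasoning:
        enabled: true
        # Forwarded as-is to the OpenAI client's extra_body.
        extra_body:
          enable_thinking: true
        # Where to find the CoT in the response, since providers differ.
        cot_field: "reasoning_content"
```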

What's your opinion? @codelion

Metadata

Labels: enhancement (New feature or request)
