Description
I am refactoring the user prompt and trying to analyze how the LLM references my current prompt fragments.
Some models, like Qwen3, can output the CoT and the response separately in thinking mode, and I think saving the CoT would be helpful for analyzing evolution:
- For someone like me who wants to refactor the current user prompt, it makes it possible to analyze how the LLM actually references the current user prompt.
- For others who want to improve evolution performance in specific scenarios, they can use it to analyze and improve artifacts and system prompts.
However, since thinking mode is passed through the `extra_body` of the OpenAI library, and different API providers implement it differently, this may not be easy.
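To illustrate the provider differences, here is a minimal sketch of a config-driven mapping from provider to the `extra_body` that toggles thinking mode. The parameter names (`enable_thinking`, `chat_template_kwargs`) are assumptions about specific OpenAI-compatible servers, not a single standard:

```python
# Sketch: per-provider extra_body for enabling thinking mode.
# The keys and parameter names below are assumptions and would need to
# be verified against each provider's actual OpenAI-compatible API.
PROVIDER_THINKING_PARAMS = {
    # Assumed: some hosted Qwen endpoints take a top-level flag
    "qwen_hosted": {"enable_thinking": True},
    # Assumed: some self-hosted servers route it through chat template kwargs
    "qwen_selfhosted": {"chat_template_kwargs": {"enable_thinking": True}},
    # Providers whose reasoning models think by default need nothing extra
    "default": {},
}

def thinking_extra_body(provider: str) -> dict:
    """Return the extra_body fragment that enables thinking for a provider."""
    return PROVIDER_THINKING_PARAMS.get(provider, PROVIDER_THINKING_PARAMS["default"])
```

The config could then just name the provider, and the client code merges the returned dict into the request's `extra_body`, keeping existing tasks (which use the `"default"` entry) unchanged.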
What I want to do is:
1. Add better support in the config for different API providers' thinking modes, while staying compatible with existing tasks.
2. For each evolution response generated with thinking mode enabled, store the CoT separately and display it on the visualization page (add an item in the following figure).
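For step 2, separating the CoT from the answer could look like the sketch below. The `reasoning_content` field and the `<think>...</think>` tag convention are assumptions about how particular providers return the CoT; both paths vary by server:

```python
def split_response(message: dict) -> tuple[str, str]:
    """Separate the CoT from the final answer in a chat completion message.

    Assumption: some OpenAI-compatible servers return the CoT in a separate
    `reasoning_content` field, while others inline it in `content` between
    <think>...</think> tags. Returns (cot, answer).
    """
    content = message.get("content") or ""
    cot = message.get("reasoning_content") or ""
    if not cot and "<think>" in content and "</think>" in content:
        open_at = content.index("<think>")
        start = open_at + len("<think>")
        end = content.index("</think>", start)
        cot = content[start:end].strip()
        # Remove the thinking span so only the answer remains
        content = (content[:open_at] + content[end + len("</think>"):]).strip()
    return cot, content
```

The stored pair could then be written to the evolution history so the visualization page can show the CoT next to each response.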
What's your opinion? @codelion