/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R 03-2024](https://docs.cohere.com/docs/command-r), [Command R+ 04-2024](https://docs.cohere.com/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting `{ "type": "json_object" }`.<br/>
/// Defaults to `false`. When set to `true`, the log probabilities of the generated tokens will be included in the response.
/// </param>
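The `logprobs` flag documented above is easiest to see end to end with a raw request. A minimal sketch, not part of this diff, assuming the public Cohere v2 REST endpoint `https://api.cohere.com/v2/chat`, bearer auth from a `COHERE_API_KEY` environment variable, the snake_case wire name `logprobs`, and an illustrative model id:

```csharp
// Standalone sketch: call the chat endpoint directly with HttpClient and ask
// for token log probabilities. Endpoint, auth scheme, and model id are
// assumptions, not taken from this diff.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("COHERE_API_KEY"));

// logprobs defaults to false; true asks for the generated tokens' log probabilities.
const string body = """
{
  "model": "command-r-plus-04-2024",
  "logprobs": true,
  "messages": [
    { "role": "user", "content": "Say hello in one short sentence." }
  ]
}
""";

var response = await http.PostAsync(
    "https://api.cohere.com/v2/chat",
    new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(await response.Content.ReadAsStringAsync());
```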
/// <param name="maxTokens">
- /// The maximum number of tokens the model will generate as part of the response.<br/>
- /// **Note**: Setting a low value may result in incomplete generations.
+ /// The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).<br/>
+ /// **Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.<br/>
+ /// **Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.
/// </param>
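The reworded `maxTokens` note above ties the cap to `finish_reason`. Another standalone sketch under the same assumptions as the `logprobs` one, with a deliberately tiny `max_tokens` so the response should come back with `finish_reason` set to `"MAX_TOKENS"`:

```csharp
// Standalone sketch: truncate the generation on purpose and read finish_reason.
// "max_tokens" and "MAX_TOKENS" come from the updated doc comment; the endpoint,
// auth scheme, and model id are assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Bearer", Environment.GetEnvironmentVariable("COHERE_API_KEY"));

// Deliberately tiny budget so the generation is cut off.
const string body = """
{
  "model": "command-r-plus-04-2024",
  "max_tokens": 16,
  "messages": [
    { "role": "user", "content": "Explain retrieval-augmented generation in detail." }
  ]
}
""";

var response = await http.PostAsync(
    "https://api.cohere.com/v2/chat",
    new StringContent(body, Encoding.UTF8, "application/json"));

using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
// Expect "MAX_TOKENS" here when the cap above truncated the answer.
Console.WriteLine(doc.RootElement.GetProperty("finish_reason").GetString());
```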
/// <param name="messages">
/// A list of chat messages in chronological order, representing a conversation between the user and the model.<br/>
/// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.<br/>
/// Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
/// </param>
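The penalty text above (its `<param>` opener is not visible in this excerpt, but it reads like `presencePenalty`) only needs one extra request field. Body-only sketch, posted the same way as the earlier ones; `presence_penalty` as the wire name is an assumption:

```csharp
// Request body only; POST it to the chat endpoint as in the earlier sketches.
// presence_penalty: 0.0 (default) to 1.0, applied equally to every token that
// has already appeared, per the doc comment above.
const string body = """
{
  "model": "command-r-plus-04-2024",
  "presence_penalty": 0.6,
  "messages": [
    { "role": "user", "content": "Write five taglines for a coffee shop." }
  ]
}
""";
```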
+ /// <param name="rawPrompting">
+ /// When enabled, the user's prompt will be sent to the model without<br/>
+ /// any pre-processing.
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/v2/docs/command-r), [Command R+](https://docs.cohere.com/v2/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects by setting `{ "type": "json_object" }`.<br/>
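The `rawPrompting` and `responseFormat` docs above combine naturally in one request. Body-only sketch; `{ "type": "json_object" }` is quoted from the doc comment, while `raw_prompting` as the wire name for `rawPrompting` is an assumption:

```csharp
// Request body only; POST it to the chat endpoint as in the earlier sketches.
// raw_prompting skips the usual prompt pre-processing; response_format forces
// the model to emit a JSON object (supported on Command R / R+ and newer).
const string body = """
{
  "model": "command-r-plus-04-2024",
  "raw_prompting": false,
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "user", "content": "Return a JSON object with the capital and population of France." }
  ]
}
""";
```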
/// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.<br/>
/// Randomness can be further maximized by increasing the value of the `p` parameter.
/// </param>
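Since the temperature note above points at `p` as the second randomness knob, a body-only sketch setting both explicitly:

```csharp
// Request body only; POST it to the chat endpoint as in the earlier sketches.
// Low temperature keeps sampling conservative; raising p increases randomness
// further, as the doc comment above describes.
const string body = """
{
  "model": "command-r-plus-04-2024",
  "temperature": 0.2,
  "p": 0.75,
  "messages": [
    { "role": "user", "content": "Summarize the plot of Hamlet in two sentences." }
  ]
}
""";
```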
+ /// <param name="thinking">
+ /// Thinking gives the model enhanced reasoning capabilities for complex tasks, while also providing transparency into its step-by-step thought process before it delivers its final answer.<br/>
+ /// When thinking is turned on, the model creates thinking content blocks where it outputs its internal reasoning. The model will incorporate insights from this reasoning before crafting a final response.<br/>
+ /// When thinking is used without tools, the API response will include both thinking and text content blocks. Meanwhile, when thinking is used alongside tools and the model makes tool calls, the API response will include the thinking content block and `tool_calls`.
+ /// </param>
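With the new `thinking` block closed above, a body-only sketch of enabling it; the field name comes from the doc comment, but the `{ "type": "enabled" }` value shape and the model id are assumptions, since this diff documents the behaviour rather than the schema:

```csharp
// Request body only; POST it to the chat endpoint as in the earlier sketches.
// ASSUMPTION: the value shape of "thinking" is not shown in this diff; the
// object below is illustrative only, as is the reasoning-capable model id.
const string body = """
{
  "model": "command-a-reasoning-08-2025",
  "thinking": { "type": "enabled" },
  "messages": [
    { "role": "user", "content": "A bat and a ball cost 1.10 together, and the bat costs 1.00 more than the ball. What does the ball cost?" }
  ]
}
""";
// Per the docs above, the response should then contain thinking content blocks
// alongside the text block, or alongside tool_calls when tools are in play.
```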
/// <param name="toolChoice">
/// Used to control whether or not the model will be forced to use a tool when answering. When `REQUIRED` is specified, the model will be forced to use at least one of the user-defined tools, and the `tools` parameter must be passed in the request.<br/>
/// When `NONE` is specified, the model will be forced **not** to use one of the specified tools, and give a direct response.<br/>
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R 03-2024](https://docs.cohere.com/docs/command-r), [Command R+ 04-2024](https://docs.cohere.com/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting `{ "type": "json_object" }`.<br/>
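To ground the `toolChoice` wording above: a body-only sketch forcing a tool call with `REQUIRED` (which, per the doc comment, also requires `tools` to be present); the function-style tool definition is an assumed shape, not something this diff specifies:

```csharp
// Request body only; POST it to the chat endpoint as in the earlier sketches.
// ASSUMPTION: the tool definition shape below is illustrative; only the
// "tool_choice" values REQUIRED / NONE come from the doc comment above.
const string body = """
{
  "model": "command-r-plus-04-2024",
  "tool_choice": "REQUIRED",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Gets the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }
  ],
  "messages": [
    { "role": "user", "content": "What is the weather in Toronto right now?" }
  ]
}
""";
```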
@@ -221,6 +226,7 @@ public partial interface ICohereClient
src/libs/Cohere/Generated/Cohere.ICohereClient.Chatv2.g.cs (15 additions, 2 deletions)
@@ -43,8 +43,9 @@ public partial interface ICohereClient
/// Defaults to `false`. When set to `true`, the log probabilities of the generated tokens will be included in the response.
/// </param>
/// <param name="maxTokens">
- /// The maximum number of tokens the model will generate as part of the response.<br/>
- /// **Note**: Setting a low value may result in incomplete generations.
+ /// The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).<br/>
+ /// **Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.<br/>
+ /// **Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.
/// </param>
/// <param name="messages">
/// A list of chat messages in chronological order, representing a conversation between the user and the model.<br/>
@@ -62,6 +63,11 @@ public partial interface ICohereClient
/// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.<br/>
/// Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
/// </param>
+ /// <param name="rawPrompting">
+ /// When enabled, the user's prompt will be sent to the model without<br/>
+ /// any pre-processing.
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/v2/docs/command-r), [Command R+](https://docs.cohere.com/v2/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects by setting `{ "type": "json_object" }`.<br/>
@@ -100,6 +106,11 @@ public partial interface ICohereClient
/// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.<br/>
/// Randomness can be further maximized by increasing the value of the `p` parameter.
/// </param>
+ /// <param name="thinking">
+ /// Thinking gives the model enhanced reasoning capabilities for complex tasks, while also providing transparency into its step-by-step thought process before it delivers its final answer.<br/>
+ /// When thinking is turned on, the model creates thinking content blocks where it outputs its internal reasoning. The model will incorporate insights from this reasoning before crafting a final response.<br/>
+ /// When thinking is used without tools, the API response will include both thinking and text content blocks. Meanwhile, when thinking is used alongside tools and the model makes tool calls, the API response will include the thinking content block and `tool_calls`.
+ /// </param>
/// <param name="toolChoice">
/// Used to control whether or not the model will be forced to use a tool when answering. When `REQUIRED` is specified, the model will be forced to use at least one of the user-defined tools, and the `tools` parameter must be passed in the request.<br/>
/// When `NONE` is specified, the model will be forced **not** to use one of the specified tools, and give a direct response.<br/>
@@ -125,13 +136,15 @@ public partial interface ICohereClient