Update Cody faq page. #897

Closed · wants to merge 5 commits · changes from 2 commits
13 changes: 13 additions & 0 deletions docs/cody/faq.mdx
@@ -137,6 +137,19 @@ cody chat --model '$name_of_the_model' -m 'Hi Cody!'

For example, to use Claude 3.5 Sonnet, you'd run the following command in your terminal: `cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'`

### Is it possible to set separate token usage for programmatic access (via the API) only, and not via the IDE?

Yes, it's possible to set separate token limits for API access, but only for completions (not chat). To achieve this, use a custom [model configuration](https://sourcegraph.com/docs/cody/enterprise/model-configuration#model-overrides) and configure the following settings:

- `maxInputTokens`: Specifies the maximum number of tokens for the contextual data in the prompt (e.g., question, relevant snippets).
- `maxOutputTokens`: Specifies the maximum number of tokens allowed in the response.

You also need to set the `capabilities` list to empty in the model configuration:

`"capabilities": []`


## OpenAI o1

### What are OpenAI o1 best practices?