
refactor: OpenAI integration for token limit control #100


Merged
merged 4 commits into main from gpt4-2 on Mar 18, 2023
Conversation

zurawiki (Owner)

No description provided.

- Update several dependencies to their latest versions.
- Improve overall stability and performance of the project.
- Remove unused `lazy_static` macro from `src/main.rs`
- Refactor `get_completions` and `get_chat_completions` in `src/llms/openai.rs` to use a new token limit constant and maximum tokens functions
- Skip computing completion if prompt exceeds token limit in `src/llms/openai.rs`
…unction
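The token-limit control described above might look roughly like the following minimal sketch. The constant name, the helper functions, and the crude character-based token estimate are all assumptions for illustration; the actual `src/llms/openai.rs` would use a real tokenizer:

```rust
// Hypothetical sketch of the token-limit guard; all names are illustrative,
// not the actual gptcommit API.
const MODEL_TOKEN_LIMIT: usize = 4096; // assumed limit for the target model

/// Rough token count. A real implementation would use a proper tokenizer
/// (e.g. a BPE tokenizer matching the model); ~4 chars/token is a placeholder.
fn count_tokens(prompt: &str) -> usize {
    prompt.len() / 4
}

/// Maximum tokens left for the completion after accounting for the prompt.
/// Returns `None` when the prompt alone exceeds the limit, so the caller
/// can skip computing the completion entirely.
fn max_completion_tokens(prompt: &str) -> Option<usize> {
    let used = count_tokens(prompt);
    if used >= MODEL_TOKEN_LIMIT {
        None
    } else {
        Some(MODEL_TOKEN_LIMIT - used)
    }
}

fn main() {
    // A short prompt leaves nearly the full budget for the completion.
    assert_eq!(max_completion_tokens("hi"), Some(4096));
    // A prompt past the limit yields None: the completion is skipped.
    let huge = "x".repeat(20_000);
    assert!(max_completion_tokens(&huge).is_none());
}
```

Returning `Option` (or a `Result` with a descriptive error) lets `get_completions` and `get_chat_completions` share one guard instead of duplicating the check.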

- Add an import to `openai.rs`
- Eliminate code duplication in `OpenAIClient::completions`
- Improve the return value of `OpenAIClient::completions`
- Update README and Cargo.toml to use broader term "large language models" instead of specific "GPT-3"
- Improve readability in user_config_path assignment in config.rs
- Revise commit amend message and output message for summarization in prepare_commit_msg.rs
- Adjust format of the original commit message when allow_amend is enabled
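A readability cleanup like the `user_config_path` assignment mentioned above typically replaces a nested `match` with a combinator chain. This is a sketch under assumed names (the function signature and fallback path are illustrative, not necessarily what `config.rs` contains):

```rust
use std::path::PathBuf;

// Illustrative helper: resolve the user config path, preferring an explicit
// override and otherwise falling back to a default under the home directory.
fn user_config_path(override_path: Option<PathBuf>, home: PathBuf) -> PathBuf {
    override_path.unwrap_or_else(|| {
        home.join(".config").join("gptcommit").join("config.toml")
    })
}

fn main() {
    // No override: fall back to the default location under $HOME.
    let p = user_config_path(None, PathBuf::from("/home/alice"));
    assert_eq!(p, PathBuf::from("/home/alice/.config/gptcommit/config.toml"));

    // An explicit override wins unchanged.
    let q = user_config_path(Some(PathBuf::from("/tmp/c.toml")), PathBuf::from("/home/alice"));
    assert_eq!(q, PathBuf::from("/tmp/c.toml"));
}
```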
@zurawiki zurawiki merged commit 08a8287 into main Mar 18, 2023
@zurawiki zurawiki deleted the gpt4-2 branch March 18, 2023 01:59
1 participant