Conversation

@oleander
Owner

No description provided.

@oleander oleander marked this pull request as ready for review May 17, 2024 22:31
oleander added 14 commits May 28, 2024 18:58
Change the trigger types for the continuous delivery pipeline in the GitHub Actions workflow file (cd.yml). Restrict triggers to the 'closed' event type rather than 'opened', 'synchronize', 'reopened', and 'closed'. Also, add 'workflow_dispatch' to allow manual running of the workflow.
The main function now has better error handling using the standard process exit call. In the event of an error, the program prints the error and exits with status code 1. Debug log messages have also been added to report the program's execution duration, whether it ends successfully or encounters an error.
This commit adds a debug log to the Args implementation, making it easier to monitor if a commit message has already been provided. In addition, it also handles invalid sources, ensuring program stability by returning Ok(()) in such cases.
Contributor

Copilot AI left a comment

Pull Request Overview

This PR introduces proper token counting based on specific AI models by adding a new Model enum and related infrastructure. The changes replace the generic string-based model configuration with a type-safe model system that includes model-specific token counting capabilities using the tiktoken-rs library.

Key changes:

  • Added model-aware token counting and text truncation functionality
  • Refactored OpenAI API integration to use structured request/response types
  • Updated configuration system to use optional fields with sensible defaults

Reviewed Changes

Copilot reviewed 13 out of 15 changed files in this pull request and generated 5 comments.

Summary per file:

File — Description
src/model.rs — New module defining Model enum with token counting and truncation methods
src/openai.rs — New OpenAI API client with structured Request/Response types
src/hook.rs — Updated diff processing to use model-specific token counting and improved truncation logic
src/commit.rs — Simplified commit generation by delegating to new openai module
src/config.rs — Changed configuration fields to Option with default value handling
src/lib.rs — Added new model and openai modules to library exports
Cargo.toml — Added tiktoken-rs dependency for token counting
tools/demo.sh — Removed demo script file
scripts/integration-tests — Added comprehensive integration test script
Dockerfile — Simplified Docker configuration for testing
Justfile — Added integration test recipe
.github/workflows/ci.yml — Removed CI workflow file
.github/workflows/cd.yml — Simplified conditional expressions in CD workflow
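The model-aware token counting the PR introduces can be sketched roughly as below. This is an illustrative approximation, not the PR's actual code: the real src/model.rs counts tokens with tiktoken-rs BPE encodings, while this sketch substitutes whitespace splitting so the example stays self-contained; the variant names and method signatures are assumptions.

```rust
// Hypothetical sketch of a Model enum with count_tokens/truncate.
// Assumption: the real implementation delegates to tiktoken-rs; here
// whitespace splitting stands in for BPE tokenization.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum Model {
    Gpt4,
    Gpt35Turbo,
}

impl Model {
    // Count "tokens" in a text (approximation: whitespace-separated words).
    fn count_tokens(&self, text: &str) -> Result<usize, String> {
        Ok(text.split_whitespace().count())
    }

    // Truncate a text to at most `max_tokens` tokens.
    fn truncate(&self, text: &str, max_tokens: usize) -> Result<String, String> {
        let tokens: Vec<&str> = text.split_whitespace().collect();
        if tokens.len() <= max_tokens {
            return Ok(text.to_string());
        }
        Ok(tokens[..max_tokens].join(" "))
    }
}

fn main() {
    let model = Model::Gpt4;
    let n = model.count_tokens("diff --git a/src/main.rs").unwrap();
    println!("{n} tokens"); // 3 tokens under this approximation
    let short = model.truncate("one two three four", 2).unwrap();
    println!("{short}"); // one two
}
```

The payoff of putting this behind an enum rather than a raw model string is that truncation budgets are always computed with the encoding that matches the model actually being called.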

Comment on lines +48 to +53
let choise = chat
.choices
.first()
.context(format!("Failed to get response: {:?}", chat))?;

let response = choise

Copilot AI Oct 6, 2025

Corrected spelling of 'choise' to 'choice'.

Suggested change
-let choise = chat
-  .choices
-  .first()
-  .context(format!("Failed to get response: {:?}", chat))?;
-let response = choise
+let choice = chat
+  .choices
+  .first()
+  .context(format!("Failed to get response: {:?}", chat))?;
+let response = choice

Copilot uses AI. Check for mistakes.
}).context("Failed to print diff")?;

Ok(patch_acc.to_utf8())
// TODO: Grouo arguments

Copilot AI Oct 6, 2025

Corrected spelling of 'Grouo' to 'Group'.

Suggested change
-// TODO: Grouo arguments
+// TODO: Group arguments

let mut diffs: Vec<_> = files.values().collect();

// TODO: No unwrap
diffs.sort_by_key(|diff| model.count_tokens(diff).unwrap());

Copilot AI Oct 6, 2025

Using unwrap() can cause a panic if token counting fails. Consider using unwrap_or_default() or proper error handling.

Suggested change
-diffs.sort_by_key(|diff| model.count_tokens(diff).unwrap());
+diffs.sort_by_key(|diff| model.count_tokens(diff).unwrap_or_default());

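The suggested fix can be seen in a self-contained sketch. Here count_tokens is a hypothetical whitespace-based stand-in for model.count_tokens: with unwrap_or_default(), a diff whose count fails simply sorts as 0 instead of panicking the whole hook.

```rust
// Stand-in for model.count_tokens (assumption: the real one is fallible
// because BPE encoding can fail); whitespace words approximate tokens.
fn count_tokens(diff: &str) -> Result<usize, String> {
    Ok(diff.split_whitespace().count())
}

fn main() {
    let mut diffs = vec!["a b c", "x", "one two"];
    // unwrap_or_default(): a failing count sorts as 0 rather than panicking.
    diffs.sort_by_key(|d| count_tokens(d).unwrap_or_default());
    println!("{diffs:?}"); // ["x", "one two", "a b c"]
}
```

The trade-off: errors are silently mapped to 0, so a diff with a broken count sorts first; if that matters, collecting into a Result<Vec<_>, _> before sorting is the stricter alternative.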
let truncated_diff = if file_token_count > *file_allocated_tokens {
model.truncate(diff, *file_allocated_tokens)
} else {
Ok((*diff).clone().to_owned()) // TODO: Better way?

Copilot AI Oct 6, 2025

The double cloning ((*diff).clone().to_owned()) is redundant. Use either diff.to_string() or (*diff).to_owned().

Suggested change
-Ok((*diff).clone().to_owned()) // TODO: Better way?
+Ok((*diff).to_owned())

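A minimal sketch of why one deref plus to_owned() suffices. The Vec<&String> setup is an assumption about the surrounding code (diffs collected from files.values() yields references), but the type mechanics are general: for a &&String, one deref gives &String, and to_owned() clones exactly once.

```rust
fn main() {
    let files = vec![String::from("diff --git a/x")];
    // Assumption: iterating collected references yields &&String here.
    let refs: Vec<&String> = files.iter().collect();
    for diff in &refs {
        // diff: &&String; (*diff): &String; to_owned() clones once.
        // diff.to_string() is the equivalent one-clone alternative.
        let owned: String = (*diff).to_owned();
        println!("{owned}");
    }
}
```

Chaining .clone().to_owned() first clones the String, then to_owned() on an owned String is a no-op by value, so the extra call only obscures intent.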
.build()?;
pub async fn generate(diff: String, max_tokens: usize, model: Model) -> Result<openai::Response> {
if max_tokens == 0 {
bail!("Max can't be zero (2)")

Copilot AI Oct 6, 2025

The error message 'Max can't be zero (2)' is unclear. The '(2)' suffix appears to be a debugging artifact and should be removed or explained.

Suggested change
-bail!("Max can't be zero (2)")
+bail!("Max can't be zero")

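The guard with the cleaned-up message can be sketched as below. Assumption: the real code uses anyhow's bail! macro inside an async generate; a plain Result and synchronous function stand in here so the example is self-contained.

```rust
// Hypothetical sketch of the zero-token guard with a clear error message.
// Stands in for: if max_tokens == 0 { bail!("Max can't be zero") }
fn generate(diff: &str, max_tokens: usize) -> Result<String, String> {
    if max_tokens == 0 {
        return Err("Max can't be zero".into());
    }
    // Placeholder for the OpenAI call that produces the commit message.
    Ok(format!("commit message for {} bytes of diff", diff.len()))
}

fn main() {
    assert!(generate("diff --git", 0).is_err());
    println!("{}", generate("diff --git", 512).unwrap());
}
```

Failing fast on an impossible budget keeps the error close to its cause instead of surfacing later as a confusing truncation or API failure.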