
Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#18406

I demand that other maintainers accept either this or ggml-org/llama.cpp#18388.

AI-generated PRs in mtmd often, if not always, lead to sub-optimal, low-quality code. These PRs are usually huge, take too much of my time to review and optimize, add mental strain, and push productivity into the negative.

Therefore, I would like to explicitly reject PRs that are fully or predominantly AI-generated. Using AI as an assistive tool is always welcome.

Important

This message does not prevent contributors from pushing such PRs in the future, but serves as an explicit statement in case someone asks.

Some examples:

@loci-agentic-ai

Explore the complete analysis inside the Version Insights

I was able to retrieve the summary report for your project. Here's what the analysis shows:

Summary Report for llama.cpp PR #718

Project Information:

Key Findings:

No significant performance changes detected - The analysis found no modified functions with performance changes greater than 2% in either:

  • Response Time
  • Throughput Time

Conclusion:

This PR appears to be performance-neutral: the code changes introduce no performance regressions or improvements beyond the 2% threshold. This is a positive result, indicating that the changes maintain the application's stable performance characteristics.

@loci-dev force-pushed the main branch 12 times, most recently from 8645b59 to f2e8c7f on December 29, 2025 at 00:40.
