LLM Benchmarking #3486

Merged
+57,008 −171
Conversation
…ain permissions Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com> Signed-off-by: bradleyshep <148254416+bradleyshep@users.noreply.github.com>
Signed-off-by: bradleyshep <148254416+bradleyshep@users.noreply.github.com>
Add retry logic for signal-killed processes (SIGSEGV) with up to 2 retries and a 500 ms delay between attempts. Also reduce C# build concurrency from 8 to 4 by default to prevent resource contention in dotnet/WASI SDK builds. The C# concurrency can be configured via the `LLM_BENCH_CSHARP_CONCURRENCY` env var.
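The retry policy described in this commit message could be sketched as follows. This is an illustrative sketch, not the PR's actual code; the function and parameter names are invented here.

```rust
use std::io;
use std::process::{Command, ExitStatus};
use std::thread::sleep;
use std::time::Duration;

// Hypothetical sketch: re-run a command when it is killed by a signal
// (e.g. SIGSEGV), up to `max_retries` extra attempts, pausing 500 ms
// between attempts as the commit message describes.
fn run_with_retries(cmd: &mut Command, max_retries: u32) -> io::Result<ExitStatus> {
    let mut attempt = 0;
    loop {
        let status = cmd.status()?;
        // On Unix, `code()` returns `None` when the process was
        // terminated by a signal instead of exiting normally.
        if status.code().is_some() || attempt >= max_retries {
            return Ok(status);
        }
        attempt += 1;
        sleep(Duration::from_millis(500));
    }
}
```

A normally-exiting process returns after the first attempt; only signal-killed processes trigger the retry loop.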
Set `MSBUILDDISABLENODEREUSE=1` and `DOTNET_CLI_USE_MSBUILD_SERVER=0` to prevent resource contention when running multiple `dotnet publish` commands in parallel on GitHub Actions runners. See: dotnet/msbuild#6657
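Applying those two workarounds when spawning `dotnet publish` might look like this. Only the env var names and values come from the commit message; the helper name and structure are illustrative.

```rust
use std::process::Command;

// Hypothetical sketch: build a `dotnet publish` command with the two
// MSBuild workarounds from the commit message (see dotnet/msbuild#6657).
fn dotnet_publish(project_dir: &str) -> Command {
    let mut cmd = Command::new("dotnet");
    cmd.arg("publish")
        .current_dir(project_dir)
        // Don't keep MSBuild worker nodes alive between builds.
        .env("MSBUILDDISABLENODEREUSE", "1")
        // Don't route builds through a persistent MSBuild server.
        .env("DOTNET_CLI_USE_MSBUILD_SERVER", "0");
    cmd
}
```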
Collaborator
LLM Benchmark Results (ci-quickfix)
Generated at: 2026-01-06T00:39:43.087Z
Contributor
I think we're okay to merge this now that
cloutiertyler approved these changes Jan 6, 2026
e51b4e2 to 04eb91a
Description of Changes
Introduce a new LLM benchmarking app and supporting code.
- New `llm` app with subcommands `run`, `routes list`, `diff`, `ci-check`.
- Filters: `--lang`, `--categories`, `--tasks`, `--providers`, `--models`.
- Routes (`provider:model`) with HTTP LLM vendor clients; env-driven keys/base URLs.
- `DEVELOP.md` includes `cargo llm …` usage.

This PR is the initial addition of the app and its modules (runner, config, routes, prompt/segmentation, scorers, schema/types, defaults/constants/paths/hashing/combine, publishers, spacetime guard, HTML stats viewer).
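The "env-driven keys/base URLs" idea could be sketched like this. The variable-name scheme (`<PROVIDER>_API_KEY`, `<PROVIDER>_BASE_URL`) is an assumption for illustration, not the crate's actual configuration keys.

```rust
use std::env;

// Hypothetical per-provider configuration read from the environment.
struct ProviderConfig {
    api_key: String,
    // Optional override; clients would fall back to a vendor default.
    base_url: Option<String>,
}

// Returns `None` when the provider has no API key configured.
fn provider_config(provider: &str) -> Option<ProviderConfig> {
    let prefix = provider.to_uppercase();
    Some(ProviderConfig {
        api_key: env::var(format!("{prefix}_API_KEY")).ok()?,
        base_url: env::var(format!("{prefix}_BASE_URL")).ok(),
    })
}
```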
How it works
1. Pick what to run: specific tasks (`--tasks 0,7,12`), a language (`--lang rust|csharp`), or categories (`--categories basics,schema`); optionally limit providers and models (`--providers …`, `--models …`).
2. Resolve routes: each route pairs a provider with a model (e.g. `openai:gpt-5`).
3. Build context
4. Execute calls
5. Score outputs
6. Update results file
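The "Resolve routes" step above could be sketched minimally as splitting a `provider:model` string; the real runner's types and validation will differ.

```rust
// Hypothetical sketch: split a route string such as "openai:gpt-5"
// into its (provider, model) parts, rejecting malformed routes.
fn parse_route(route: &str) -> Option<(&str, &str)> {
    let (provider, model) = route.split_once(':')?;
    if provider.is_empty() || model.is_empty() {
        return None;
    }
    Some((provider, model))
}
```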
API and ABI breaking changes
None. New application and modules; no existing public APIs/ABIs altered.
Expected complexity level and risk
4/5. New CLI, routing, evaluation, and artifact format.
Concurrency is tunable via `LLM_BENCH_CONCURRENCY` / `LLM_BENCH_ROUTE_CONCURRENCY`.

Testing
I ran the full test matrix and generated results for every task against every vendor, model, and language (Rust + C#). I also tested the CI check locally using `act`.
Please verify
- `llm run --tasks 0,1,2` (explicit run)
- `llm run --lang rust --categories basics` (filters)
- `llm run --categories basics,schema` (multiple categories)
- `llm run --lang csharp` (language switch)
- `llm run --providers openai,anthropic --models "openai:gpt-5 anthropic:claude-sonnet-4-5"` (provider/model limits)
- `llm run --hash-only` (dry integrity)
- `llm run --goldens-only` (test goldens only)
- `llm run --force` (skip hash check)
- `llm ci-check`