[CI][Refactor] Refactor non-test CI workflow files #971
Conversation
👋 Hi! Thank you for contributing to the TileLang project. Please remember to run the `pre-commit` checks before requesting review. We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work! 🚀
**Walkthrough**

Replaces and reorganizes CI workflows: removes legacy bot, publish_docs, and reminder workflows; adds a new performance benchmark workflow (dual-env comparison), a PR reminder workflow, and a revamped docs publish workflow targeting self-hosted runners and an external docs repo.
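As context for the walkthrough, a comment-triggered workflow of this shape is usually gated on the comment body before any expensive job starts. The skeleton below is a sketch assembled from details elsewhere in this thread (the `/perf` and `/performance-report` commands, Python 3.9, the `nvidia` runner label); it is not the actual file from this PR.

```yaml
# Hypothetical sketch of the /perf trigger guard; job name and exact wiring are assumptions.
name: pr-perfbench-bot

on:
  issue_comment:
    types: [created]

jobs:
  perfbench:
    # Run only for PR comments that start with a recognized command.
    if: >-
      github.event.issue.pull_request &&
      (startsWith(github.event.comment.body, '/perf') ||
       startsWith(github.event.comment.body, '/performance-report'))
    runs-on: [self-hosted, nvidia]
    steps:
      - name: Checkout PR merge commit
        uses: actions/checkout@v4
        with:
          ref: refs/pull/${{ github.event.issue.number }}/merge
          submodules: recursive
```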
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
autonumber
actor User as GitHub User
participant GH as GitHub (issue_comment)
participant WF as GHA Workflow (pr-perfbench-bot)
participant Repo as Repository
participant EnvMerged as Env (merged)
participant EnvMain as Env (main)
participant Perf as ci_performance.py
participant PR as PR Comments
User->>GH: Post comment "/perf" or "/performance-report"
GH-->>WF: Trigger workflow (issue_comment.created)
WF->>WF: Validate comment body & PR context
WF->>Repo: Checkout PR merge commit (with submodules)
WF->>EnvMerged: Setup Python 3.9 venv, install reqs & package (merged)
WF->>Repo: Dry-run clean / switch to `main`
WF->>EnvMain: Setup Python 3.9 venv, install reqs & package (main)
WF->>Perf: Run performance script (capture stdout)
Perf-->>WF: Return stdout/results
WF->>PR: Post comment with run URL and captured results
```

**Estimated code review effort**

🎯 3 (Moderate) | ⏱️ ~25 minutes
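Continuing the skeleton sketched earlier, the dual-environment comparison in the diagram could be wired roughly as below. This is a sketch only: venv names, requirements files, and the cleanup step are assumptions inferred from the diagram, and only `maint/scripts/ci_performance.py` is taken from the PR itself.

```yaml
      # Sketch of the dual-env comparison steps; paths and names are assumptions.
      - name: Set up environment for the merged revision
        run: |
          python3.9 -m venv tl_merged
          source tl_merged/bin/activate
          pip install -r requirements.txt
          pip install .

      - name: Switch the working tree to main
        run: |
          git clean -dfx --dry-run   # preview what a clean would delete
          git switch main

      - name: Set up environment for main
        run: |
          python3.9 -m venv tl_main
          source tl_main/bin/activate
          pip install -r requirements.txt
          pip install .

      - name: Run the comparison and capture stdout
        run: |
          source tl_merged/bin/activate
          python maint/scripts/ci_performance.py | tee perfbench.log
```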
**Pre-merge checks and finishing touches**

✅ Passed checks (3 passed)
/perf
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `.github/workflows/bot.yml` (0 hunks)
- `.github/workflows/pr-perfbench-bot.yml` (1 hunks)
- `.github/workflows/pr-reminder-bot.yml` (1 hunks)
- `.github/workflows/publish-docs.yml` (1 hunks)
- `.github/workflows/publish_docs.yml` (0 hunks)
- `.github/workflows/reminder.yml` (0 hunks)
💤 Files with no reviewable changes (3)
- .github/workflows/bot.yml
- .github/workflows/reminder.yml
- .github/workflows/publish_docs.yml
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml
21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
.github/workflows/publish-docs.yml
18-18: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
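Both findings are the same issue: actionlint does not know the custom `nvidia` label used by the self-hosted runners. actionlint reads custom runner labels from a repository-level config file, so a fix along these lines should silence the warning:

```yaml
# .github/actionlint.yaml — declare custom self-hosted runner labels
self-hosted-runner:
  labels:
    - nvidia
```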
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-test-metal
- GitHub Check: format-check
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/pr-perfbench-bot.yml (1)
53-70: Capture the perfbench log as a real step output.
`steps.perfbench.outputs.stdout` is always empty—`run` steps do not populate a `stdout` output—so the posted comment contains no benchmark data. Persist the script output yourself (e.g. `tee` into a file and write it to `$GITHUB_OUTPUT`) and read that named output in the github-script step. Apply this diff:

```diff
       - name: Run performance test
         id: perfbench
         run: |
           source tl/bin/activate
-          python maint/scripts/ci_performance.py
+          python maint/scripts/ci_performance.py | tee perfbench.log
+          {
+            echo "report<<EOF"
+            cat perfbench.log
+            echo "EOF"
+          } >> "$GITHUB_OUTPUT"
@@
-            body: '📊 **Performance Test Results** (triggered by @' + context.payload.comment.user.login + '):\n\n' +
-                  'Run listed here: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n\n' +
-                  "${{ steps.perfbench.outputs.stdout }}"
+            body: '📊 **Performance Test Results** (triggered by @' + context.payload.comment.user.login + '):\n\n' +
+                  'Run listed here: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n\n' +
+                  `${{ steps.perfbench.outputs.report }}`
```
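For readers unfamiliar with the mechanism the diff leans on: `$GITHUB_OUTPUT` accepts multiline values via a delimiter syntax, and `toJSON` is one safe way (a swap for the template-literal approach above) to feed such a value into a github-script block. A minimal, self-contained sketch, with all names illustrative:

```yaml
# Standalone illustration of multiline step outputs; names are illustrative.
name: multiline-output-demo
on: workflow_dispatch

jobs:
  demo:
    runs-on: ubuntu-latest
    steps:
      - name: Produce a multiline output
        id: report
        run: |
          # "text<<EOF" opens a multiline value; a bare "EOF" line closes it.
          {
            echo "text<<EOF"
            printf 'line one\nline two\n'
            echo "EOF"
          } >> "$GITHUB_OUTPUT"

      - name: Read it from github-script
        uses: actions/github-script@v7
        with:
          script: |
            // toJSON yields a properly escaped string literal, so newlines
            // or quotes in the output cannot break the generated script.
            core.info(${{ toJSON(steps.report.outputs.text) }});
```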
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `.github/workflows/pr-perfbench-bot.yml` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml
21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: format-check
- GitHub Check: format-check
Actionable comments posted: 2
♻️ Duplicate comments (1)
.github/workflows/pr-perfbench-bot.yml (1)
53-70: Surface the benchmark log via an explicit step output.
`steps.perfbench.outputs.stdout` is always empty—`run` steps don't expose stdout automatically—so the PR comment still drops the benchmark results. Capture the script output yourself, write it to `$GITHUB_OUTPUT`, and read that in the github-script step.

```diff
       - name: Run performance test
         id: perfbench
         run: |
           source tl/bin/activate
-          python maint/scripts/ci_performance.py
+          python maint/scripts/ci_performance.py | tee perfbench.log
+          {
+            echo "report<<EOF"
+            cat perfbench.log
+            echo "EOF"
+          } >> "$GITHUB_OUTPUT"
@@
-                  "${{ steps.perfbench.outputs.stdout }}"
+                  "${{ steps.perfbench.outputs.report }}"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `.github/workflows/pr-perfbench-bot.yml` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml
21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-test-metal
- GitHub Check: build-test-amd
**Changes**

- Use `github-script` instead of a raw cURL request to improve security.
- Use `::add-mask::` to improve secret hiding in log outputs (see the sketch after this list).
- Recommend `pre-commit` commands in the bot comment instead of `./format.sh`.

I'm working on a follow-up PR to merge the three test CI files into a more generic one and improve the cache handling logic.
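For the `::add-mask::` item, the workflow command registers a value with the runner so that all subsequent log lines redact it. A step-level sketch, where the token source is hypothetical and only the masking call is the point:

```yaml
      - name: Mask a derived credential before it can appear in logs
        run: |
          # Hypothetical token source; substitute whatever produces the secret.
          TOKEN="$(cat /tmp/derived-token)"
          echo "::add-mask::${TOKEN}"
          # From this point on, the runner redacts the value in all log output.
```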