

@XuehaiPan (Collaborator) commented Oct 10, 2025

Changes

  1. Rename workflow files to improve consistency.
  2. Refactor the perfbench bot workflow:
    • Use a GHA step output instead of a text file to simplify the workflow logic.
    • Use github-script instead of a raw cURL request to improve security.
    • Improve the format of the generated comment.
  3. Refactor the publish-docs workflow:
    • Use ::add-mask:: to better hide secrets in log output.
  4. Update the PR reminder bot:
    • Recommend pre-commit commands in the comment instead of ./format.sh.
  5. Other minor changes, such as adding missing step inputs and passing secrets via step inputs instead of env vars.
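The step-output and secret-masking patterns in items 2 and 3 can be sketched locally as follows. This is a minimal simulation, not the actual workflow: outside a real Actions run, GITHUB_OUTPUT is faked with mktemp, ::add-mask:: is a no-op, and the secret value is made up for illustration.

```shell
#!/usr/bin/env sh
# Minimal local simulation of two GHA patterns used in this PR:
# 1) passing data between steps via the $GITHUB_OUTPUT file
# 2) masking a secret in logs via the ::add-mask:: workflow command
set -eu

# On a real runner, GITHUB_OUTPUT points at a file the runner parses;
# here we fake it so the script runs anywhere.
GITHUB_OUTPUT="$(mktemp)"

# Single-line output: key=value
echo "status=ok" >> "$GITHUB_OUTPUT"

# Multiline output: heredoc-style "key<<DELIMITER ... DELIMITER"
{
  echo "report<<EOF"
  printf 'line one\nline two\n'
  echo "EOF"
} >> "$GITHUB_OUTPUT"

# Masking: once this line is echoed, the runner redacts every later
# occurrence of the value in the live log (a no-op outside Actions).
SECRET="hunter2"   # made-up value, for illustration only
echo "::add-mask::$SECRET"

cat "$GITHUB_OUTPUT"
```

A later step would then read these values as `steps.<id>.outputs.status` and `steps.<id>.outputs.report`.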

I'm working on a follow-up PR to merge the three test CI files into a more generic one and improve the cache handling logic.

Summary by CodeRabbit

  • New Features
    • Added a Performance Benchmark Bot that runs on /perf requests and posts benchmark results to PRs.
    • Added a PR Reminder Bot that comments formatting/linting guidance when PRs are opened.
  • Documentation
    • Added an automated docs build-and-publish workflow triggered on merges and manual runs.
  • Chores
    • Removed deprecated CI workflows and consolidated benchmark, reminder, and docs automation.

@github-actions

👋 Hi! Thank you for contributing to the TileLang project.

Please remember to run bash format.sh in the root directory of the project to ensure your changes are properly linted and formatted. This will help ensure your contribution passes the format check.

We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work!

🚀


coderabbitai bot commented Oct 10, 2025

Walkthrough

Replaces and reorganizes CI workflows: removes legacy bot, publish_docs, and reminder workflows; adds a new performance benchmark workflow (dual-env comparison), a PR reminder workflow, and a revamped docs publish workflow targeting self-hosted runners and an external docs repo.

Changes

  • Removed legacy bot (.github/workflows/bot.yml): Deleted the issue_comment-triggered perf bot that ran a performance-test job on PRs and posted the results as PR comments.
  • Performance Benchmark Bot, new (.github/workflows/pr-perfbench-bot.yml): Added an issue_comment-triggered workflow that validates the comment content, checks out the PR merge commit, creates two Python 3.9 venvs (merged vs. main), runs maint/scripts/ci_performance.py, captures its stdout, and posts a formatted comment with the run URL and results.
  • Removed docs publish workflow (.github/workflows/publish_docs.yml): Deleted the prior docs build/publish workflow that built HTML docs and pushed them to a target repo on merge or manual dispatch.
  • Documentation publish, new (.github/workflows/publish-docs.yml): Added a workflow running on a self-hosted NVIDIA runner (Python 3.10) that checks out the code, builds the docs, clones the target docs repo, replaces the built content, and conditionally commits and pushes the changes.
  • Removed reminder workflow (.github/workflows/reminder.yml): Deleted the PR reminder workflow that commented on PR open to remind contributors to run format.sh.
  • PR Reminder Bot, new (.github/workflows/pr-reminder-bot.yml): Added a pull_request_target-triggered workflow that posts a comment via actions/github-script on PR open, instructing contributors to run pre-commit/formatting.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor User as GitHub User
    participant GH as GitHub (issue_comment)
    participant WF as GHA Workflow (pr-perfbench-bot)
    participant Repo as Repository
    participant EnvMerged as Env (merged)
    participant EnvMain as Env (main)
    participant Perf as ci_performance.py
    participant PR as PR Comments

    User->>GH: Post comment "/perf" or "/performance-report"
    GH-->>WF: Trigger workflow (issue_comment.created)
    WF->>WF: Validate comment body & PR context
    WF->>Repo: Checkout PR merge commit (with submodules)
    WF->>EnvMerged: Setup Python 3.9 venv, install reqs & package (merged)
    WF->>Repo: Dry-run clean / switch to `main`
    WF->>EnvMain: Setup Python 3.9 venv, install reqs & package (main)
    WF->>Perf: Run performance script (capture stdout)
    Perf-->>WF: Return stdout/results
    WF->>PR: Post comment with run URL and captured results
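The merged-vs-main setup in the diagram can be sketched as below. This is an illustrative simulation only: the directory names (tl-merged, tl-main) are made up, and the real workflow additionally checks out each revision and installs the requirements and the package into the corresponding venv before running the benchmark script.

```shell
#!/usr/bin/env sh
# Sketch of the dual-environment comparison: two isolated virtualenvs,
# one per revision under test. All names here are illustrative.
set -eu
work="$(mktemp -d)"

# In CI, each venv is populated after checking out the corresponding
# revision (PR merge commit vs. main); --without-pip keeps this sketch fast.
python3 -m venv --without-pip "$work/tl-merged"
python3 -m venv --without-pip "$work/tl-main"

# Show that the two interpreters are isolated from each other.
merged_prefix="$("$work/tl-merged/bin/python" -c 'import sys; print(sys.prefix)')"
main_prefix="$("$work/tl-main/bin/python" -c 'import sys; print(sys.prefix)')"

echo "merged env: $merged_prefix"
echo "main env:   $main_prefix"
```

Because each revision is installed into its own prefix, the benchmark script can compare the two builds without one checkout's artifacts leaking into the other.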

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • LeiWang1999

Poem

I twitch my nose and tap the keys,
Two venvs race beneath the trees.
Docs blossom where the branches merge,
A gentle nudge: pre-commit surge.
I hop, I post — CI hums with glee. 🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title succinctly communicates that this pull request refactors the CI workflow files unrelated to tests, matching the core changes described in the PR objectives, without extraneous detail and with phrasing concise enough for commit history.
  • Docstring Coverage (✅ Passed): No functions found in the changes; docstring coverage check skipped.

@XuehaiPan

/perf


@coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7913fb1 and 1d50ff2.

📒 Files selected for processing (6)
  • .github/workflows/bot.yml (0 hunks)
  • .github/workflows/pr-perfbench-bot.yml (1 hunks)
  • .github/workflows/pr-reminder-bot.yml (1 hunks)
  • .github/workflows/publish-docs.yml (1 hunks)
  • .github/workflows/publish_docs.yml (0 hunks)
  • .github/workflows/reminder.yml (0 hunks)
💤 Files with no reviewable changes (3)
  • .github/workflows/bot.yml
  • .github/workflows/reminder.yml
  • .github/workflows/publish_docs.yml
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml

21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

.github/workflows/publish-docs.yml

18-18: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-test-metal
  • GitHub Check: format-check

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

@coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
.github/workflows/pr-perfbench-bot.yml (1)

53-70: Capture the perfbench log as a real step output.

steps.perfbench.outputs.stdout is always empty—run steps do not populate a stdout output—so the posted comment contains no benchmark data. Persist the script output yourself (e.g. tee into a file and write it to $GITHUB_OUTPUT) and read that named output in the github‑script step.

Apply this diff:

       - name: Run performance test
         id: perfbench
         run: |
           source tl/bin/activate
-          python maint/scripts/ci_performance.py
+          python maint/scripts/ci_performance.py | tee perfbench.log
+          {
+            echo "report<<EOF"
+            cat perfbench.log
+            echo EOF
+          } >> "$GITHUB_OUTPUT"
@@
-              body: '📊 ​**Performance Test Results** (triggered by @' + context.payload.comment.user.login + '):\n\n' +
-                'Run listed here: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n\n' +
-                "${{ steps.perfbench.outputs.stdout }}"
+              body: '📊 ​**Performance Test Results** (triggered by @' + context.payload.comment.user.login + '):\n\n' +
+                'Run listed here: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n\n' +
+                `${{ steps.perfbench.outputs.report }}`
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1d50ff2 and 0eb2ae4.

📒 Files selected for processing (1)
  • .github/workflows/pr-perfbench-bot.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml

21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: format-check
  • GitHub Check: format-check


@coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
.github/workflows/pr-perfbench-bot.yml (1)

53-70: Surface the benchmark log via an explicit step output.

steps.perfbench.outputs.stdout is always empty—run steps don’t expose stdout automatically—so the PR comment still drops the benchmark results. Capture the script output yourself, write it to $GITHUB_OUTPUT, and read that in the github-script step.

       - name: Run performance test
         id: perfbench
         run: |
           source tl/bin/activate
-          python maint/scripts/ci_performance.py
+          python maint/scripts/ci_performance.py | tee perfbench.log
+          {
+            echo "report<<EOF"
+            cat perfbench.log
+            echo EOF
+          } >> "$GITHUB_OUTPUT"
@@
-                "${{ steps.perfbench.outputs.stdout }}"
+                "${{ steps.perfbench.outputs.report }}"
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0eb2ae4 and 58a2ce2.

📒 Files selected for processing (1)
  • .github/workflows/pr-perfbench-bot.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/pr-perfbench-bot.yml

21-21: label "nvidia" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-test-metal
  • GitHub Check: build-test-amd

@LeiWang1999 merged commit 0ae183d into tile-ai:main Oct 11, 2025
8 of 10 checks passed
@XuehaiPan deleted the housekeep-workflows branch October 11, 2025 05:38