Automate the process of syncing changes from handout to student repos #370
base: staging
Conversation
Walkthrough
Adds end-to-end repository handout sync: UI for repository status and bulk sync, row-selection support, DB schema and RPCs to queue syncs, async worker sync handling, webhook handling for merged sync PRs, sync helpers/wrappers, a push-sync script, and updated shared types.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Instructor
participant UI as Repositories Page (UI)
participant RPC as Supabase RPC
participant Q as pgmq Queue
participant W as GitHub Async Worker
participant GH as GitHub
participant DB as Postgres
Instructor->>UI: Select rows → "Sync Selected"
UI->>RPC: call public.queue_repository_syncs([repo_ids])
RPC->>DB: validate permissions, set desired_handout_sha where needed
RPC->>Q: enqueue sync_repo_to_handout jobs
RPC-->>UI: return queued/skipped/errors
UI-->>Instructor: show toasts + refetch table
loop per queued job
W->>Q: dequeue sync_repo_to_handout
W->>DB: read repo, template/latest SHAs
W->>GH: create branch, commit files, open PR
alt auto-mergeable
W->>GH: attempt auto-merge
end
W->>DB: update synced_handout_sha, synced_repo_sha, desired_handout_sha, sync_data
end
sequenceDiagram
autonumber
participant GH as GitHub
participant WH as Repo Webhook Function
participant DB as Postgres
GH-->>WH: pull_request (closed & merged) on "sync-to-…"
WH->>DB: find repository by full_name
alt repo found
WH->>DB: update synced_handout_sha, synced_repo_sha, sync_data
WH-->>GH: log/update
else
WH-->>GH: ignore/log
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
app/course/[course_id]/manage/assignments/[assignment_id]/layout.tsx (1 hunks)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (8 hunks)
hooks/useTableControllerTable.tsx (3 hunks)
supabase/functions/_shared/GitHubAsyncTypes.ts (2 hunks)
supabase/functions/_shared/SupabaseTypes.d.ts (4 hunks)
supabase/functions/github-async-worker/index.ts (3 hunks)
supabase/functions/github-repo-webhook/index.ts (3 hunks)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (1 hunks)
supabase/migrations/20251004163000_add-repo-sync-fields.sql (1 hunks)
utils/supabase/SupabaseTypes.d.ts (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
supabase/functions/github-repo-webhook/index.ts (1)
  supabase/functions/_shared/SupabaseTypes.d.ts (1)
    Database (3-9751)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (5)
  components/ui/link.tsx (1)
    Link (4-42)
  utils/supabase/client.ts (1)
    createClient (6-17)
  components/ui/toaster.tsx (1)
    toaster (5-8)
  hooks/useCourseController.tsx (2)
    useCourse (1382-1385)
    useCourseController (1387-1393)
  hooks/useTableControllerTable.tsx (1)
    useTableControllerTable (70-182)
utils/supabase/SupabaseTypes.d.ts (1)
  supabase/functions/_shared/SupabaseTypes.d.ts (1)
    Json (1-1)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (1)
  supabase/functions/_shared/GitHubWrapper.ts (1)
    getOctoKit (187-251)
supabase/functions/_shared/SupabaseTypes.d.ts (1)
  utils/supabase/SupabaseTypes.d.ts (1)
    Json (1-1)
supabase/functions/github-async-worker/index.ts (1)
  supabase/functions/_shared/GitHubAsyncTypes.ts (1)
    SyncRepoToHandoutArgs (82-89)
⏰ Context from checks skipped due to timeout of 180000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: deploy
// Update repository with sync status
if (result.no_changes) {
  await adminSupabase
    .from("repositories")
    .update({
      synced_handout_sha: to_sha,
      desired_handout_sha: to_sha,
      sync_data: {
        last_sync_attempt: new Date().toISOString(),
        status: "no_changes_needed"
      }
    })
    .eq("id", repository_id);
} else {
  await adminSupabase
    .from("repositories")
    .update({
      synced_handout_sha: result.merged ? to_sha : from_sha,
      synced_repo_sha: result.merged ? result.merge_sha : undefined,
      desired_handout_sha: to_sha,
      sync_data: {
        pr_number: result.pr_number,
        pr_url: result.pr_url,
        pr_state: result.merged ? "merged" : "open",
        branch_name: `sync-to-${to_sha.substring(0, 7)}`,
        last_sync_attempt: new Date().toISOString(),
        merge_sha: result.merge_sha
      }
    })
    .eq("id", repository_id);
}

recordMetric(
  adminSupabase,
  {
    method: envelope.method,
    status_code: 200,
    class_id: envelope.class_id,
    debug_id: envelope.debug_id,
    enqueued_at: meta.enqueued_at,
    log_id: envelope.log_id
  },
  scope
);
return true;
} catch (error) {
  // Update repository with error status
  await adminSupabase
    .from("repositories")
    .update({
      sync_data: {
        last_sync_attempt: new Date().toISOString(),
        last_sync_error: error instanceof Error ? error.message : String(error),
        status: "error"
      }
    })
    .eq("id", repository_id);
Surface supabase update errors instead of ignoring them.
Both success and error paths call .update(...) without inspecting the returned { error }. If that update fails (row missing, permission issue, etc.), the worker will report success and the UI never sees the new sync state. Capture the response and throw on failure so the job requeues with a clear reason.
- await adminSupabase
- .from("repositories")
- .update({
+ const { error: updateError } = await adminSupabase
+ .from("repositories")
+ .update({
synced_handout_sha: to_sha,
desired_handout_sha: to_sha,
sync_data: {
last_sync_attempt: new Date().toISOString(),
status: "no_changes_needed"
}
})
.eq("id", repository_id);
+ if (updateError) throw new Error(`Failed to mark sync success: ${updateError.message}`);
Apply the same pattern to the other update
blocks, including the error branch, so any write failure aborts the job instead of silently succeeding.
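For illustration, the error branch could carry the same guard. This is a sketch only (variable names such as adminSupabase, repository_id, and error are assumed from the surrounding worker code), not part of the committable suggestion below:

```typescript
// Error branch with the same guard applied (sketch).
const { error: updateError } = await adminSupabase
  .from("repositories")
  .update({
    sync_data: {
      last_sync_attempt: new Date().toISOString(),
      last_sync_error: error instanceof Error ? error.message : String(error),
      status: "error"
    }
  })
  .eq("id", repository_id);
if (updateError) {
  // Surface the write failure so the job requeues; the original sync error
  // is already captured in last_sync_error above.
  throw new Error(`Failed to record sync error state: ${updateError.message}`);
}
```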
📝 Committable suggestion
// Update repository with sync status
if (result.no_changes) {
  const { error: updateError } = await adminSupabase
    .from("repositories")
    .update({
      synced_handout_sha: to_sha,
      desired_handout_sha: to_sha,
      sync_data: {
        last_sync_attempt: new Date().toISOString(),
        status: "no_changes_needed"
      }
    })
    .eq("id", repository_id);
  if (updateError) throw new Error(`Failed to mark sync success: ${updateError.message}`);
} else {
  await adminSupabase
    .from("repositories")
    .update({
      synced_handout_sha: result.merged ? to_sha : from_sha,
      synced_repo_sha: result.merged ? result.merge_sha : undefined,
      desired_handout_sha: to_sha,
      sync_data: {
        pr_number: result.pr_number,
        pr_url: result.pr_url,
        pr_state: result.merged ? "merged" : "open",
        branch_name: `sync-to-${to_sha.substring(0, 7)}`,
        last_sync_attempt: new Date().toISOString(),
        merge_sha: result.merge_sha
      }
    })
    .eq("id", repository_id);
}
const mergeSha = payload.pull_request.merge_commit_sha;

scope.setTag("synced_sha", syncedSha);
scope.setTag("merge_sha", mergeSha || "none");

// Update the repository sync status
const { error: updateError } = await adminSupabase
  .from("repositories")
  .update({
    synced_handout_sha: syncedSha,
    synced_repo_sha: mergeSha,
    sync_data: {
      pr_number: payload.pull_request.number,
      pr_url: payload.pull_request.html_url,
      pr_state: "merged",
      branch_name: branchName,
      last_sync_attempt: new Date().toISOString(),
      merge_sha: mergeSha,
      merged_by: payload.pull_request.merged_by?.login,
      merged_at: payload.pull_request.merged_at
    }
Handle PR merges without merge commits
If someone uses “Rebase and merge” for the sync PR, GitHub sets merge_commit_sha to null. With the current code we would write synced_repo_sha (and merge_sha in sync_data) as null, even though the repo is fully synced. Please fall back to the head SHA so we always persist a real commit.
- const mergeSha = payload.pull_request.merge_commit_sha;
+ const mergeSha =
+ payload.pull_request.merge_commit_sha ?? payload.pull_request.head.sha ?? null;
@@
- synced_repo_sha: mergeSha,
+ synced_repo_sha: mergeSha,
@@
- merge_sha: mergeSha,
+ merge_sha: mergeSha,
📝 Committable suggestion
const mergeSha =
  payload.pull_request.merge_commit_sha ?? payload.pull_request.head.sha ?? null;

scope.setTag("synced_sha", syncedSha);
scope.setTag("merge_sha", mergeSha || "none");

// Update the repository sync status
const { error: updateError } = await adminSupabase
  .from("repositories")
  .update({
    synced_handout_sha: syncedSha,
    synced_repo_sha: mergeSha,
    sync_data: {
      pr_number: payload.pull_request.number,
      pr_url: payload.pull_request.html_url,
      pr_state: "merged",
      branch_name: branchName,
      last_sync_attempt: new Date().toISOString(),
      merge_sha: mergeSha,
      merged_by: payload.pull_request.merged_by?.login,
      merged_at: payload.pull_request.merged_at
    }
🤖 Prompt for AI Agents
In supabase/functions/github-repo-webhook/index.ts around lines 1191–1211, the
code writes merge_commit_sha (mergeSha) directly into synced_repo_sha and
sync_data.merge_sha which becomes null for "Rebase and merge" PRs; change to
fall back to the PR head SHA (payload.pull_request.head.sha) when
merge_commit_sha is null, and use that effective SHA for
scope.setTag("merge_sha", ...), the synced_repo_sha update, and
sync_data.merge_sha so we always persist a real commit SHA.
for (const file of comparison.files || []) {
  // Handle deleted files - we'll mark them with null content to remove them
  if (file.status === "removed") {
    console.log(`File deleted: ${file.filename}`);
    fileChanges.push({
      path: file.filename,
      sha: "", // Will be set to null when creating tree
      content: "" // Empty content indicates deletion
    });
    continue;
  }

  // Get the file content from the new commit
  const { data: fileContent } = await octokit.request("GET /repos/{owner}/{repo}/contents/{path}", {
    owner,
    repo,
    path: file.filename,
    ref: toSha
  });

  if ("content" in fileContent && fileContent.sha) {
    fileChanges.push({
      path: file.filename,
      sha: fileContent.sha,
      content: fileContent.content
    });
  } else {
Fix rename handling to remove the old path.
When GitHub marks a file as renamed, we only add the new filename to fileChanges. The old path never gets deleted, so the target repo retains both copies after the sync. Please queue a deletion entry for previous_filename whenever it’s provided so the rename results in the expected move.
for (const file of comparison.files || []) {
+ if (file.status === "renamed" && file.previous_filename) {
+ fileChanges.push({
+ path: file.previous_filename,
+ sha: "",
+ content: ""
+ });
+ }
📝 Committable suggestion
for (const file of comparison.files || []) {
  if (file.status === "renamed" && file.previous_filename) {
    fileChanges.push({
      path: file.previous_filename,
      sha: "",
      content: ""
    });
  }
  // Handle deleted files - we'll mark them with null content to remove them
  if (file.status === "removed") {
    console.log(`File deleted: ${file.filename}`);
    fileChanges.push({
      path: file.filename,
      sha: "", // Will be set to null when creating tree
      content: "" // Empty content indicates deletion
    });
    continue;
  }

  // Get the file content from the new commit
  const { data: fileContent } = await octokit.request("GET /repos/{owner}/{repo}/contents/{path}", {
    owner,
    repo,
    path: file.filename,
    ref: toSha
  });

  if ("content" in fileContent && fileContent.sha) {
    fileChanges.push({
      path: file.filename,
      sha: fileContent.sha,
      content: fileContent.content
    });
  } else {
    // …rest of error handling…
🤖 Prompt for AI Agents
In supabase/functions/scripts/PushChangesToRepoFromHandout.ts around lines 128
to 154, the rename case only adds the new filename so the old path is never
removed; update the loop to detect file.status === "renamed" and push a deletion
entry for file.previous_filename (set path to previous_filename and set
sha/content to indicate deletion consistent with the existing deletion format),
then continue to add the new filename entry as you do for regular files so the
rename results in a move rather than duplicating the file.
Actionable comments posted: 3
♻️ Duplicate comments (4)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (2)
128-153: Still missing deletion when handling renames.
file.status === "renamed" never queues a removal for the old path, so the target repo keeps both copies after sync. Please push a deletion entry for file.previous_filename before adding the new path.

  for (const file of comparison.files || []) {
+   if (file.status === "renamed" && file.previous_filename) {
+     fileChanges.push({
+       path: file.previous_filename,
+       sha: "",
+       content: ""
+     });
+   }
    // Handle deleted files - we'll mark them with null content to remove them

235-244: Don’t decode GitHub’s base64 payload.
Decoding to UTF-8 corrupts binary assets (images, archives, etc.). Send the original base64 through and set encoding: "base64" when creating the blob.

- const content = Buffer.from(file.content, "base64").toString("utf-8");
-
- const { data: blob } = await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
+ const { data: blob } = await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
    owner,
    repo,
-   content,
-   encoding: "utf-8"
+   content: file.content,
+   encoding: "base64"
  });

supabase/functions/github-async-worker/index.ts (1)

732-821: Surface supabase update errors instead of ignoring them.
All three database update paths (Lines 764-774, 776-791, 809-818) call .update(...) without checking the returned { error }. If any update fails due to a missing row, permission issue, or constraint violation, the worker will continue as if the operation succeeded, leaving the UI with stale sync state.
As mentioned in the previous review, capture the response and throw on failure:

- await adminSupabase
+ const { error: updateError } = await adminSupabase
    .from("repositories")
    .update({
      synced_handout_sha: to_sha,
      desired_handout_sha: to_sha,
      sync_data: {
        last_sync_attempt: new Date().toISOString(),
        status: "no_changes_needed"
      }
    })
    .eq("id", repository_id);
+ if (updateError) throw new Error(`Failed to mark sync success: ${updateError.message}`);

Apply this pattern to all three update blocks (Lines 764-774, 776-791, 809-818).
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (1)
493-508: Fix sync status filtering to match displayed badges.
The previous review comment on this issue remains valid. The filterFn (lines 493-507) classifies rows into only 4 statuses ("No Sync", "Synced", "PR Open", "Sync in Progress"), but SyncStatusBadge can render 7 distinct states including "Not Up-to-date", "Sync Finalizing", and "Error". Additionally, the filter dropdown options (lines 735-740) omit these missing states.
This creates a confusing user experience where:
- Users see badges like "Not Up-to-date" but cannot filter for them
- Rows with "Not Up-to-date" incorrectly appear when filtering for "Synced"
- "Sync Finalizing" and "Error" states are invisible to filters
Apply this diff to align the filter logic with the badge states:

  filterFn: (row, _id, filterValue) => {
    if (!filterValue || (Array.isArray(filterValue) && filterValue.length === 0)) return true;
    const values = Array.isArray(filterValue) ? filterValue : [filterValue];
    const desiredSha = (row.original as RepositoryRow).desired_handout_sha;
    const syncedSha = (row.original as RepositoryRow).synced_handout_sha;
    const syncData = (row.original as RepositoryRow).sync_data as { pr_state?: string } | null;
+   const latestTemplateSha = assignment?.data?.latest_template_sha;
    let status = "No Sync";
-   if (!desiredSha) status = "No Sync";
-   else if (desiredSha === syncedSha) status = "Synced";
-   else if (syncData?.pr_state === "open") status = "PR Open";
-   else status = "Sync in Progress";
+   if (!desiredSha) {
+     status = "No Sync";
+   } else if (desiredSha === syncedSha) {
+     // Check if synced but not up-to-date with latest template
+     if (latestTemplateSha && syncedSha !== latestTemplateSha) {
+       status = "Not Up-to-date";
+     } else {
+       status = "Synced";
+     }
+   } else if (syncData?.pr_state === "open") {
+     status = "PR Open";
+   } else if (syncData?.pr_state === "merged") {
+     status = "Sync Finalizing";
+   } else if (syncData?.last_sync_error) {
+     status = "Error";
+   } else {
+     status = "Sync in Progress";
+   }
    return values.includes(status);
  }

And update the filter dropdown options:

  options={[
    { label: "Synced", value: "Synced" },
+   { label: "Not Up-to-date", value: "Not Up-to-date" },
    { label: "Sync in Progress", value: "Sync in Progress" },
    { label: "PR Open", value: "PR Open" },
+   { label: "Sync Finalizing", value: "Sync Finalizing" },
+   { label: "Error", value: "Error" },
    { label: "No Sync", value: "No Sync" }
  ]}

Also applies to: 735-743
🧹 Nitpick comments (1)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (1)
95-97: Guard substring operations against short SHAs.
The substring(0, 7) calls assume SHAs are at least 7 characters long. If any SHA in the database is shorter (due to data corruption or testing data), this will not throw but could display unexpected values.
Apply this diff:

- const desiredSha = row.desired_handout_sha?.substring(0, 7);
- const syncedSha = row.synced_handout_sha?.substring(0, 7);
- const latestSha = latestTemplateSha?.substring(0, 7);
+ const desiredSha = row.desired_handout_sha && row.desired_handout_sha.length >= 7 ? row.desired_handout_sha.substring(0, 7) : row.desired_handout_sha;
+ const syncedSha = row.synced_handout_sha && row.synced_handout_sha.length >= 7 ? row.synced_handout_sha.substring(0, 7) : row.synced_handout_sha;
+ const latestSha = latestTemplateSha && latestTemplateSha.length >= 7 ? latestTemplateSha.substring(0, 7) : latestTemplateSha;

Or create a helper function:

const shortSha = (sha: string | null | undefined) => sha && sha.length >= 7 ? sha.substring(0, 7) : sha;
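With the helper, the call sites stay one-liners. A hypothetical example (row and latestTemplateSha stand in for the values already in scope in page.tsx):

```typescript
const desiredSha = shortSha(row.desired_handout_sha); // e.g. "a1b2c3d", or the raw value if shorter
const syncedSha = shortSha(row.synced_handout_sha);
const latestSha = shortSha(latestTemplateSha);
```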
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (8 hunks)
supabase/functions/github-async-worker/index.ts (6 hunks)
supabase/functions/github-repo-webhook/index.ts (3 hunks)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (1 hunks)
supabase/migrations/20251004163000_add-repo-sync-fields.sql (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- supabase/functions/github-repo-webhook/index.ts
🧰 Additional context used
🧬 Code graph analysis (3)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (5)
  lib/TableController.ts (3)
    TableController (461-2222)
    table (519-521)
    rows (2215-2217)
  utils/supabase/client.ts (1)
    createClient (6-17)
  components/ui/toaster.tsx (1)
    toaster (5-8)
  hooks/useCourseController.tsx (3)
    useCourse (1382-1385)
    useCourseController (1387-1393)
    repositories (602-612)
  hooks/useTableControllerTable.tsx (1)
    useTableControllerTable (70-182)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (2)
  supabase/functions/_shared/SupabaseTypes.d.ts (1)
    Database (3-9751)
  supabase/functions/_shared/GitHubWrapper.ts (1)
    getOctoKit (187-262)
supabase/functions/github-async-worker/index.ts (1)
  supabase/functions/_shared/GitHubAsyncTypes.ts (1)
    SyncRepoToHandoutArgs (82-89)
⏰ Context from checks skipped due to timeout of 180000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: deploy
🔇 Additional comments (8)
supabase/functions/github-async-worker/index.ts (3)
1-20: LGTM!
The import additions are appropriate for the new sync functionality and properly typed.

38-38: LGTM!
Exporting getCreateRepoLimiter expands the public API surface as intended. The function is already well-implemented with proper rate limiting and Redis clustering support.

840-843: LGTM!
The org extraction logic correctly splits repository_full_name on / to extract the organization, consistent with the existing pattern used for rerun_autograder.

app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (5)

1-37: LGTM!
The imports are well-organized and all appear to be used in the component. The addition of date-fns and @date-fns/tz libraries aligns with the new timezone-aware commit history feature.

185-242: LGTM!
The sync button implementation is well-structured with proper error handling, loading states, and user feedback via toasts. The use of tableController.invalidate() ensures the UI stays in sync with the database state.

300-300: Verify TZDate.tz() API usage.
The call TZDate.tz(time_zone || "America/New_York") appears to be using a static method that may not exist or may not return the expected type for formatRelative. Based on the @date-fns/tz documentation, the recommended pattern is to use timezone-aware functions or create TZDate instances directly.
Please verify that this code executes without errors. If TZDate.tz() is not the correct API, consider using:

formatRelative(commitDate, new TZDate(new Date(), time_zone || "America/New_York"))

Or alternatively, use the standard new Date() as the base date since commitDate is already timezone-aware:

formatRelative(commitDate, new Date())

Based on learnings.

543-584: LGTM!
The bulk sync implementation is well-structured and handles edge cases properly:
- Validates selection before proceeding
- Shows clear feedback via toasts with counts
- Deselects all rows after sync
- Uses refetchByIds to efficiently update only affected rows
- Proper error handling and loading states

344-344: Appropriate debounce interval for bulk operations.
Setting debounceInterval: 500 is a good choice for this use case. It prevents UI thrashing during rapid bulk sync updates while keeping the interface responsive.
const syncData = row.sync_data as {
  pr_url?: string;
  pr_number?: number;
  pr_state?: string;
  last_sync_attempt?: string;
  last_sync_error?: string;
  merge_sha?: string;
} | null;
🛠️ Refactor suggestion | 🟠 Major
Add type guard for sync_data.
The type assertion for sync_data lacks runtime validation. If the database schema changes or the data is malformed, this will fail silently or produce runtime errors.
Apply this diff to add a type guard:
+function isSyncData(data: unknown): data is {
+ pr_url?: string;
+ pr_number?: number;
+ pr_state?: string;
+ last_sync_attempt?: string;
+ last_sync_error?: string;
+ merge_sha?: string;
+} {
+ if (data === null || typeof data !== 'object') return false;
+ return true;
+}
+
function SyncStatusBadge({ row, latestTemplateSha }: { row: RepositoryRow; latestTemplateSha?: string | null }) {
- const syncData = row.sync_data as {
- pr_url?: string;
- pr_number?: number;
- pr_state?: string;
- last_sync_attempt?: string;
- last_sync_error?: string;
- merge_sha?: string;
- } | null;
+ const syncData = isSyncData(row.sync_data) ? row.sync_data : null;
📝 Committable suggestion
function isSyncData(data: unknown): data is {
  pr_url?: string;
  pr_number?: number;
  pr_state?: string;
  last_sync_attempt?: string;
  last_sync_error?: string;
  merge_sha?: string;
} {
  if (data === null || typeof data !== 'object') return false;
  return true;
}

function SyncStatusBadge({
  row,
  latestTemplateSha,
}: {
  row: RepositoryRow;
  latestTemplateSha?: string | null;
}) {
  const syncData = isSyncData(row.sync_data) ? row.sync_data : null;
  // …rest of the component…
}
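If stricter runtime validation is ever wanted, the guard could also check individual field types. A minimal sketch under the same assumed shape (illustrative only, not part of the suggestion above):

```typescript
function isSyncDataStrict(data: unknown): data is {
  pr_url?: string;
  pr_number?: number;
  pr_state?: string;
  last_sync_attempt?: string;
  last_sync_error?: string;
  merge_sha?: string;
} {
  if (data === null || typeof data !== "object") return false;
  const d = data as Record<string, unknown>;
  // Every field is optional, but when present it must have the expected primitive type
  const isOptionalString = (v: unknown) => v === undefined || typeof v === "string";
  const isOptionalNumber = (v: unknown) => v === undefined || typeof v === "number";
  return (
    isOptionalString(d.pr_url) &&
    isOptionalNumber(d.pr_number) &&
    isOptionalString(d.pr_state) &&
    isOptionalString(d.last_sync_attempt) &&
    isOptionalString(d.last_sync_error) &&
    isOptionalString(d.merge_sha)
  );
}
```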
const { data: latestHandoutCommit } = await adminSupabase
  .from("assignments")
  .select("latest_template_sha")
  .eq("template_repo", templateRepo)
  .maybeSingle();
if (envelope.repo_id) {
  // Direct update using repo_id (more efficient and reliable)
  await adminSupabase
    .from("repositories")
    .update({
      is_github_ready: true,
      synced_repo_sha: headSha,
      synced_handout_sha: latestHandoutCommit?.latest_template_sha
Check errors and validate data before using.
The query to fetch latest_template_sha doesn't check for errors, and the result could be null or contain undefined fields. If the query fails or returns no data, synced_handout_sha will be set to undefined without any indication of the problem.
Apply this diff to add error checking and validation:
- const { data: latestHandoutCommit } = await adminSupabase
+ const { data: latestHandoutCommit, error: templateError } = await adminSupabase
.from("assignments")
.select("latest_template_sha")
.eq("template_repo", templateRepo)
.maybeSingle();
+ if (templateError) {
+ scope.setContext("template_query_error", {
+ template_repo: templateRepo,
+ error_message: templateError.message
+ });
+ Sentry.captureException(templateError, scope);
+ }
if (envelope.repo_id) {
// Direct update using repo_id (more efficient and reliable)
await adminSupabase
.from("repositories")
.update({
is_github_ready: true,
synced_repo_sha: headSha,
- synced_handout_sha: latestHandoutCommit?.latest_template_sha
+ synced_handout_sha: latestHandoutCommit?.latest_template_sha || null
})
.eq("id", envelope.repo_id);
Apply the same pattern to the fallback path at Line 601.
📝 Committable suggestion
const { data: latestHandoutCommit, error: templateError } = await adminSupabase
  .from("assignments")
  .select("latest_template_sha")
  .eq("template_repo", templateRepo)
  .maybeSingle();
if (templateError) {
  scope.setContext("template_query_error", {
    template_repo: templateRepo,
    error_message: templateError.message
  });
  Sentry.captureException(templateError, scope);
}
if (envelope.repo_id) {
  // Direct update using repo_id (more efficient and reliable)
  await adminSupabase
    .from("repositories")
    .update({
      is_github_ready: true,
      synced_repo_sha: headSha,
      synced_handout_sha: latestHandoutCommit?.latest_template_sha || null
    })
    .eq("id", envelope.repo_id);
SELECT DISTINCT r.class_id INTO v_class_id
FROM public.repositories r
WHERE r.id = ANY(p_repository_ids);

IF v_class_id IS NULL THEN
  RAISE EXCEPTION 'No repositories found with provided IDs';
END IF;

-- Check for multiple classes (not allowed)
IF (SELECT COUNT(DISTINCT r.class_id)
    FROM public.repositories r
    WHERE r.id = ANY(p_repository_ids)) > 1 THEN
  RAISE EXCEPTION 'All repositories must belong to the same class';
END IF;
Handle multi-class input before selecting the class id.
SELECT DISTINCT ... INTO v_class_id will raise Postgres' too_many_rows exception as soon as two different class_ids are present, so the nicer "All repositories must belong to the same class" message below never runs. The RPC will bubble up a generic database error to the client instead of the intended instructor-facing error. Please grab one class id first (e.g., with LIMIT 1) and then compare every repo against it before raising your custom exception.
- SELECT DISTINCT r.class_id INTO v_class_id
- FROM public.repositories r
- WHERE r.id = ANY(p_repository_ids);
+ SELECT r.class_id
+ INTO v_class_id
+ FROM public.repositories r
+ WHERE r.id = ANY(p_repository_ids)
+ LIMIT 1;
@@
- IF (SELECT COUNT(DISTINCT r.class_id)
- FROM public.repositories r
- WHERE r.id = ANY(p_repository_ids)) > 1 THEN
+ IF EXISTS (
+ SELECT 1
+ FROM public.repositories r
+ WHERE r.id = ANY(p_repository_ids)
+ AND r.class_id <> v_class_id
+ ) THEN
RAISE EXCEPTION 'All repositories must belong to the same class';
END IF;
📝 Committable suggestion
-- Grab one class_id first
SELECT r.class_id
INTO v_class_id
FROM public.repositories r
WHERE r.id = ANY(p_repository_ids)
LIMIT 1;

IF v_class_id IS NULL THEN
  RAISE EXCEPTION 'No repositories found with provided IDs';
END IF;

-- Check for multiple classes (not allowed)
IF EXISTS (
  SELECT 1
  FROM public.repositories r
  WHERE r.id = ANY(p_repository_ids)
    AND r.class_id <> v_class_id
) THEN
  RAISE EXCEPTION 'All repositories must belong to the same class';
END IF;
🤖 Prompt for AI Agents
In supabase/migrations/20251004163000_add-repo-sync-fields.sql around lines 81
to 94, the SELECT DISTINCT ... INTO v_class_id can raise Postgres' too_many_rows
when multiple class_id values exist, preventing the custom "All repositories
must belong to the same class" error; change the flow to first select a single
class_id (e.g., SELECT r.class_id INTO v_class_id FROM public.repositories r
WHERE r.id = ANY(p_repository_ids) LIMIT 1), then check for NULL and afterwards
run a COUNT(DISTINCT r.class_id) > 1 check (or compare all rows against
v_class_id) to raise the intended custom exception if multiple classes are
present.
Actionable comments posted: 4
♻️ Duplicate comments (1)
supabase/functions/github-async-worker/index.ts (1)
814-841: Surface Supabase update failures in the sync success path.
Both update branches ignore the { error } returned by Supabase. If the write fails (missing row, perms, etc.) we’ll happily report success while the DB stays stale, and the worker never retries. Capture the response and throw on failure so the job requeues with a clear cause. Apply the same guard to the other update blocks in this method.

  if (result.no_changes) {
-   await adminSupabase
-     .from("repositories")
-     .update({
+   const { error: updateError } = await adminSupabase
+     .from("repositories")
+     .update({
        synced_handout_sha: to_sha,
        desired_handout_sha: to_sha,
        sync_data: {
          last_sync_attempt: new Date().toISOString(),
          status: "no_changes_needed"
        }
      })
      .eq("id", repository_id);
+   if (updateError) throw new Error(`Failed to update repository after sync: ${updateError.message}`);
  } else {
-   await adminSupabase
-     .from("repositories")
-     .update({
+   const { error: updateError } = await adminSupabase
+     .from("repositories")
+     .update({
        synced_handout_sha: result.merged ? to_sha : from_sha,
        synced_repo_sha: result.merged ? result.merge_sha : undefined,
        desired_handout_sha: to_sha,
        sync_data: {
          pr_number: result.pr_number,
          pr_url: result.pr_url,
          pr_state: result.merged ? "merged" : "open",
          branch_name: `sync-to-${to_sha.substring(0, 7)}`,
          last_sync_attempt: new Date().toISOString(),
          merge_sha: result.merge_sha
        }
      })
      .eq("id", repository_id);
+   if (updateError) throw new Error(`Failed to persist repository sync result: ${updateError.message}`);
  }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
supabase/functions/deno.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (8 hunks)
supabase/functions/_shared/GitHubSyncHelpers.ts (1 hunks)
supabase/functions/_shared/GitHubWrapper.ts (1 hunks)
supabase/functions/_shared/Redis.ts (6 hunks)
supabase/functions/github-async-worker/index.ts (17 hunks)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
supabase/functions/scripts/PushChangesToRepoFromHandout.ts (3)
  supabase/functions/_shared/SupabaseTypes.d.ts (1)
    Database (3-9751)
  supabase/functions/_shared/GitHubWrapper.ts (1)
    getOctoKit (192-267)
  supabase/functions/_shared/GitHubSyncHelpers.ts (1)
    syncRepositoryToHandout (410-546)
supabase/functions/_shared/GitHubSyncHelpers.ts (1)
  supabase/functions/github-async-worker/index.ts (1)
    getCreateContentLimiter (43-84)
app/course/[course_id]/manage/assignments/[assignment_id]/repositories/page.tsx (5)
  components/ui/link.tsx (1)
    Link (4-42)
  lib/TableController.ts (3)
    TableController (461-2222)
    table (519-521)
    rows (2215-2217)
  utils/supabase/client.ts (1)
    createClient (6-17)
  hooks/useCourseController.tsx (2)
    useCourse (1382-1385)
    useCourseController (1387-1393)
  hooks/useTableControllerTable.tsx (1)
    useTableControllerTable (70-182)
supabase/functions/github-async-worker/index.ts (2)
  supabase/functions/_shared/GitHubAsyncTypes.ts (2)
    GitHubAsyncMethod (1-8)
    SyncRepoToHandoutArgs (82-89)
  supabase/functions/_shared/GitHubSyncHelpers.ts (1)
    syncRepositoryToHandout (410-546)
🪛 Biome (2.1.2)
supabase/functions/_shared/Redis.ts
[error] 47-66: The constructor should not return a value.
The constructor is here:
Returning a value from a constructor may confuse users of the class.
(lint/correctness/noConstructorReturn)
⏰ Context from checks skipped due to timeout of 180000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: lint (22.x)
- GitHub Check: deploy
- GitHub Check: lint (22.x)
for (const file of comparison.files || []) {
  if (file.status === "removed") {
    fileChanges.push({
      path: file.filename,
      sha: "",
      content: ""
    });
    continue;
  }

  const { data: fileContent } = await octokit.request("GET /repos/{owner}/{repo}/contents/{path}", {
    owner,
    repo,
    path: file.filename,
    ref: toSha
  });

  if ("content" in fileContent && fileContent.sha) {
    fileChanges.push({
      path: file.filename,
      sha: fileContent.sha,
      content: fileContent.content
    });
  }
}
Handle renamed files by removing the old path.
Renamed files still leave the old path behind because we never enqueue a deletion for previous_filename. GitHub’s tree merge keeps the old blob unless we explicitly delete it, so every rename ends up duplicating the file. Add a deletion entry before pushing the new filename so the sync actually performs a move.
- for (const file of comparison.files || []) {
+ for (const file of comparison.files || []) {
+ if (file.status === "renamed" && file.previous_filename) {
+ fileChanges.push({
+ path: file.previous_filename,
+ sha: "",
+ content: ""
+ });
+ }
if (file.status === "removed") {
fileChanges.push({
path: file.filename,
📝 Committable suggestion
for (const file of comparison.files || []) {
  if (file.status === "renamed" && file.previous_filename) {
    fileChanges.push({
      path: file.previous_filename,
      sha: "",
      content: ""
    });
  }
  if (file.status === "removed") {
    fileChanges.push({
      path: file.filename,
      sha: "",
      content: ""
    });
    continue;
  }

  const { data: fileContent } = await octokit.request("GET /repos/{owner}/{repo}/contents/{path}", {
    owner,
    repo,
    path: file.filename,
    ref: toSha
  });

  if ("content" in fileContent && fileContent.sha) {
    fileChanges.push({
      path: file.filename,
      sha: fileContent.sha,
      content: fileContent.content
    });
  }
}
🤖 Prompt for AI Agents
In supabase/functions/_shared/GitHubSyncHelpers.ts around lines 170 to 194,
renamed files are not handled: we never enqueue a deletion for
file.previous_filename so the old path remains and the rename becomes a
duplicate. Detect file.status === "renamed" and when file.previous_filename
exists push a deletion entry for that old path (e.g., { path:
file.previous_filename, sha: "", content: "" }) before adding the new file
entry; then proceed to fetch and push the new filename as currently implemented.
files.map(async (file) => {
  if (file.content === "") {
    // Deleted file
    return {
      path: file.path,
      mode: "100644" as const,
      type: "blob" as const,
      sha: null as unknown as string
    };
  }

  const content = atob(file.content);
  const { data: blob } = await createContentLimiter.schedule(
    async () =>
      await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
        owner,
        repo,
        content,
        encoding: "utf-8"
      })
  );

  return {
    path: file.path,
    mode: "100644" as const,
    type: "blob" as const,
    sha: blob.sha
  };
})
);
Preserve binary blobs by sending base64 directly.
Decoding GitHub’s base64 (atob) and re-uploading as utf-8 corrupts any non-text asset. Keep the original base64 payload and tell GitHub it’s base64 so binary files survive the round-trip.
- const content = atob(file.content);
- const { data: blob } = await createContentLimiter.schedule(
- async () =>
- await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
- owner,
- repo,
- content,
- encoding: "utf-8"
- })
- );
+ const { data: blob } = await createContentLimiter.schedule(
+ async () =>
+ await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
+ owner,
+ repo,
+ content: file.content,
+ encoding: "base64"
+ })
+ );
📝 Committable suggestion
files.map(async (file) => {
  if (file.content === "") {
    // Deleted file
    return {
      path: file.path,
      mode: "100644" as const,
      type: "blob" as const,
      sha: null as unknown as string
    };
  }

  const { data: blob } = await createContentLimiter.schedule(
    async () =>
      await octokit.request("POST /repos/{owner}/{repo}/git/blobs", {
        owner,
        repo,
        content: file.content,
        encoding: "base64"
      })
  );

  return {
    path: file.path,
    mode: "100644" as const,
    type: "blob" as const,
    sha: blob.sha
  };
})
);
🤖 Prompt for AI Agents
In supabase/functions/_shared/GitHubSyncHelpers.ts around lines 272 to 301, the
code decodes file.content with atob and re-uploads using encoding "utf-8", which
corrupts binary blobs; instead keep the original base64 string (file.content)
and send it to GitHub as-is with encoding set to "base64" in the POST
/repos/{owner}/{repo}/git/blobs request (preserve the createContentLimiter
wrapper and error handling), leaving the deleted-file branch unchanged.
export async function getOctoKitAndInstallationID(repoOrOrgName: string, scope?: Sentry.Scope) {
  const org = repoOrOrgName.includes("/") ? repoOrOrgName.split("/")[0] : repoOrOrgName;
  const installationId = installations.find((i) => i.orgName === org)?.id;
  return { octokit: await getOctoKit(repoOrOrgName, scope), installationId };
}
Ensure installationId is resolved after populating the cache.
installationId is read before getOctoKit has a chance to populate installations, so the first call for any org/repo always returns undefined even when an installation exists. Move the lookup after awaiting getOctoKit (and reuse the computed org there) so you consistently return the matching installation ID.
-export async function getOctoKitAndInstallationID(repoOrOrgName: string, scope?: Sentry.Scope) {
- const org = repoOrOrgName.includes("/") ? repoOrOrgName.split("/")[0] : repoOrOrgName;
- const installationId = installations.find((i) => i.orgName === org)?.id;
- return { octokit: await getOctoKit(repoOrOrgName, scope), installationId };
+export async function getOctoKitAndInstallationID(repoOrOrgName: string, scope?: Sentry.Scope) {
+ const org = repoOrOrgName.includes("/") ? repoOrOrgName.split("/")[0] : repoOrOrgName;
+ const octokit = await getOctoKit(repoOrOrgName, scope);
+ const installationId = installations.find((i) => i.orgName === org)?.id;
+ return { octokit, installationId };
}
📝 Committable suggestion
export async function getOctoKitAndInstallationID(repoOrOrgName: string, scope?: Sentry.Scope) {
  const org = repoOrOrgName.includes("/") ? repoOrOrgName.split("/")[0] : repoOrOrgName;
  const octokit = await getOctoKit(repoOrOrgName, scope);
  const installationId = installations.find((i) => i.orgName === org)?.id;
  return { octokit, installationId };
}
🤖 Prompt for AI Agents
In supabase/functions/_shared/GitHubWrapper.ts around lines 187 to 191, the code
looks up installationId from the cached installations before awaiting
getOctoKit, so the cache may not be populated and the first call returns
undefined; move the installations.find(...) lookup to after awaiting getOctoKit
(reuse the computed org variable) so the cache is populated first and
installationId is computed from the updated installations list, then return the
octokit and installationId.
// Return a Proxy that forwards unknown method calls to the underlying Upstash client
// This allows Redis commands like get, set, incr, etc. to work transparently
return new Proxy(this, {
  get(target, prop, receiver) {
    // First check if the property exists on the Redis wrapper class
    const targetValue = Reflect.get(target, prop, receiver);
    if (targetValue !== undefined) {
      return targetValue;
    }

    // Then check the underlying Upstash client
    const clientAsAny = target.client as unknown as Record<string, unknown>;
    const clientMethod = clientAsAny[String(prop)];

    if (typeof clientMethod === "function") {
      // Bind the method to the client so 'this' works correctly
      return clientMethod.bind(target.client);
    }

    return undefined;
  }
});
Resolve the constructor return lint error
Returning the Proxy from the constructor triggers Biome’s lint/correctness/noConstructorReturn error, so the build will keep failing. Please either move the Proxy creation into a factory/helper outside the constructor, or explicitly suppress the rule with a targeted // biome-ignore lint/correctness/noConstructorReturn comment and a short justification so the pipeline can pass.
🧰 Tools
🪛 Biome (2.1.2)
[error] 47-66: The constructor should not return a value.
The constructor is here:
Returning a value from a constructor may confuse users of the class.
(lint/correctness/noConstructorReturn)
🤖 Prompt for AI Agents
In supabase/functions/_shared/Redis.ts around lines 45 to 66, the constructor
currently returns a Proxy which triggers Biome's lint rule
lint/correctness/noConstructorReturn; either move the Proxy creation out of the
constructor into a dedicated factory or helper function that takes the Redis
instance and returns the Proxy (call that factory after constructing the
instance), or, if returning the Proxy here is intentional and safe, add a
targeted suppression comment above the return like "// biome-ignore
lint/correctness/noConstructorReturn -- Proxy return is intentional to forward
unknown methods to Upstash client for ergonomic API" so the linter is satisfied.
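For the factory route, a minimal sketch of what it could look like, assuming the wrapper keeps the Upstash client on this.client (the class and function names below are illustrative, not the actual ones in Redis.ts):

```typescript
type UnderlyingClient = Record<string, unknown>;

class RedisWrapper {
  constructor(public readonly client: UnderlyingClient) {}
  // ...wrapper-specific helpers live here; the constructor returns nothing...
}

// Factory builds the forwarding Proxy outside the constructor,
// which satisfies Biome's noConstructorReturn rule.
function createRedisWrapper(client: UnderlyingClient): RedisWrapper & UnderlyingClient {
  const wrapper = new RedisWrapper(client);
  return new Proxy(wrapper, {
    get(target, prop, receiver) {
      // Prefer properties defined on the wrapper itself
      const own = Reflect.get(target, prop, receiver);
      if (own !== undefined) return own;
      // Otherwise fall back to the underlying client, binding methods so `this` is correct
      const candidate = target.client[String(prop)];
      return typeof candidate === "function"
        ? (candidate as (...args: unknown[]) => unknown).bind(target.client)
        : candidate;
    }
  }) as RedisWrapper & UnderlyingClient;
}
```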
Summary by CodeRabbit
New Features
Improvements