Create client library that leverages Broadcast from Database #116
base: main
Conversation
Deploying pgflow with Cloudflare Pages

- Latest commit: df4ed9f
- Status: ✅ Deploy successful!
- Preview URL: https://b6a7f5f0.pgflow.pages.dev
- Branch Preview URL: https://client-library.pgflow.pages.dev

✅ Deploy Preview for pgflow-demo ready!
Walkthrough

This PR standardizes "pgflow" naming, expands documentation and style guidelines, and introduces a new client library package.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Client as PgflowClient
    participant Adapter as SupabaseBroadcastAdapter
    participant DB as Supabase (RPC)
    participant Run as FlowRun
    Dev->>Client: startFlow(flow_slug, input[, run_id])
    alt New run
        Client->>Run: create(initial queued state)
        Client->>Adapter: subscribeToRun(runId)
    end
    Client->>DB: RPC start_flow_with_states(slug,input,runId?)
    DB-->>Client: JSON run+step states
    Client->>Run: updateState snapshots
    Client->>Run: cache FlowStep instances
    Dev->>Run: on('completed') / waitForStatus(...)
```
View your CI Pipeline Execution ↗ for commit 84f56c1.
Actionable comments posted: 25
🧹 Nitpick comments (45)
pkgs/client/.swcrc (3)

2-16: Validate JSC configuration for target environment and parser settings.

The `jsc` block targets ES2017 and configures TypeScript parsing with decorators and dynamic imports. Please ensure that:

- ES2017 output aligns with your supported runtimes, or consider lowering the target for broader compatibility.
- If you need to compile JSX/TSX files, add `"tsx": true` under `parser`.
- Confirm whether both `decoratorMetadata` and `legacyDecorator` are required; remove redundant flags if you're not using experimental decorator metadata.

17-19: Confirm module output format.

Using `"type": "es6"` emits ES modules. Verify that this matches your downstream consumers (Node, bundlers, etc.). If you also need CommonJS output for legacy environments, consider adding a separate CJS SWC config or using Rollup/Vite to generate both formats.

20-28: Refine exclusion patterns.

The `exclude` array uses regex-like strings (e.g., `".*\\.spec.tsx?$"`) but SWC expects glob patterns. Convert these to globs for clarity and correct matching, for example:

```json
"exclude": [
  "jest.config.ts",
  "**/*.spec.ts",
  "**/*.spec.tsx",
  "**/*.test.ts",
  "**/*.test.tsx",
  "src/jest-setup.ts",
  "**/jest-setup.ts",
  "**/*.js"
]
```

This prevents unintended file compilation.
pkgs/client/vite.config.ts (1)

7-10: Clean up or enable the worker config.

You have commented-out worker settings (`nxViteTsPaths()` under `worker.plugins`). If you're not using web workers, remove this block. Otherwise, enable it so that worker code also respects your TS path aliases.

pkgs/client/rollup.config.cjs (1)

11-15: Enable source maps for easier debugging.

You may want to generate source maps in production builds. For instance:

```diff
 {
   // Provide additional rollup configuration here.
+  output: {
+    sourcemap: true,
+  },
 }
```

This helps consumers trace errors back to your TS source.
pkgs/core/schemas/0110_function_start_flow_with_states.sql (1)

1-5: Review function signature and parameter defaults.

You've defined `run_id UUID default null`. For clarity and convention, consider using `DEFAULT NULL` (uppercase) and ensure parameter names (`flow_slug`, `input`, `run_id`) do not conflict with reserved words. Optionally, prefix parameters (e.g., `p_flow_slug`) to avoid ambiguity.

pkgs/core/supabase/tests/functions/get_run_with_states.test.sql (3)
17-26: Verify presence of `"run"` key

The first assertion correctly checks that the returned JSONB has a top-level `"run"` key. You could DRY these queries by reusing a single `states` CTE, but functionally this is sound.

28-36: Verify presence of `"steps"` key

This second assertion mirrors the first for the `"steps"` key; it correctly ensures that step states are included.

38-46: Verify correct number of steps

Checking `jsonb_array_length(result->'steps') = 2` validates that both steps are returned. Good coverage.

pkgs/client/src/index.ts (1)
11-12: Redundant `Unsubscribe` export

`Unsubscribe` is already included via the wildcard from `./lib/types`. Consider removing this duplicate line to keep the public API surface minimal.

pkgs/core/schemas/0100_function_start_flow.sql (1)

42-56: Emitting `run:started` realtime event

Broadcasting a detailed JSONB payload (type, ID, slug, input, status, remaining_steps, timestamp) aligns with the client's expectations. Consider wrapping this in an exception block if you want the flow to proceed even if the broadcast fails.

pkgs/client/tsconfig.lib.json (1)
1-36: Ensure declaration files are generated

The current config sets `emitDeclarationOnly: false` but does not explicitly enable `"declaration": true`. To guarantee `*.d.ts` files appear in `dist/`, either:

- Confirm that `tsconfig.base.json` already has `"declaration": true`.
- Or add under `compilerOptions`:

```json
"declaration": true,
"declarationMap": true
```

This will align your `.d.ts` outputs with the `"types"` field in package.json.

pkgs/client/tsconfig.spec.json (1)
5-11: Optional: Add DOM types for browser-based tests.

If any of your Vitest suites render components or interact with the DOM, consider adding `"dom"` to the `compilerOptions.types` array to pick up ambient DOM declarations.

pkgs/core/supabase/tests/realtime/start_ready_steps_events.test.sql (1)

31-36: Optional: Extend for future `step:started` events.

Once the `step:started` broadcast is implemented, enhance this test to assert that a `step:started` event for the "second" step is emitted.

pkgs/core/schemas/0100_function_fail_task.sql (1)
135-161: Schedule retry for queued tasks.

The CTE computes exponential backoff delays and invokes `pgmq.set_vt`. For readability, consider simplifying the `PERFORM (WITH ... SELECT ...)` syntax to a direct `PERFORM SELECT ...;` or a loop with explicit `PERFORM pgmq.set_vt(...)`.
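The loop variant suggested here could be sketched roughly as below. This is a hypothetical PL/pgSQL fragment: the table, column, and variable names (`step_tasks`, `attempts_count`, `base_delay`, using the flow slug as the queue name) are assumptions for illustration, not the actual pgflow schema.

```sql
-- Hypothetical sketch: schedule retries with an explicit loop instead of
-- PERFORM (WITH ... SELECT ...). Names are illustrative placeholders.
DECLARE
  v_task RECORD;
BEGIN
  FOR v_task IN
    SELECT st.message_id, st.attempts_count
    FROM pgflow.step_tasks st
    WHERE st.run_id = run_id AND st.status = 'queued'
  LOOP
    -- exponential backoff: base_delay * 2^attempts, in seconds
    PERFORM pgmq.set_vt(
      flow_slug,                 -- queue name (assumed to match the flow)
      v_task.message_id,
      (base_delay * power(2, v_task.attempts_count))::int
    );
  END LOOP;
END;
```

The trade-off is a round-trip per task versus the single set-based CTE; for small retry batches the loop's readability may be worth it.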
pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (1)
29-48: Optional: Validate `error_message` in the event.

Consider adding an assertion against `payload->>'error_message'` to ensure the failure message is propagated correctly.

CLAUDE.md (2)
57-57: Minor grammar refinement for clarity.

Change "very rare exception of where" to "very rare exception to where" for standard collocation.

```diff
- you should never see `%s` (with the very rare exception of where you're merging in another SQL fragment...
+ you should never see `%s` (with the very rare exception to where you're merging in another SQL fragment...
```

59-59: Polish sentence flow and reduce redundancy.

Remove the comma after "Remember" and consider rephrasing "lots of tiny functions":

```diff
- Remember, that functions have significant overhead in Postgres - instead of factoring into lots of tiny functions...
+ Remember that functions have significant overhead in Postgres; instead of factoring into many small functions...
```
pkgs/core/supabase/tests/realtime/start_flow_events.test.sql (1)

39-45: Test 4: Check presence of `started_at` timestamp.

Using `ok(...)` is sufficient to assert non-null; consider adding a recency check if clock drift is a concern.

pkgs/core/supabase/tests/realtime/maybe_complete_run_events.test.sql (1)
51-56: Test 4: Presence of `completed_at` timestamp.

Consider extending this to verify the timestamp is within an acceptable window relative to `now()`.
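Such a recency check might be sketched with pgTAP's `ok(...)` as below; note that `captured` is a hypothetical stand-in for however this test actually holds the broadcast payload, and the 5-second window is an arbitrary choice.

```sql
-- Hypothetical pgTAP assertion: completed_at should be near now().
-- `captured` and `payload` are placeholders for the test's own plumbing.
SELECT ok(
  (payload->>'completed_at')::timestamptz
    BETWEEN now() - interval '5 seconds' AND now(),
  'completed_at should be close to the current time'
)
FROM captured;
```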
sdk_phase2.md (2)
11-18: Core functionality section could link to SQL snapshot methods.

Consider referencing `start_flow_with_states` or `get_run_with_states` to tie into reconnection strategies.

31-43: Add keep-alive/ping mechanisms for connection resilience.

To strengthen connection management, consider documenting heartbeat or ping/pong support to detect stale channels early.

sdk_phase5.md (3)

12-21: Extend testing plan with performance/load tests.

In addition to unit and integration tests, consider adding stress tests for high-volume real-time events.

34-43: Type enhancements cover key areas.

Consider specifying which utility libraries (e.g., Zod, io-ts) might be used for runtime type guards.

53-56: API documentation could reference TypeDoc tooling.

Adding guidance on generating and publishing JSDoc/TypeDoc output will streamline docs maintenance.

pkgs/core/supabase/tests/realtime/partition_only_test.test.sql (1)
75-85: Consider centralizing run_id storage

The use of `into temporary table_run_id` creates a table with a non-standardized name compared to later tests that use `run_ids`. Consider standardizing temporary-table naming conventions across all test files:

```diff
-select run_id into temporary table_run_id from flow;
+select run_id into temporary run_ids from flow;
```

And then update subsequent references accordingly.

client_library_example_usage.md (4)
client_library_example_usage.md (4)
5-5
: Clarify assumption on security
The statement “We do not care at all about Row Level Security or Policies” is too blunt and may raise security concerns. Consider rephrasing to: “This example assumes that Row-Level Security policies are managed by the user.”
9-9
: Correct grammar in description
Change “we are interested about the output” to “we are interested in the output” and remove the comma before “because” for proper English.🧰 Tools
🪛 LanguageTool
[formatting] ~9-~9: If the ‘because’ clause is essential to the meaning, do not use a comma before the clause.
Context: ...ested about the output of the step_tasks, because the output of a single step task associ...(COMMA_BEFORE_BECAUSE)
14-17
: Remove or use unused import
Theuuidv4
import is not used in the example. Either demonstrate how to pass a pre-generated UUID tostartFlow
or remove the import to avoid confusion.
43-49
: Fix numbering and wording in steps list
The client initialization steps are all numbered “1.” and use the adverb “locally” twice. Renumber the list to “1., 2., 3., …” and vary or remove duplicate adverbs for clarity.🧰 Tools
🪛 LanguageTool
[style] ~46-~46: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...hem locally 1. Generates a new run UUID locally 1. Set up subscriptions before the flow...(ADVERB_REPETITION_PREMIUM)
sdk_phase3.md (1)

15-15: Add missing article "a" for clarity

Change "Create private state container" to "Create a private state container" to correct the grammar in the FlowRun Implementation section.

sdk_phase4.md (1)

15-15: Add missing article "the"

Change "Implement IFlowClient interface" to "Implement the IFlowClient interface" for clearer English in the Core Functionality section.

pkgs/core/realtime_test.md (1)
30-38: Consider consolidating the repeated list introduction

The phrase "The following functions send realtime notifications" is effectively repeated in lines 31-38. To keep the doc crisp, shorten the intro and jump straight into the bullet list, e.g.:

```markdown
### Functions that emit realtime notifications

- `pgflow.start_flow` – 'run:started'
- …
```

pkgs/client/src/lib/PgflowClient.ts (1)
142-146: Iterating and mutating `#runs` concurrently risks skipping elements

`for ... of this.#runs.keys()` iterates over a live `Map` iterator; deleting the current key inside the loop can cause the iterator to skip the entry that would have followed it. A safer pattern is to iterate over a static array copy:

```diff
- for (const runId of this.#runs.keys()) {
+ for (const runId of Array.from(this.#runs.keys())) {
    this.dispose(runId);
  }
```
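The snapshot-then-delete pattern can be shown in isolation. The class below is a simplified stand-in for illustration, not the real `PgflowClient`:

```typescript
// Sketch: dispose all entries of a Map by iterating over a copied key
// array, so deletions inside the loop cannot affect iteration.
class RunRegistry {
  private runs = new Map<string, { disposed: boolean }>();

  add(runId: string): void {
    this.runs.set(runId, { disposed: false });
  }

  dispose(runId: string): void {
    // May also remove related entries in a real client; deleting here is
    // safe because the loop below iterates over a snapshot of the keys.
    this.runs.delete(runId);
  }

  disposeAll(): void {
    for (const runId of Array.from(this.runs.keys())) {
      this.dispose(runId);
    }
  }

  get size(): number {
    return this.runs.size;
  }
}
```

`Array.from` materializes the keys once up front, so the loop body is free to mutate the map however it likes.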
293-305: `unsubscribe()` should await Supabase channel teardown to avoid ghost listeners

`channel.unsubscribe()` is asynchronous and returns a promise. Failing to await it means the network request may still be in flight while the map entry is removed, leading to dangling websocket subscriptions.

```diff
- channel.unsubscribe();
+ await channel.unsubscribe();
```

As this turns `#unsubscribe` into an async method, adapt public callers accordingly (they can ignore the returned promise).

pkgs/client/src/lib/FlowStep.ts (3)
14-19: Consider hoisting status precedence map to a `static` constant

`#statusPrecedence` is identical for every instance yet re-allocated per step. Making it `private static readonly` (or putting it in a module-level const) saves memory and avoids accidental mutation.
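The hoisting suggestion might look like the sketch below. The status names and method are assumptions mirroring the review's description, not the actual `FlowStep` implementation:

```typescript
// Sketch: a shared, read-only precedence map allocated once per class
// rather than once per instance.
type StepStatus = 'created' | 'started' | 'completed' | 'failed';

class FlowStepSketch {
  // One allocation for all instances; `readonly` discourages mutation.
  private static readonly statusPrecedence: Record<StepStatus, number> = {
    created: 0,
    started: 1,
    completed: 2,
    failed: 2,
  };

  // A transition is accepted only when the target status outranks the
  // current one, so stale or duplicate events are ignored.
  shouldTransition(from: StepStatus, to: StepStatus): boolean {
    return (
      FlowStepSketch.statusPrecedence[to] >
      FlowStepSketch.statusPrecedence[from]
    );
  }
}
```

A module-level `const` works equally well and additionally keeps the map out of the class's public type surface.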
108-118: Edge case: allow immediate fulfilment without allocating timer/listener

Nice optimisation! Just note that callers might rely on micro-task ordering. If you want to guarantee async resolution, wrap in `queueMicrotask(() => resolve(this))`. Otherwise current behaviour is fine.

224-235: Duplication with `FlowRun.#shouldUpdateStatus`

The precedence logic is duplicated here and in `FlowRun.ts` (see snippet lines 322-333). Extracting to a shared util would keep them in sync and ease future maintenance.

pkgs/client/src/lib/types.ts (1)
194-206: Unify unsubscribe return type

`subscribeToRun` returns `() => void` whereas the rest of the API exports the `Unsubscribe` alias (line 71). Returning the alias keeps the public surface consistent and allows consumers to import a single type.

```diff
- subscribeToRun(run_id: string): () => void;
+ subscribeToRun(run_id: string): Unsubscribe;
```

pkgs/core/supabase/seed.sql (1)
254-260: Use `date + 1` or an explicit cast to avoid the implicit conversion

`next_date date := target_date + interval '1 day';` relies on an implicit cast from `timestamp` to `date`. Use an explicit cast to avoid surprises on strict installations:

```sql
next_date date := (target_date + interval '1 day')::date;
```

sdk_phase1.md (1)
1-2: Clarify intent of headline to avoid misleading maintainers

The opening line "THIS PHASE IS DONE! NO NEED TO DO ANYTHING" can be misinterpreted by future contributors as a directive to skip the rest of the document or ignore pending tasks. Consider either deleting it or rephrasing to something like:

> _Phase 1 implementation complete – the notes below describe what was delivered._

to keep the historical context while preventing accidental neglect of outstanding work (e.g., adding tests, migrations).
pkgs/client/src/lib/FlowRun.ts (1)

309-312: Avoid touching NanoEvents internal `events` map

Directly reading or mutating `this.#events.events` relies on NanoEvents' internal shape, which is not part of its public API and may break on upgrade. A lightweight fix is to track a `listenerCount` in the wrapper methods (`on`, `dispose`) instead of inspecting internal fields. Alternatively, re-export NanoEvents' `Emitter` interface and extend it with a small helper (e.g., `size()`).

Also applies to: 347-348
pkgs/core/scripts/atlas-dump-realtime-schema (1)

5-7: Extract connection parameters to variables for better maintainability.

The PostgreSQL connection string is duplicated and hardcoded, which makes maintenance harder if you need to update the connection parameters. Refactor to use variables:

```diff
-echo > atlas/schema.sql
-atlas schema inspect --schema realtime -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
-atlas schema inspect --schema pgmq -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
+# Define connection parameters
+PG_USER="postgres"
+PG_PASS="postgres"
+PG_HOST="localhost"
+PG_PORT="50422"
+PG_DB="postgres"
+PG_CONN_STR="postgres://${PG_USER}:${PG_PASS}@${PG_HOST}:${PG_PORT}/${PG_DB}?sslmode=disable"
+
+echo > atlas/schema.sql
+atlas schema inspect --schema realtime -u "${PG_CONN_STR}" --format "{{ sql . }}" >> atlas/schema.sql
+atlas schema inspect --schema pgmq -u "${PG_CONN_STR}" --format "{{ sql . }}" >> atlas/schema.sql
```

pkgs/core/atlas/atlas.hcl (1)
1-24: Consider adding version constraints and environment variables.

The configuration lacks version constraints for Atlas and doesn't support environment variable overrides, which limits flexibility in different environments. Add version constraints and environment variable support:

```diff
+# Specify the Atlas version required
+version = "0.10.0" # Replace with your actual required version
+
 env "local" {
   // Define the Dev Database used to evaluate the current state
-  dev = docker.postgres.dev.url
+  // Allow overriding the database URL via environment variable
+  dev = getenv("ATLAS_DEV_URL", docker.postgres.dev.url)

   // Specify the desired schema source
-  src = "file://schemas/"
+  src = getenv("ATLAS_SCHEMA_SRC", "file://schemas/")

   // Specify the directory to place generated migrations
   migration {
-    dir = "file://supabase/migrations"
+    dir = getenv("ATLAS_MIGRATION_DIR", "file://supabase/migrations")
   }
 }
```

pkgs/core/scripts/regenerate-migration (1)
1-15: Add documentation and improve script organization.

The script lacks documentation on its purpose and dependencies. Adding comments and improving organization would make it more maintainable:

```diff
 #!/bin/bash
+# regenerate-migration
+#
+# This script regenerates a database migration by:
+# 1. Resetting the atlas.sum file to the repository state
+# 2. Removing existing migration files matching the provided name
+# 3. Running the migration diff command to generate a new migration
+# 4. Resetting the Supabase environment
+#
+# Dependencies:
+# - git
+# - atlas (https://atlasgo.io)
+# - pnpm
+# - nx
+
+# Configuration
+MIGRATIONS_DIR="supabase/migrations"
+ATLAS_SUM_FILE="$MIGRATIONS_DIR/atlas.sum"
+
 # Check if migration name is provided
 if [ -z "$1" ]; then
   echo "Error: Migration name argument is required"
   echo "Usage: $0 <migration_name>"
   exit 1
 fi

 MIGRATION_NAME="$1"

-git checkout -f supabase/migrations/atlas.sum
-rm -f supabase/migrations/*"$MIGRATION_NAME"*
+git checkout -f "$ATLAS_SUM_FILE"
+rm -f "$MIGRATIONS_DIR"/*"$MIGRATION_NAME"*
 ./scripts/atlas-migrate-diff "$MIGRATION_NAME"
 pnpm nx run core:supabase:reset
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (61)
- CLAUDE.md (3 hunks)
- client_library_architecture.md (1 hunks)
- client_library_example_usage.md (1 hunks)
- client_library_sql_modifications.md (1 hunks)
- nx.json (1 hunks)
- package.json (2 hunks)
- pkgs/client/.swcrc (1 hunks)
- pkgs/client/README.md (1 hunks)
- pkgs/client/eslint.config.cjs (1 hunks)
- pkgs/client/package.json (1 hunks)
- pkgs/client/project.json (1 hunks)
- pkgs/client/rollup.config.cjs (1 hunks)
- pkgs/client/src/index.ts (1 hunks)
- pkgs/client/src/lib/FlowRun.ts (1 hunks)
- pkgs/client/src/lib/FlowStep.ts (1 hunks)
- pkgs/client/src/lib/PgflowClient.ts (1 hunks)
- pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1 hunks)
- pkgs/client/src/lib/client.spec.ts (1 hunks)
- pkgs/client/src/lib/client.ts (1 hunks)
- pkgs/client/src/lib/types.ts (1 hunks)
- pkgs/client/tsconfig.json (1 hunks)
- pkgs/client/tsconfig.lib.json (1 hunks)
- pkgs/client/tsconfig.spec.json (1 hunks)
- pkgs/client/vite.config.ts (1 hunks)
- pkgs/core/.gitignore (1 hunks)
- pkgs/core/atlas.hcl (0 hunks)
- pkgs/core/atlas/atlas.hcl (1 hunks)
- pkgs/core/realtime_test.md (1 hunks)
- pkgs/core/schemas/0060_tables_runtime.sql (2 hunks)
- pkgs/core/schemas/0100_function_complete_task.sql (3 hunks)
- pkgs/core/schemas/0100_function_fail_task.sql (3 hunks)
- pkgs/core/schemas/0100_function_maybe_complete_run.sql (1 hunks)
- pkgs/core/schemas/0100_function_start_flow.sql (3 hunks)
- pkgs/core/schemas/0100_function_start_ready_steps.sql (2 hunks)
- pkgs/core/schemas/0105_function_get_run_with_states.sql (1 hunks)
- pkgs/core/schemas/0110_function_start_flow_with_states.sql (1 hunks)
- pkgs/core/scripts/atlas-dump-realtime-schema (1 hunks)
- pkgs/core/scripts/atlas-migrate-diff (2 hunks)
- pkgs/core/scripts/atlas-verify-schemas-synced (1 hunks)
- pkgs/core/scripts/build-atlas-postgres-image (1 hunks)
- pkgs/core/scripts/regenerate-migration (1 hunks)
- pkgs/core/src/types.ts (2 hunks)
- pkgs/core/supabase/config.toml (1 hunks)
- pkgs/core/supabase/seed.sql (2 hunks)
- pkgs/core/supabase/tests/fail_task/error_message_storage.test.sql (1 hunks)
- pkgs/core/supabase/tests/functions/get_run_with_states.test.sql (1 hunks)
- pkgs/core/supabase/tests/functions/start_flow_with_states.test.sql (1 hunks)
- pkgs/core/supabase/tests/pgflow_tests/create_realtime_partition.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/complete_task_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/full_flow_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/maybe_complete_run_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/partition_only_test.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/start_flow_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/start_ready_steps_events.test.sql (1 hunks)
- sdk_phase1.md (1 hunks)
- sdk_phase2.md (1 hunks)
- sdk_phase3.md (1 hunks)
- sdk_phase4.md (1 hunks)
- sdk_phase5.md (1 hunks)
- tsconfig.json (1 hunks)
💤 Files with no reviewable changes (1)
- pkgs/core/atlas.hcl
🧰 Additional context used

🧬 Code Graph Analysis (2)

pkgs/client/src/lib/client.spec.ts (1)
- pkgs/client/src/lib/client.ts: `client` (1-3)

pkgs/client/src/lib/FlowStep.ts (4)
- pkgs/client/src/index.ts: `FlowStep` (9-9), `Unsubscribe` (12-12)
- pkgs/dsl/src/dsl.ts: `ExtractFlowSteps` (88-94), `StepOutput` (134-139)
- pkgs/client/src/lib/types.ts: `FlowStepState` (165-175), `StepEvents` (35-66), `Unsubscribe` (71-71)
- pkgs/client/src/lib/FlowRun.ts: `currentStatus` (322-333)
🪛 LanguageTool

client_library_example_usage.md
- ~9: If the 'because' clause is essential to the meaning, do not use a comma before the clause. (COMMA_BEFORE_BECAUSE)
- ~41: You might be missing the article "the" here: "When calling `pgflow.startFlow`, the pgflow client does the following..." (AI_EN_LECTOR_MISSING_DETERMINER_THE)
- ~46: This adverb ("locally") was used twice in the sentence. Consider removing one or replacing it with a synonym. (ADVERB_REPETITION_PREMIUM)

CLAUDE.md
- ~55: Three successive sentences begin with the same word. Consider rewording. (ENGLISH_WORD_REPEAT_BEGINNING_RULE)
- ~57: The usual collocation for "exception" in this context is "to", not "of". (EXCEPTION_OF_TO)
- ~59: Putting a comma before "that" could disrupt the flow of the sentence. (COMMA_BEFORE_THAT_2)
- ~59: Consider a more concise synonym for "lots of". (A_LOT_OF)
- ~59: Use a comma before "so" if it connects two independent clauses. (COMMA_COMPOUND_SENTENCE_2)

sdk_phase5.md
- ~59: You might be missing the article "a" here: "Create a getting started guide". (AI_EN_LECTOR_MISSING_DETERMINER_A)

sdk_phase3.md
- ~15: You might be missing the article "a" here: "Create a private state container". (AI_EN_LECTOR_MISSING_DETERMINER_A)

sdk_phase1.md
- ~19: You might be missing the article "the" here (twice): "Add the `error_message TEXT` column to the `pgflow.step_states` table". (AI_EN_LECTOR_MISSING_DETERMINER_THE)
- ~36: You might be missing the article "a" here: "Add a supporting index". (AI_EN_LECTOR_MISSING_DETERMINER_A)
- ~37: You might be missing the article "an" here: "Create an index on step_states(run_id)". (AI_EN_LECTOR_MISSING_DETERMINER_AN)
- ~66: This is not the usual sequence for adjectives: "the new small interfaces" (`IFlowStarter` and `ITaskPr...`). (EN_ADJ_ORDER)

pkgs/core/realtime_test.md
- ~114: You have already used this phrasing ("Verify that `pgflow.maybe_complete_run` sends...") in nearby sentences. Consider replacing it to add variety. (REP_MAYBE)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (116)
pkgs/core/supabase/config.toml (1)

19-20: Necessary configuration change for realtime functionality

Enabling the realtime feature is essential for the client library to receive broadcast events from Supabase. Please ensure this configuration change is documented in your deployment docs so that administrators know this feature must remain enabled for the client SDK to function properly.
tsconfig.json (1)
20-23
: Proper integration of client package in TypeScript project

The new client package is correctly added to the project references, enabling proper TypeScript integration with the rest of the monorepo.
nx.json (1)
42-48: Add Rollup plugin for client package build

The new `@nx/rollup/plugin` entry with `"buildTargetName": "build"` correctly integrates the Rollup build step into the workspace, enabling bundling for the newly added `@pgflow/client` package.

pkgs/client/eslint.config.cjs (1)
1-3: Re-export root ESLint configuration for client package

This file cleanly forwards the base config from the monorepo root, ensuring lint rules stay consistent across the `client` package.

package.json (1)
24-24: Add Rollup and SWC loader devDependencies

Introducing `@nx/rollup`, `rollup`, and `swc-loader` aligns the devDependencies with the new Rollup-based build pipeline for the `@pgflow/client` package.

Also applies to: 41-42
pkgs/client/tsconfig.json (1)
1-19: Configure TypeScript project references for client package

The `references` array correctly links the DSL, core, library, and spec projects for incremental builds and type safety within the monorepo.

pkgs/core/schemas/0060_tables_runtime.sql (2)
31-31: Add error_message column to step_states table

Introducing the nullable `error_message text` field allows detailed error tracking at the step state level.

52-52: Add index on run_id for step_states

The new `idx_step_states_run_id` index optimizes queries that filter step states by `run_id`, supporting efficient state retrieval for real-time clients.

pkgs/client/vite.config.ts (1)
11-21: Verify test environment matches client runtime.

Tests run in a Node environment (`environment: 'node'`). If your client code relies on browser APIs (e.g., `window`, `fetch`), you may need to switch to `jsdom` or provide mocks. Confirm that `node` is the correct choice to avoid unexpected failures.

pkgs/client/rollup.config.cjs (1)
1-2: Ensure `@nx/rollup` dependency is installed.

The config imports `withNx` from `@nx/rollup/with-nx`. Verify that this package is listed in your `package.json` dependencies and matches the version your workspace expects. Otherwise the build will fail.

pkgs/core/supabase/tests/functions/start_flow_with_states.test.sql (1)
1-21: Comprehensive validation of `start_flow_with_states` output

The test correctly resets the database, sets up a two-step flow, invokes `start_flow_with_states` with and without a custom `run_id`, and asserts:

- Presence of the `run` key
- Correct usage of a provided `run_id`
- Proper step count
- Initial state of the root step

All four assertions align with `plan(4)`.

pkgs/core/schemas/0100_function_complete_task.sql (2)
12-14: Declaration of `v_step_state` is well-placed

Capturing the updated row into a local PL/pgSQL variable ensures the broadcast logic references the exact post-update state.

85-104: Correct conditional broadcasting of `step:completed` events

The `IF v_step_state.status = 'completed'` guard prevents premature or duplicate broadcasts, and the payload fields align with the specification.

pkgs/core/supabase/tests/fail_task/error_message_storage.test.sql (1)
1-36: Error message storage tests are accurate and complete

The `plan(3)` matches the three `is()` assertions verifying:

- `step_tasks.error_message`
- `step_states.error_message`
- Absence of errors in dependent steps

All checks align with the new `error_message` column behavior.

pkgs/core/schemas/0105_function_get_run_with_states.sql (1)
1-12: `get_run_with_states` correctly assembles the run snapshot

The SQL function uses `to_jsonb(r)` for the run, a `LEFT JOIN` plus `FILTER`ed `jsonb_agg` (with `COALESCE` to `[]`) for steps, ensuring an empty array when no step states exist.

pkgs/core/supabase/tests/functions/get_run_with_states.test.sql (4)
1-5: Test harness setup is correct

Initializing the TAP plan with `plan(3)` and resetting the database via `pgflow_tests.reset_db()` ensures a clean state.

6-10: Flow and steps creation is valid

You create a simple flow with two dependent steps, which sets up the scenario for testing `get_run_with_states`.

11-16: Capturing the `run_id` correctly

Using a CTE to start the flow and store `run_id` into a temporary table is a solid approach to reuse it in subsequent assertions.

48-49: Tear down with `finish()` and rollback

Finalizing the TAP test and rolling back maintain test isolation.

pkgs/client/src/index.ts (3)
1-3: Consolidated type exports

`export * from './lib/types'` already re-exports all types (including `Unsubscribe`), so this line is valuable for broad type availability.

5-6: Main client class export

`export { PgflowClient } from './lib/PgflowClient'` cleanly surfaces the primary entry point.

8-10: FlowRun and FlowStep type exports

Explicitly exporting these types without their implementations offers consumers strong typing without extra bundle weight.

pkgs/core/schemas/0100_function_start_flow.sql (4)
1-5: Added `run_id` parameter to function signature

Introducing an optional `run_id UUID DEFAULT NULL` parameter is essential for client-controlled IDs, but verify that callers handle uniqueness errors if they pass a duplicate `UUID`.

Would you like a snippet to automate searching for all calls to `pgflow.start_flow` across your SQL files to ensure they've been updated to include the new `run_id` argument?
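A rough sketch of what such a check could look like (the helper name and shape below are illustrative, not part of the codebase): a small function that surfaces every `pgflow.start_flow(` call site for manual review.

```typescript
// Hypothetical helper: return the lines of a SQL source that call
// pgflow.start_flow, so each call site can be checked for the new
// optional run_id argument.
function findStartFlowCalls(sql: string): string[] {
  return sql
    .split('\n')
    .filter((line) => line.includes('pgflow.start_flow('));
}
```

Running this over each migration and test file would produce a reviewable list of call sites.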
21-29: Explicit run insertion using provided or generated `run_id`

The `COALESCE(run_id, gen_random_uuid())` logic correctly honors an injected ID or generates one. Ensure that `pgflow.runs.run_id` has a unique constraint to catch collisions early.

31-39: Created step states referencing the new run

Pulling the `run_id` from the `created_run` CTE is correct. This ties the step states to the exact run, avoiding ambiguity.

58-61: Returning the newly created run

Using `RETURN QUERY SELECT * FROM pgflow.runs WHERE run_id = v_created_run.run_id` guarantees the client receives the exact run record. This is consistent and clear.

pkgs/client/tsconfig.spec.json (1)
1-33: Test TS config is correct. It properly extends the base config, sets the `outDir` for Vitest outputs, includes the right global types (`vitest/globals`, `vitest/importMeta`, `vite/client`, `node`, `vitest`), and references `tsconfig.lib.json`. This will ensure your tests are compiled and type-checked accurately.

pkgs/core/supabase/tests/realtime/start_ready_steps_events.test.sql (1)
1-43: Test correctly verifies `run:started` emission. The script begins a transaction, sets up a sequential flow, completes the first task (implicitly invoking `start_ready_steps`), and asserts exactly one `run:started` event was broadcast. This aligns with the current implementation.

pkgs/core/supabase/tests/pgflow_tests/create_realtime_partition.test.sql (7)
1-8: Initialize test plan and reset database. The `plan(5)` matches the five assertions below, and `pgflow_tests.reset_db()` ensures a clean state.

9-13: Verify function signature existence. Confirms that `pgflow_tests.create_realtime_partition(date)` is present in the catalog.

15-20: Test partition creation return value. Validates that invoking the function for today's date returns `true`, indicating a new partition was created.

22-32: Assert partition in system catalog. Checks `pg_class` to ensure the `realtime.messages_YYYY_MM_DD` table exists after creation.

34-39: Check idempotent behavior. Verifies a second call returns `false`, confirming the function is idempotent.

41-58: Validate `realtime.send()` post-partition. Ensures that sending a test message under the new partition results in exactly one row in `realtime.messages`.

63-64: Finish test and rollback. Uses `SELECT * FROM finish()` and then rolls back, isolating side effects.

pkgs/core/schemas/0100_function_fail_task.sql (7)
12-14: Introduce `v_run_failed` flag. Declares a boolean to capture whether the run transitions to a failed state, enabling conditional broadcasting of run-level failure events.

69-73: Capture `error_message` on step failure. Enhances `pgflow.step_states` with the provided `error_message` when the step status becomes `failed`, ensuring downstream consumers receive the full context.

79-97: Broadcast `step:failed` event. The new `broadcast_step_failed` CTE correctly emits a `step:failed` event with payload `{ run_id, step_slug, status, error_message, failed_at }`. Ensure this channel naming (`step:<slug>:failed`) matches the client's subscription logic.
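One way to keep the two sides aligned is to centralize the naming in a single client-side helper. The sketch below assumes the `step:<slug>:<status>` convention described above; the function names are illustrative, not actual library API.

```typescript
type StepStatus = 'started' | 'completed' | 'failed';

// Build the event name the SQL broadcast emits, e.g. "step:first:failed".
function stepEventName(stepSlug: string, status: StepStatus): string {
  return `step:${stepSlug}:${status}`;
}

// Parse an incoming event name; returns null for non-step events like "run:failed".
function parseStepEvent(
  event: string
): { stepSlug: string; status: StepStatus } | null {
  const match = /^step:(.+):(started|completed|failed)$/.exec(event);
  return match
    ? { stepSlug: match[1], status: match[2] as StepStatus }
    : null;
}
```

Having both the emit-side name and the subscribe-side matcher derived from one helper makes drift between SQL and client a compile-time concern rather than a runtime surprise.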
98-109: Update run status atomically. The `UPDATE pgflow.runs` block sets `status = 'failed'` and `failed_at = now()` when appropriate, then stores the boolean into `v_run_failed` for later use. Good transactional consistency.

112-132: Emit `run:failed` broadcast. Guards the broadcast under `IF v_run_failed` and sends the run-level failure event with a detailed payload. Verify the event name (`'run:failed'`) and channel (`pgflow:run:<run_id>`) align with other event conventions.

163-176: Archive messages for permanently failed tasks. Properly uses a CTE to identify failed tasks and calls `pgmq.archive`, ensuring failed messages are moved out of the active queue.

178-184: Return updated task rows. The final `RETURN QUERY` selects the modified `step_tasks`, satisfying the function signature `RETURNS SETOF pgflow.step_tasks`.

pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (8)
1-3: Initialize test environment and declare `plan(3)`. Begins a transaction and sets the expected number of assertions to three.

4-11: Prepare flow with single-step retry. Creates the realtime partition, resets the DB, defines a flow with `max_attempts => 1`, and adds the lone "first" step.

13-17: Start flow and capture `run_id`. Uses a CTE to invoke `pgflow.start_flow` and stores the result in a temporary table for later assertions.

19-27: Invoke `fail_task` on the polled task. Polls for the task and calls `pgflow.fail_task(...)`, capturing its output in `failed_tasks`.

29-34: Verify `run:failed` event count. Asserts that exactly one `run:failed` event exists for the captured `run_id`.

37-41: Assert event payload status. Checks that the emitted `run:failed` event's `status` field equals `'failed'`.

43-48: Assert correct event topic. Verifies that the event's `topic` matches `pgflow:run:<run_id>`.

50-55: Finish test sequence and rollback. Calls `SELECT finish()` to complete pgTAP output, then rolls back to avoid lasting changes.

CLAUDE.md (4)
7-7: Documentation emphasis is clear and helpful.
The bold callout effectively highlights the MVP status and development philosophy.

31-31: Database guidance is concise and correct.
Specifying PostgreSQL as the core DB aligns with the codebase.

35-37: Testing guidelines are well-structured.
The instructions for organizing TypeScript tests into `__tests__`, split by runtime and types, clearly enforce consistency and prevent pollution of `src/`.

43-43: SQL Style Guidelines section placement is appropriate.
The new section cleanly separates database conventions from TypeScript style.

pkgs/core/supabase/tests/realtime/start_flow_events.test.sql (11)
1-3: Set up the test transaction and plan correctly.
Using `begin;` and `plan(7);` matches the seven subsequent assertions.

5-6: Ensure realtime partition exists before tests.
Calling `pgflow_tests.create_realtime_partition()` guards against missing partitions.

8-11: Flow environment initialization is correct.
Resetting the DB and defining a flow with one step sets the proper preconditions.

13-17: Capturing `run_id` via CTE looks good.
The temporary table `run_ids` neatly holds the identifier for downstream queries.

18-24: Test 1: Verify one `run:started` event.
Asserting exactly one event ensures no duplicates or missing emissions.

25-30: Test 2: Validate `flow_slug` in payload.
Checking `payload->>'flow_slug'` equals `'sequential'` confirms correct event data.

32-37: Test 3: Confirm the `status` field.
Ensuring the event's `status` is `"started"` aligns with business expectations.

47-52: Test 5: Verify input data propagation.
Confirming `payload->'input'->>'test_data'` ensures the JSON payload is forwarded intact.

54-59: Test 6: Assert correct event name.
Validating `event = 'run:started'` prevents typos in event routing.

61-66: Test 7: Validate topic formatting.
Using `concat('pgflow:run:', run_id)` ensures consistency with subscription topics.

68-72: Cleanup ensures idempotence.
Dropping temporary tables and rolling back leaves the environment pristine for other tests.

pkgs/core/supabase/tests/realtime/maybe_complete_run_events.test.sql (10)
1-3: Initialize test harness correctly.
`begin;` with `plan(5);` aligns with the five assertions in this script.

5-6: Ensure realtime partition is present.
Precreating the partition is critical for event routing tests.

8-11: Reset DB and define a simple flow.
Setting up a single-step flow provides controlled conditions for completion logic.

13-17: Capture `run_id` from `start_flow`.
The temporary `run_ids` table use is consistent with other realtime tests.

19-28: Poll and complete a task to trigger the completion event.
Chaining `poll_for_tasks` and `complete_task` within CTEs is clear and effective.

30-35: Test 1: Single `run:completed` event emission.
Counting exactly one prevents silent failures or spurious duplicates.

37-42: Test 2: `flow_slug` accuracy in completion event.
Matching `'simple'` ensures the correct flow context.

44-49: Test 3: `status` field equals `"completed"`.
Verifies that the semantic state is correctly broadcast.

58-63: Test 5: Topic formatting for completion event.
Consistent use of `pgflow:run:<run_id>` aligns with subscription topics.

65-70: Teardown via cleanup and rollback.
Dropping `run_ids` and `completed_tasks` maintains test isolation.

sdk_phase2.md (6)
1-4: Intro and phase context are clear.
The opening correctly situates Phase 2 in relation to Phase 1's SQL event changes.

5-10: Objectives are well-defined.
Listing implementation, subscription, and connection goals upfront provides clear focus.

19-24: Subscription management details are comprehensive.
Channel lifecycle, tracking, and cleanup cover the key scenarios.

25-30: Event processing responsibilities are clear.
Parsing, routing, and validation steps will help avoid runtime errors.

44-56: Callback system description is well-scoped.
Registration, removal, and safe invocation guidelines cover typical use cases.

57-66: Deliverables list is thorough.
Including test coverage, documentation, and initial validation ensures readiness for Phase 3.

sdk_phase5.md (9)
1-4: Phase 5 introduction is concise and focused.
The header correctly frames the finalization effort.

5-11: Objectives clearly articulate the end goals.
Testing, type enhancements, documentation, and packaging are all covered.

22-28: Integration test scenarios are well-described.
Including reconnection and end-to-end flow tests will catch multi-component issues.

29-33: Testing utilities section is helpful.
Mocks and fixtures will accelerate writing reliable tests.

46-50: Type testing practices are thorough.
Type assertion tests will ensure compile-time guarantees.

58-67: Usage guides and examples are well-scoped.
Covering React/Next.js and Node.js will help a broad audience.
Covering React/Next.js and Node.js will help a broad audience.🧰 Tools
🪛 LanguageTool
[uncategorized] ~59-~59: You might be missing the article “a” here.
Context: ...nents 2. Usage Guides: - Create getting started guide - Add common patterns ...(AI_EN_LECTOR_MISSING_DETERMINER_A)
70-79: Package bundling and metadata are covered.
ESM/CJS outputs, tree-shaking, and source maps set the stage for production-ready packages.

81-84: CI/CD integration steps are complete.
Automated testing and publishing workflows will enforce quality gates.

86-91: Deliverables recap is accurate.
The summary ensures all facets of finalization are accounted for.

pkgs/core/supabase/tests/realtime/partition_only_test.test.sql (4)
1-3: Test count matches assertions

The test planning count (3) matches the number of explicit assertions made using `ok()` and other test functions at lines 70-73, 100-103, and 143-146.
4-38: Well-structured partition creation with proper error handling

The partition creation logic correctly:

- Dynamically determines today's date
- Checks if the partition already exists before attempting creation
- Uses proper error handling with an exception block
- Provides informative logging

This ensures reliable testing across repeated runs.
40-54: Direct `realtime.send` test approach is clean

Testing the `realtime.send` function directly before testing the pgflow integration is a good incremental approach. The error handling ensures the test will continue even if there's an issue with the direct call.
121-140: Excellent diagnostic information

The detailed event summary provides valuable diagnostic information for understanding test failures. The loop that displays each event type and count is particularly helpful for debugging.
pkgs/core/supabase/tests/realtime/complete_task_events.test.sql (4)
1-6: Good test setup with partition creation

The test correctly ensures the realtime partition exists before running tests, using a helper function for clarity and reusability.
28-33: Clear event count verification

The test properly verifies that exactly one `step:completed` event is emitted, using a helper function that makes the test intention clear.
35-62: Comprehensive event payload validation

Tests 2-5 thoroughly validate the contents of the event payload, including:

- `step_slug` verification
- status value verification
- timestamp presence and format
- output data verification

This ensures the payloads contain all required information for consumers.
64-76: Event naming convention verification

Tests 6-7 verify the naming conventions for events and topics, which is crucial for client libraries that need to subscribe to these events with the correct patterns.
pkgs/core/supabase/tests/realtime/full_flow_events.test.sql (4)
7-15: Well-documented flow structure with comments

The commented flow structure provides clear context about the test setup and dependencies, making the test more maintainable and easier to understand.
22-27: Consistent event verification pattern

All event verification tests follow a consistent pattern using the helper function `pgflow_tests.count_realtime_events()`, which improves readability and maintainability.
112-126: Negative test cases for failed events

Tests 7-8 verify that no failure events are emitted during a successful flow run. Testing for the absence of incorrect events is as important as testing for the presence of correct events.
128-136: Thorough cleanup with comprehensive drop statements

The cleanup section properly drops all temporary tables created during the test, ensuring a clean state for subsequent tests.
pkgs/core/src/types.ts (3)
41-53: Well-designed `IFlowStarter` interface

The new `IFlowStarter` interface follows the Interface Segregation Principle by focusing solely on flow initiation responsibilities. The clear documentation and type-safe parameters with generic support for flow types enhance usability.
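To illustrate how narrow such an interface is in practice, here is a simplified sketch. Note the real `IFlowStarter` is generic over flow definitions; the names and shapes below are stand-ins, not the actual API.

```typescript
interface RunRow {
  run_id: string;
  flow_slug: string;
  status: string;
}

// Simplified stand-in for IFlowStarter: flow initiation only, nothing else.
interface FlowStarter {
  startFlow(flowSlug: string, input: unknown): Promise<RunRow>;
}

// Minimal in-memory implementation, handy for unit-testing consumers
// without a database.
class FakeFlowStarter implements FlowStarter {
  async startFlow(flowSlug: string, _input: unknown): Promise<RunRow> {
    return { run_id: 'run-1', flow_slug: flowSlug, status: 'started' };
  }
}
```

A consumer that only starts flows can depend on `FlowStarter` alone, which is exactly what the segregation buys.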
55-79: Clean `ITaskProcessor` interface with focused methods

The `ITaskProcessor` interface properly encapsulates task-related operations (polling, completion, and failure handling) with appropriate documentation and type safety.
81-87: Good backward compatibility with composite interface

The composite `IPgflowClient` interface maintains backward compatibility while allowing for more granular implementation of just the needed interfaces. This design supports the new client architecture while not breaking existing code.

pkgs/core/schemas/0100_function_maybe_complete_run.sql (1)
38-46: Ensure payload consistency with client expectations

The JSON payload includes an `event_type` field, while other broadcast functions omit this key. Confirm that the client-side adapter and TypeScript event interfaces expect `event_type`. If not, remove it to keep payloads consistent across event types.

client_library_sql_modifications.md (1)
24-28: Documentation payload mismatch

The table entry for `maybe_complete_run.sql` lists the payload as `{ run_id, output, completed_at }`, but the actual function implementation includes additional fields (`flow_slug`, `status`, etc.). Update the documentation to accurately reflect the function's JSON payload.

Likely an incorrect or invalid review comment.

pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
147-160: Event names emitted from snapshots don't follow the documented convention

`#refreshStateFromSnapshot` synthesises `event_type` as `run:${status}` and `step:${status}`. According to the test-plan document and server functions, the canonical names are:

- `run:started|completed|failed`
- `step:<slug>:started|completed|failed`

Omitting the step slug in the snapshot events may confuse consumer code that filters on exact event names. Consider embedding the slug:

```diff
- event_type: `step:${step.status}`,
+ event_type: `step:${step.step_slug}:${step.status}`,
```

This keeps snapshot-generated events consistent with live broadcast events.
pkgs/client/src/lib/types.ts (1)
76-102: Schema drift risk: keep event payloads in sync

`BroadcastRunStartedEvent` uses raw `Json` for `input`, but the strongly-typed `FlowRunEvents.started` uses `ExtractFlowInput<TFlow>`. Ensure the SQL emitters cast/validate to the same shape; mismatches will only surface at runtime.

pkgs/client/src/lib/FlowRun.ts (1)
256-263: Preserve `remaining_steps` on run failure

When a run fails, `remaining_steps` is currently left unchanged. Clients relying on this field for progress bars will display stale data.

```diff
- error_message: event.error_message || 'Unknown error',
+ error_message: event.error_message || 'Unknown error',
+ remaining_steps: 'remaining_steps' in event ? event.remaining_steps : this.#state.remaining_steps,
```

client_library_architecture.md (1)
518-533: Status precedence table inconsistent with implementation

The precedence table here includes `'created'`, whereas `FlowRun.#statusPrecedence` in code omits it (see earlier comment). Ensure the documentation and implementation stay in sync to prevent regressions in event ordering logic.

pkgs/core/.gitignore (1)

1-1: Appropriate addition to gitignore

Adding `atlas/schema.sql` to gitignore is a good practice, as it prevents generated schema files from being tracked in version control.

pkgs/core/scripts/atlas-verify-schemas-synced (1)
5-5: Good configuration standardization

Adding the explicit configuration file path (`--config file://atlas/atlas.hcl`) ensures consistent Atlas configuration across all schema operations.

pkgs/core/scripts/build-atlas-postgres-image (1)

2-2: Proper organization of Docker resources

The updated build command correctly references the Dockerfile in the new `atlas/` directory structure and sets the appropriate build context, aligning with the reorganization of schema management resources.

pkgs/core/scripts/atlas-migrate-diff (2)

3-4: Good addition of schema dump step

Adding the realtime schema dump step before the migration diff is a good practice to ensure the migration is generated with the latest schema information.

23-23: Consistent configuration usage

Using the explicit configuration file path (`--config file://atlas/atlas.hcl`) maintains consistency with other Atlas commands in the codebase.
```ts
export function client(): string {
  return 'client';
}
```
🛠️ Refactor suggestion
Implement real client functionality or clarify purpose
This function only returns a static string 'client' without providing any actual client functionality. This appears to be a placeholder implementation.
Based on the PR description and AI summary, this should implement a client SDK for managing flow runs. Consider implementing the actual client functionality or clarifying if this is intentionally a placeholder.
```diff
-export function client(): string {
-  return 'client';
-}
+// Example of a more meaningful implementation:
+export class PgflowClient {
+  constructor(private readonly broadcastAdapter: BroadcastAdapter) {}
+
+  // Methods for managing flow runs and steps
+  async getFlowRun(runId: string): Promise<FlowRun> {
+    // Implementation
+  }
+
+  // Other methods...
+}
```
Committable suggestion skipped: line range outside the PR's diff.
```markdown
# client

This library was generated with [Nx](https://nx.dev).

## Building

Run `nx build client` to build the library.

## Running unit tests

Run `nx test client` to execute the unit tests via [Vitest](https://vitest.dev/).
```
🛠️ Refactor suggestion
Enhance README with comprehensive documentation
The current README is minimal and doesn't provide sufficient information about the client library's purpose, features, or usage.
Please enhance the README to include:
- Purpose and key features of the client library
- Installation instructions
- Usage examples with code snippets
- API documentation for the main classes (PgflowClient, FlowRun, FlowStep)
- Details about the Supabase real-time integration
- Requirements and dependencies
A well-documented README will help users understand and adopt the client library more effectively.
pkgs/client/src/lib/client.spec.ts
Outdated
```ts
import { client } from './client';

describe('client', () => {
  it('should work', () => {
    expect(client()).toEqual('client');
  });
});
```
🛠️ Refactor suggestion
Expand tests to cover actual client functionality
This test only verifies that the `client()` function returns a static string, which reflects its current minimal implementation.
As the actual client SDK functionality is implemented, the tests should be expanded to cover:
- Initialization of the client with various configurations
- Subscription to flow run events
- State management and updates
- Error handling scenarios
- Integration with the Supabase broadcast adapter
Comprehensive tests will ensure the reliability of the client library as it evolves.
```ts
  root: __dirname,
  cacheDir: '../../node_modules/.vite/pkgs/client',
  plugins: [],
```
🛠️ Refactor suggestion
Add path-alias resolution plugin.
Vite won't automatically resolve your TypeScript path aliases (e.g., `@pgflow/*`). Consider adding the Nx TS paths plugin:
```diff
 import { defineConfig } from 'vite';
+import { nxViteTsPaths } from '@nx/vite/plugins';

 export default defineConfig({
-  plugins: [],
+  plugins: [nxViteTsPaths()],
   root: __dirname,
   cacheDir: '../../node_modules/.vite/pkgs/client',
   // ...
```
This ensures imports using your `tsconfig.json` aliases work correctly.
```js
module.exports = withNx(
  {
    main: './src/index.ts',
    outputPath: './dist',
    tsConfig: './tsconfig.lib.json',
    compiler: 'swc',
    format: ['cjs', 'esm'],
  },
```
🛠️ Refactor suggestion
Externalize peer dependencies.
To avoid bundling large external libs (`nanoevents`, `uuid`, etc.), declare them as peer dependencies in your `package.json` and externalize them in Rollup. For example:
```diff
 module.exports = withNx(
   {
     main: './src/index.ts',
     outputPath: './dist',
     tsConfig: './tsconfig.lib.json',
     compiler: 'swc',
     format: ['cjs', 'esm'],
+    external: ['nanoevents', 'uuid', '@pgflow/core']
   },
   { /* additional rollup config */ }
 );
```
This reduces bundle size and prevents duplicate copies in downstream apps.
pkgs/client/src/lib/FlowRun.ts
Outdated
```ts
#statusPrecedence: Record<string, number> = {
  'queued': 0,
  'started': 1,
  'completed': 2,
  'failed': 3,
};
```
🛠️ Refactor suggestion
Add precedence for the `'created'` status

`step()` initialises steps with status `'created'`, and events from the backend may also use this status. Because it is missing from `#statusPrecedence`, any later `'queued'` or `'started'` event will be evaluated against an implicit precedence of `0`, which can accidentally block legitimate updates.
```diff
 #statusPrecedence: Record<string, number> = {
+  'created': -1,
   'queued': 0,
   'started': 1,
   'completed': 2,
   'failed': 3,
 };
```
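For reference, the guard consuming this map could look roughly like the following (a sketch, not the library's actual code; the function name is illustrative):

```typescript
const statusPrecedence: Record<string, number> = {
  created: -1,
  queued: 0,
  started: 1,
  completed: 2,
  failed: 3,
};

// Apply an incoming status only if it does not move backwards; unknown
// statuses default to -1 so they can never override a known state.
function shouldApply(current: string, incoming: string): boolean {
  const cur = statusPrecedence[current] ?? -1;
  const next = statusPrecedence[incoming] ?? -1;
  return next >= cur;
}
```

With `'created'` mapped to `-1`, a later `'queued'` event correctly wins over the initial state.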
```ts
// Otherwise, wait for the status to change
return new Promise((resolve, reject) => {
  let timeoutId: NodeJS.Timeout | undefined;

  // Set up timeout if provided
  if (timeoutMs > 0) {
    timeoutId = setTimeout(() => {
      unbind();
```
`NodeJS.Timeout` is not browser-safe

`setTimeout` returns `number` in browser environments but `NodeJS.Timeout` in Node. Typing it as `NodeJS.Timeout` makes the SDK unusable in vanilla browser builds.
```diff
-    let timeoutId: NodeJS.Timeout | undefined;
+    let timeoutId: ReturnType<typeof setTimeout> | undefined;
```
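For completeness, a portable sketch of that pattern (an illustrative helper, not the SDK's actual API):

```typescript
// ReturnType<typeof setTimeout> resolves to number in browsers and to
// NodeJS.Timeout in Node, so the same code type-checks in both runtimes.
function timeoutFlag(ms: number): { cancel: () => void; promise: Promise<boolean> } {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;
  const promise = new Promise<boolean>((resolve) => {
    timeoutId = setTimeout(() => resolve(true), ms);
  });
  return {
    cancel: () => {
      if (timeoutId !== undefined) clearTimeout(timeoutId);
    },
    promise,
  };
}
```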
```bash
#!/bin/bash

echo "Dumping baseline schema for Atlas"

echo > atlas/schema.sql
atlas schema inspect --schema realtime -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
atlas schema inspect --schema pgmq -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
```
🛠️ Refactor suggestion
Add error handling to ensure robust script execution.
The script dumps schema information from a Postgres database but lacks error checking. If atlas commands fail or the database is inaccessible, the script will continue silently.
Consider adding these improvements:
```diff
 #!/bin/bash
+set -e  # Exit immediately if a command exits with a non-zero status

 echo "Dumping baseline schema for Atlas"

+# Ensure the directory exists
+mkdir -p atlas

 echo > atlas/schema.sql
+
+# Check if atlas is installed
+if ! command -v atlas &> /dev/null; then
+  echo "Error: atlas is not installed or not in PATH"
+  exit 1
+fi
+
+# Database connection string
+DB_URL="postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable"
+
+# Test database connection
+if ! atlas schema inspect --schema public -u "$DB_URL" &> /dev/null; then
+  echo "Error: Could not connect to the database"
+  exit 1
+fi

 atlas schema inspect --schema realtime -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
 atlas schema inspect --schema pgmq -u "postgres://postgres:postgres@localhost:50422/postgres?sslmode=disable" --format "{{ sql . }}" >> atlas/schema.sql
```
pkgs/core/atlas/atlas.hcl
Outdated
```hcl
image = "jumski/postgres-15-pgmq:latest"
baseline = file("schema.sql")
```
💡 Verification agent
🧩 Analysis chain
Document the custom Docker image and consider parameterizing baseline path.
Using a custom Docker image improves performance but introduces a dependency that needs documentation. The hardcoded baseline path may cause issues if the working directory changes.
Consider adding a comment explaining how to build/update the custom image, and use a more robust path for the baseline file:
```diff
 # custom image is built and pushed to speed up schema verification,
 # otherwise it takes around 30s
 image = "jumski/postgres-15-pgmq:latest"
-baseline = file("schema.sql")
+# Use absolute path relative to this config file to avoid working directory issues
+baseline = file("${path.dir}/schema.sql")
+
+# TODO: Document in README how to build and update the custom Docker image
+# The image contains pre-installed PGMQ and other extensions needed for schema verification
```
Let's verify if this custom image is publicly available:
🏁 Script executed:
#!/bin/bash
# Check if the custom Docker image exists on Docker Hub
docker pull jumski/postgres-15-pgmq:latest
echo "Image pull result: $?"
Length of output: 196
Let’s verify availability via Docker Hub’s HTTP API instead of the Docker CLI:
🏁 Script executed:
#!/bin/bash
# Check if the custom Docker image exists on Docker Hub via the Registry HTTP API
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
https://registry.hub.docker.com/v2/repositories/jumski/postgres-15-pgmq/tags/latest)
echo "HTTP status code: $HTTP_CODE"
Length of output: 194
Document the custom Docker image and parameterize the baseline path
We’ve confirmed the image `jumski/postgres-15-pgmq:latest` is publicly available on Docker Hub (HTTP 200). To make this dependency clear and guard against working-directory issues, please:
- Add a comment above the `image` line explaining where the image lives and how to build/push updates (e.g. reference a README section).
- Change the baseline file reference to use `${path.dir}` so it’s resolved relative to this HCL file.
- Document the custom image build/update steps in your project’s docs (e.g. `pkgs/core/atlas/README.md` or the top-level `README.md`).
Suggested diff:
# Custom Postgres image with PGMQ for faster schema verification
image = "jumski/postgres-15-pgmq:latest"
- baseline = file("schema.sql")
+ # Resolve schema.sql relative to this config file to avoid CWD mistakes
+ baseline = file("${path.dir}/schema.sql")
+
+ # TODO: Document in docs how to build and update the custom Docker image
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Custom Postgres image with PGMQ for faster schema verification
image = "jumski/postgres-15-pgmq:latest"
# Resolve schema.sql relative to this config file to avoid CWD mistakes
baseline = file("${path.dir}/schema.sql")

# TODO: Document in docs how to build and update the custom Docker image
git checkout -f supabase/migrations/atlas.sum
rm -f supabase/migrations/*"$MIGRATION_NAME"*
./scripts/atlas-migrate-diff "$MIGRATION_NAME"
pnpm nx run core:supabase:reset
🛠️ Refactor suggestion
Add error handling and confirmation for destructive operations.
The script performs destructive operations (git checkout, rm) without error handling or user confirmation, which could lead to data loss or unexpected behavior.
Improve the script with error handling and confirmations:
-git checkout -f supabase/migrations/atlas.sum
-rm -f supabase/migrations/*"$MIGRATION_NAME"*
-./scripts/atlas-migrate-diff "$MIGRATION_NAME"
-pnpm nx run core:supabase:reset
+# Check if there are any migrations that will be removed
+EXISTING_MIGRATIONS=$(find supabase/migrations -name "*$MIGRATION_NAME*" | wc -l)
+
+if [ "$EXISTING_MIGRATIONS" -gt "0" ]; then
+ echo "Warning: This will remove $EXISTING_MIGRATIONS existing migration(s) matching '$MIGRATION_NAME'"
+ read -p "Continue? (y/n): " CONFIRM
+ if [[ ! "$CONFIRM" =~ ^[Yy]$ ]]; then
+ echo "Operation cancelled."
+ exit 0
+ fi
+fi
+
+# Error handling for each step
+echo "Resetting atlas.sum file..."
+if ! git checkout -f supabase/migrations/atlas.sum; then
+ echo "Error: Failed to reset atlas.sum file"
+ exit 1
+fi
+
+echo "Removing existing migrations matching '$MIGRATION_NAME'..."
+rm -f supabase/migrations/*"$MIGRATION_NAME"*
+
+echo "Generating new migration..."
+if ! ./scripts/atlas-migrate-diff "$MIGRATION_NAME"; then
+ echo "Error: Failed to generate new migration"
+ exit 1
+fi
+
+echo "Resetting Supabase environment..."
+if ! pnpm nx run core:supabase:reset; then
+ echo "Error: Failed to reset Supabase environment"
+ exit 1
+fi
+
+echo "Migration regeneration completed successfully!"
Actionable comments posted: 2
🧹 Nitpick comments (1)
pkgs/core/supabase/tests/pgflow_tests/create_realtime_partition.test.sql (1)
62-80: Add explicit test for alternative dates (optional)
The current tests only cover `current_date`. To further validate the partitioning logic, consider adding tests that pass an explicit historical or future date to `create_realtime_partition(date)` (once the signature is aligned). For example:

 -- Test partition creation for a specific date
+SELECT is(
+  pgflow_tests.create_realtime_partition('2025-01-01'::date),
+  true,
+  'Should return true when creating a partition for 2025-01-01'
+);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- pkgs/core/supabase/tests/pgflow_tests/create_realtime_partition.test.sql (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
-- 4. Test that the function creates a partition that doesn't exist
SELECT is(
  pgflow_tests.create_realtime_partition(),
  true,
  'Should return true when creating a new partition'
);
Supply the required date argument or declare a default
Both the initial creation test (`SELECT pgflow_tests.create_realtime_partition()`) and the idempotency check call the function without parameters. Unless the function signature includes a default for the date argument, these calls will fail. Please ensure one of the following:
Option A – Add a default in the function definition:
CREATE FUNCTION pgflow_tests.create_realtime_partition(p_date date DEFAULT current_date)
RETURNS boolean AS $$
  -- implementation
$$ LANGUAGE plpgsql;
Option B – Update the test calls to pass an explicit date:
SELECT pgflow_tests.create_realtime_partition(current_date);
Also applies to: 56-60
SELECT has_function(
  'pgflow_tests', 'create_realtime_partition', ARRAY['date'],
  'Function create_realtime_partition(date) should exist'
);
Inconsistent function signature test vs. invocation
The `has_function` check specifies a single `date` parameter, but subsequent calls to `create_realtime_partition()` omit any argument. Please either:
- Adjust the `has_function` call to expect a zero-argument signature if the function is truly no-arg, or
- Modify the test invocations (and the `has_function` signature) to pass `current_date` (or the intended `date` value).
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (1)
19-27: Failing the task as intended
The call to `pgflow.fail_task(run_id, step_slug, 0, 'Test failure message')` correctly triggers the error path. Consider adding an assertion for the `error_message` in the realtime event payload (e.g., `payload->>'error_message'`) to verify the message is propagated to subscribers.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- pkgs/core/supabase/tests/realtime/complete_task_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/full_flow_events.test.sql (1 hunks)
- pkgs/core/supabase/tests/realtime/maybe_complete_run_events.test.sql (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkgs/core/supabase/tests/realtime/full_flow_events.test.sql
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (18)
pkgs/core/supabase/tests/realtime/fail_task_events.test.sql (6)
- 1-3: Verify test lifecycle and plan count. The transaction block and `select plan(5)` correctly establish the TAP testing framework. Ensure the plan count matches the number of assertions below.
- 4-6: Partition setup is in place. Calling `pgflow_tests.create_realtime_partition()` before resetting the DB ensures realtime messages are routed correctly. Good setup.
- 8-11: DB reset and flow creation. Resetting the database and creating a sequential flow with `max_attempts => 1` aligns with the intent to test permanent failure behavior.
- 13-17: Capturing `run_id` for subsequent assertions. Using a CTE to select from `pgflow.start_flow` and store `run_id` into a temporary table is clear and concise.
- 29-62: Assertions cover all expected behaviours. The five tests verify: exactly one `run:failed` event; payload status is `"failed"`; topic format is `pgflow:run:<run_id>`; no `step:completed` events; and no `run:completed` events. This provides comprehensive coverage of the permanent-failure scenario.
- 64-69: Cleanup and transaction rollback. Dropping temporary tables and invoking `finish()` followed by `rollback` ensures isolation and no residual state.
pkgs/core/supabase/tests/realtime/maybe_complete_run_events.test.sql (6)
- 1-6: Initialize testing environment and plan. The `begin; select plan(6);` and partition creation correctly bootstrap the TAP framework for six assertions.
- 7-11: Reset DB and flow setup. Resetting the database and creating a simple flow with a single step is appropriate for exercising `maybe_complete_run`.
- 13-17: Capturing run identifier. The CTE pattern and `into temporary run_ids` capture the `run_id` succinctly for reuse in assertions.
- 19-28: Completing the task to trigger `maybe_complete_run`. Invoking `pgflow.complete_task` on the only step will internally call `maybe_complete_run`. This correctly exercises the end-of-run logic.
- 30-70: Comprehensive run-completion assertions. The six tests validate: exactly one `run:completed` event; correct `flow_slug` in payload; status is `"completed"`; a valid `completed_at` timestamp; proper topic formatting (`pgflow:run:<run_id>`); and no stray `run:failed` events. This ensures `maybe_complete_run` emits the right event on successful completion.
- 72-77: Tear down and rollback. Cleaning up temporary tables and rolling back preserves isolation for other tests.
pkgs/core/supabase/tests/realtime/complete_task_events.test.sql (6)
- 1-6: Bootstrap TAP framework and partition. The initial `begin; select plan(9);` and partition creation set up nine assertions for step completion.
- 7-10: Reset DB and use helper to setup flow. Using `pgflow_tests.setup_flow('sequential')` abstracts boilerplate and readies the sequential flow for polling.
- 12-16: Starting the flow and storing `run_id`. Consistent use of a CTE and temporary table to hold `run_id` across assertions.
- 18-27: Polling and completing the task. Calling `pgflow.poll_for_tasks` followed by `pgflow.complete_task` correctly triggers the step-completion broadcast.
- 28-91: Detailed step-completion event assertions. The nine tests confirm: one `step:completed` event exists; payload contains the correct `step_slug`; status is `"completed"`; a valid `completed_at` timestamp; output data is included (`payload->'output'->>'result'`); event name is `step:first:completed`; topic is `pgflow:run:<run_id>`; no `step:failed` events; and no premature `run:completed` event. These cover the end-to-end behavior of `complete_task`.
- 92-97: Cleanup steps. Dropping temporary tables and rolling back ensures no side effects.
… file path quoting guidelines
- Added detailed instructions on prioritizing declarative SQL over procedural constructs
- Included recommendations for batching operations and avoiding loops in SQL functions
- Clarified proper use of FORMAT with %I and %L for dynamic SQL
- Updated file path quoting instructions to handle special characters correctly
- Introduces a detailed markdown file demonstrating how to construct a client, start flows, subscribe to run and step updates, and handle asynchronous status and output waiting.
- Covers type safety considerations, data shape decisions, and event handling strategies.
- Provides guidance on real-time updates, buffering, and API design for flow and step states.
- Enhances developer understanding of client interactions and event-driven flow management.
Extend the documentation with examples demonstrating how to implement timeouts and cancellation using AbortController for waiting operations, enhancing control over asynchronous flows.
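The timeout-and-cancellation pattern this commit documents can be sketched roughly as below. `MiniRun` and its members are illustrative stand-ins for the real `FlowRun` API, not the SDK's actual code; the only assumption is that a run exposes a status and notifies listeners on changes.

```typescript
// Illustrative sketch: waiting for a status with a timeout and an AbortSignal.
type Status = 'queued' | 'started' | 'completed' | 'failed';

class MiniRun {
  #status: Status = 'queued';
  #listeners = new Set<(s: Status) => void>();

  get status(): Status {
    return this.#status;
  }

  setStatus(s: Status): void {
    this.#status = s;
    for (const l of this.#listeners) l(s);
  }

  waitForStatus(
    target: Status,
    options?: { timeoutMs?: number; signal?: AbortSignal }
  ): Promise<MiniRun> {
    const timeoutMs = options?.timeoutMs ?? 5 * 60 * 1000;
    const signal = options?.signal;
    return new Promise((resolve, reject) => {
      // Already there: resolve immediately
      if (this.#status === target) return resolve(this);

      const listener = (s: Status) => {
        if (s === target) {
          cleanup();
          resolve(this);
        }
      };
      const timeoutId = setTimeout(() => {
        cleanup();
        reject(new Error(`Timed out waiting for status '${target}'`));
      }, timeoutMs);
      const onAbort = () => {
        cleanup();
        reject(new Error('Aborted'));
      };
      // Every exit path removes the listener, the timer, and the abort hook
      const cleanup = () => {
        clearTimeout(timeoutId);
        this.#listeners.delete(listener);
        signal?.removeEventListener('abort', onAbort);
      };

      this.#listeners.add(listener);
      if (signal) {
        if (signal.aborted) return onAbort();
        signal.addEventListener('abort', onAbort);
      }
    });
  }
}
```

The key design point is that all three exits (target reached, timeout, abort) share one cleanup path, so no timer or listener leaks regardless of which fires first.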
Introduce detailed design goals, core components, component architecture, state management, event handling, implementation strategies, and open questions for the PgFlow client SDK. This includes the structure of the main classes, adapter patterns, state interfaces, and real-time event processing mechanisms, providing a complete overview of the system's internal architecture and planned development phases.
… and API structure Refactors the PgflowClient and related classes to enhance event subscription, state management, and type safety. Adds detailed comments on MVP approach, simplifies error and disconnection handling, and clarifies future phases. Implements subscription to real-time database changes, standardizes event transformation, and updates internal state accordingly. Improves getter accessors, step handling, and event processing logic to support a more robust and type-safe flow run experience. Updates documentation with open questions and phased implementation plan.
…isions
- Corrected property names from snake_case to camelCase for better JavaScript convention
- Clarified decision points on data shape and scoping in client library usage documentation
- Improved documentation accuracy for flow run state inspection and property access
…operties, enhance type safety, and update documentation for event subscriptions and waiting mechanisms
- Changed property names to snake_case to match Supabase conventions
- Updated example snippets to reflect new property naming
- Added detailed API design for step() method, including subscribe and waitForStatus
- Clarified usage of type-safe access to flow and step outputs, events, and subscriptions
- Improved documentation on waiting for statuses with timeout and cancellation support
- Enhanced type safety explanations leveraging DSL utility types for flow and step outputs and events
- Overall, these changes unify the API, improve developer experience, and align with conventions
…error handling and documentation
- Updated event processing to route run and step events to appropriate subscribers
- Clarified subscription model: flow run subscribers receive only run events, steps only their events
- Enhanced error handling plan with detection, propagation, and access to error messages for flow and steps
- Improved documentation with examples for subscribing to run and step events, including multiple steps
- Added guidance on error handling during waitForStatus, including rejection on failure and error message access
- Minor formatting and clarity improvements in documentation and comments
… and add out-of-order event safeguards
- Updated startFlow to fetch initial state before subscribing to avoid race conditions
- Enhanced FlowRun class with dispose and unsubscribe methods for better lifecycle management
- Implemented auto-cleanup when flow reaches terminal states
- Modified startFlow in SupabaseStandardAdapter to handle SETOF return from RPC
- Added logic to prevent out-of-order step event processing using status precedence
- Clarified comments and structured code for better maintainability and robustness
…gement
- Added IFlowStarter, IFlowObserver, and ITaskExecutor interfaces for flow control
- Implemented Client class to start flows, observe run and step events, and manage subscriptions
- Developed Adapter interface with a Supabase implementation for backend communication
- Enhanced subscription handling with auto-cleanup and event broadcasting
- Included flow definition fetching and output enrichment in Supabase adapter
- Refined internal state management and event subscription mechanisms for better extensibility
… management
- Replaced custom callback subscriptions with NanoEvents for lightweight, type-safe event management
- Updated Client and Adapter classes to use NanoEvents for emitting and subscribing to run and step events
- Added interfaces for typed events and integrated NanoEvents dependencies in package
- Enabled client-generated UUIDs for pre-subscription setup before starting flows
- Simplified subscription lifecycle with automatic cleanup on terminal states
- Improved overall architecture for efficient, memory-safe event handling and backward compatibility
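The typed-emitter pattern this commit adopts can be shown with a dependency-free sketch. It is reimplemented here instead of importing nanoevents so the example is self-contained; the `RunEvents` event map is illustrative, not the SDK's actual event names.

```typescript
// Minimal typed emitter in the spirit of nanoevents.
type RunEvents = {
  'run:started': (runId: string) => void;
  'run:completed': (runId: string) => void;
};

function createEmitter<Events extends Record<string, (...args: any[]) => void>>() {
  const listeners = new Map<keyof Events, Array<(...args: any[]) => void>>();
  return {
    // Subscribe; returns an unbind function, matching the nanoevents style
    on<K extends keyof Events>(event: K, cb: Events[K]): () => void {
      const arr = listeners.get(event) ?? [];
      arr.push(cb);
      listeners.set(event, arr);
      return () => {
        listeners.set(event, (listeners.get(event) ?? []).filter((l) => l !== cb));
      };
    },
    // Emit with arguments typed by the event map
    emit<K extends keyof Events>(event: K, ...args: Parameters<Events[K]>): void {
      for (const cb of listeners.get(event) ?? []) cb(...args);
    },
  };
}
```

Returning an unbind function from `on` is what makes "automatic cleanup on terminal states" straightforward: the run can collect these functions and call them all when it disposes.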
…and event handling Refactors the documentation to reflect changes in the client initialization, including using createClient from '@supabase/supabase-js' and the Client class from '@pgflow/client'. Adds detailed examples for starting flows, subscribing to run and step events, and managing resources. Clarifies the flow of setting up subscriptions before starting a flow, and emphasizes type safety in event payloads. Also updates the explanation of the client's internal behavior, including fetching flow metadata, setting up realtime subscriptions, and handling step outputs. Overall, enhances guidance on using the client API with the latest implementation details and best practices.
…ling and cleanup
- Added Client class for starting flows, observing state, and disposing resources
- Introduced FlowRun class for managing individual flow run states and events
- Developed FlowStep class for handling step-specific state and events
- Integrated event subscriptions, status waiting, and cleanup mechanisms
- Updated internal architecture to support flow definition fetching and run tracking
- Enhanced modularity and robustness of flow orchestration components
…terfaces
- Clarify internal architecture and design approach for the PgFlow SDK
- Add new core components: Adapter, Client, FlowRun, FlowStep with detailed interfaces
- Revise Client class to include flow definition caching and event handling
- Expand FlowRun and FlowStep classes with event types, state management, and methods
- Introduce Adapter interface and example implementation for Supabase
- Enhance API consistency and documentation clarity across components
…onents Split adapter functionality into focused interfaces for flow starting, task processing, and real-time updates. Created composite interfaces for client and engine use cases, enhancing modularity and dependency management. Moved core interfaces to shared types and defined client-specific interfaces in a new package, improving separation of concerns.
Introduce a new pgflow.start_flow_with_states function that internally calls start_flow, fetches full step states, and returns both run and step states to ensure clients have a complete initial state before processing real-time events. Update documentation and examples to utilize this function, improving reliability and preventing missed fast steps.
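Why the initial snapshot matters can be shown with a small, self-contained sketch: seed state from the snapshot first, then replay any realtime events that were buffered while the snapshot was being fetched, keeping only forward transitions. `applySnapshotThenEvents` and the shapes used here are hypothetical, not the client's real internals.

```typescript
// Illustrative sketch: snapshot-first state assembly with buffered-event replay.
type StepState = { step_slug: string; status: string };

function applySnapshotThenEvents(
  snapshot: StepState[],
  buffered: StepState[],
  precedence: Record<string, number>
): Map<string, string> {
  const state = new Map<string, string>();
  // 1. Seed from the authoritative snapshot returned with the run
  for (const s of snapshot) state.set(s.step_slug, s.status);
  // 2. Replay buffered realtime events, keeping only forward transitions
  for (const e of buffered) {
    const current = state.get(e.step_slug);
    if (current === undefined || precedence[e.status] > precedence[current]) {
      state.set(e.step_slug, e.status);
    }
  }
  return state;
}
```

Without the snapshot, a step that completed before the subscription was live would be missed entirely; with it, a stale buffered event can never regress a step the snapshot already shows as completed.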
…ng, and improve state management
- Updated event type definitions for better type safety and consistency
- Refactored class interfaces to depend only on necessary components
- Implemented client-generated UUID for reliable event subscriptions
- Added support for WebSocket disconnection and reconnection with full state refresh
- Introduced new `start_flow_with_states` function for initial state snapshot
- Added function to fetch current run and step states for state refresh
- Improved internal state encapsulation with getters and private fields
- Enhanced event subscription and emission logic for efficiency and safety
- Updated documentation to reflect new features and architecture improvements
- Overall, these changes improve reliability, race condition prevention, and real-time synchronization in flow management system
…and connection handling Enhance the realtime subscription strategy by implementing a multiplex approach with a single channel per run, reducing resource usage and avoiding channel limits. Enrich step data with outputs, maintain event order with a precedence system, and route events efficiently using NanoEvents. Improve WebSocket reconnection logic by performing a full state refresh to ensure consistency after disconnections. These changes optimize client performance and reliability in handling realtime updates.
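The single-channel multiplexing idea above can be sketched as one stream of events per run, routed to run-level or per-step listener sets by `step_slug`. `RunChannelRouter` and the event shape are illustrative names, not the real adapter API.

```typescript
// Illustrative sketch: route one realtime channel's events to many subscribers.
type RunEvent = { step_slug?: string; status: string };

class RunChannelRouter {
  #runListeners = new Set<(e: RunEvent) => void>();
  #stepListeners = new Map<string, Set<(e: RunEvent) => void>>();

  onRun(cb: (e: RunEvent) => void): () => void {
    this.#runListeners.add(cb);
    return () => this.#runListeners.delete(cb);
  }

  onStep(slug: string, cb: (e: RunEvent) => void): () => void {
    const set = this.#stepListeners.get(slug) ?? new Set<(e: RunEvent) => void>();
    set.add(cb);
    this.#stepListeners.set(slug, set);
    return () => set.delete(cb);
  }

  // Called once per message arriving on the run's single realtime channel
  route(event: RunEvent): void {
    if (event.step_slug) {
      for (const cb of this.#stepListeners.get(event.step_slug) ?? []) cb(event);
    } else {
      for (const cb of this.#runListeners) cb(event);
    }
  }
}
```

One channel per run keeps the Supabase channel count bounded by active runs rather than by total steps, which is the resource saving the commit describes.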
…ing and cleanup
- Track current subscription ID to manage connection lifecycle
- Unsubscribe from previous subscription before creating a new one
- Maintain existing connection status handling and event routing
- Improve reconnection logic to ensure consistent client state after reconnects
…r updates Enhanced the statusPrecedence logic to explicitly reject lower precedence status updates, ensuring correct ordering of state transitions and handling of out-of-order events due to replication.
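A minimal sketch of that precedence guard follows; the map values are illustrative, not the SDK's actual constants.

```typescript
// Illustrative precedence table: higher number = further along the lifecycle.
const statusPrecedence: Record<string, number> = {
  created: -1,
  queued: 0,
  started: 1,
  completed: 2,
  failed: 3,
};

// Reject transitions that would move a run or step "backwards", which can
// happen when replicated events arrive out of order.
function shouldUpdateStatus(current: string, incoming: string): boolean {
  return statusPrecedence[incoming] > statusPrecedence[current];
}
```

Using a strict `>` also rejects duplicate deliveries of the same status, so each transition is emitted at most once.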
- Enrich completed steps with outputs and handle WebSocket disconnections with full state refresh
- Track and emit events based on step_states status, ignoring retries for status updates
- Prevent out-of-order status updates by enforcing precedence rules, rejecting lower precedence changes due to replication issues
- Reduce resource usage by consolidating realtime channels and avoiding per-step channels
- Maintain a status precedence system to handle out-of-order events
- Implement NanoEvents for efficient event routing
- Handle WebSocket disconnections with full state refresh
- Update pgflow.complete_task to store task outputs directly in step_states for simplified client architecture and future fanout support
improve type safety and interface consistency across FlowRun, FlowStep, and client types
- Updated FlowRun to implement FlowRunBase with more precise step storage
- Enhanced FlowStep to implement FlowStepBase with generic event handling
- Added non-generic base interfaces for flow runs and steps for better invariance
- Improved event type definitions to match nanoevents expectations
- Modified PgflowClient to use specific flow types and correct startFlow return types
- Updated type guards and event data handling for safer state updates
- Minor Vite config adjustment for better build stability and type referencing
Actionable comments posted: 1
♻️ Duplicate comments (3)
pkgs/client/src/lib/FlowRun.ts (2)
13-18: Add 'created' status to the status precedence map.
The `step()` method initializes steps with status 'created' (line 141), but this status is missing from the `#statusPrecedence` map. This could cause issues with the `#shouldUpdateStatus` method when evaluating event precedence.

 #statusPrecedence: Record<string, number> = {
+  'created': -1,
   'queued': 0,
   'started': 1,
   'completed': 2,
   'failed': 3,
 };
175-182: `NodeJS.Timeout` is not browser-safe.
Using the `NodeJS.Timeout` type makes the SDK incompatible with browser environments, as browsers return `number` from `setTimeout` instead.

-let timeoutId: NodeJS.Timeout | undefined;
+let timeoutId: ReturnType<typeof setTimeout> | undefined;
pkgs/client/src/lib/client.spec.ts (1)
1-9: Expand tests to cover actual client functionality.
This test only verifies that the `PgflowClient` class is defined, which is insufficient for ensuring the reliability of the client library. As the implementation evolves, comprehensive tests are needed. The tests should be expanded to cover:
- Initialization with various configurations
- Subscription to flow run events
- State management and updates
- Error handling scenarios
- Integration with the Supabase broadcast adapter

// Example of a more comprehensive test suite:
describe('PgflowClient', () => {
  let mockSupabase: any;

  beforeEach(() => {
    // Setup mock Supabase client
    mockSupabase = {
      schema: jest.fn().mockReturnThis(),
      rpc: jest.fn(),
      channel: jest.fn().mockReturnThis(),
      on: jest.fn().mockReturnThis(),
      subscribe: jest.fn().mockReturnThis(),
    };
  });

  it('should be defined', () => {
    expect(PgflowClient).toBeDefined();
  });

  it('should initialize with Supabase client', () => {
    const client = new PgflowClient(mockSupabase);
    expect(client).toBeInstanceOf(PgflowClient);
  });

  it('should start a flow and return a FlowRun instance', async () => {
    // Add test for startFlow method
  });

  // Additional tests for event subscription, state management, etc.
});
🧹 Nitpick comments (3)
pkgs/client/src/lib/FlowRun.ts (3)
265-267: Simplify error handling logic.
The error message check is repeated unnecessarily. We can simplify this code to be more DRY.

-error_message: typeof typedEvent.error_message === 'string' ? typedEvent.error_message : 'Unknown error',
-error: new Error(typeof typedEvent.error_message === 'string' ? typedEvent.error_message : 'Unknown error'),
+error_message: typeof typedEvent.error_message === 'string' ? typedEvent.error_message : 'Unknown error',
+error: new Error(this.#state.error_message),

167-168: Extract timeout to a configurable constant.
The timeout value of 5 minutes is hardcoded. Consider extracting it to a class constant or configuration option to make it more maintainable and configurable.

+// Default timeout for waitForStatus (5 minutes)
+static readonly DEFAULT_TIMEOUT_MS = 5 * 60 * 1000;
+
 /**
  * Wait for the run to reach a specific status
  *
  * @param targetStatus - The status to wait for
  * @param options - Optional timeout and abort signal
  * @returns Promise that resolves with the run instance when the status is reached
  */
 waitForStatus(
   targetStatus: 'completed' | 'failed',
   options?: { timeoutMs?: number; signal?: AbortSignal }
 ): Promise<this> {
-  const timeoutMs = options?.timeoutMs ?? 5 * 60 * 1000; // Default 5 minutes
+  const timeoutMs = options?.timeoutMs ?? FlowRun.DEFAULT_TIMEOUT_MS;
   const { signal } = options || {};

220-227: Improve type safety by using a specific event type parameter.
The `updateState` method accepts `any` as its parameter type, which reduces type safety. While you do cast to a proper type later, it would be better to start with a more specific type.

-updateState(event: any): boolean {
+updateState(event: Partial<FlowRunEventData<TFlow>['*']>): boolean {
   // Type guard to validate the event shape
   if (!event || typeof event !== 'object' || event.run_id !== this.#state.run_id) {
     return false;
   }

   // Use properly typed event for the rest of the function
   const typedEvent = event as FlowRunEventData<TFlow>['*'];
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- pkgs/client/src/lib/FlowRun.ts (1 hunks)
- pkgs/client/src/lib/FlowStep.ts (1 hunks)
- pkgs/client/src/lib/PgflowClient.ts (1 hunks)
- pkgs/client/src/lib/client.spec.ts (1 hunks)
- pkgs/client/src/lib/types.ts (1 hunks)
- pkgs/client/vite.config.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- pkgs/client/vite.config.ts
- pkgs/client/src/lib/PgflowClient.ts
- pkgs/client/src/lib/FlowStep.ts
- pkgs/client/src/lib/types.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkgs/client/src/lib/client.spec.ts (1)
pkgs/client/src/lib/PgflowClient.ts (1)
- `PgflowClient` (11-186)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
pkgs/client/src/lib/FlowRun.ts
#checkAutoDispose(): void {
  // Don't auto-dispose multiple times
  if (this.#disposed) {
    return;
  }

  // Only auto-dispose in terminal states
  if (this.status !== 'completed' && this.status !== 'failed') {
    return;
  }

  // If there are no listeners, auto-dispose
  if (Object.keys(this.#events.events).length === 0) {
    this.dispose();
  }
}
🛠️ Refactor suggestion
Consider step listeners in auto-dispose logic.
The current auto-dispose logic only checks for run event listeners, but doesn't consider if there are any step listeners still active. This could lead to premature disposal if run listeners are removed but step listeners still exist.
#checkAutoDispose(): void {
// Don't auto-dispose multiple times
if (this.#disposed) {
return;
}
// Only auto-dispose in terminal states
if (this.status !== 'completed' && this.status !== 'failed') {
return;
}
// If there are no listeners, auto-dispose
- if (Object.keys(this.#events.events).length === 0) {
+ // Check if any steps have active listeners
+ let hasStepListeners = false;
+ for (const step of this.#steps.values()) {
+ if (Object.keys((step as any).events?.events || {}).length > 0) {
+ hasStepListeners = true;
+ break;
+ }
+ }
+
+ if (Object.keys(this.#events.events).length === 0 && !hasStepListeners) {
this.dispose();
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
#checkAutoDispose(): void {
  // Don't auto-dispose multiple times
  if (this.#disposed) {
    return;
  }

  // Only auto-dispose in terminal states
  if (this.status !== 'completed' && this.status !== 'failed') {
    return;
  }

  // If there are no listeners, auto-dispose
  // Check if any steps have active listeners
  let hasStepListeners = false;
  for (const step of this.#steps.values()) {
    if (Object.keys((step as any).events?.events || {}).length > 0) {
      hasStepListeners = true;
      break;
    }
  }

  if (Object.keys(this.#events.events).length === 0 && !hasStepListeners) {
    this.dispose();
  }
}
…improve event handling
- Changed status properties to use FlowRunStatus and FlowStepStatus enums for better type safety
- Updated status precedence records to use enum keys
- Modified get status() methods to return enum types
- Enhanced waitForStatus method to accept enum values for targetStatus
- Refined updateState methods to handle enum statuses and prevent state changes in terminal states
- Reset event listeners and internal state upon disposal for better cleanup
- Adjusted PgflowClient to handle generic event types and check run existence before updating step state
- Overall improvements to type safety, consistency, and robustness of flow run and step management
Actionable comments posted: 3
♻️ Duplicate comments (7)
pkgs/client/src/lib/FlowStep.ts (2)
139-156: ⚠️ Potential issue: Abort handler can still throw before `unbind` is initialised (TDZ)
The `abortHandler` closes over `unbind` before it’s assigned. If `signal.aborted === true` when `addEventListener` is called, the callback fires synchronously → `ReferenceError: Cannot access 'unbind' before initialization`.

-// Set up abort signal if provided
-let abortCleanup: (() => void) | undefined;
-if (signal) {
-  const abortHandler = () => {
-    if (timeoutId) clearTimeout(timeoutId);
-    unbind();
-    reject(new Error(`Aborted waiting for step ${this.step_slug} to reach status '${targetStatus}'`));
-  };
+// Declare early so inner closures are safe
+let unbind: Unsubscribe = () => {};
+
+// Set up abort signal if provided
+let abortCleanup: (() => void) | undefined;
+if (signal) {
+  const abortHandler = () => {
+    if (timeoutId) clearTimeout(timeoutId);
+    unbind();
+    reject(
+      new Error(
+        `Aborted waiting for step ${this.step_slug} to reach status '${targetStatus}'`
+      )
+    );
+  };
126-134: 🛠️ Refactor suggestion — Use environment-agnostic timeout typing

`NodeJS.Timeout` breaks compilation for browser builds because `setTimeout` returns `number` in the DOM. Switch to the return-type helper so the code works in both Node and browsers.

```diff
- let timeoutId: NodeJS.Timeout | undefined;
+ let timeoutId: ReturnType<typeof setTimeout> | undefined;
```

pkgs/client/src/lib/FlowRun.ts (3)
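To illustrate why the helper type is portable, here is a minimal, self-contained sketch. It is not pgflow code; the `delayedGreeting` helper is invented for illustration only.

```typescript
// `ReturnType<typeof setTimeout>` resolves to `number` under DOM typings and
// to `NodeJS.Timeout` under Node typings, so this compiles for both targets.
function delayedGreeting(ms: number): { cancel: () => void; done: Promise<string> } {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;
  let settle!: (value: string) => void;
  const done = new Promise<string>((resolve) => {
    settle = resolve;
    timeoutId = setTimeout(() => resolve('hello'), ms);
  });
  return {
    cancel: () => {
      // Clearing works with either handle type.
      if (timeoutId !== undefined) clearTimeout(timeoutId);
      settle('cancelled');
    },
    done,
  };
}

const greeting = delayedGreeting(1000);
greeting.cancel();
greeting.done.then((msg) => console.log(msg)); // prints "cancelled"
```

Because a promise settles only once, the synchronous `cancel()` wins over the pending timer regardless of which typings are in effect.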
207-215: 🛠️ Refactor suggestion — Browser-incompatible timeout type

Same issue as in `FlowStep`: use `ReturnType<typeof setTimeout>` instead of `NodeJS.Timeout` to remain environment-agnostic.
219-231: ⚠️ Potential issue — TDZ bug: `abortHandler` references `unbind` before definition

If the supplied `AbortSignal` is already aborted, the handler fires immediately and hits the TDZ. Apply the same fix suggested for `FlowStep` (early declaration of `unbind`).
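The fix pattern both comments ask for can be sketched in isolation: declare `unbind` (initialised to a no-op) before wiring the abort handler, so even a handler that runs synchronously never touches the temporal dead zone. This is a standalone illustration, not the actual FlowRun code.

```typescript
type Unsubscribe = () => void;

// Rejects when the signal aborts; safe even for an already-aborted signal.
function waitForAbort(signal: AbortSignal): Promise<never> {
  return new Promise((_resolve, reject) => {
    // Initialised before any closure can possibly run — no TDZ window.
    let unbind: Unsubscribe = () => {};

    const abortHandler = () => {
      unbind(); // always safe: `unbind` has a value by now
      reject(new Error('Aborted'));
    };

    if (signal.aborted) {
      // The handler runs synchronously — exactly the case the early
      // declaration protects against.
      abortHandler();
      return;
    }

    signal.addEventListener('abort', abortHandler, { once: true });
    unbind = () => signal.removeEventListener('abort', abortHandler);
  });
}

waitForAbort(AbortSignal.abort()).catch((e) => console.log(e.message)); // prints "Aborted"
```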
331-351: 🛠️ Refactor suggestion — Auto-dispose ignores step listeners

`#checkAutoDispose` only inspects `#listenerCount` for run-level events; step-level listeners keep living references and will break once you `dispose()` prematurely (previously reported). Consider traversing `#steps` and testing each step's emitter before disposing, or expose an `activeListenerCount` on `FlowStep`.

pkgs/client/src/lib/PgflowClient.ts (2)
37-55: ⚠️ Potential issue — Untracked global listeners ⇒ memory leak

The unsubscribers returned by `onRunEvent`/`onStepEvent` are never stored, so `PgflowClient` can't remove them on `disposeAll()` or when the instance is GC'ed, leaving callbacks inside the adapter. The refactor proposed in the previous review (store the handles and expose `close()` or call them in `disposeAll()`) is still necessary.
110-117: ⚠️ Potential issue — `event_type` still missing in snapshot events

`FlowRun.updateState`/`FlowStep.updateState` expect `event_type`; the snapshots forwarded from `start_flow_with_states` omit it, violating the contract and breaking UI filters. Please include a non-colliding constant, e.g.:

```diff
- run.updateState({
-   ...data.run,
-   status: data.run.status,
-   run_id: data.run.run_id,
- });
+ run.updateState({
+   event_type: 'run:snapshot',
+   ...data.run,
+ });
…
- run.step(stepState.step_slug).updateState({
-   ...stepState,
-   status: stepState.status,
-   run_id: id,
-   step_slug: stepState.step_slug,
- });
+ run.step(stepState.step_slug).updateState({
+   event_type: `step:${stepState.step_slug}:snapshot`,
+   ...stepState,
+ });
```

Also applies to: 121-128
🧹 Nitpick comments (1)

pkgs/client/src/lib/types.ts (1)

340-341: Consider removing the ESLint disable comment if possible. The eslint-disable comment suggests the TFlow generic parameter isn't used directly in the interface. Consider either using the parameter somewhere in the interface or removing it if not needed.

```diff
-// eslint-disable-next-line @typescript-eslint/no-unused-vars
-export interface IFlowRealtime<TFlow = unknown> {
+export interface IFlowRealtime {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
pkgs/client/src/lib/FlowRun.ts (1 hunks)
pkgs/client/src/lib/FlowStep.ts (1 hunks)
pkgs/client/src/lib/PgflowClient.ts (1 hunks)
pkgs/client/src/lib/types.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
pkgs/client/src/lib/FlowStep.ts (4)
- pkgs/client/src/index.ts (2): FlowStep (9-9), Unsubscribe (12-12)
- pkgs/dsl/src/dsl.ts (3): AnyFlow (50-50), ExtractFlowSteps (88-94), StepOutput (134-139)
- pkgs/client/src/lib/types.ts (5): FlowStepBase (387-389), StepEvent (146-152), FlowStepState (325-335), StepEvents (214-220), Unsubscribe (225-225)
- pkgs/client/src/lib/FlowRun.ts (1): currentStatus (361-372)

pkgs/client/src/lib/FlowRun.ts (4)
- pkgs/client/src/index.ts (3): FlowRun (8-8), Unsubscribe (12-12), FlowStep (9-9)
- pkgs/dsl/src/dsl.ts (4): AnyFlow (50-50), ExtractFlowInput (60-66), ExtractFlowOutput (72-82), ExtractFlowSteps (88-94)
- pkgs/client/src/lib/types.ts (7): FlowRunBase (376-382), FlowRunEvent (46-49), FlowRunState (308-320), FlowRunEvents (98-100), FlowStepBase (387-389), Unsubscribe (225-225), StepEvent (146-152)
- pkgs/client/src/lib/FlowStep.ts (1): FlowStep (15-246)
🪛 Biome (1.9.4)
pkgs/client/src/lib/FlowStep.ts
[error] 217-217: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Unsafe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
pkgs/client/src/lib/FlowRun.ts
[error] 306-306: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Unsafe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (13)
pkgs/client/src/lib/types.ts (13)
1-4: Clean imports with comprehensive type coverage. The imports bring in all necessary types from both the DSL and core packages, providing a solid foundation for the client library types. The circular import with FlowRun is also handled correctly.

8-13: Well-defined flow run status enum. The FlowRunStatus enum provides a clear and comprehensive set of states for flow runs, following a logical progression from queued through to completion or failure.

18-41: Type-safe event data structure with discriminated unions. The FlowRunEventData type creates a strongly-typed event system with distinct event shapes for each status transition. The status field serves as an excellent discriminant for the union type, ensuring type safety when handling events.

46-50: Clear discriminated union for flow run events. The FlowRunEvent type establishes a proper discriminated union, making it easy to narrow types in switch statements or conditionals.

54-93: Comprehensive type guards for runtime safety. The isFlowRunEvent and related type guard functions provide excellent runtime validation capabilities. These will be especially valuable when processing events from external sources or API responses where TypeScript's compile-time checking isn't sufficient.

98-100: Event handlers compatible with nanoevents library. The FlowRunEvents type is designed to work seamlessly with the nanoevents pattern, creating a consistent approach to event handling across the application.

105-110: Complete step status enum with logical progression. The FlowStepStatus enum captures all necessary states for a flow step, including the initial 'created' state which appropriately precedes 'started'.

115-152: Well-structured step event types with strong typing. The StepEventData and StepEvent types create a parallel event structure to the flow run events, maintaining consistency in the API while ensuring type safety specific to step operations.

157-209: Robust step event type guards. The type guards for step events follow the same pattern as flow run events, providing consistent runtime validation capabilities across the library.

227-304: Comprehensive broadcast event types for Supabase integration. The broadcast event types are well-designed for integrating with Supabase realtime channels, with clear discriminants via event_type. The optional compatibility fields ensure the types can handle all possible variations in the data structure.

308-335: Complete state representations for both runs and steps. The FlowRunState and FlowStepState types provide comprehensive state representations including all necessary metadata, timestamps, and error handling capabilities.

376-389: Clean base interfaces for runs and steps. The FlowRunBase and FlowStepBase interfaces provide a solid foundation for the implementation classes, with clear method signatures for state management.
399-421: ✅ Verification successful — Well-documented client interface with type safety.

The IFlowClient interface combines realtime capabilities with flow management functions, creating a comprehensive API surface with excellent type safety through generics.

One observation: the `startFlow` method introduces a new generic parameter `TSpecificFlow extends TFlow` instead of reusing the interface's type parameter. While technically correct, it was previously identified and supposedly fixed in commit 37734e5. Verify if this approach is intentional or if the fix should be re-applied.

🏁 Script executed:

```bash
#!/bin/bash
# Check if this was intentionally reverted or kept this way
git show 37734e5 -- pkgs/client/src/lib/types.ts | grep -A 10 "startFlow"
```

Length of output: 947

startFlow signature correctly uses `TSpecificFlow`. The change in commit 37734e5 intentionally replaces the interface's `TFlow` generic with `TSpecificFlow extends TFlow` to narrow both input and return types. This aligns with the intended fix; no further action is needed.
```ts
on<E extends keyof StepEvents<TFlow, TStepSlug>>(
  event: E,
  callback: StepEvents<TFlow, TStepSlug>[E]
): Unsubscribe {
  return this.#events.on(event, callback);
}
```
🛠️ Refactor suggestion

`'*'` wildcard isn't part of `StepEvents` — typing mismatch

`on('*', …)` and `emit('*', …)` compile only because `'*'` is treated as `any`, defeating the purpose of strong typing and breaking consumers using `--noImplicitAny`.

Option 1 (quick): widen the event map:

```diff
 export type StepEvents<…> = {
   [K in keyof StepEventData<…>]:
     (event: StepEventData<…>[K]) => void;
+  '*'?: (event: StepEvent<…>) => void;
 };
```

Option 2 (preferred): create a separate wildcard channel and keep the strongly-typed map intact.

Either way, make sure the generic argument passed to `createNanoEvents` matches the final shape. Failing to fix this will surface as TS errors for downstream users.

Also applies to: 221-223
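Option 1 can be sketched without the nanoevents dependency. The event names and payload shape below are illustrative assumptions, not the real pgflow types; the point is that `'*'` is a first-class key of the typed event map, with the emitter fanning out to it explicitly.

```typescript
// Illustrative payload and event map — '*' is part of the typed map.
type StepEventPayload = { status: string; step_slug: string };

type StepEvents = {
  started: (event: StepEventPayload) => void;
  completed: (event: StepEventPayload) => void;
  '*': (event: StepEventPayload) => void; // wildcard channel, fully typed
};

class TypedEmitter {
  // Internal storage is loosely typed; the public API enforces StepEvents.
  private handlers: Record<string, Array<(e: StepEventPayload) => void>> = {};

  on<K extends keyof StepEvents>(event: K, cb: StepEvents[K]): () => void {
    (this.handlers[event] ??= []).push(cb);
    return () => {
      this.handlers[event] = (this.handlers[event] ?? []).filter((h) => h !== cb);
    };
  }

  emit<K extends Exclude<keyof StepEvents, '*'>>(event: K, payload: StepEventPayload): void {
    (this.handlers[event] ?? []).forEach((cb) => cb(payload));
    // nanoevents has no built-in wildcard, so the emitter fans out to '*'.
    (this.handlers['*'] ?? []).forEach((cb) => cb(payload));
  }
}

const emitter = new TypedEmitter();
const seen: string[] = [];
emitter.on('*', (e) => seen.push(`*:${e.status}`));
emitter.on('completed', (e) => seen.push(`completed:${e.step_slug}`));
emitter.emit('completed', { status: 'completed', step_slug: 'root' });
console.log(seen); // prints [ 'completed:root', '*:completed' ]
```

Excluding `'*'` from `emit`'s key space also prevents callers from emitting directly onto the wildcard channel, which keeps the fan-out the emitter's responsibility.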
```ts
const unbind = this.on('*', (event) => {
  if (event.status === targetStatus) {
    if (timeoutId) clearTimeout(timeoutId);
```
🛠️ Refactor suggestion

Wildcard `'*'` event typing inconsistent with `FlowRunEvents`

As in `FlowStep`, `'*'` isn't declared in `FlowRunEvents`, causing implicit `any`. Please extend the event map or create a dedicated wildcard channel to preserve type-safety.

Also applies to: 310-312
…toring, run retrieval, and event subscription

- Introduces a new example TypeScript file with multiple usage scenarios
- Implements functions to start flows, monitor steps, retrieve previous runs, and subscribe to events
- Demonstrates typed interactions with the client library and proper event handling
- Provides a comprehensive usage guide for the client library features in a single script
Actionable comments posted: 0
🧹 Nitpick comments (2)
pkgs/client/examples/basic.ts (2)

46-81: Consider using proper type definitions instead of type assertions. The example demonstrates step monitoring well, but relies on type assertions for output handling.

```diff
- console.log(`Sentiment score: ${(event.output as { score: number }).score}`);
+ // Define interfaces for your step outputs for better type safety
+ interface SentimentOutput {
+   score: number;
+ }
+ const output = event.output as SentimentOutput;
+ console.log(`Sentiment score: ${output.score}`);
```

Also, the comment on line 47 about "use Flow type directly rather than typeof" seems unrelated to the code below it and could be removed or clarified.

102-111: Consider deriving step slugs from flow definition. Hardcoding step slugs creates maintenance challenges if the flow structure changes.

```diff
- for (const stepSlug of ['website', 'sentiment', 'summary', 'saveToDb']) {
+ // Use the flow definition to determine available steps
+ const stepSlugs = Object.keys(AnalyzeWebsite.steps);
+ for (const stepSlug of stepSlugs) {
```

This change would make the example more robust against changes in the flow definition.
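The derivation the diff relies on is a plain `Object.keys` over the flow's step map. A self-contained sketch, where the `TestFlow` literal is a hypothetical stand-in for a real flow definition:

```typescript
// Hypothetical flow-like object; real flows come from the pgflow DSL.
const TestFlow = {
  slug: 'analyze_website',
  steps: {
    website: {},
    sentiment: {},
    summary: {},
    saveToDb: {},
  },
} as const;

// Deriving slugs instead of hardcoding ['website', 'sentiment', ...]
const stepSlugs = Object.keys(TestFlow.steps);
console.log(stepSlugs); // prints [ 'website', 'sentiment', 'summary', 'saveToDb' ]
```

If a step is added to or removed from the definition, the loop body picks it up automatically.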
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
pkgs/client/examples/basic.ts
(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
pkgs/client/examples/basic.ts (5)
1-12: Well-structured client setup. The imports and client initialization code follow best practices, clearly demonstrating how to set up the PgflowClient with a Supabase instance. The placeholder credentials are appropriately used for an example.

17-41: Good example of flow initiation and completion monitoring. This example effectively demonstrates starting a flow with typed input and waiting for completion. The timeout parameter in waitForStatus is a good practice for avoiding indefinite waiting, and the typed output access shows the type-safety benefits of the client.

119-150: Excellent demonstration of event subscriptions and resource cleanup. This example effectively shows how to use both wildcard and specific event subscriptions, and properly demonstrates resource cleanup with `dispose()`. The pattern shown here helps prevent memory leaks in long-running applications.

155-174: Well-structured main function with proper error handling. The main function effectively orchestrates all examples with proper error handling and resource cleanup in the finally block. The sequential execution with clear separation between examples makes the code easy to follow.

176-181: Good module pattern for both direct execution and import. The code correctly handles both direct execution and being imported as a module, following Node.js best practices. Exporting the example functions also allows them to be reused or tested individually.
…e dependencies

- Introduced a new test file with detailed type assertions for PgflowClient methods
- Verified correct input/output typings, event handling, and flow operations
- Ensured type safety for starting, retrieving, and managing flow runs
- Updated package.json to include @pgflow/dsl and vitest as dev dependencies
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkgs/client/__tests__/types/client-basic.test-d.ts (1)
140-140: Avoid using the generic `Function` type. The `Function` type is too generic and can lead to type safety issues. It's better to use a more specific function signature.

```diff
- waitForStatus: Function;
+ waitForStatus: (status: FlowRunStatus, options?: { timeoutMs?: number }) => Promise<any>;
```

Alternatively, you could import the actual type from your implementation:

```diff
- waitForStatus: Function;
+ waitForStatus: ReturnType<FlowRun<any>['waitForStatus']>;
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 140-140: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (2)
pkgs/client/__tests__/types/client-basic.test-d.ts (1 hunks)
pkgs/client/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkgs/client/package.json
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkgs/client/__tests__/types/client-basic.test-d.ts (4)
- pkgs/dsl/src/example-flow.ts (1): AnalyzeWebsite (8-36)
- pkgs/dsl/src/dsl.ts (4): Flow (179-346), ExtractFlowInput (60-66), ExtractFlowOutput (72-82), AnyFlow (50-50)
- pkgs/client/src/lib/FlowRun.ts (1): input (87-89)
- pkgs/client/src/lib/client.ts (1): client (1-3)
🪛 Biome (1.9.4)
pkgs/client/__tests__/types/client-basic.test-d.ts
[error] 140-140: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (1)
pkgs/client/__tests__/types/client-basic.test-d.ts (1)
1-201: Well-structured type tests with comprehensive coverage. The type tests thoroughly validate the client API's type safety, covering instantiation, flow starting, run retrieval, step operations, event subscription, and generic flow handling. This provides excellent documentation of expected usage patterns while ensuring type safety.
🧰 Tools
🪛 Biome (1.9.4)
[error] 140-140: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
…better type safety

- Changed expectTypeOf to expectTypeOf().toEqualTypeOf() for accurate type comparisons
- Removed redundant or commented-out code in tests
- Updated getRun method test to be asynchronous for correctness
- Enhanced test clarity and consistency in type testing of client methods and flow steps
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
pkgs/client/__tests__/types/client-basic.test-d.ts
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
pkgs/client/__tests__/types/client-basic.test-d.ts
[error] 127-127: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (1)
pkgs/client/__tests__/types/client-basic.test-d.ts (1)
1-188: Well-structured type tests for the PgflowClient. This test file provides comprehensive type checking for the PgflowClient, covering all crucial functionality including flow starting, run retrieval, step access, event subscription, and generic flow handling. The tests ensure that the client API correctly enforces type safety and provides proper type inference.
🧰 Tools
🪛 Biome (1.9.4)
[error] 127-127: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
```ts
expectTypeOf(runPromise).resolves.toMatchTypeOf<{
  run_id: string;
  step: (stepSlug: string) => any;
  waitForStatus: Function;
```
🛠️ Refactor suggestion
Replace 'Function' with a specific function type.

Using the `Function` type is discouraged as it provides very little type safety. It accepts any function-like value, which can lead to bugs.

```diff
- waitForStatus: Function;
+ waitForStatus: (status: FlowRunStatus, options?: { timeoutMs?: number }) => Promise<{ run_id: string; status: FlowRunStatus; output: ExtractFlowOutput<typeof AnalyzeWebsite> | null }>;
```

Alternatively, you could use a more generic function signature if needed:

```diff
- waitForStatus: Function;
+ waitForStatus: (...args: any[]) => Promise<any>;
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 waitForStatus: Function;
 // ...
-waitForStatus: Function;
+waitForStatus: (status: FlowRunStatus, options?: { timeoutMs?: number }) => Promise<{
+  run_id: string;
+  status: FlowRunStatus;
+  output: ExtractFlowOutput<typeof AnalyzeWebsite> | null;
+}>;
 // ...
```
🧰 Tools
🪛 Biome (1.9.4)
[error] 127-127: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
- Added comprehensive type tests for startFlow, waitForStatus, step, getRun, and event handling
- Improved type inference and enforcement for flow input/output, step outputs, and run properties
- Updated example flow and client usage to demonstrate strongly typed flow interactions
- Modified test setup to ensure proper type checks and validation across various flow operations
- Adjusted tsconfig for better module resolution and type inclusion in tests
…mples

- Updated test files to fix formatting issues and ensure proper type assertions
- Added explicit type parameters to startFlow and getRun method calls for better type inference
- Enhanced event handling tests to verify event output types match flow definitions
- Corrected output casting comments and removed unnecessary code in example scripts
- Overall improvements to type coverage, consistency, and clarity in client usage and tests
Actionable comments posted: 1
🧹 Nitpick comments (5)
pkgs/client/examples/basic.ts (4)
1-29: Avoid hardcoded credentials in examples. The example uses hardcoded URLs and keys for the Supabase client. Even though these are placeholders, it's better to use environment variables for credentials to enforce good security practices.

```diff
-const supabaseUrl = 'https://example.supabase.co';
-const supabaseKey = 'your-supabase-key';
-const supabase = createClient(supabaseUrl, supabaseKey);
+const supabaseUrl = process.env.SUPABASE_URL || 'https://example.supabase.co';
+const supabaseKey = process.env.SUPABASE_KEY || 'your-supabase-key';
+const supabase = createClient(supabaseUrl, supabaseKey);
```

45-47: Remove commented debugging code. This appears to be commented debugging or instruction code that doesn't serve a purpose in the final example.

```diff
- // completed.step('root').output
- // Output is fully typed according to flow definition
+ // Output is fully typed according to flow definition
```

58-58: Inconsistent comment about flow typing. The comment states to use Flow type directly rather than typeof, but the code still uses typeof.

```diff
- // Start the flow - use Flow type directly rather than typeof
+ // Start the flow with full type information
```

70-72: Incomplete event handler code. The `event.output` statement has no use and appears to be incomplete example code.

```diff
 rootStep.on('completed', (event) => {
-  event.output;
+  console.log(`Root step completed with output:`, event.output);
 });
```

pkgs/client/__tests__/types/client-basic.test-d.ts (1)

36-42: Consider using mock for Supabase client. Rather than creating a real Supabase client with placeholder credentials, consider using a mock object to make it clear this is just for testing.

```diff
- // Mock Supabase client
- const supabase = createClient(
-   'https://example.supabase.co',
-   'your-supabase-key'
- );
+ // Mock Supabase client
+ const supabase = {
+   channel: () => ({
+     on: () => ({}),
+     subscribe: () => ({})
+   })
+ } as any;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
pkgs/client/__tests__/types/client-basic.test-d.ts
(1 hunks)pkgs/client/examples/basic.ts
(1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkgs/client/__tests__/types/client-basic.test-d.ts (2)
- pkgs/dsl/src/dsl.ts (8): Flow (179-346), ExtractFlowInput (60-66), Simplify (17-17), ExtractFlowOutput (72-82), ExtractFlowSteps (88-94), StepOutput (134-139), StepInput (148-155), AnyFlow (50-50)
- pkgs/client/src/lib/FlowRun.ts (1): input (87-89)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (6)
pkgs/client/__tests__/types/client-basic.test-d.ts (6)
1-34
: Comprehensive type testing setup with well-structured test flow.The example flow provides good coverage of different dependency patterns and output types, which is useful for thorough type testing.
43-82
: Well-structured input type validation tests.The tests properly verify that the client enforces correct input types and that the returned FlowRun has the expected properties with preserved type information.
84-112
: Comprehensive waitForStatus method typing tests.The tests correctly verify that waitForStatus returns properly typed results and that the output structure matches the expected flow output type.
114-159
: Thorough step method typing tests.The tests effectively verify that step access, output types, event handlers, and waitForStatus all maintain proper typing according to the flow definition.
161-199
: Complete event subscription typing tests.The tests verify that both global and specific event handlers receive properly typed event objects with the correct properties based on the event type.
201-252
: Excellent generic flow type handling tests.This section thoroughly tests that the client maintains proper type information when used with generic flow types, which is important for library flexibility.
```ts
/**
 * Main function to run all examples
 */
async function main() {
  try {
    console.log('=== Example 1: Start flow and wait ===');
    const completedRun = await startFlowAndWait();

    console.log('\n=== Example 2: Monitor steps ===');
    await monitorSteps();

    console.log('\n=== Example 3: Get previous run ===');
    await getRun(completedRun.run_id);

    console.log('\n=== Example 4: Subscribe to events ===');
    await subscribeToRunEvents();
  } catch (error) {
    console.error('Error running examples:', error);
  } finally {
    // Clean up all resources
    client.disposeAll();
  }
}

// Only execute if run directly
if (require.main === module) {
  main().catch(console.error);
}
```
Missing 'getRun' function implementation.
The main function references a 'getRun' function at line 132 and it's exported at line 149, but the implementation is missing from the file.
Add the missing 'getRun' function implementation before line 120. Here's a possible implementation:
```ts
/**
 * Example 3: Get a previous run
 */
async function getRun(runId: string) {
  console.log(`Getting run: ${runId}`);

  // Fetch an existing run by ID
  const run = await client.getRun<typeof TestFlow>(runId);
  console.log(`Run status: ${run.status}`);

  // Iterate over all steps
  const allSteps = run.allSteps();
  for (const stepSlug of allSteps) {
    const step = run.step(stepSlug);
    console.log(`Step ${stepSlug} status: ${step.status}`);
    if (step.output) {
      console.log(`Step ${stepSlug} output: ${JSON.stringify(step.output)}`);
    } else if (step.error) {
      console.log(`Step ${stepSlug} error: ${step.error}`);
    }
  }

  return run;
}
```
Includes detailed plans for unit, type safety, integration, and end-to-end tests, along with practical implementation tips and high-value test cases to ensure robustness of the client library and internal architecture. Also introduces mock examples and testing scenarios aligned with the project's MVP approach.
Actionable comments posted: 0
🧹 Nitpick comments (7)
client-library-unit-testing.md (4)
18-79: Enhance the Supabase mock with unsubscribe logic and type annotations. The example `createMockSupabaseClient` mock is solid, but to more closely mirror the real Supabase API and avoid potential memory leaks in tests, consider:

- Implementing the `channel.unsubscribe` method to call the emitter's off/unsubscribe internally (right now it's a no-op `vi.fn()`).
- Adding TypeScript signatures for `createChannel` parameters and the returned `Channel`/`MockClient` shape for better IDE support and compile-time safety.

Example diff:

```diff
 const createChannel = (name: string): SupabaseChannelMock => {
-  const channel = {
+  const channel: SupabaseChannelMock = {
     on: vi.fn((event, callback) => {
       const unsubscribe = emitter.on(`${name}:${event}`, callback)
       return unsubscribe
     }),
     subscribe: vi.fn((callback) => {
       const unsubscribe = emitter.on(`${name}`, callback)
       return { unsubscribe }
     }),
-    unsubscribe: vi.fn(),
+    // Unsubscribe should remove all handlers for this channel
+    unsubscribe: vi.fn(() => {
+      emitter.off(`${name}`, () => {}) // or track/unsubscribe specific handlers
+    }),
     // Test helper to simulate events
     _emit: (event, payload) => {
       emitter.emit(`${name}:${event}`, payload)
     }
   }
```
86-159: Add tests for `waitForStatus` timeout and abort scenarios. The current FlowRun test suite covers happy paths but omits:

- Handling of a timeout (e.g., should reject after the specified timeout).
- Aborting a pending `waitForStatus` call via an `AbortController`.

Also, to prevent flakiness, wrap the `setTimeout` in Vitest's fake timers.
: Populate or remove the empty reconnection test stubThe
should properly handle reconnection
test is currently empty. Either provide a skeleton implementation or remove the placeholder to avoid confusion.Example skeleton:
it('should properly handle reconnection', async () => { // Arrange: start a flow and subscribe mockSupabase.rpc.mockResolvedValueOnce({ data: { /*...*/ }, error: null }) const run = await client.startFlow('test-flow', {}) // Simulate disconnection mockSupabase._channels.get('pgflow:run:123').unsubscribe() // Trigger reconnection logic, then simulate an event await client.reconnectChannel('pgflow:run:123') mockSupabase._simulateEvent('pgflow:run:123', 'run:status', { /*...*/ }) // Assert that run state is updated after reconnect expect(run.status).toBe(/* expected */) })
231-277
: Include tests for subscription cleanup inSupabaseBroadcastAdapter
Beyond routing, it’s valuable to verify that:
- Unsubscribing from a run or step channel invokes the adapter’s
.unsubscribe()
logic.- After unsubscribing, no further events reach the handler.
client-library-testing-strategy.md (3)
6-15
: Specify language for the directory-structure code blockThe fenced block showing the suggested directory layout lacks a language. Adding
text
(orbash
) improves readability and satisfies Markdown lint rules.Example:
-``` +```text pkgs/ └─ client/ ├─ src/ └─ tests/ ├─ unit/ ├─ integration/ ├─ types/ └─ e2e/ (in website app or example-nextjs project) -``` +```🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
6-6: Fenced code blocks should have a language specified
null(MD040, fenced-code-language)
28-33
: Standardize terminology: use “pseudocode” and “boilerplate”To align with common style guides and avoid hyphenation false‐positives:
- Change “pseudo-code” → “pseudocode”
- Change “boiler-plate” → “boilerplate”
- Change “over-kill” → “overkill”
🧰 Tools
🪛 LanguageTool
[misspelling] ~30-~30: This word is normally spelled as one.
Context: ...cks – remember MVP guideline. Example (pseudo-code): ```ts import { FlowRunStatus } from ...(EN_COMPOUNDS_PSEUDO_CODE)
58-61: Add language identifier to the `tsd` install code block. The installation snippet for `tsd` should specify `bash` or `shell` to satisfy linting and enhance clarity.

Example:

````diff
-```
-pnpm add -D tsd
-```
+```bash
+pnpm add -D tsd
+```
````

🧰 Tools
58-58: Fenced code blocks should have a language specified
null(MD040, fenced-code-language)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
client-library-testing-strategy.md (1 hunks)
client-library-unit-testing.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
client-library-testing-strategy.md
[misspelling] ~3-~3: This word is normally spelled as one.
Context: ...l logic & typings** without drowning in boiler-plate. Directory suggestions assume Vitest ...
(EN_COMPOUNDS_BOILER_PLATE)
[style] ~4-~4: Consider using a less common alternative to make your writing sound more unique and professional.
Context: ...eady in the repo) + Playwright for E2E. Feel free to rename. ``` pkgs/ └─ client/ ├─ src...
(FEEL_FREE_TO_STYLE_ME)
[misspelling] ~30-~30: This word is normally spelled as one.
Context: ...cks – remember MVP guideline. Example (pseudo-code): ```ts import { FlowRunStatus } from ...
(EN_COMPOUNDS_PSEUDO_CODE)
[uncategorized] ~90-~90: Possible missing comma found.
Context: ...w) if Docker is over-kill right now. 2. Seed a tiny schema that contains: - `pgfl...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~141-~141: Possible missing comma found.
Context: ...w 1. No mocking frenzy – only mock Supabase where network hurts speed. 2. Share J...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~143-~143: Possible missing comma found.
Context: ... across unit + int tests. 3. For real Supabase you can skip auth by using anon
key i...
(AI_HYDRA_LEO_MISSING_COMMA)
[typographical] ~146-~146: It appears that a comma is missing.
Context: ...eaded`) to speed up CI. Following this plan you’ll have **solid confidence in the c...
(DURING_THAT_TIME_COMMA)
🪛 markdownlint-cli2 (0.17.2)
client-library-testing-strategy.md
6-6: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
58-58: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
- Introduce guidelines emphasizing scope reduction and simplicity for early MVP
- Add detailed testing strategy covering unit, integration, and utilities
- Include plans for type safety, documentation, package finalization, and CI/CD setup
- Reflect on current testing utilities, mocking strategies, and future considerations
Actionable comments posted: 2
🧹 Nitpick comments (12)
testing-plan.md (12)
1-2: Establish document scope and context.
Consider adding a brief introduction outlining the overall objectives and scope of this testing plan to orient readers before diving into the checklist.
5-7: Clarify mock interfaces and error scenarios.
It would be helpful to specify the shape and behavior of the stubbed objects (e.g. methods on `SupabaseClient.schema`) and how errors (network failures, permission errors) should be mocked so tests remain deterministic.
20-23: Include FlowStep lifecycle edge cases.
Beyond precedence and mapping, consider tests for repeated step lookups (ensuring idempotent caching), rapid status toggles, and disposal when listeners are removed mid-flow.
26-32: Broaden SupabaseBroadcastAdapter error and edge tests.
Add scenarios for channel subscription failures and unauthorized access errors, and ensure that on malformed payloads the adapter fails gracefully or ignores invalid events.
47-51: Ensure integration tests enforce isolation and cleanup.
Outline how test environments will be reset between runs (e.g. disposing clients/adapters) to prevent cross-test interference, especially when simulating real-time channels.
54-56: Define staleness criteria for out-of-order events.
Clarify whether events carry timestamps or sequence numbers, and add tests that verify events older than the last seen state are ignored based on that metadata.
59-63: Parameterize reconnection timing.
Specify the expected delay intervals and add tests that simulate varied reconnection backoff strategies, verifying both re-subscription and snapshot refresh behaviors under different timer advances.
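One way to make those delay intervals explicit is a parameterized backoff schedule. The base delay and cap below are placeholder values for illustration, not the adapter's actual configuration:

```typescript
// Hypothetical exponential backoff schedule for reconnection tests.
// A fake-timer test would advance time by exactly these amounts before
// asserting that re-subscription and a snapshot refresh occurred.
function backoffDelayMs(attempt: number, baseMs = 250, maxMs = 10_000): number {
  // Double the delay on every attempt, but never exceed the cap.
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// First few delays: 250, 500, 1000, 2000, 4000, 8000, 10000 (capped)
const schedule = Array.from({ length: 7 }, (_, n) => backoffDelayMs(n));
```

Keeping the schedule in one pure function lets unit tests assert the timing table directly, while integration tests reuse it to drive `vi.advanceTimersByTime` calls.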
64-66: Cover negative `waitForStatus` scenarios.
In addition to the success race, add tests for timeouts and invalid status requests to ensure `waitForStatus` rejects or throws as expected.
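To illustrate the two negative paths, here is a standalone, polling-based stand-in for `waitForStatus` — the real method is event-driven, so treat this only as a shape for the assertions, not as the library's implementation:

```typescript
// Hypothetical standalone waitForStatus: resolves when the target status
// is observed, rejects on an unknown status or on timeout.
const KNOWN_STATUSES = ['queued', 'started', 'completed', 'failed'] as const;
type Status = (typeof KNOWN_STATUSES)[number];

function waitForStatus(
  current: () => Status,
  target: Status,
  timeoutMs: number
): Promise<Status> {
  // Guard for untyped callers: reject invalid status requests up front.
  if (!KNOWN_STATUSES.includes(target)) {
    return Promise.reject(new Error(`Unknown status: ${target}`));
  }
  return new Promise((resolve, reject) => {
    const poll = setInterval(() => {
      if (current() === target) {
        clearTimeout(timer);
        clearInterval(poll);
        resolve(target);
      }
    }, 5);
    const timer = setTimeout(() => {
      clearInterval(poll);
      reject(new Error(`Timed out waiting for "${target}"`));
    }, timeoutMs);
  });
}
```

A negative test then asserts the returned promise rejects (rather than hangs) when the status never arrives within the timeout.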
71-73: Standardize utility function naming and usage.
Recommend prefixing helpers consistently (e.g. `testEmit`, `testAdvanceAndFlush`) and including return types in docs so contributors know exactly how to invoke and compose them.
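A sketch of what such a consistently named, typed helper could look like — the `testEmit` name and the handler-map shape are suggestions, not the package's actual test utilities:

```typescript
// Suggested helper shape: prefixed name, explicit types, documented return value.
type BroadcastHandler = (msg: { event: string; payload: unknown }) => void;

interface ChannelLike {
  handlers: Map<string, BroadcastHandler>;
}

/** Emits a broadcast event into a mocked channel. Returns true if a handler fired. */
function testEmit(channel: ChannelLike, eventType: string, payload: unknown): boolean {
  // Prefer a handler registered for the exact event, fall back to the wildcard.
  const handler = channel.handlers.get(eventType) ?? channel.handlers.get('*');
  if (!handler) return false;
  handler({ event: eventType, payload });
  return true;
}

// Usage: a test can assert both on delivery and on the recorded events.
const seen: string[] = [];
const handlers = new Map<string, BroadcastHandler>();
handlers.set('*', (msg) => { seen.push(msg.event); });
const channel: ChannelLike = { handlers };
```

The boolean return makes "no handler was registered" an assertable condition instead of a silent no-op.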
79-83: Align checklist formatting for clarity.
Convert the tick marks under “MOCKING STRATEGY CHECK” into checkboxes (`- [x]`) or plain bullets to match the rest of the document and indicate completion status uniformly.
89-91: Link de-scoped items to issue tracker.
For transparency, consider appending issue or ticket links for each removed TODO so stakeholders can reference the backlog and track when they might be addressed.
97-98: Assign owners or timelines to open questions.
To prevent blocking MVP progress, tag each open question with an owner or due date, or suggest a decision point to ensure they're revisited promptly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- MVP.md (1 hunks)
- testing-plan.md (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- MVP.md
🧰 Additional context used
🪛 LanguageTool
testing-plan.md
[uncategorized] ~94-~94: Possible missing comma found.
Context: ...-- ## OPEN QUESTIONS (flagged for later but NOT blocking MVP) - Memory-leak tracki...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Cloudflare Pages
- [ ] instance caching: second `getRun(run_id)` returns cached FlowRun
- [ ] broadcast → FlowRun propagation (run + step)
- [ ] `dispose()` and `disposeAll()` cascade to adapter and run instances
- [ ] error bubble-up: failed rpc rejects and cleans up
🛠️ Refactor suggestion
Test PgflowClient RPC error handling and retries.
Include tests for RPC timeouts, network failures, and ensure that failed calls properly bubble up errors and clean up cached instances as intended.
- [ ] status precedence: higher precedence wins, terminal states are immutable
- [ ] event → state mapping (`started`, `completed`, `failed`)
- [ ] `waitForStatus()` resolves / rejects correctly (use `vi.useFakeTimers()`)
- [ ] step caching (`run.step('x')` returns same instance)
- [ ] auto-dispose when (terminal state && no listeners)
- [ ] protection against foreign-run events (ignored)
🛠️ Refactor suggestion
Expand FlowRun tests to cover concurrency edge cases.
In addition to precedence and terminal-state protections, add tests for simultaneous events (e.g. two `completed` events in flight) and race conditions where older events arrive after a newer terminal state.
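The rule under test can be pinned down with a tiny reducer; the precedence numbers here are illustrative placeholders, not the library's actual values:

```typescript
// Hypothetical precedence table mirroring "higher precedence wins,
// terminal states are immutable". A stale lower-precedence event
// (e.g. `started` arriving after `completed`) must be ignored.
const PRECEDENCE: Record<string, number> = {
  queued: 0,
  started: 1,
  completed: 2,
  failed: 2,
};

const TERMINAL = new Set(['completed', 'failed']);

// Returns the status that should survive when `incoming` arrives
// while the run is in `current`.
function applyStatus(current: string, incoming: string): string {
  if (TERMINAL.has(current)) return current; // terminal states never change
  return PRECEDENCE[incoming] >= PRECEDENCE[current] ? incoming : current;
}
```

A race-condition test can then reduce an out-of-order event sequence through `applyStatus` and assert that the terminal state survives regardless of arrival order.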
… classes

- Introduced unit tests for FlowRun class covering initialization, event handling, state updates, status precedence, event filtering, waiting for status, and disposal
- Added tests for FlowStep class validating initialization, event handling, status transitions, event filtering, waiting for status, and timeout behavior
- Included tests for PgflowClient class verifying initialization and startFlow method behavior, including successful execution and error handling
- Utilized mock functions and fixtures to simulate various scenarios and ensure correct behavior across different components
- Enhanced test coverage for core classes managing flow and step states and client interactions
- Added build, lint, and test targets to project.json for improved build process
- Changed coverage reports directory in vite.config.ts to a centralized location
Actionable comments posted: 1
🧹 Nitpick comments (4)
pkgs/client/project.json (1)
7-17: Build target is properly configured.
The `@nx/js:tsc` executor with `outputPath`, `main`, `tsConfig`, `assets`, and `rootDir` matches the library setup. Consider adding an explicit `outputs` entry (e.g., `["{options.outputPath}"]`) to leverage Nx’s computation caching more effectively, and you could replace hard-coded `pkgs/client` paths with the `{projectRoot}` token for consistency.

pkgs/client/__tests__/FlowStep.test.ts (1)
13-45: Consider avoiding type assertions in initialization test.
While the initialization test properly verifies all properties, the use of `as any` for step_slug suggests a potential type issue.

```diff
- step_slug: STEP_SLUG as any,
+ step_slug: STEP_SLUG,
```

Consider defining proper types for the test fixtures or using more specific type assertions to maintain type safety.
pkgs/client/__tests__/FlowRun.test.ts (1)
15-52: Consider avoiding type assertions in initialization test.
While the initialization test properly verifies all properties, the use of `as any` for input suggests a potential type issue.

```diff
- input: { foo: 'bar' } as any,
+ input: { foo: 'bar' },
```

Consider defining proper types for the test fixtures or using more specific type assertions to maintain type safety throughout your tests.
pkgs/client/__tests__/fixtures.ts (1)
130-140: Useful helper for simulating broadcast events
The `emit` function simplifies the process of triggering broadcast events in tests. By automatically extracting the handler from the mock channel's handler map, it reduces boilerplate in test cases. Note that it currently only handles the wildcard event (`*`). If you need to test more specific event handling, you might want to extend this to support named events.

```diff
 export function emit(
   channelMock: any,
   eventType: string,
   payload: any
 ): void {
-  const handler = channelMock.handlers.get('*');
+  // First try to get a specific handler, fall back to wildcard
+  const handler = channelMock.handlers.get(eventType) || channelMock.handlers.get('*');
   if (handler) {
     handler({ event: eventType, payload });
   }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- pkgs/client/__tests__/FlowRun.test.ts (1 hunks)
- pkgs/client/__tests__/FlowStep.test.ts (1 hunks)
- pkgs/client/__tests__/PgflowClient.test.ts (1 hunks)
- pkgs/client/__tests__/SupabaseBroadcastAdapter.test.ts (1 hunks)
- pkgs/client/__tests__/fixtures.ts (1 hunks)
- pkgs/client/__tests__/mocks/index.ts (1 hunks)
- pkgs/client/project.json (1 hunks)
- pkgs/client/vite.config.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkgs/client/vite.config.ts
🧰 Additional context used
🧬 Code Graph Analysis (4)
pkgs/client/__tests__/FlowStep.test.ts (2)
- pkgs/client/__tests__/mocks/index.ts (1)
  - `resetMocks` (92-95)
- pkgs/client/__tests__/fixtures.ts (5)
  - `RUN_ID` (6-6)
  - `STEP_SLUG` (8-8)
  - `broadcastStepStarted` (93-101)
  - `broadcastStepCompleted` (103-110)
  - `broadcastStepFailed` (112-119)
pkgs/client/__tests__/mocks/index.ts (1)
- pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
  - `payload` (39-53)
pkgs/client/__tests__/PgflowClient.test.ts (2)
- pkgs/client/__tests__/fixtures.ts (6)
  - `RUN_ID` (6-6)
  - `startedRunSnapshot` (11-22)
  - `startedStepState` (40-48)
  - `FLOW_SLUG` (7-7)
  - `broadcastRunCompleted` (75-82)
  - `broadcastStepStarted` (93-101)
- pkgs/client/__tests__/mocks/index.ts (2)
  - `resetMocks` (92-95)
  - `mockSupabase` (5-56)
pkgs/client/__tests__/fixtures.ts (1)
- pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
  - `payload` (39-53)
🪛 Biome (1.9.4)
pkgs/client/__tests__/fixtures.ts
[error] 125-127: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (45)
pkgs/client/project.json (3)
1-6
: Project metadata configuration looks good.
Thename
,$schema
,sourceRoot
, andprojectType
correctly align with the new client library’s structure.
18-21
: Lint target is correctly defined.
Using the@nx/eslint:lint
executor with default inputs aligns with workspace standards.
22-30
: Test target configuration is solid.
The@nx/vite:test
executor with coverage outputs andpassWithNoTests
fits the Vitest setup for this package.pkgs/client/__tests__/SupabaseBroadcastAdapter.test.ts (9)
1-13: Good imports and fixture setup.
The test file correctly imports all necessary functions from Vitest and uses well-organized fixtures to support the tests.

14-29: Well-structured test setup with proper timer management.
The test setup uses fake timers for controlled testing of async behavior and includes proper cleanup in the afterEach hook. This is essential for isolated tests.

31-52: Good verification of channel subscription.
Tests correctly verify that the adapter subscribes to the appropriate channel with the correct naming pattern and registers handlers for broadcast events.

54-77: Thorough testing of run event routing.
The test properly verifies that run events are correctly routed to the appropriate callbacks. The mock setup and broadcast handler testing approach is clean and effective.

79-102: Comprehensive step event routing verification.
Similar to the run event test, this correctly verifies step event routing with appropriate assertions.

104-113: Good unsubscribe testing.
The test confirms that the unsubscribe method properly calls the channel's unsubscribe method, ensuring resources are cleaned up correctly.
115-173: Excellent error handling and reconnection testing.
This test comprehensively validates the adapter's reconnection behavior by:
- Simulating a channel error
- Verifying reconnection timing
- Checking that current state is fetched via RPC
- Ensuring events are re-emitted from the snapshot
- Confirming a new channel is created
This is critical for ensuring resilience in real-time applications.
175-204: Good data fetching test.
The test properly verifies that the adapter can fetch flow definitions and correctly processes the query results.

206-226: Proper verification of run state retrieval.
This test correctly verifies that the getRunWithStates method calls the appropriate RPC function with the correct parameters and correctly processes the response.
pkgs/client/__tests__/FlowStep.test.ts (8)
1-12: Good imports and test setup.
The test file properly imports all necessary functions from Vitest and relevant fixtures.

47-72: Proper event handling test for started state.
This test correctly verifies that the step updates its state and triggers callbacks when a started event is received.

74-100: Good completed event handling test.
The test verifies that the completed state is correctly applied and the appropriate properties are updated, including the output data.

102-129: Thorough failed event handling test.
This test correctly verifies error state handling, including proper creation of Error instances from error messages.

131-165: Excellent status precedence rule testing.
The test validates an important aspect of state machines — that terminal states cannot transition to other states. This prevents invalid state transitions and ensures data integrity.

167-190: Good filtering test for step events.
The test correctly verifies that events for different step slugs are ignored, which is essential for proper event routing.

192-248: Comprehensive waitForStatus testing.
The tests properly verify both the success case (resolving when status is reached) and the timeout case. The use of vi.advanceTimersByTime for testing async behavior is a good practice.

248-274: Proper abort signal handling test.
The test correctly verifies that the waitForStatus method respects AbortSignal, which is important for cancellation support.
pkgs/client/__tests__/PgflowClient.test.ts (9)
1-20: Good test setup with UUID mocking.
The test file correctly sets up mocks, including a predictable UUID for consistent testing. This ensures deterministic test behavior.

21-36: Proper initialization test.
The test verifies that the client can be created correctly with the required dependencies.

38-69: Comprehensive startFlow test.
This test thoroughly validates that:
- The RPC is called with correct parameters
- The returned run is initialized correctly
- Real-time subscriptions are established
This covers both API interaction and local state management.
71-86: Good error handling test.
The test correctly verifies that errors from the RPC call are properly propagated to the caller.

88-113: Effective caching behavior test.
The test confirms that the client correctly caches run instances and avoids unnecessary database calls, which is important for performance.

115-129: Appropriate null handling for non-existent runs.
The test validates the behavior when attempting to get a run that doesn't exist in the database.

131-173: Thorough event callback testing.
This test verifies that events are properly forwarded to registered callbacks, which is essential for the event-driven architecture.

175-217: Good resource cleanup verification.
The test checks that disposing a run removes it from the cache and unsubscribes from the channel, which is important for preventing memory leaks.

219-255: Comprehensive disposeAll test.
The test verifies that the disposeAll method correctly disposes all cached runs, which is important for bulk cleanup operations.
pkgs/client/__tests__/FlowRun.test.ts (10)
1-14: Good imports and setup.
The test file correctly imports all necessary functions and fixtures.

54-79: Thorough started event handling test.
This test correctly verifies state updates and event emission for the started status.

81-107: Good completed event handling test.
The test properly verifies output handling and successful completion state.

109-135: Comprehensive failed event handling test.
The test correctly verifies error state handling, including proper creation of Error instances.

137-164: Excellent status precedence rule testing.
The test validates that terminal states cannot transition to other states, ensuring state machine integrity.

166-191: Good run ID filtering test.
The test properly verifies that events for different run IDs are ignored, which is essential for correct event routing.

193-248: Well-implemented waitForStatus tests.
The tests cover both successful resolution and timeout scenarios for the waitForStatus method.

250-272: Good step caching verification.
The test confirms that step instances are correctly cached to avoid creating duplicate objects.

274-301: Important auto-disposal test.
This test verifies the resource management behavior that automatically disposes runs when they reach terminal states and have no listeners, preventing memory leaks.

303-329: Good step state update test.
The test verifies that step states can be correctly updated via the parent run object.
pkgs/client/__tests__/mocks/index.ts (3)
1-56: Well-structured mock implementation for Supabase client
This mock implementation for the Supabase client is well-designed, with clear separation between RPC, query builder, and schema functions. The chainable API accurately mimics the real Supabase client structure, which will make tests more readable and maintainable. The `mockSupabase()` function returns both the client mock and individual mock components, giving tests the flexibility to make detailed assertions on specific method calls.
58-89: Good approach for testing event-based systems
The `mockRealtimeChannel()` implementation elegantly solves the challenge of testing event-based systems by:
- Storing handlers in Maps that tests can access
- Supporting both broadcast and system events
- Maintaining chainable API methods like the real implementation
This allows tests to simulate incoming events precisely when needed, which is essential for testing the SupabaseBroadcastAdapter's event routing.
91-95: Good test isolation practice
Including a dedicated `resetMocks()` function promotes proper test isolation by ensuring a clean slate between test runs. Calling both `restoreAllMocks()` and `clearAllMocks()` handles both function implementations and call history.

pkgs/client/__tests__/fixtures.ts (3)
1-38: Comprehensive test fixtures for flow run states
The fixtures for flow run states (started, completed, failed) are well-structured and cover all the essential properties. Using the spread operator to build upon the base state is a clean approach that ensures consistency between the fixtures. Each state correctly reflects its expected property values, including status, timestamps, and results/errors as appropriate.
39-62: Complete step state snapshots with consistent patterns
The step state snapshots follow the same pattern as the run snapshots, maintaining consistency throughout the test fixtures. All status transitions are properly represented with appropriate field values.
64-119: Well-structured broadcast event fixtures
The broadcast event fixtures closely match the expected format used in the SupabaseBroadcastAdapter, with proper event types that follow the prefix pattern (`run:` or `step:`) that the adapter checks for. This ensures the tests will accurately verify the adapter's event routing logic.
- Add error handling and logging to atlas-dump-realtime-schema for schema inspection
- Update atlas-verify-schemas-synced to run schema dump before diff and handle diff output
- Enhance script robustness and clarity in schema synchronization process
…ge documentation, and revise website documentation guidelines

- Replaced nx commands with pnpm nx in build/test instructions
- Added new CLAUDE.md for @pgflow/client with testing and coding guidelines
- Updated website/CLAUDE.md with formatting, structure, and documentation best practices
- Improved consistency and clarity in documentation style and structure guidelines
Actionable comments posted: 4
♻️ Duplicate comments (1)
CLAUDE.md (1)
54-54: ⚠️ Potential issue — Inconsistent File-Structure Guideline
The doc prohibits nesting code in `src/lib/`, yet the new client package uses `pkgs/client/src/lib/`. Please reconcile this discrepancy by updating the guideline or the directory layout.
🧹 Nitpick comments (2)
pkgs/website/CLAUDE.md (2)
209-209
: Capitalize "Markdown"
The term “Markdown” is a proper noun and should be capitalized.🧰 Tools
🪛 LanguageTool
[grammar] ~209-~209: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...ted = true;; ```` #### Tables Use markdown tables for comparing features:
mark...(MARKDOWN_NNP)
271-271: Specify Language for Fenced Code Block
Add a language identifier to this fenced code block to satisfy markdownlint (`MD040`).
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
271-271: Fenced code blocks should have a language specified
null(MD040, fenced-code-language)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- CLAUDE.md (1 hunks)
- pkgs/client/CLAUDE.md (1 hunks)
- pkgs/website/CLAUDE.md (21 hunks)
✅ Files skipped from review due to trivial changes (1)
- pkgs/client/CLAUDE.md
🧰 Additional context used
🪛 LanguageTool
CLAUDE.md
[style] ~68-~68: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ow, use FOR EACH STATEMENT
instead. - don't call functions for each row in a res...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[grammar] ~70-~70: The usual collocation for “exception” in this context is “to”, not “of”. Did you mean “exception to”?
Context: ...ould never see %s
(with the very rare exception of where you're merging in another SQL fra...
(EXCEPTION_OF_TO)
[grammar] ~72-~72: Putting a comma here could disrupt the flow of the sentence.
Context: ...ly formatted using %I and %L). Remember, that functions have significant overhead in ...
(COMMA_BEFORE_THAT_2)
[style] ~72-~72: Consider using a synonym to be more concise.
Context: ...in Postgres - instead of factoring into lots of tiny functions, think about how to make...
(A_LOT_OF)
[uncategorized] ~72-~72: Use a comma before ‘so’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...ut how to make your code more expressive so there's no need. ### Fully qualified c...
(COMMA_COMPOUND_SENTENCE_2)
pkgs/website/CLAUDE.md
[grammar] ~209-~209: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...ted = true; ; ```` #### Tables Use markdown tables for comparing features:
mark...
(MARKDOWN_NNP)
🪛 markdownlint-cli2 (0.17.2)
pkgs/website/CLAUDE.md
271-271: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (20)
pkgs/website/CLAUDE.md (6)
18-18
: Blank Line Before List Items
Adding a blank line before the “Getting Started” list improves readability and ensures proper Markdown rendering.
30-30
: Consistent List Formatting
Inserting a blank line before the bullet list maintains consistency with Diátaxis framework guidelines.
94-96: Frontmatter Indentation Consistency
Adjusting the indentation for the `order` property in frontmatter aligns with YAML formatting standards.
103-106
: Component Import Quotation Standardized
Switching to single quotes for component imports aligns with the project’s style guide.
164-166
: Inline Steps List
Condensing steps into an inline list makes the example more compact and readable.
172-176: FileTree Example Formatting
Switching to an inline list for `FileTree` improves clarity for directory structures.

CLAUDE.md (14)
33-39: Commands Standardized with pnpm Prefix
The build and test commands are correctly prefixed with `pnpm nx`, ensuring consistency across the monorepo.
44-44
: Database Guideline Addition Approved
Clarifies that pgflow uses PostgreSQL for orchestration; aligns with project conventions.
48-50
: Detailed TypeScript Test Directory Structure
The guidelines for test placement and separation between runtime and type tests are clear and enforce best practices.
56-56
: SQL Style Guidelines Section Added
Introducing the SQL Style Guidelines aligns with the core’s shift towards declarative SQL and broadcasting.
58-58
: Declarative vs Procedural SQL Header
The new header clearly delineates the preferred SQL style; matches project philosophy.
62-63
: Emphasize Declarative Batching
Reinforces that SQL operations should be batched; clear and concise.
66-67: Trigger and Loop Guidance Approved
The advice to avoid loops and use `FOR EACH STATEMENT` is sound and consistent with the declarative SQL approach.
69-69: Set-Returning Functions Recommendation Approved
Encouraging `SETOF` functions fits the batching model and improves performance.
71-71: Dynamic SQL Formatting Guidelines Approved
Mandating `%I` and `%L` for safety is a critical best practice.
74-74
: Fully Qualified Naming Header Approved
Requiring explicit qualification improves clarity and prevents ambiguity.
78-80
: SQL Example Clarity
The examples for qualification and argument references reinforce the guideline.
86-87: New `cli` and `client` Packages Documented
Listing `cli` and `client` reflects the updated monorepo structure and usage.
90-91: `example-flows` and `website` Packages Documented
The inclusion of these packages completes the monorepo’s package overview.
414-414
: MDX Heading Rule Emphasized
The warning about avoiding top-level headings in MDX is crucial for Starlight compatibility.
```bash frame="none"
npx pgflow@latest install
```
```;
🛠️ Refactor suggestion
Remove Semicolon from Bash Code Fence
The closing fence for the bash snippet should be ``` rather than ```; to render correctly.
- ```;
+ ```
🤖 Prompt for AI Agents
In pkgs/website/CLAUDE.md around lines 193 to 195, the closing code fence for
the bash snippet incorrectly includes a semicolon. Remove the semicolon after
the closing triple backticks so that the fence ends with ``` alone to ensure
proper markdown rendering.
// Code goes here
const example = "Highlighted code";
```
```;
🛠️ Refactor suggestion
Remove Semicolon from Code Fence
The closing fence for the JSX code block should be ``` rather than ```; to avoid MD syntax errors.
- ```;
+ ```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Code goes here
const example = "Highlighted code";
```
```;
🤖 Prompt for AI Agents
In pkgs/website/CLAUDE.md around lines 185 to 187, the closing code fence for
the JSX block incorrectly includes a semicolon (```;). Remove the semicolon so
that the closing fence is just three backticks (```) to ensure proper Markdown
syntax and correct rendering of the code block.
Avoid plpgsql as much as you can.
It is important to have your DB procedures run in batched ways and use declarative rather than procedural constructs where possible:

- do not ever use `language plplsql` in functions, always use `language sql`
Typo in Language Identifier
`language plplsql` should be `language plpgsql`.
🤖 Prompt for AI Agents
In CLAUDE.md at line 65, there is a typo in the language identifier where
`language plplsql` is used. Correct this typo by replacing `language plplsql`
with the correct `language plpgsql`.
### Fully qualified column names and arguments

When creating or updating SQL code, always fully quality the column names and function arguments.
Typo in "fully quality"
“fully quality the column names” should read “fully qualify column names.”
🤖 Prompt for AI Agents
In CLAUDE.md at line 76, correct the typo by changing "fully quality the column
names" to "fully qualify the column names" to ensure the phrase accurately
describes the intended action.
Switches the test configuration from using @nx/vite:test with coverage reports to using nx:run-commands with a direct vitest run command, simplifying the test setup and execution in the client package
…and waitForStatus behavior

- Added tests for handling started, completed, failed, and missing fields events
- Included status precedence and terminal state protections
- Covered foreign run ID event rejection and step-specific event handling
- Implemented waitForStatus timeout, immediate resolution, and abort scenarios
- Enhanced test coverage for event-to-state mapping and robustness of FlowRun
Actionable comments posted: 2
🧹 Nitpick comments (10)
pkgs/client/src/lib/FlowStep.ts (1)
221-222: Wrap exhaustiveness helper in a block to satisfy Biome’s `noSwitchDeclarations` rule
Static analysis warns that declarations inside a `switch` may be visible to other clauses. Wrapping the helper in braces silences the warning without functional change:

```diff
-      default:
-        // Exhaustiveness check
-        const _exhaustivenessCheck: never = event;
-        return false;
+      default: {
+        // Exhaustiveness check
+        const _exhaustivenessCheck: never = event;
+        return false;
+      }
```

🧰 Tools
🪛 Biome (1.9.4)
[error] 222-222: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.The declaration is defined in this switch clause:
Unsafe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
pkgs/client/__tests__/FlowStep.test.ts (1)

`28-37`: Extensive use of `as any` in tests masks real API friction

Casting `step_slug` to `any` suppresses the generic type contract that `FlowStep` is designed to enforce. While acceptable for quick unit tests, it weakens type-level regression guarantees and may hide future breaking changes.

Consider introducing a small helper type in the fixtures (e.g. `type TestFlow = Flow<{/*…*/}>`) so that slugs are typed correctly without `as any`.

Also applies to: 51-61, 89-99, 129-139
pkgs/client/__tests__/mocks/index.ts (1)

`88-106`: Minor – `Blob` isn't defined in Node <18

Node ≤17 lacks a global `Blob`; some CI runners still use these versions. If you need broader compatibility, stub `Blob` or return a simple `Uint8Array`.
289-298
: Consider using more specific types instead ofany
The
emit
function usesany
types for bothchannelMock
andpayload
parameters. These could be replaced with more specific types to improve type safety and code readability.export function emit( - channelMock: any, + channelMock: { handlers: Map<string, (payload: { event: string; payload: unknown }) => void> }, eventType: string, - payload: any + payload: unknown ): void { const handler = channelMock.handlers.get('*'); if (handler) { handler({ event: eventType, payload }); } }
`373-389`: Improve timeout management in event sequence emitter

The `createEventSequenceEmitter` function creates multiple timeouts but only stores the ID of the last one, which could lead to memory leaks if tests don't run to completion. Consider returning a cleanup function to clear all timeouts.

```diff
 export function createEventSequenceEmitter(
   channelMock: any,
   events: (BroadcastRunEvent | BroadcastStepEvent)[],
   delayMs = 100
-): () => void {
+): () => { start: () => void; cleanup: () => void } {
   return () => {
-    let timeoutId: NodeJS.Timeout;
+    const timeoutIds: NodeJS.Timeout[] = [];
     // Set up delayed emissions
     events.forEach((event, index) => {
-      timeoutId = setTimeout(() => {
+      const timeoutId = setTimeout(() => {
         const eventType = event.event_type;
         emit(channelMock, eventType, event);
       }, index * delayMs);
+      timeoutIds.push(timeoutId);
     });
+
+    return {
+      start: () => {}, // Function is already started on creation
+      cleanup: () => timeoutIds.forEach(clearTimeout)
+    };
   };
 }
```
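The cleanup pattern suggested above can be sketched independently of the test fixtures. Names here (`scheduleSequence`, `emitFn`) are illustrative stand-ins, not the package's actual API:

```typescript
// Schedule a sequence of emissions and return a cleanup function that
// clears every pending timer, so tests that bail out early don't leak timeouts.
function scheduleSequence<T>(
  events: T[],
  emitFn: (event: T) => void,
  delayMs = 100
): () => void {
  const timeoutIds: Array<ReturnType<typeof setTimeout>> = [];
  events.forEach((event, index) => {
    timeoutIds.push(setTimeout(() => emitFn(event), index * delayMs));
  });
  // Calling the returned function cancels all emissions that haven't fired yet.
  return () => timeoutIds.forEach(clearTimeout);
}
```

Collecting every timer ID (instead of overwriting one variable per iteration) is the whole fix: cleanup can then cancel the entire pending sequence.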
`307-363`: Enhance type safety in the `createRunSnapshot` function

The function uses string literals for status types, but these could be replaced with the enum types that are already imported from `../src/lib/types` (`FlowRunStatus` and `FlowStepStatus`).

```diff
 export function createRunSnapshot(
-  status: 'started' | 'completed' | 'failed',
+  status: FlowRunStatus,
-  stepStatuses: Record<string, 'created' | 'started' | 'completed' | 'failed'> = {}
+  stepStatuses: Record<string, FlowStepStatus> = {}
 ): { run: RunRow; steps: StepStateRow[] } {
   // Base run based on status
   let run: RunRow;
   switch (status) {
-    case 'started':
+    case FlowRunStatus.Started:
       run = { ...startedRunSnapshot };
       break;
-    case 'completed':
+    case FlowRunStatus.Completed:
       run = { ...completedRunSnapshot };
       break;
-    case 'failed':
+    case FlowRunStatus.Failed:
       run = { ...failedRunSnapshot };
       break;
   }
   // Rest of the function...
```

pkgs/client/__tests__/FlowRun.test.ts (4)
`254-260`: Consider using a fixture for the `queuedEvent`

For consistency with the rest of the test suite, consider moving this queued event definition to the fixtures file, similar to how other events like `broadcastRunStarted` are defined.

```diff
-  // Create a "queued" event (lower precedence than Started)
-  const queuedEvent = {
-    run_id: RUN_ID,
-    flow_slug: FLOW_SLUG,
-    status: FlowRunStatus.Queued,
-    input: { foo: 'bar' },
-  };
+  // Use a fixture from fixtures.ts
+  const queuedEvent = broadcastRunQueued;
```
`315-318`: Consider using a fixture for the `newCompletedEvent`

For consistency with other event fixtures, consider moving this modified completed event to the fixtures file.

```diff
-  // Create a new completed event with different output
-  const newCompletedEvent = {
-    ...broadcastRunCompleted,
-    output: { result: 'different result' },
-  };
+  // Use a fixture from fixtures.ts
+  const newCompletedEvent = broadcastRunCompletedDifferentResult;
```
`36-36`: Type casting as `any` could hide errors

While appropriate for testing, using `as any` bypasses TypeScript's type checking. Consider using a more specific type if possible.

```diff
-  input: { foo: 'bar' } as any,
+  input: { foo: 'bar' } as Record<string, string>,
```
`193-197`: Consider using a type for the `incompleteEvent`

Instead of casting to `any` on line 200, consider defining a proper partial type for the incomplete event to maintain type safety.

```diff
   // Incomplete failed event without timestamps
+  type PartialRunEvent = Pick<BroadcastRunFailedEvent, 'run_id' | 'status' | 'error_message'>;
   const incompleteEvent = {
     run_id: RUN_ID,
     status: FlowRunStatus.Failed,
     error_message: 'Something went wrong'
   };

   // Should still update the state correctly
-  run.updateState(incompleteEvent as any);
+  run.updateState(incompleteEvent as PartialRunEvent);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (5)
pkgs/client/__tests__/FlowRun.test.ts
(1 hunks)pkgs/client/__tests__/FlowStep.test.ts
(1 hunks)pkgs/client/__tests__/fixtures.ts
(1 hunks)pkgs/client/__tests__/mocks/index.ts
(1 hunks)pkgs/client/src/lib/FlowStep.ts
(1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
pkgs/client/__tests__/FlowStep.test.ts (3)
pkgs/client/__tests__/mocks/index.ts (1)
resetMocks
(251-261)pkgs/client/src/lib/FlowStep.ts (1)
FlowStep
(15-251)pkgs/client/__tests__/fixtures.ts (7)
RUN_ID
(21-21)STEP_SLUG
(23-23)broadcastStepStarted
(198-206)broadcastStepCompleted
(211-218)broadcastStepFailed
(223-230)ANOTHER_STEP_SLUG
(24-24)advanceAndFlush
(268-280)
pkgs/client/__tests__/FlowRun.test.ts (3)
pkgs/client/__tests__/mocks/index.ts (1)
resetMocks
(251-261)pkgs/client/__tests__/fixtures.ts (9)
RUN_ID
(21-21)FLOW_SLUG
(22-22)broadcastRunStarted
(161-169)broadcastRunCompleted
(174-181)broadcastRunFailed
(186-193)STEP_SLUG
(23-23)broadcastStepStarted
(198-206)advanceAndFlush
(268-280)ANOTHER_STEP_SLUG
(24-24)pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
unsubscribe
(265-267)
pkgs/client/__tests__/mocks/index.ts (2)
pkgs/client/src/lib/types.ts (2)
BroadcastRunEvent
(260-263)BroadcastStepEvent
(300-303)pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
payload
(39-53)
pkgs/client/src/lib/FlowStep.ts (4)
pkgs/client/src/index.ts (2)
FlowStep
(9-9)Unsubscribe
(12-12)pkgs/dsl/src/dsl.ts (3)
AnyFlow
(50-50)ExtractFlowSteps
(88-94)StepOutput
(134-139)pkgs/client/src/lib/types.ts (5)
FlowStepBase
(387-389)StepEvent
(146-152)FlowStepState
(325-335)StepEvents
(214-220)Unsubscribe
(225-225)pkgs/client/src/lib/FlowRun.ts (1)
currentStatus
(361-372)
🪛 Biome (1.9.4)
pkgs/client/src/lib/FlowStep.ts
[error] 222-222: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Unsafe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
pkgs/client/__tests__/fixtures.ts (2)
`268-280`: The `advanceAndFlush` function now has the correct return type

The implementation no longer has a return type mismatch and properly returns a Promise. This addresses the past review comment, though with a different implementation approach.
`111-111`: ❓ Verification inconclusive

Add missing property in the interface

The `completedStepState` includes an `output` property, but this property isn't present in the other step state samples. Consider adding this property to the `StepStateRow` interface if it's required, or make it optional in the samples.
🏁 Script executed:
#!/bin/bash # Check if the StepStateRow interface includes an output property rg -A 5 "interface StepStateRow" --type=typescriptLength of output: 87
🏁 Script executed:
#!/bin/bash # Search for the StepStateRow interface definition in TypeScript files rg -n -A5 "interface StepStateRow" --glob '*.ts' --glob '*.tsx'Length of output: 65
🏁 Script executed:
#!/bin/bash # Broad search for StepStateRow occurrences in the repository rg -n "StepStateRow"Length of output: 2692
🏁 Script executed:
#!/bin/bash # Inspect the fixture around completedStepState to see how output is used sed -n '90,130p' pkgs/client/__tests__/fixtures.ts # Locate the database Row definition for step_states in core types rg -n "step_states" -A20 pkgs/core/src/types.tsLength of output: 1303
🏁 Script executed:
#!/bin/bash # Show detailed definition of the step_states table Row type rg -n "'step_states':" -A50 pkgs/core/src/types.tsLength of output: 58
🏁 Script executed:
#!/bin/bash # Locate the `step_states` table definition in the generated Database type (double-quoted key) rg -n '"step_states":' -A50 pkgs/core/src/types.ts # Check if `output` appears anywhere in the core types file rg -n '"output":' -A10 pkgs/core/src/types.tsLength of output: 100
🏁 Script executed:
#!/bin/bash # Locate the Database interface block to inspect table definitions rg -n "interface Database" -A100 pkgs/core/src/types.ts # Check if `output:` appears in any Row definitions rg -n "output:" -A5 pkgs/core/src/types.tsLength of output: 100
The definition of the `step_states` table's `Row` type wasn't found in the generated core types, so I can't confirm whether `output` is actually part of `StepStateRow`. Please manually verify in `pkgs/core/src/types.ts` (or your database schema) whether the `step_states` row includes an `output` column. Based on what you find:

- If `output` is defined in the table, update the `StepStateRow` alias/interface to include it, and ensure all fixtures reflect that (mark it optional or nullable if appropriate).
- If `output` isn't part of the table, remove or optionalize `output` in the `completedStepState` fixture.

pkgs/client/__tests__/FlowRun.test.ts (3)
`1-815`: Well-structured and comprehensive test suite!

This test file thoroughly covers the FlowRun class functionality with clear test organization and good test patterns. The suite validates:
- Initialization and state management
- Event-driven state transitions
- Status precedence enforcement
- Event filtering by run ID
- Asynchronous status waiting with timeout and abort support
- Step instance caching and state updates
- Lifecycle disposal (auto and manual)
These tests provide excellent coverage for the FlowRun's critical behaviors.
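The status-precedence rule these tests exercise can be sketched as a small comparison. The status names and numeric ranks below are assumptions inferred from the statuses mentioned in this review, not pgflow's actual implementation:

```typescript
// Each status gets a rank; terminal states share the highest rank.
const STATUS_RANK: Record<string, number> = {
  queued: 0,
  started: 1,
  completed: 2,
  failed: 2,
};

// An incoming event is applied only if it moves the run strictly "forward",
// and never once the run is already in a terminal state.
function shouldApply(current: string, incoming: string): boolean {
  if (STATUS_RANK[current] >= 2) return false; // terminal states are protected
  return STATUS_RANK[incoming] > STATUS_RANK[current];
}
```

With a guard like this, an out-of-order broadcast (e.g. a late `run:started` arriving after `run:completed`) becomes a no-op instead of a state regression.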
`424-424`: Good use of `advanceAndFlush` helper

Great use of the custom test helper to manage timer-based testing. This approach makes the asynchronous tests more reliable by ensuring timers and microtasks are properly processed.
`750-788`: Excellent resource cleanup testing

This test thoroughly verifies that dispose() properly cleans up all resources, including event listeners and step instances. This is important to prevent memory leaks, especially in long-running applications.
```ts
    const timeoutMs = options?.timeoutMs ?? 5 * 60 * 1000; // Default 5 minutes
    const { signal } = options || {};

    // If we already have the target status, resolve immediately
    if (this.status === targetStatus) {
      return Promise.resolve(this);
    }

    // Otherwise, wait for the status to change
    return new Promise((resolve, reject) => {
      let timeoutId: NodeJS.Timeout | undefined;

      // Set up timeout if provided
      if (timeoutMs > 0) {
        timeoutId = setTimeout(() => {
          unbind();
          reject(new Error(`Timeout waiting for step ${this.step_slug} to reach status '${targetStatus}'`));
        }, timeoutMs);
```
Fix TDZ crash & improve cross-platform timeout typing in `waitForStatus`

`abortHandler` references `unbind` before that identifier is defined (temporal dead zone). If an already-aborted `AbortSignal` is passed, the handler fires synchronously → `ReferenceError: Cannot access 'unbind' before initialization`, and the promise is never settled.
While touching this block, we can also:

- initialise `unbind` early to remove the TDZ
- use `ReturnType<typeof setTimeout>` instead of `NodeJS.Timeout` so the code works in browsers as well
- ensure we always detach the abort listener when the promise settles (the timeout path currently leaks the listener).
```diff
-    let timeoutId: NodeJS.Timeout | undefined;
+    // Declare early so all inner closures can safely reference them
+    let unbind: Unsubscribe = () => {};
+    let timeoutId: ReturnType<typeof setTimeout> | undefined;
+    let abortCleanup: (() => void) | undefined;

-    // Set up timeout if provided
+    // Set up timeout if provided
     if (timeoutMs > 0) {
       timeoutId = setTimeout(() => {
         unbind();
+        if (abortCleanup) abortCleanup();
         reject(new Error(`Timeout waiting for step ${this.step_slug} to reach status '${targetStatus}'`));
       }, timeoutMs);
     }

-    // Set up abort signal if provided
-    let abortCleanup: (() => void) | undefined;
+    // Set up abort signal if provided
     if (signal) {
       const abortHandler = () => {
         if (timeoutId) clearTimeout(timeoutId);
         unbind();
         reject(new Error(`Aborted waiting for step ${this.step_slug} to reach status '${targetStatus}'`));
       };
```
Also applies to: 139-155
🤖 Prompt for AI Agents
In pkgs/client/src/lib/FlowStep.ts around lines 118 to 135, fix the Temporal
Dead Zone (TDZ) issue by declaring the unbind function before it is referenced
in the abortHandler to avoid ReferenceError. Replace the timeoutId type from
NodeJS.Timeout to ReturnType<typeof setTimeout> for better cross-platform
compatibility. Also, ensure the abort event listener is always removed when the
promise settles, including in the timeout rejection path, to prevent listener
leaks. Apply the same fixes to lines 139-155.
```ts
      const unbind = this.on('*', (event) => {
        if (event.status === targetStatus) {
          if (timeoutId) clearTimeout(timeoutId);
          if (abortCleanup) abortCleanup();
          unbind();
          resolve(this);
        }
      });
```
🛠️ Refactor suggestion
Wildcard `'*'` event is not represented in `StepEvents` – breaks type-safety

`on('*', …)` and `emit('*', …)` compile only because `'*'` is treated as `any`. Downstream consumers using `--noImplicitAny` will get type errors.

Recommended fix (quick):

- Extend the event map in `types.ts`:

```diff
 export type StepEvents<…> = {
   [K in keyof StepEventData<…>]:
     (event: StepEventData<…>[K]) => void;
+  '*': (event: StepEvent<…>) => void;
 };
```

- Update the nanoevents invocation to include the widened map.
Long-term, consider a dedicated wildcard channel to avoid diluting the strongly-typed map.
Also applies to: 227-228
🤖 Prompt for AI Agents
In pkgs/client/src/lib/FlowStep.ts around lines 154 to 161, the use of the
wildcard event '*' in the on() method is not included in the StepEvents type
map, causing type-safety issues and errors with --noImplicitAny. To fix this,
extend the StepEvents type map in types.ts to explicitly include the '*' event
with an appropriate type, then update the nanoevents event emitter/handler
initialization to use this extended map. Also apply the same fix to lines
227-228 to maintain consistency and type safety.
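A minimal sketch of the widened map is below. The event names and payload shapes are invented for illustration; the real `StepEvents` generics are more involved, and the tiny emitter merely stands in for nanoevents:

```typescript
// Hypothetical per-event payloads, standing in for StepEventData<...>.
type StepEventData = {
  started: { step_slug: string; started_at: string };
  completed: { step_slug: string; output: unknown };
  failed: { step_slug: string; error_message: string };
};

type StepEvent = StepEventData[keyof StepEventData];

// The '*' key is now part of the map, so on('*', …) type-checks
// under --noImplicitAny instead of silently widening to any.
type StepEvents = {
  [K in keyof StepEventData]: (event: StepEventData[K]) => void;
} & {
  '*': (event: StepEvent) => void;
};

// Tiny emitter keyed by that map (a stand-in for nanoevents).
function createEmitter() {
  const handlers = new Map<string, Array<(e: unknown) => void>>();
  return {
    on<K extends keyof StepEvents>(key: K, cb: StepEvents[K]): void {
      const list = handlers.get(key) ?? [];
      list.push(cb as (e: unknown) => void);
      handlers.set(key, list);
    },
    emit<K extends keyof StepEvents>(key: K, event: Parameters<StepEvents[K]>[0]): void {
      for (const cb of handlers.get(key) ?? []) cb(event);
      // Fan every concrete event out to wildcard listeners as well.
      if (key !== '*') for (const cb of handlers.get('*') ?? []) cb(event);
    },
  };
}
```

Because `'*'` is a named key, a wildcard handler's `event` parameter is typed as the union of all payloads rather than `any`.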
…subscription, event routing, and unsubscription behaviors

- Added new test file with detailed coverage of channel naming, handler registration, and event routing
- Included tests for multiple run subscriptions, system event handling, and callback invocation
- Implemented unsubscribe tests to ensure proper channel cleanup and safe multiple unsubscriptions
- Enhanced existing test suite with additional scenarios for robustness and correctness
…ove step materialization logic

- Added a test case to verify that steps not previously accessed are created on demand when receiving step:started events
- Updated PgflowClient to always materialize steps before updating their state to prevent event loss
- Minor cleanup in test files for consistency and correctness
Actionable comments posted: 1
♻️ Duplicate comments (9)
pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (4)
`46-60`: ⚠️ Potential issue — Runtime type guard missing – malformed messages could crash client.

`#handleBroadcastMessage` assumes that `payload.event` is always a string starting with `run:` or `step:`. If the backend evolves or a mis-configured publisher emits a different event, the adapter will silently ignore it, but `payload.payload` is still cast and forwarded without validation – risking `undefined` fields downstream.

```diff
 #handleBroadcastMessage(payload: {
   event: string;
   payload: BroadcastRunEvent | BroadcastStepEvent;
 }): void {
   const event = payload.event as string;
   const eventData = payload.payload;
-  if (event.startsWith('run:')) {
+  if (typeof event === 'string' && event.startsWith('run:')) {
     // Handle run events
     this.#emitter.emit('runEvent', eventData as BroadcastRunEvent);
-  } else if (event.startsWith('step:')) {
+  } else if (typeof event === 'string' && event.startsWith('step:')) {
     // Handle step events
     this.#emitter.emit('stepEvent', eventData as BroadcastStepEvent);
+  } else {
+    console.warn(`Received unknown event type: ${event}`);
   }
 }
```
148-176
:⚠️ Potential issueAdd runtime validation before type casting in
#refreshStateFromSnapshot
.The method uses unsafe type casts (
as unknown as
) without validating the data structure, which could lead to runtime errors if the data doesn't match the expected types.#refreshStateFromSnapshot( run_id: string, state: { run: RunRow; steps: StepStateRow[] } | null ): void { if (!state || !state.run) return; + // Validate run data structure before emitting + if (!state.run.run_id || !state.run.flow_slug || !state.run.status) { + console.error('Invalid run data in state snapshot:', state.run); + return; + } + // Create proper run event with correct event_type const runEvent: BroadcastRunEvent = { event_type: `run:${state.run.status}`, ...state.run } as unknown as BroadcastRunEvent; // Emit run event this.#emitter.emit('runEvent', runEvent); // Emit events for each step state if (state.steps && Array.isArray(state.steps)) { for (const step of state.steps) { + // Validate step data structure before emitting + if (!step.run_id || !step.step_slug || !step.status) { + console.error('Invalid step data in state snapshot:', step); + continue; + } // Create proper step event with correct event_type const stepEvent: BroadcastStepEvent = { event_type: `step:${step.status}`, ...step } as unknown as BroadcastStepEvent; // Emit step event this.#emitter.emit('stepEvent', stepEvent); } } }
301-314
:⚠️ Potential issueAdd data validation before type casting in
getRunWithStates
.The method casts the returned data to
{ run: RunRow; steps: StepStateRow[] }
without validating its structure, which could lead to runtime errors.async getRunWithStates(run_id: string): Promise<{ run: RunRow; steps: StepStateRow[]; }> { // Call the RPC function which returns a JSONB object const { data, error } = await this.#supabase .schema('pgflow') .rpc('get_run_with_states', { run_id }); if (error) throw error; if (!data) throw new Error(`No data returned for run ${run_id}`); + // Validate data structure + if (!data.run || typeof data.run !== 'object') { + throw new Error(`Invalid run data returned for run ${run_id}`); + } + + if (!Array.isArray(data.steps)) { + throw new Error(`Invalid steps data returned for run ${run_id}`); + } + return data as { run: RunRow; steps: StepStateRow[] }; }
337-337
: 🛠️ Refactor suggestionAdd method to dispose all resources.
The adapter currently allows unsubscribing from individual runs, but doesn't provide a way to clean up all resources when the adapter is no longer needed.
} + + /** + * Cleans up all resources by unsubscribing from all channels + * and clearing event emitters + */ + disposeAll(): void { + // Unsubscribe from all channels + for (const [run_id, channel] of this.#channels.entries()) { + try { + channel.unsubscribe(); + } catch (e) { + console.warn(`Error unsubscribing from channel for run ${run_id}:`, e); + } + } + + // Clear the channels map + this.#channels.clear(); + + // Clear unsubscribe functions map + this.#unsubscribeFunctions.clear(); + } }pkgs/client/src/lib/PgflowClient.ts (5)
37-43
: Capture and dispose adapter-level unsubscribers to avoid memory leaksThe event listener for run events is never removed. If the entire
PgflowClient
instance is disposed (for example in tests or single-page navigation), the callback will remain alive inside the adapter's emitter and keep references to the client's closed state map.Implement a private field to store the unsubscribe handle and call it in a new
close()
method or insidedisposeAll()
:+ #unsubRun?: Unsubscribe; + #unsubStep?: Unsubscribe; constructor(supabaseClient: SupabaseClient) { // ... - this.#realtimeAdapter.onRunEvent((event) => { + this.#unsubRun = this.#realtimeAdapter.onRunEvent((event) => { // ... } disposeAll(): void { for (const runId of this.#runs.keys()) { this.dispose(runId); } + this.#unsubRun?.(); + this.#unsubStep?.(); }
45-53
: Capture and dispose adapter-level unsubscribers to avoid memory leaksSimilar to the run event listener, the step event listener is never removed, which can cause memory leaks.
Apply this fix:
constructor(supabaseClient: SupabaseClient) { // ... - this.#realtimeAdapter.onStepEvent((event) => { + this.#unsubStep = this.#realtimeAdapter.onStepEvent((event) => { // ... }
109-116
:updateState
calls miss theevent_type
field expected by listenersThe snapshot forwarded from the
start_flow_with_states
RPC omits the mandatoryevent_type
property, which will:
- Violate the type contract (TypeScript may complain in strict mode)
- Break downstream consumers that rely on the event_type for filtering or UI updates
Apply this fix:
if (data.run) { run.updateState({ + event_type: 'run:snapshot', ...data.run, - status: data.run.status, - run_id: data.run.run_id, // Correctly use run_id instead of id }); }
119-128
:updateState
calls miss theevent_type
field expected by listenersThe step snapshots also lack the
event_type
field, which is mandatory according to the type contract.Apply this fix:
if (data.steps && Array.isArray(data.steps)) { for (const stepState of data.steps) { run.step(stepState.step_slug).updateState({ + event_type: `step:${stepState.step_slug}:snapshot`, ...stepState, - status: stepState.status, - run_id: id, - step_slug: stepState.step_slug, }); } }
155-159
: 🛠️ Refactor suggestionAdd close method to properly clean up all resources
The
disposeAll
method cleans up individual runs but doesn't properly clean up global event listeners from the constructor.Add a dedicated
close
method or updatedisposeAll
to clean up everything:disposeAll(): void { for (const runId of this.#runs.keys()) { this.dispose(runId); } + // Clean up global listeners + this.#unsubRun?.(); + this.#unsubStep?.(); } + /** + * Clean up all resources including global listeners and all run subscriptions + */ + close(): void { + this.disposeAll(); + }
🧹 Nitpick comments (5)
pkgs/client/__tests__/SupabaseBroadcastAdapter.simple.test.ts (1)
30-35
: ReplaceFunction
type with a more specific type signature.The
Function
type is overly broad and can lead to type safety issues. It's better to use a more specific function signature.- function createImmediateScheduler() { - return (fn: Function) => { + function createImmediateScheduler() { + return (fn: () => void) => { fn(); return 0 as unknown as NodeJS.Timeout; }; }🧰 Tools
🪛 Biome (1.9.4)
[error] 31-31: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
69-83
: Improve error handling in#handleChannelError
with error classification.The current error handling logs the error and attempts reconnection regardless of the error type. Consider adding error classification to handle different error types appropriately.
async #handleChannelError( run_id: string, channelName: string, channel: RealtimeChannel, error: unknown ): Promise<void> { console.error(`Channel ${channelName} error:`, error); + // Classify error types for better handling + const isTransient = this.#isTransientError(error); + + // Only attempt reconnection for transient errors + if (isTransient) { // Schedule reconnection attempt this.#schedule(async () => { if (this.#channels.has(run_id)) { await this.#reconnectChannel(run_id, channelName); } }, this.#reconnectionDelay); + } else { + console.warn(`Non-transient error for channel ${channelName}, not attempting reconnection`); + } } + /** + * Determines if an error is likely transient and worth retrying + * @param error - The error to classify + * @returns true if the error is likely transient + */ + #isTransientError(error: unknown): boolean { + // Implement error classification logic + // For example, network errors are typically transient + if (error instanceof Error) { + const errorMessage = error.message.toLowerCase(); + const transientPhrases = ['network', 'timeout', 'connection', 'temporarily']; + return transientPhrases.some(phrase => errorMessage.includes(phrase)); + } + // By default, assume errors are transient and worth retrying + return true; + }pkgs/client/src/lib/PgflowClient.ts (3)
222-230
: Consider using proper type definitions instead of type assertionsMultiple type assertions are used to handle the database schema mismatch with TypeScript types. This indicates that the types should be updated to match the actual data structure.
Instead of type assertions, consider updating the type definitions to match the actual database schema:
- const runData = run as unknown as (RunRow & { error_message?: string }); + // Update RunRow type definition to include error_message field + type ExtendedRunRow = RunRow & { error_message?: string }; + const runData = run as ExtendedRunRow;And for the output property:
- output: runData.output as any, + output: runData.output as unknown as TSpecificFlow extends { output: infer O } ? O : unknown,
293-295
: Update type definitions to match database schemaThe comment indicates that
StepStateRow
doesn't includeoutput
in its type, which results in another type assertion.Consider updating the
StepStateRow
type to include all fields present in the actual database records:- // StepStateRow doesn't include output in its type, but it's typically present in the data - output: (stepState as any).output || null, + output: stepState.output || null,This requires updating the
StepStateRow
type definition in the imported package.
252-256
: Improve type safety by defining specific event interfacesUsing a type assertion with
as any
bypasses TypeScript's type checking and can lead to runtime errors.Create a proper typed interface for the step events:
- flowRun.step(stepState.step_slug).updateState(stepEvent as any); + interface StepEvent { + run_id: string; + step_slug: string; + status: FlowStepStatus; + started_at?: string; + completed_at?: string; + failed_at?: string; + output?: unknown; + error_message?: string; + } + flowRun.step(stepState.step_slug).updateState(stepEvent as StepEvent);Or use a more precise type assertion based on the step status.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
pkgs/client/__tests__/PgflowClient.test.ts
(1 hunks)pkgs/client/__tests__/SupabaseBroadcastAdapter.simple.test.ts
(1 hunks)pkgs/client/__tests__/SupabaseBroadcastAdapter.test.ts
(1 hunks)pkgs/client/src/lib/PgflowClient.ts
(1 hunks)pkgs/client/src/lib/SupabaseBroadcastAdapter.ts
(1 hunks)pkgs/client/src/lib/client.spec.ts
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkgs/client/src/lib/client.spec.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkgs/client/__tests__/SupabaseBroadcastAdapter.simple.test.ts (3)
pkgs/client/__tests__/mocks/index.ts (1)
mockSupabase
(33-116)pkgs/client/src/lib/SupabaseBroadcastAdapter.ts (1)
SupabaseBroadcastAdapter
(20-337)pkgs/client/__tests__/fixtures.ts (5)
RUN_ID
(21-21)FLOW_SLUG
(22-22)STEP_SLUG
(23-23)startedRunSnapshot
(31-42)startedStepState
(93-101)
🪛 Biome (1.9.4)
pkgs/client/__tests__/SupabaseBroadcastAdapter.simple.test.ts
[error] 31-31: Don't use 'Function' as a type.
Prefer explicitly define the function shape. This type accepts any function-like value, which can be a common source of bugs.
(lint/complexity/noBannedTypes)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: main
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
pkgs/client/__tests__/SupabaseBroadcastAdapter.simple.test.ts (1)
`12-15`: Well-structured test suite that follows best practices.

This test suite has a good organization with clear test descriptions, focused assertions, and proper setup/teardown. It effectively isolates the core functionality of the adapter without getting caught up in implementation details.
Also applies to: 16-174
pkgs/client/__tests__/PgflowClient.test.ts (1)
`1-302`: Comprehensive, well-structured test suite with good mocking patterns.

The tests cover all main client functionalities including initialization, flow starting, error handling, caching, event handling, and resource disposal. Good use of mocks and spies to verify interactions without relying on implementation details.
pkgs/client/__tests__/SupabaseBroadcastAdapter.test.ts (1)
`1-585`: Thorough test coverage with well-organized test cases.

This comprehensive test suite covers all key adapter functionalities with proper organization into logical test groups. The tests validate subscription mechanics, event routing, unsubscription, and data access methods thoroughly.
```ts
      console.error('Error getting run:', error);
      return null;
```
🛠️ Refactor suggestion
Improve error handling in getRun method
The current implementation logs the error to the console and returns null, but there's no way to distinguish between a "not found" response and a failure.
Consider throwing the error or returning a more structured result:
```diff
-      console.error('Error getting run:', error);
-      return null;
+      // Option 1: Throw with more context
+      throw new Error(`Failed to retrieve run ${run_id}: ${error.message || error}`);
+
+      // Option 2: Return structured result
+      return {
+        error: error instanceof Error ? error : new Error(String(error)),
+        run: null
+      };
```
This would allow consumers to handle errors more gracefully.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In pkgs/client/src/lib/PgflowClient.ts around lines 262 to 263, the getRun
method currently logs errors to the console and returns null, which does not
differentiate between a "not found" case and an actual failure. Modify the
method to either throw the caught error after logging it or return a structured
result object that clearly indicates success, not found, or error states. This
change will enable consumers of getRun to handle errors and missing data more
effectively.
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores