[#1049] feat: async and improved initialization #1050
Conversation
Yep, good call. Made an issue. Not something to address in this PR I'd say, since it is already giganormous. |
Pull Request Overview
Parallelizes initial sync of transactions to improve performance, separates email and post credits for users, and adds better batching/concurrency for sync and pricing operations. The PR also fixes table data handling and creates proper database relations.
- Implements async batching for transaction syncing, pricing connections, and trigger execution
- Separates `emailCredits` from `postCredits` in user profiles with tracking and limits
- Creates an actual database relation between `PayButton` and `UserProfile` models
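The async batching mentioned above centers on a `runAsyncInBatches` utility in `utils/index.ts`. A minimal sketch of how such a helper might work is below; the signature and logging format are assumptions based on the call sites quoted in this review, not the actual implementation.

```typescript
// Hedged sketch of a batched async runner (the real one lives in utils/index.ts).
// Runs `tasks` in waves of at most `concurrency`, awaiting each wave before
// starting the next, and logs progress per wave.
async function runAsyncInBatches (
  taskName: string,
  tasks: Array<() => Promise<void>>,
  concurrency: number
): Promise<void> {
  for (let i = 0; i < tasks.length; i += concurrency) {
    const wave = tasks.slice(i, i + concurrency)
    console.log(`[${taskName}]: running ${wave.length} tasks (${i + wave.length}/${tasks.length})`)
    await Promise.all(wave.map(async run => await run()))
  }
}
```

One design note: a wave-based runner like this waits for the slowest task in each wave; a work-stealing pool would keep the concurrency slot full at all times, at the cost of more bookkeeping.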
Reviewed Changes
Copilot reviewed 20 out of 23 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| utils/index.ts | Adds async batch processing utility function |
| services/triggerService.ts | Complete rewrite for batched trigger execution with separate email/post credit tracking |
| services/transactionService.ts | Optimizes price connection handling with chunked operations |
| services/priceService.ts | Improves error handling and logging for price API calls |
| services/chronikService.ts | Implements parallel address syncing and batched transaction processing |
| services/addressService.ts | Adds bulk address sync update functionality |
| prisma-local/schema.prisma | Adds foreign key relation and separates post credits from email credits |
| components/TableContainer/TableContainerGetter.tsx | Fixes undefined data handling in tables |
| components/Account/ProDisplay.tsx | Updates UI to show correct post credit limits |
| constants/index.ts | Adds new configuration constants for batching and concurrency |
```diff
   console.error(`Problem getting price of ${networkTicker} (attempt ${attempt}) ->\n${error as string}: ${error.response.statusText as string} `)
 }

 if (attempt < PRICE_API_MAX_RETRIES) {
   return await getAllPricesByNetworkTicker(networkTicker, attempt + 1)
 } else {
-  throw new Error(`Price file could not be created after ${PRICE_API_MAX_RETRIES} retries`)
+  console.error(`Price file could not be created after ${PRICE_API_MAX_RETRIES} retries`)
+  throw new Error(lastError.response.statusText)
```
Copilot AI (Sep 3, 2025):
Potential null dereference when accessing error.response.statusText. The error object may not have a response property or the response may not have a statusText property.
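One defensive way to address this, sketched below, is a small extractor that never assumes the Axios error shape; the function name and fallback string are hypothetical, not part of the PR.

```typescript
// Hedged sketch: pull a human-readable message out of an unknown error
// without assuming an Axios shape (response/statusText may be absent).
function errorStatusText (err: unknown): string {
  if (typeof err === 'object' && err !== null) {
    const resp = (err as { response?: { statusText?: unknown } }).response
    if (typeof resp?.statusText === 'string') return resp.statusText
    const msg = (err as { message?: unknown }).message
    if (typeof msg === 'string') return msg
  }
  return 'Unknown error'
}
```

This mirrors the `axios.isAxiosError` + optional-chaining pattern CodeRabbit suggests further down, but works without importing axios.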
Copilot AI (Sep 3, 2025):
Potential null dereference when accessing lastError.response.statusText. The lastError object may not have a response property or the response may not have a statusText property.
```ts
)
console.log(`[TRIGGER ${currency}]: executing emails with concurrency=${TRIGGER_EMAIL_CONCURRENCY}`)
await runAsyncInBatches(
  `${currency} POST TRIGGERS`,
```
Copilot AI (Sep 3, 2025):
Incorrect task name used for email triggers. Should be 'EMAIL TRIGGERS' instead of 'POST TRIGGERS' for the email batch execution.
```diff
-  `${currency} POST TRIGGERS`,
+  `${currency} EMAIL TRIGGERS`,
```
```ts
try {
  const params = buildPostParams(trigger, tx, currency, valueStr, address)
  const parsed = parseTriggerPostData({ userId: trigger.paybutton.user.id, postData: trigger.postData, postDataParameters: params })
  const resp = await axios.post(trigger.postURL, parsed, { timeout: config.triggerPOSTTimeout })
  // HTTP 2xx counts as accepted
  accepted = resp.status >= 200 && resp.status < 300
  data = { postedData: parsed, postedURL: trigger.postURL, responseData: resp.data }
} catch (err: any) {
```
Copilot AI (Sep 3, 2025):
The postData JSON validation error handling has been removed from the original implementation. If parseTriggerPostData throws an error, it should be caught and logged appropriately as it was in the original code.
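One way to restore that handling is to separate the parse step from the POST so a bad `postData` template is logged distinctly and never consumes a network attempt. The sketch below uses hypothetical function names and callback shapes; it only illustrates the split, not the PR's actual code.

```typescript
// Hedged sketch: isolate parse failures (invalid postData JSON) from
// network failures so each can be logged/handled on its own.
async function postTriggerSafely (
  parse: () => unknown,
  post: (payload: unknown) => Promise<{ status: number }>
): Promise<{ accepted: boolean, parseError?: string }> {
  let parsed: unknown
  try {
    parsed = parse()
  } catch (err) {
    // Invalid postData template: surface the error without attempting a POST.
    return { accepted: false, parseError: err instanceof Error ? err.message : String(err) }
  }
  const resp = await post(parsed)
  // HTTP 2xx counts as accepted, matching the snippet above.
  return { accepted: resp.status >= 200 && resp.status < 300 }
}
```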
services/chronikService.ts (outdated)
```ts
const bd = this.broadcastIncomingTx(tx.address.address, tx, []) // inputAddresses left empty in bulk
triggerBatch.push(bd)
```
Copilot AI (Sep 3, 2025):
[nitpick] The comment indicates inputAddresses is intentionally left empty in bulk operations, but this may cause issues if triggers depend on input address information. Consider if this is the intended behavior or if input addresses should be preserved.
Looks like a few minor things copilot found.
```diff
   "secret": "06d1b75746d5be25f9ebb345a887c08698ebe0fe605b51c22c0c04fe57150f24"
 }`,
-postURL: 'http://192.168.128.1:3003/paymentReceived'
+postURL: 'https://httpbin.org/post'
```
We talked about how we do want to be careful not to accidentally hit their API with a ton of requests if we have a bunch of txs to process.
No action required necessarily but just something we should think about.
I'm good merging after going through the above ^.
When deployed in prod, it does seem to block the UI while doing [PARALLEL FETCHING].
We should be able to get the # of addresses at the start and show a % complete right?
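The suggested progress reporting could be as simple as the sketch below; the message format and function name are hypothetical, it only assumes the total address count is known before the parallel fetch starts.

```typescript
// Hypothetical progress reporter for the parallel address sync:
// given the total count up front, format percent complete as addresses finish.
function makeProgress (total: number): (done: number) => string {
  return (done: number) => {
    const pct = total === 0 ? 100 : Math.floor((done / total) * 100)
    return `[PARALLEL FETCHING]: ${done}/${total} addresses (${pct}%)`
  }
}
```

The returned closure could be invoked after each address completes and the string pushed to a log or a websocket channel for the UI.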
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds POST-credit limits to docs and config/types, introduces batching/concurrency constants and a runAsyncInBatches utility, refactors Chronik sync to parallel batched fetching and batched DB commits, rewrites trigger execution into a credit-aware batch pipeline, optimizes price-transaction linking, and updates Prisma schema/migrations and seeds.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Scheduler as Sync Orchestrator
  participant MBC as MultiBlockchainClient
  participant CBC as ChronikBlockchainClient
  participant Chronik as Chronik API
  participant DB as Database
  participant Trig as Trigger Service
  participant FS as File Store
  Scheduler->>MBC: startSync()
  MBC->>CBC: syncAddresses(addresses[])
  rect rgba(200,230,255,0.18)
    note right of CBC: Parallel burst fetching & buffering
    loop bursts
      CBC->>Chronik: fetch page (CHRONIK_FETCH_N_TXS_PER_PAGE)
      Chronik-->>CBC: txs[]
      CBC->>CBC: buffer & aggregate (TX_EMIT_BATCH_SIZE)
      alt batch ready
        CBC->>DB: commit batched txs (DB_COMMIT_BATCH_SIZE)
        DB-->>CBC: committed
        CBC->>FS: appendTxsToFile(txs)
        CBC->>Trig: executeTriggersBatch(broadcasts, networkId)
      end
    end
  end
  CBC->>DB: updateManyLastSynced(addresses)
  MBC-->>Scheduler: done
```
```mermaid
sequenceDiagram
  autonumber
  actor Chronik as Chronik Sync
  participant Trig as Trigger Service
  participant DB as Database
  participant Mail as Email Provider
  participant HTTP as Post Endpoint
  Chronik->>Trig: executeTriggersBatch(broadcasts[], networkId)
  Trig->>DB: fetchTriggersGroupedByAddress(addresses)
  Trig->>Trig: build per-user task queues & credit buckets
  par Concurrent batches
    rect rgba(220,255,220,0.12)
      note right of Trig: POST tasks (TRIGGER_POST_CONCURRENCY)
      Trig->>HTTP: POST payloads
      HTTP-->>Trig: responses
    end
    rect rgba(255,240,200,0.12)
      note right of Trig: Email tasks (TRIGGER_EMAIL_CONCURRENCY)
      Trig->>Mail: send emails
      Mail-->>Trig: results
    end
  and
    Trig->>Trig: accumulate logs & out-of-credits notes
  end
  Trig->>DB: createMany(triggerLogs, TRIGGER_LOG_BATCH_SIZE)
  Trig->>DB: decrement user credits (transaction)
  Trig-->>Chronik: done
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
services/priceService.ts (2)
277-283: UTC mismatch when deriving `day` from `timestamp`

Use `moment.unix(...).utc()`; the current code uses local time and can select the wrong API day.

```diff
-async function renewPricesForTimestamp (timestamp: number): Promise<void> {
-  const xecPrice = await getPriceForDayAndNetworkTicker(moment(timestamp * 1000), NETWORK_TICKERS.ecash)
+async function renewPricesForTimestamp (timestamp: number): Promise<void> {
+  const day = moment.unix(timestamp).utc()
+  const xecPrice = await getPriceForDayAndNetworkTicker(day, NETWORK_TICKERS.ecash)
   await upsertPricesForNetworkId(xecPrice, XEC_NETWORK_ID, timestamp)
-  const bchPrice = await getPriceForDayAndNetworkTicker(moment(timestamp * 1000), NETWORK_TICKERS.bitcoincash)
+  const bchPrice = await getPriceForDayAndNetworkTicker(day, NETWORK_TICKERS.bitcoincash)
   await upsertPricesForNetworkId(bchPrice, BCH_NETWORK_ID, timestamp)
 }
```
193-201: Use UTC for date formatting and await upsert calls

- In `syncCurrentPrices` (services/priceService.ts:193–201), replace `moment()` with `moment.utc()` and change the two `void upsertCurrentPricesForNetworkId…` calls to `await` (e.g. `await Promise.all([upsertCurrentPricesForNetworkId(...), …])`) to prevent unhandled promise rejections (techoverflow.net, stackoverflow.com)
- In `renewPricesForTimestamp` (services/priceService.ts:276–283), build the day in UTC (e.g. `moment.unix(timestamp).utc()`) so the date sent to the API is always correct (techoverflow.net)

tests/unittests/triggerService.test.ts (1)
101-112: Fix failing test: mock paybutton must include iterable addresses

The pipeline error "t.paybutton.addresses is not iterable" indicates executeAddressTriggers now iterates paybutton.addresses. Provide a minimal iterable in the mock.

Apply:

```diff
 prismaMock.paybuttonTrigger.findMany.mockResolvedValue([
   {
     id: 'trigger-1',
     isEmailTrigger: false,
     postURL: 'https://httpbin.org/post',
     postData: '{"address": <address>, "outputAddresses": <outputAddresses>}',
     paybutton: {
       name: 'My Paybutton',
-      providerUserId: 'user-1'
+      providerUserId: 'user-1',
+      addresses: [] // ensure iterable per current service logic
     }
   } as any
 ])
```
♻️ Duplicate comments (6)
services/priceService.ts (2)
145-148: Avoid null-deref on Axios error.response.statusText (still present).
`error.response` can be undefined on network/timeout errors; logging and later throwing `lastError.response.statusText` may crash.

Apply:

```diff
-} catch (error: any) {
-  lastError = error
-  console.error(`Problem getting price of ${networkTicker} (attempt ${attempt}) ->\n${error as string}: ${error.response.statusText as string} `)
+} catch (error: unknown) {
+  lastError = error
+  const statusText =
+    (axios.isAxiosError(error) ? error.response?.statusText : undefined) ??
+    (error as Error)?.message ??
+    'Unknown error'
+  console.error(
+    `Problem getting price of ${networkTicker} (attempt ${attempt}) ->\n${String(error)}: ${statusText}`
+  )
 }
```
153-155: Throwing `lastError.response.statusText` is unsafe; it may be undefined

Use a robust fallback chain and include the status text in the log.

```diff
-} else {
-  console.error(`Price file could not be created after ${PRICE_API_MAX_RETRIES} retries`)
-  throw new Error(lastError.response.statusText)
-}
+} else {
+  const statusText =
+    (axios.isAxiosError(lastError) ? lastError.response?.statusText : undefined) ??
+    (lastError as Error)?.message ??
+    'Unknown error'
+  console.error(`Price file could not be created after ${PRICE_API_MAX_RETRIES} retries -> ${statusText}`)
+  throw new Error(statusText)
+}
```

prisma-local/seeds/triggers.ts (1)
20-23: Gate the external test endpoint to avoid hammering httpbin

Pointing seeds at httpbin is convenient but risks unintended high-volume POSTs during large syncs. Gate with an env flag so default local dev doesn't egress.

```diff
-  postURL: 'https://httpbin.org/post'
+  postURL: process.env.ALLOW_EXTERNAL_SEEDED_POSTS === 'true'
+    ? 'https://httpbin.org/post'
+    : ''
```

services/chronikService.ts (2)
756-757: Note on empty inputAddresses in bulk broadcasts

Leaving inputAddresses empty in batch broadcasts is acceptable if triggers don't depend on them; otherwise, consider populating. Output addresses are now passed as empty too per the fix above.
If triggers require input/output data, we can extend the batch to compute them. Do you want a follow-up patch?
754-758: Bug: missing outputAddresses arg to broadcastIncomingTx (TS compile/runtime error)

broadcastIncomingTx requires both input and output address arrays; here only 3 args are passed.

```diff
-const bd = this.broadcastIncomingTx(tx.address.address, tx, []) // inputAddresses left empty in bulk
+const bd = this.broadcastIncomingTx(tx.address.address, tx, [], [])
```
570-581: Wrong task label for email batch; should be "EMAIL TRIGGERS"

The console/task name is copy-pasted from posts.

Apply this diff:

```diff
 console.log(`[TRIGGER ${currency}]: executing emails with concurrency=${TRIGGER_EMAIL_CONCURRENCY}`)
 await runAsyncInBatches(
-  `${currency} POST TRIGGERS`,
+  `${currency} EMAIL TRIGGERS`,
   emailUserRunners.map(run => async () => { emailResults.push(await run()) }),
   TRIGGER_EMAIL_CONCURRENCY
 )
```
🧹 Nitpick comments (18)
services/priceService.ts (4)
126-126: Type `lastError` as `unknown`, not `any`

Keeps the type system honest and forces safe narrowing at use sites.

```diff
-let lastError: any
+let lastError: unknown
```
103-104: Use `unknown` in catch and log safely; avoid `${error as string}`

`error as string` often prints "[object Object]". Prefer `String(error)` and Axios-aware extraction.

```diff
-} catch (error: any) {
-  console.error(`Problem getting price of ${networkTicker} ${day.format(HUMAN_READABLE_DATE_FORMAT)} (attempt ${attempt}) ->\n${error as string}: ${error.message as string} `)
+} catch (error: unknown) {
+  const msg =
+    (axios.isAxiosError(error) ? error.response?.statusText : undefined) ??
+    (error as Error)?.message ??
+    'Unknown error'
+  console.error(
+    `Problem getting price of ${networkTicker} ${day.format(HUMAN_READABLE_DATE_FORMAT)} (attempt ${attempt}) ->\n${String(error)}: ${msg}`
+  )
```
135-141: Filter to valid date keys before mapping daily prices

APIs often include a `success` or metadata key; filtering ensures only YYYY-MM-DD entries are mapped.

```diff
-const dailyPrices: IResponseDataDaily[] = Object.entries<IResponseData>(res.data).map(([day, priceData]) => {
-  return {
-    day,
-    ...priceData
-  }
-})
+const dailyPrices: IResponseDataDaily[] = Object
+  .entries(res.data)
+  .filter(([day]) => moment(day, PRICE_API_DATE_FORMAT, true).isValid())
+  .map(([day, priceData]) => ({
+    day,
+    ...(priceData as IResponseData)
+  }))
```
106-108: Optional: add bounded exponential backoff with jitter between retries

Reduces thundering herd and gives the API time to recover.

```diff
 if (attempt < PRICE_API_MAX_RETRIES) {
-  return await getPriceForDayAndNetworkTicker(day, networkTicker, attempt + 1)
+  await new Promise(r => setTimeout(r, Math.min(500 * 2 ** (attempt - 1), 5000) + Math.floor(Math.random() * 250)))
+  return await getPriceForDayAndNetworkTicker(day, networkTicker, attempt + 1)
 }
```

```diff
 if (attempt < PRICE_API_MAX_RETRIES) {
-  return await getAllPricesByNetworkTicker(networkTicker, attempt + 1)
+  await new Promise(r => setTimeout(r, Math.min(500 * 2 ** (attempt - 1), 5000) + Math.floor(Math.random() * 250)))
+  return await getAllPricesByNetworkTicker(networkTicker, attempt + 1)
 }
```

Also applies to: 150-152
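The backoff expression above can also be factored into a small helper so the two retry sites stay in sync; this is a sketch, and the base/cap/jitter values are the review's suggested numbers, not anything the PR defines.

```typescript
// Bounded exponential backoff with jitter: base 500ms, doubling per attempt,
// capped at 5s, plus up to 250ms of random jitter (values are assumptions).
function backoffMs (attempt: number, base = 500, cap = 5000, jitter = 250): number {
  const exp = Math.min(base * 2 ** (attempt - 1), cap)
  return exp + Math.floor(Math.random() * jitter)
}
```

Usage would then be `await new Promise(r => setTimeout(r, backoffMs(attempt)))` before each recursive retry.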
components/TableContainer/TableContainerGetter.tsx (1)
87-95: Good defensive default; consider guarding totalCount and avoiding setState on an unmounted component

The `?? []` fix is solid. Add a safe default for `totalCount` to avoid a NaN pageCount, and guard against state updates after unmount/effect re-run.

```diff
 useEffect(() => {
-  void (async () => {
+  let cancelled = false
+  void (async () => {
     setLoading(true)
     const d = await dataGetter(pageIndex, pageSize, sortColumn, sortDesc)
-    setPageCount(Math.ceil(d.totalCount / pageSize))
-    setData(d.data ?? [])
-    setLoading(false)
+    if (cancelled) return
+    const total = typeof d.totalCount === 'number' && Number.isFinite(d.totalCount) ? d.totalCount : 0
+    setPageCount(Math.ceil(total / pageSize))
+    setData(Array.isArray(d.data) ? d.data : [])
+    setLoading(false)
   })()
+  return () => { cancelled = true }
 }, [pageSize, pageIndex, sortColumn, sortDesc, tableRefreshCount])
```

config/index.ts (1)
14-16: Config fields look good; consider normalizing "Inf" once

Using `number | 'Inf'` is consistent with the email limits. Consider a small helper to normalize to numeric Infinity at app boot to simplify consumers.

Example (outside this file):

```ts
export const toFiniteOrInfinity = (v: number | 'Inf'): number =>
  v === 'Inf' ? Number.POSITIVE_INFINITY : v
```

prisma-local/schema.prisma (2)
36-36: Be explicit about onDelete; verify data backfill before the FK

- Consider explicitly setting `onDelete: Restrict` (or the desired behavior) to avoid implicit defaults.
- Ensure no orphan `Paybutton.providerUserId` rows exist before applying the FK in prod.

```diff
-user UserProfile @relation(fields: [providerUserId], references: [id], onUpdate: Restrict)
+user UserProfile @relation(fields: [providerUserId], references: [id], onDelete: Restrict, onUpdate: Restrict)
```

SQL to check for orphans (run against prod before migrating):

```sql
SELECT p.id, p.providerUserId
FROM Paybutton p
LEFT JOIN UserProfile u ON u.id = p.providerUserId
WHERE u.id IS NULL
LIMIT 10;
```
181-182: Defaults reduced to 5; ensure business logic/UI align and prevent negatives
- Confirm UI copies/docs reflect new defaults and that “out of credits” flows cover POST credits as well.
- Add application-level guard to prevent credits dropping below 0 (MySQL lacks CHECK by default).
constants/index.ts (1)
276-294: Bound concurrency via config/env; remove "WIP" and the non-English comment

512 concurrent fetches and a trigger concurrency of 100 are aggressive defaults. Make them configurable (env/config), drop the "WIP" marker, and keep comments English-only (the Portuguese comment translates to "batch size for DB commits").

Apply:

```diff
-/* WIP RENAME ALL THOSE */
+/* Concurrency and batching (configurable via env) */
 export const CHRONIK_FETCH_N_TXS_PER_PAGE = 200
-export const INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY = 512
+export const INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY = Number(process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY ?? 128)
-export const TX_EMIT_BATCH_SIZE = 10_000 // for our generator, not chronik
-export const DB_COMMIT_BATCH_SIZE = 10_000 // tamanho dos lotes para commit no DB
+export const TX_EMIT_BATCH_SIZE = Number(process.env.TX_EMIT_BATCH_SIZE ?? 10_000) // for our generator, not chronik
+export const DB_COMMIT_BATCH_SIZE = Number(process.env.DB_COMMIT_BATCH_SIZE ?? 10_000) // batch size for DB commits
-export const TRIGGER_POST_CONCURRENCY = 100
-export const TRIGGER_EMAIL_CONCURRENCY = 100
+export const TRIGGER_POST_CONCURRENCY = Number(process.env.TRIGGER_POST_CONCURRENCY ?? 50)
+export const TRIGGER_EMAIL_CONCURRENCY = Number(process.env.TRIGGER_EMAIL_CONCURRENCY ?? 50)
-export const TRIGGER_LOG_BATCH_SIZE = 200
+export const TRIGGER_LOG_BATCH_SIZE = Number(process.env.TRIGGER_LOG_BATCH_SIZE ?? 200)
-export const PRICES_CONNECTION_BATCH_SIZE = 1_000
+export const PRICES_CONNECTION_BATCH_SIZE = Number(process.env.PRICES_CONNECTION_BATCH_SIZE ?? 1_000)
 // interactive $transaction timeout in ms (for the single delete + several createMany of prices)
-export const PRICES_CONNECTION_TIMEOUT = 30_000
+export const PRICES_CONNECTION_TIMEOUT = Number(process.env.PRICES_CONNECTION_TIMEOUT ?? 30_000)
```

services/transactionService.ts (2)
399-413: Chunked createMany — OK; consider returning counts for observability

Functionality is fine. Returning inserted counts per chunk would improve logs/metrics.

414-425: Broaden the client type to also accept PrismaClient (call sites already pass it)

The signature currently says TransactionClient but callers pass prisma. Make the type explicit for clarity.
Apply:
```diff
-export async function connectTransactionToPrices (
-  tx: Transaction,
-  txClient: Prisma.TransactionClient,
+export async function connectTransactionToPrices (
+  tx: Transaction,
+  txClient: Prisma.TransactionClient | typeof prisma,
   allPrices: AllPrices,
   disconnectBefore = true
 ): Promise<void> {
```
tests/unittests/transactionService.test.ts (1)

54-56: Remove the unused upsert mock to avoid confusion

The service no longer calls pricesOnTransactions.upsert, so it is safe to delete.

```diff
-prismaMock.pricesOnTransactions.upsert.mockResolvedValue(mockedUSDPriceOnTransaction)
-prisma.pricesOnTransactions.upsert = prismaMock.pricesOnTransactions.upsert
```

README.md (1)
259-270: Document the new POST limits clearly and fix fenced-code lint.
- Clarify that “POST requests” refers to server-to-server trigger POSTs fired on payment (to avoid confusion with generic HTTP usage).
- Add a language to the fenced “type” code blocks to satisfy MD040.
Apply:
````diff
-##### proSettings.standardDailyPostLimit
-```
-type: number | "Inf"
-```
+##### proSettings.standardDailyPostLimit
+```text
+type: number | "Inf"
+```
-##### proSettings.proDailyPostLimit
-```
-type: number | "Inf"
-```
+##### proSettings.proDailyPostLimit
+```text
+type: number | "Inf"
+```
````
tests/unittests/chronikService.test.ts (1)

54-56: Mock shape updated correctly; consider asserting usage

The new updateManyLastSynced mock matches production imports. Optionally, add an assertion in a syncAddresses path test to verify it's called once per successful batch.
Would you like a small test that drives syncAddresses and expects updateManyLastSynced to be called with the processed addresses?
services/chronikService.ts (2)
296-404: Parallel fetch generator is solid; consider small robustness tweaks
- Type pageTxs as Tx[] instead of any[].
- Log burst/page indices in warnings to aid debugging.
Apply:
```diff
-let pageTxs: any[] = []
+let pageTxs: Tx[] = []
 try {
   pageTxs = await this.getPaginatedTxs(address.address, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
 } catch (err: any) {
-  console.warn(`${addrLogPrefix} page=${pageIndex} failed: ${err?.message as string ?? err as string}`)
+  console.warn(`${addrLogPrefix} page=${pageIndex} failed: ${err?.message as string ?? String(err)}`)
 }
```
621-627: Make broadcastIncomingTx params defensive with defaults

Optional hardening so accidentally omitted arrays don't explode.

```diff
-private broadcastIncomingTx (addressString: string, createdTx: TransactionWithAddressAndPrices, inputAddresses: Array<{address: string, amount: Prisma.Decimal}>, outputAddresses: Array<{address: string, amount: Prisma.Decimal}>): BroadcastTxData {
+private broadcastIncomingTx (
+  addressString: string,
+  createdTx: TransactionWithAddressAndPrices,
+  inputAddresses: Array<{address: string, amount: Prisma.Decimal}> = [],
+  outputAddresses: Array<{address: string, amount: Prisma.Decimal}> = []
+): BroadcastTxData {
```

services/triggerService.ts (2)
451-470: Per-user concurrency can explode; bound attempts per round

`runTasksUpToCredits` fires up to `remainingNeeded` tasks concurrently per user, which can be very large. Cap per-user parallelism to protect I/O.

Apply this diff:

```diff
-async function runTasksUpToCredits (
-  tasks: TriggerTask[],
-  credits: number
-): Promise<{ accepted: number, attempted: number }> {
+async function runTasksUpToCredits (
+  tasks: TriggerTask[],
+  credits: number,
+  perUserConcurrency = 25
+): Promise<{ accepted: number, attempted: number }> {
   if (credits <= 0 || tasks.length === 0) return { accepted: 0, attempted: 0 }
@@
-  while (accepted < credits && idx < tasks.length) {
-    const remainingNeeded = credits - accepted
-    const batch = tasks.slice(idx, idx + remainingNeeded)
+  while (accepted < credits && idx < tasks.length) {
+    const remainingNeeded = credits - accepted
+    const sliceSize = Math.min(remainingNeeded, perUserConcurrency)
+    const batch = tasks.slice(idx, idx + sliceSize)
     idx += batch.length
     attempted += batch.length
@@
   return { accepted, attempted }
 }
```

And pass a cap at the call sites:

```diff
-const { accepted, attempted } = await runTasksUpToCredits(queue, limit)
+const { accepted, attempted } = await runTasksUpToCredits(queue, limit, Math.min(limit, 25))
```

Also applies to: 555-565, 561-565
113-120: Validate postURL format at create/update time as well

Complement the runtime guards by rejecting invalid/unsafe URLs when creating/updating triggers.
I can draft a small validator (http/https only, disallow localhost/private/meta IPs) to reuse in both create/update paths. Want me to add it?
Also applies to: 155-161
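A sketch of the validator offered above might look like the following; the function name, the http/https restriction, and the specific blocked ranges are assumptions about what "invalid/unsafe" should mean, not an agreed spec.

```typescript
// Hedged sketch of a postURL validator: allow only http/https targets and
// reject obvious local, link-local, and RFC 1918 private hosts.
function isSafePostURL (raw: string): boolean {
  let url: URL
  try {
    url = new URL(raw) // throws on malformed input
  } catch {
    return false
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false
  const host = url.hostname
  // Loopback, IPv6 loopback, and cloud metadata endpoint
  if (host === 'localhost' || host === '127.0.0.1' || host === '::1' || host === '169.254.169.254') return false
  // RFC 1918 private ranges (simplified textual check, not DNS-aware)
  if (/^10\./.test(host) || /^192\.168\./.test(host) || /^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false
  return true
}
```

Note this textual check does not resolve DNS, so a hostname pointing at a private IP would still pass; a stricter version would resolve and re-check the address.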
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
- README.md (1 hunks)
- components/Account/ProDisplay.tsx (1 hunks)
- components/TableContainer/TableContainerGetter.tsx (1 hunks)
- config/example-config.json (1 hunks)
- config/index.ts (1 hunks)
- constants/index.ts (2 hunks)
- prisma-local/migrations/20250812201840_relation_between_user_and_paybutton/migration.sql (1 hunks)
- prisma-local/migrations/20250813150215_user_credits_for_post_requests/migration.sql (1 hunks)
- prisma-local/migrations/migration_lock.toml (1 hunks)
- prisma-local/schema.prisma (3 hunks)
- prisma-local/seed.ts (1 hunks)
- prisma-local/seeds/triggers.ts (1 hunks)
- services/addressService.ts (1 hunks)
- services/chronikService.ts (8 hunks)
- services/priceService.ts (3 hunks)
- services/transactionService.ts (2 hunks)
- services/triggerService.ts (2 hunks)
- tests/mockedObjects.ts (2 hunks)
- tests/unittests/chronikService.test.ts (1 hunks)
- tests/unittests/transactionService.test.ts (1 hunks)
- tests/unittests/triggerService.test.ts (2 hunks)
- tsconfig.json (1 hunks)
- utils/index.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
tests/unittests/transactionService.test.ts (1)
prisma-local/mockedClient.ts (1)
- prismaMock (19-19)
services/priceService.ts (1)
constants/index.ts (2)
- HUMAN_READABLE_DATE_FORMAT (165-165)
- PRICE_API_MAX_RETRIES (169-169)
prisma-local/seed.ts (1)
prisma-local/seeds/users.ts (2)
- devUserProfiles (24-46)
- adminUserProfiles (60-69)
services/transactionService.ts (2)
services/priceService.ts (3)
- AllPrices (21-24)
- flattenTimestamp (9-13)
- fetchPricesForNetworkAndTimestamp (244-275)

constants/index.ts (2)
- PRICES_CONNECTION_BATCH_SIZE (291-291)
- PRICES_CONNECTION_TIMEOUT (293-293)
services/chronikService.ts (6)
constants/index.ts (6)
- INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY (282-282)
- CHRONIK_FETCH_N_TXS_PER_PAGE (280-280)
- TX_EMIT_BATCH_SIZE (283-283)
- DB_COMMIT_BATCH_SIZE (284-284)
- NETWORK_IDS_FROM_SLUGS (132-135)
- MainNetworkSlugsType (187-187)

services/transactionService.ts (3)
- TransactionWithAddressAndPrices (104-104)
- createManyTransactions (494-532)
- connectAllTransactionsToPrices (474-487)

services/addressService.ts (2)
- setSyncingBatch (314-325)
- updateManyLastSynced (275-286)

prisma-local/seeds/transactions.ts (1)
- appendTxsToFile (12-20)

ws-service/types.ts (1)
- BroadcastTxData (5-9)

services/triggerService.ts (1)
- executeTriggersBatch (500-607)
services/triggerService.ts (6)
ws-service/types.ts (2)
- BroadcastTxData (5-9)
- SimplifiedTransaction (11-34)

utils/validators.ts (3)
- OpReturnData (415-419)
- EMPTY_OP_RETURN (421-425)
- parseTriggerPostData (262-289)

constants/mail.ts (4)
- MAIL_FROM (38-38)
- MAIL_SUBJECT (39-39)
- MAIL_HTML_REPLACER (51-65)
- getMailerTransporter (15-36)

constants/index.ts (6)
- TRIGGER_LOG_BATCH_SIZE (289-289)
- RESPONSE_MESSAGES (8-108)
- NETWORK_TICKERS_FROM_ID (190-193)
- SUPPORTED_QUOTES_FROM_ID (161-164)
- TRIGGER_POST_CONCURRENCY (286-286)
- TRIGGER_EMAIL_CONCURRENCY (287-287)

services/transactionService.ts (1)
- getTransactionValue (13-30)

utils/index.ts (1)
- runAsyncInBatches (240-247)
🪛 GitHub Actions: Pull Request Tests
tests/unittests/triggerService.test.ts
[error] 153-153: Unit test failed: 'Payment Trigger system › executeAddressTriggers posts with outputAddresses containing primary at index 0' due to TypeError: t.paybutton.addresses is not iterable.
services/triggerService.ts
[error] 280-280: TypeError: t.paybutton.addresses is not iterable in fetchTriggersGroupedByAddress
🪛 markdownlint-cli2 (0.17.2)
README.md
260-260: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
266-266: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (22)
prisma-local/migrations/migration_lock.toml (1)
3-3: No-op change acknowledged

Re-adding the same provider value is harmless. Nothing to do.
tsconfig.json (1)
8-9: LGTM: add ES2019 lib for Array.flatMap typings

The change is safe and scoped to type libs; no runtime impact.
prisma-local/schema.prisma (2)
176-176: Reverse relation added — looks correct
`paybuttons: Paybutton[]` properly complements the new FK. No issues.
273-273: Non-semantic newline change

No action needed.
constants/index.ts (1)
106-106: Add POST credits message — looks good. Message key and copy read well and match the new feature.
services/transactionService.ts (2)
392-397: Good extraction of link rows builderKeeps the 2-rows-per-tx logic centralized.
464-472: No changes needed — Prisma v6.14.0 supports `timeout` on `$transaction`
Interactive transactions accept a second parameter with `maxWait`, `timeout`, and `isolationLevel` options, and this API is available in Prisma Client v6.x. (prisma.io, prisma.dev.org.tw)

tests/unittests/transactionService.test.ts (1)
60-62: Mocks updated for createMany — good alignment with the new batching path. Matches the service refactor.
prisma-local/migrations/20250813150215_user_credits_for_post_requests/migration.sql (1)
1-3: Adds postCredits and aligns emailCredits default — OK. Defaults look sensible. Confirm application code uses these as initial credits (not daily caps) to avoid semantic drift.
tests/mockedObjects.ts (2)
530-531: postCredits added to mocks — OK
545-546: postCredits added to second mock — OK

components/Account/ProDisplay.tsx (1)
66-67: Switched to post limits — OK; ensure ProSettings provides these fields. If the config/proSettings change is merged, this is good. Validate that types and example-config are updated.
tests/unittests/triggerService.test.ts (2)
88-94: Whitespace-only tweak — fine
159-162: Whitespace-only tweak — fine

config/example-config.json (1)
44-45: LGTM — POST limits mirrored in example config. Values align with README and the implied ProSettings: number | "Inf". No further changes needed.
prisma-local/seed.ts (1)
37-41: Remove incorrect FK order warning
There is no separate `User` model; `UserProfile` is the root user entity and has no foreign key to a non-existent `User` table, so creating profiles first does not violate any constraints. Likely an incorrect or invalid review comment.
services/chronikService.ts (5)
153-155: Helper for lastSync TS looks good. Converts Date to UNIX seconds; matches Chronik timestamps.
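For reference, such a conversion can be sketched in one line (the name below is hypothetical; the actual helper lives in services/chronikService.ts):

```typescript
// Hypothetical sketch of a Date → UNIX-seconds helper like the one reviewed.
// Chronik reports timestamps in whole seconds, so milliseconds are truncated.
const dateToUnixTimestamp = (d: Date): number => Math.floor(d.getTime() / 1000)

console.log(dateToUnixTimestamp(new Date('2021-01-01T00:00:00Z'))) // 1609459200
```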
710-716: Top-level early-return path is fine. Clean guard when no addresses.
1027-1029: Network partitioning in multi-client sync looks correct. Filters by NETWORK_IDS_FROM_SLUGS and delegates to the per-network client.
940-944: Initialization propagation is appropriate. Central setInitialized unblocks both clients.
811-815: Nice summary logging. Helpful totals and failed-report; keep.
services/triggerService.ts (1)
332-337: Good: JSON parse errors now logged and do not consume credits.
Catching `parseTriggerPostData` failures and logging them prevents charging users for invalid templates.
```sql
-- AddForeignKey
ALTER TABLE `Paybutton` ADD CONSTRAINT `Paybutton_providerUserId_fkey` FOREIGN KEY (`providerUserId`) REFERENCES `UserProfile`(`id`) ON DELETE RESTRICT ON UPDATE RESTRICT;
```
🛠️ Refactor suggestion
FK may fail on existing data; also consider indexing and delete policy
- Ensure there are no orphan Paybutton.providerUserId values before applying RESTRICT, or the migration will fail. If providerUserId is optional, consider ON DELETE SET NULL instead of RESTRICT.
- Add an explicit index on Paybutton(providerUserId) to aid joins (MySQL/InnoDB auto-creates when needed, but being explicit is clearer/perf-friendly).
Apply this diff in the migration (or a follow-up one):
```diff
 ALTER TABLE `Paybutton` ADD CONSTRAINT `Paybutton_providerUserId_fkey` FOREIGN KEY (`providerUserId`) REFERENCES `UserProfile`(`id`) ON DELETE RESTRICT ON UPDATE RESTRICT;
+CREATE INDEX `idx_paybutton_providerUserId` ON `Paybutton`(`providerUserId`);
```
```ts
export async function updateManyLastSynced (addressStringArray: string[]): Promise<void> {
  await prisma.address.updateMany({
    where: {
      address: {
        in: addressStringArray
      }
    },
    data: {
      lastSynced: new Date()
    }
  })
}
```
🛠️ Refactor suggestion
Batch + normalize updates; handle empty input
- Normalize addresses to lowercase to match stored form.
- No-ops on empty arrays.
- Chunk IN-lists to avoid oversized queries on large sets.
```diff
-export async function updateManyLastSynced (addressStringArray: string[]): Promise<void> {
-  await prisma.address.updateMany({
-    where: {
-      address: {
-        in: addressStringArray
-      }
-    },
-    data: {
-      lastSynced: new Date()
-    }
-  })
-}
+export async function updateManyLastSynced (addressStringArray: string[]): Promise<void> {
+  if (addressStringArray.length === 0) return
+  const addresses = addressStringArray.map(a => a.toLowerCase())
+  const BATCH = 1000
+  const now = new Date()
+  for (let i = 0; i < addresses.length; i += BATCH) {
+    const slice = addresses.slice(i, i + BATCH)
+    await prisma.address.updateMany({
+      where: { address: { in: slice } },
+      data: { lastSynced: now }
+    })
+  }
+}
```
🤖 Prompt for AI Agents
In services/addressService.ts around lines 275 to 286, updateManyLastSynced
should no-op on empty input, normalize addresses to lowercase, and chunk large
IN lists to avoid oversized queries; implement: if addressStringArray is empty
return immediately, map all addresses to .toLowerCase(), split the array into
reasonable-sized chunks (e.g. 500–1000 items), and call
prisma.address.updateMany for each chunk with where: { address: { in: chunk } }
and data: { lastSynced: new Date() } so all matching rows are updated without
oversized single queries.
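The chunking behaviour suggested above can be illustrated with a small self-contained sketch (a batch size of 3 and the sample addresses are made up for demonstration; the real code would use 500-1000 per chunk):

```typescript
// Split a large IN-list into fixed-size slices, after lowercasing the inputs.
function chunk<T> (items: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size))
  }
  return out
}

const addresses = ['A1', 'a2', 'A3', 'a4', 'A5', 'a6', 'A7'].map(a => a.toLowerCase())
const batches = chunk(addresses, 3)

console.log(batches.length) // 3 batches of sizes 3, 3, 1
console.log(batches[2]) // ['a7']
```

Each slice would then be passed to a separate `updateMany` call, keeping every query bounded.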
services/chronikService.ts
Outdated
```ts
if (toCommit.length >= DB_COMMIT_BATCH_SIZE) {
  console.log(`${this.CHRONIK_MSG_PREFIX} committing batch to DB — txs=${toCommit.length}`)
  const created = await createManyTransactions(toCommit)
  await setSyncingBatch(addresses.map(a => a.address), false)
  console.log(`${this.CHRONIK_MSG_PREFIX} committed — created=${created.length}`)
  toCommit = []
```
🛠️ Refactor suggestion
Do not clear syncing flags mid-run; move setSyncing(false) to a finally.
Clearing syncing for all addresses inside each commit can re-enable mempool processing before initial sync finishes, defeating waitForSyncing’s gate.
```diff
-      const created = await createManyTransactions(toCommit)
-      await setSyncingBatch(addresses.map(a => a.address), false)
+      const created = await createManyTransactions(toCommit)
```

And add a single clear at the end of syncAddresses:
```diff
-    } catch (err: any) {
+    } catch (err: any) {
       console.error(`${this.CHRONIK_MSG_PREFIX}: ERROR in parallel sync: ${err.message as string}`)
       ...
-    }
+    } finally {
+      await setSyncingBatch(addresses.map(a => a.address), false)
+    }
```
🤖 Prompt for AI Agents
In services/chronikService.ts around lines 739 to 744, the call to
setSyncingBatch(addresses.map(a => a.address), false) is being executed inside
each DB commit loop which can prematurely clear syncing flags; move that
per-address clearing out of the commit loop and into a finally block that runs
once at the end of syncAddresses so syncing flags are only cleared after the
whole sync completes (ensure the finally always calls setSyncingBatch(...) for
all addresses, and remove the setSyncingBatch call from inside the loop).
```ts
// collect UNIQUE (networkId, timestamp) pairs
const networkIdToTimestamps = new Map<number, Set<number>>()
await Promise.all(txList.map(async (t) => {
  const networkId = await getTransactionNetworkId(t)
  const ts = flattenTimestamp(t.timestamp)
  const set = networkIdToTimestamps.get(networkId) ?? new Set<number>()
  set.add(ts)
  networkIdToTimestamps.set(networkId, set)
}))

// fetch AllPrices for each unique (networkId, timestamp)
const timestampToPrice: Record<number, AllPrices> = {}
for (const networkId of Object.keys(networkIdToUniqueTimestamp)) {
  for (const timestamp of networkIdToUniqueTimestamp[Number(networkId)]) {
    const allPrices = await fetchPricesForNetworkAndTimestamp(Number(networkId), timestamp, prisma)
    timestampToPrice[timestamp] = allPrices
  }
}

await prisma.$transaction(async (prisma) => {
  void await prisma.pricesOnTransactions.deleteMany({
    where: {
      transactionId: {
        in: txList.map(t => t.id)
      }
    }

let pairs = 0
for (const [networkId, timestamps] of networkIdToTimestamps.entries()) {
  for (const ts of timestamps) {
    pairs++
    // not parallel
    const allPrices = await fetchPricesForNetworkAndTimestamp(networkId, ts, prisma)
    timestampToPrice[ts] = allPrices
  }
}
console.log(`[PRICES] Loaded prices for ${pairs} (network,timestamp) pairs.`)
```
BUG: Price map keys ignore networkId — can attach wrong prices across networks
timestampToPrice is keyed only by day timestamp; identical-day txs on XEC and BCH will overwrite each other, causing incorrect links. Use a (networkId,timestamp) composite key and reuse the resolved networkId per tx.
Apply:
```diff
-    // fetch AllPrices for each unique (networkId, timestamp)
-    const timestampToPrice: Record<number, AllPrices> = {}
+    // fetch AllPrices for each unique (networkId, timestamp)
+    const priceByPair = new Map<string, AllPrices>()
+    const networkIdsByTxId = new Map<string, number>()
     let pairs = 0
     for (const [networkId, timestamps] of networkIdToTimestamps.entries()) {
       for (const ts of timestamps) {
         pairs++
-        // not parallel
-        const allPrices = await fetchPricesForNetworkAndTimestamp(networkId, ts, prisma)
-        timestampToPrice[ts] = allPrices
+        // not parallel
+        const allPrices = await fetchPricesForNetworkAndTimestamp(networkId, ts, prisma)
+        priceByPair.set(`${networkId}:${ts}`, allPrices)
       }
     }
     console.log(`[PRICES] Loaded prices for ${pairs} (network,timestamp) pairs.`)
```

And below:
```diff
-    // Build all join rows (2 per tx: USD + CAD)
+    // Build all join rows (2 per tx: USD + CAD)
     const rows: Prisma.PricesOnTransactionsCreateManyInput[] = []
     for (const t of txList) {
-      const ts = flattenTimestamp(t.timestamp)
-      const allPrices = timestampToPrice[ts]
+      const ts = flattenTimestamp(t.timestamp)
+      const networkId = networkIdsByTxId.get(t.id) ?? await getTransactionNetworkId(t)
+      networkIdsByTxId.set(t.id, networkId)
+      const allPrices = priceByPair.get(`${networkId}:${ts}`)!
       rows.push(...buildPriceTxConnectionInput(t, allPrices))
     }
```
}🤖 Prompt for AI Agents
In services/transactionService.ts around lines 432 to 454, the timestampToPrice
map currently keys only by the flattened timestamp so identical-day transactions
on different networks can overwrite each other; switch to a composite key that
includes networkId (e.g. `${networkId}:${ts}`) when storing and retrieving
prices, populate that composite key when fetching prices in the loop, and when
resolving prices for each transaction reuse the previously-resolved networkId
(don’t call getTransactionNetworkId twice) to build the same composite key for
lookup; ensure pairs counting and the console message remain correct and update
any variable names to reflect the composite-key map.
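To see why the timestamp-only key loses data, here is a minimal self-contained sketch (network ids and prices are made up for illustration):

```typescript
type AllPrices = { usd: number, cad: number }

// Two networks (say XEC = 1, BCH = 2) with prices on the SAME day timestamp.
const ts = 1700000000
const xecPrices: AllPrices = { usd: 0.00003, cad: 0.00004 }
const bchPrices: AllPrices = { usd: 230, cad: 310 }

// BUG: keying by timestamp alone, so the second write overwrites the first.
const byTimestamp: Record<number, AllPrices> = {}
byTimestamp[ts] = xecPrices
byTimestamp[ts] = bchPrices // XEC prices are now lost

// FIX: a composite (networkId, timestamp) key keeps both entries.
const byPair = new Map<string, AllPrices>()
byPair.set(`1:${ts}`, xecPrices)
byPair.set(`2:${ts}`, bchPrices)

console.log(byTimestamp[ts].usd) // 230, wrong for an XEC tx
console.log(byPair.get(`1:${ts}`)?.usd) // 0.00003, correct
```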
```ts
return {
  amount: tx.amount,
  currency,
  txId: tx.hash,
  buttonName: trigger.paybutton.name,
  address,
  timestamp: tx.timestamp,
  opReturn: (tx.paymentId !== '' || tx.message !== '')
    ? { paymentId: tx.paymentId, message: tx.message, rawMessage: tx.rawMessage }
    : EMPTY_OP_RETURN,
  inputAddresses: tx.inputAddresses,
  value: valueStr
}
```
Missing outputAddresses in POST payload params (breaks JSON template).
parseTriggerPostData expects outputAddresses; omitting it can yield “undefined” in the template and cause JSON parse failures.
Apply this diff:
```diff
   return {
     amount: tx.amount,
     currency,
     txId: tx.hash,
     buttonName: trigger.paybutton.name,
     address,
     timestamp: tx.timestamp,
     opReturn: (tx.paymentId !== '' || tx.message !== '')
       ? { paymentId: tx.paymentId, message: tx.message, rawMessage: tx.rawMessage }
       : EMPTY_OP_RETURN,
     inputAddresses: tx.inputAddresses,
+    outputAddresses: tx.outputAddresses,
     value: valueStr
   }
```
🤖 Prompt for AI Agents
In services/triggerService.ts around lines 296 to 309, the returned
PostDataParameters is missing outputAddresses which parseTriggerPostData
expects; add an outputAddresses property to the returned object (e.g. set
outputAddresses: tx.outputAddresses or outputAddresses: tx.outputAddresses || []
to ensure a defined array) so the POST payload includes outputAddresses and
avoids undefined values in the JSON template.
```ts
const actionType: TriggerLogActionType = 'PostData'
let accepted = false
let isError = false
let data!: PostDataTriggerLog | PostDataTriggerLogError
try {
  const params = buildPostParams(trigger, tx, currency, valueStr, address)
  const parsed = parseTriggerPostData({ userId: trigger.paybutton.user.id, postData: trigger.postData, postDataParameters: params })
  const resp = await axios.post(trigger.postURL, parsed, { timeout: config.triggerPOSTTimeout })
  // HTTP 2xx counts as accepted
  accepted = resp.status >= 200 && resp.status < 300
  data = { postedData: parsed, postedURL: trigger.postURL, responseData: resp.data }
} catch (err: any) {
  isError = true
  data = { errorName: err.name, errorMessage: err.message, errorStack: err.stack, triggerPostData: trigger.postData, triggerPostURL: trigger.postURL }
```
SSRF risk: user-controlled postURL allows internal network access. Add URL validation before axios.post.
Block localhost/private/meta IPs and non-http(s) schemes to mitigate SSRF. Consider expanding to DNS resolution in a follow-up.
Apply this diff:
```diff
   try {
+    // Basic SSRF guard
+    const u = new URL(trigger.postURL)
+    if (!['http:', 'https:'].includes(u.protocol)) {
+      throw new Error('Invalid postURL protocol')
+    }
+    const host = u.hostname.toLowerCase()
+    const forbiddenHost = /^(localhost|127\.|::1)/.test(host) || ['0.0.0.0', '169.254.169.254'].includes(host)
+    const privateIPv4 =
+      /^10\./.test(host) || /^192\.168\./.test(host) || /^172\.(1[6-9]|2\d|3[0-1])\./.test(host)
+    if (forbiddenHost || privateIPv4) {
+      throw new Error('Forbidden postURL host')
+    }
     const params = buildPostParams(trigger, tx, currency, valueStr, address)
     const parsed = parseTriggerPostData({ userId: trigger.paybutton.user.id, postData: trigger.postData, postDataParameters: params })
     const resp = await axios.post(trigger.postURL, parsed, { timeout: config.triggerPOSTTimeout })
```

If you prefer a stricter check (DNS resolution to catch hostnames resolving to private IPs), I can supply an async resolver helper.
🤖 Prompt for AI Agents
In services/triggerService.ts around lines 320 to 333, the code posts to
trigger.postURL without validating it, creating an SSRF risk; before calling
axios.post, parse trigger.postURL with the URL constructor and validate that
protocol is 'http:' or 'https:', then reject URLs whose hostname is an IP
literal in localhost/private ranges (IPv4: 127.0.0.0/8, 10.0.0.0/8,
172.16.0.0/12, 192.168.0.0/16; IPv6: ::1, fc00::/7, fe80::/10) or exactly
'localhost' or other clearly internal hostnames; if host is a hostname (not an
IP literal) and you want to avoid DNS-based SSRF later, fail fast or mark for
async DNS resolution (follow-up) — ensure this validation runs before building
params/parsed and before axios.post, and on invalid URLs set isError/accepted
appropriately and return/log a clear error without performing the HTTP request.
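The checks above can be exercised standalone; the sketch below wraps them in a hypothetical `isForbiddenPostUrl` helper (literal-IP checks only; hostnames resolving to private IPs would still need DNS resolution, as noted in the comment):

```typescript
// Hypothetical standalone version of the SSRF guard suggested above.
// Returns true when the URL should be rejected before any HTTP request.
function isForbiddenPostUrl (rawUrl: string): boolean {
  let u: URL
  try {
    u = new URL(rawUrl)
  } catch {
    return true // unparsable URLs are rejected
  }
  if (!['http:', 'https:'].includes(u.protocol)) return true
  // WHATWG URL keeps brackets around IPv6 hostnames; strip them for matching.
  const host = u.hostname.toLowerCase().replace(/^\[|\]$/g, '')
  if (/^(localhost|127\.|::1)/.test(host)) return true
  if (['0.0.0.0', '169.254.169.254'].includes(host)) return true
  // RFC 1918 private IPv4 ranges
  return /^10\./.test(host) || /^192\.168\./.test(host) || /^172\.(1[6-9]|2\d|3[0-1])\./.test(host)
}

console.log(isForbiddenPostUrl('https://example.com/hook')) // false
console.log(isForbiddenPostUrl('http://localhost:3000/x')) // true
```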
```ts
  accepted = Array.isArray(resp.accepted) && resp.accepted.includes(trigger.emails)
} catch (err: any) {
  isError = true
```
🛠️ Refactor suggestion
Email acceptance check fails for multiple recipients.
resp.accepted is an array of individual addresses; comparing against the entire trigger.emails string may always fail.
Apply this diff:
```diff
-      data = { email: trigger.emails, responseData: resp.response }
-      accepted = Array.isArray(resp.accepted) && resp.accepted.includes(trigger.emails)
+      data = { email: trigger.emails, responseData: resp.response }
+      const recipients = trigger.emails.split(/[ ,;]+/).map(s => s.trim()).filter(Boolean)
+      accepted = Array.isArray(resp.accepted) && recipients.some(r => resp.accepted.includes(r))
```
🤖 Prompt for AI Agents
In services/triggerService.ts around lines 371-373, the code checks
resp.accepted.includes(trigger.emails) which fails when there are multiple
recipients; update the logic to handle both string and array recipient inputs by
normalizing trigger.emails into an array (if it's a comma-separated string,
split and trim; if it's already an array, use it), then set accepted =
Array.isArray(resp.accepted) && triggerEmailsArray.some(addr =>
resp.accepted.includes(addr)); keep data = { email: trigger.emails,
responseData: resp.response } as-is.
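The splitting logic can be demonstrated standalone (the sample addresses and the simulated nodemailer-style `accepted` array are made up for illustration):

```typescript
// trigger.emails may hold several addresses separated by spaces, commas, or semicolons.
const emails = 'a@example.com, b@example.com;c@example.com'
const recipients = emails.split(/[ ,;]+/).map(s => s.trim()).filter(Boolean)

// Mail transports typically report acceptance per individual address,
// so comparing against the whole joined string would never match.
const accepted = ['b@example.com']
const anyAccepted = recipients.some(r => accepted.includes(r))

console.log(recipients.length) // 3
console.log(anyAccepted) // true
```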
```ts
const quoteSlug = SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId]
;(postTaskQueueByUser[user.id] ??= []).push({
  run: makePostTask(trigger, tx, currency, values[quoteSlug].toString(), logs, address),
  triggerId: trigger.id,
  actionType: 'PostData'
})
```
🛠️ Refactor suggestion
Default preferred currency slug to 'usd' if mapping is missing.
Avoid runtime crash when preferredCurrencyId is unexpected.
Apply this diff:
```diff
-  const quoteSlug = SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId]
+  const quoteSlug = SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId] ?? 'usd'
```
🤖 Prompt for AI Agents
In services/triggerService.ts around lines 540 to 546, the code retrieves
quoteSlug with SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId] but doesn’t
handle a missing mapping; default quoteSlug to 'usd' when the lookup returns
falsy and use that slug when accessing values so the push uses values[quoteSlug]
safely, e.g., assign quoteSlug =
SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId] || 'usd' before creating the
task.
```ts
export async function runAsyncInBatches (taskName: string, tasks: Array<() => Promise<void>>, batchSize: number): Promise<void> {
  console.log(`[${batchSize}-PARALLEL: ${taskName}] Will run ${tasks.length} tasks.`)
  for (let i = 0; i < tasks.length; i += batchSize) {
    const slice = tasks.slice(i, i + batchSize)
    console.log(`[${batchSize}-PARALLEL: ${taskName}] Running batch of ${slice.length} tasks....`)
    await Promise.all(slice.map(async fn => await fn()))
  }
}
```
Guard against batchSize <= 0 to prevent infinite loop
With batchSize 0, i += batchSize never progresses. Add validation.
```diff
 export async function runAsyncInBatches (taskName: string, tasks: Array<() => Promise<void>>, batchSize: number): Promise<void> {
+  if (!Number.isInteger(batchSize) || batchSize <= 0) {
+    throw new Error(`[${taskName}] batchSize must be a positive integer`)
+  }
   console.log(`[${batchSize}-PARALLEL: ${taskName}] Will run ${tasks.length} tasks.`)
   for (let i = 0; i < tasks.length; i += batchSize) {
     const slice = tasks.slice(i, i + batchSize)
     console.log(`[${batchSize}-PARALLEL: ${taskName}] Running batch of ${slice.length} tasks....`)
     await Promise.all(slice.map(async fn => await fn()))
   }
 }
```
🤖 Prompt for AI Agents
In utils/index.ts around lines 240 to 247 the loop increments i by batchSize
which will never advance if batchSize <= 0; add an upfront validation that
batchSize is a positive integer and fail-fast. Specifically, at the top of the
function validate Number.isInteger(batchSize) && batchSize > 0 and if not, throw
a clear Error/TypeError (e.g. "batchSize must be a positive integer"), or
alternatively coerce to 1, then proceed with the existing batching logic.
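A self-contained sketch of the guarded runner (console logging from the original dropped for brevity; the demo task list is illustrative):

```typescript
// Batch runner: each slice runs concurrently, slices run sequentially.
// Validates batchSize up front so a 0 or negative value fails fast
// instead of looping forever.
async function runAsyncInBatches (taskName: string, tasks: Array<() => Promise<void>>, batchSize: number): Promise<void> {
  if (!Number.isInteger(batchSize) || batchSize <= 0) {
    throw new Error(`[${taskName}] batchSize must be a positive integer`)
  }
  for (let i = 0; i < tasks.length; i += batchSize) {
    const slice = tasks.slice(i, i + batchSize)
    await Promise.all(slice.map(async fn => await fn()))
  }
}

async function demo (): Promise<void> {
  const order: number[] = []
  const tasks = [1, 2, 3, 4, 5].map(n => async () => { order.push(n) })
  await runAsyncInBatches('demo', tasks, 2) // batches: [1,2], [3,4], [5]
  console.log(order.length) // 5

  let threw = false
  try { await runAsyncInBatches('demo', tasks, 0) } catch { threw = true }
  console.log(threw) // true: fails fast instead of hanging
}
void demo()
```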
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
services/chronikService.ts (1)
1052-1054: Add missing `networkSlug` to unit test call
tests/unittests/chronikService.test.ts:706 – change `await client.getTransactionDetails('tx123')` to
`await client.getTransactionDetails('tx123', '<networkSlug>')`
♻️ Duplicate comments (2)
services/addressService.ts (1)
275-287: Batch, normalize, and guard updateManyLastSynced to avoid oversized IN-lists and case mismatches

Re-raising a prior suggestion: add an early return on empty arrays, lowercase normalization, and chunking to prevent large IN queries from failing and to match stored lowercase addresses.
-export async function updateManyLastSynced (addressStringArray: string[]): Promise<void> { - await prisma.address.updateMany({ - where: { - address: { - in: addressStringArray - } - }, - data: { - syncing: false, - lastSynced: new Date() - } - }) -} +export async function updateManyLastSynced (addressStringArray: string[]): Promise<void> { + if (addressStringArray.length === 0) return + const addresses = addressStringArray.map(a => a.toLowerCase()) + const BATCH = 1000 + const now = new Date() + for (let i = 0; i < addresses.length; i += BATCH) { + const slice = addresses.slice(i, i + BATCH) + await prisma.address.updateMany({ + where: { address: { in: slice } }, + data: { syncing: false, lastSynced: now } + }) + } +}services/chronikService.ts (1)
742-819: Do not clear syncing mid-run; clear once in finally + update lastSynced only for successes
Clearing `syncing` inside the fetch/emit loop can re-enable mempool processing before commits finish, defeating `waitForSyncing`. Move all clears to a `finally` so they run once, and only set `lastSynced` for OK addresses. Unblock failures by clearing `syncing` without updating `lastSynced`.

 try {
   const pfx = `${this.CHRONIK_MSG_PREFIX}[PARALLEL FETCHING]`
   console.log(`${pfx} will fetch batches of ${INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY} addresses from chronik`)
   for await (const batch of this.fetchLatestTxsForAddresses(addresses)) {
     console.log(`${pfx} fetched batch of ${batch.txs.length} txs from chronik`)
     // count per address before committing
     for (const tx of batch.txs) {
       perAddrCount.set(tx.addressId, (perAddrCount.get(tx.addressId) ?? 0) + 1)
     }
     toCommit.push(...batch.txs)
-    await setSyncingBatch(batch.addressesSynced, false)
     if (toCommit.length >= DB_COMMIT_BATCH_SIZE) {
       console.log(`${this.CHRONIK_MSG_PREFIX} committing batch to DB — txs=${toCommit.length}`)
       const created = await createManyTransactions(toCommit)
       console.log(`${this.CHRONIK_MSG_PREFIX} committed — created=${created.length}`)
       toCommit = []
@@
   // build success map
   addresses.forEach(a => {
     successfulAddressesWithCount[a.address] = perAddrCount.get(a.id) ?? 0
   })
-  const okAddresses = addresses.filter(a => !(a.address in failedAddressesWithErrors))
-  await updateManyLastSynced(okAddresses.map(a => a.address))
 } catch (err: any) {
   console.error(`${this.CHRONIK_MSG_PREFIX}: ERROR in parallel sync: ${err.message as string}`)
   addresses.forEach(a => {
     if ((perAddrCount.get(a.id) ?? 0) === 0) {
       failedAddressesWithErrors[a.address] = err.stack ?? String(err)
     }
   })
- }
+ } finally {
+   const okAddresses = addresses.filter(a => !(a.address in failedAddressesWithErrors))
+   if (okAddresses.length > 0) {
+     await updateManyLastSynced(okAddresses.map(a => a.address))
+   }
+   const failed = addresses.filter(a => (a.address in failedAddressesWithErrors))
+   if (failed.length > 0) {
+     await setSyncingBatch(failed.map(a => a.address), false)
+   }
+ }
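Both of the suggestions above reduce to one primitive: processing an address list in bounded chunks, with addresses normalized to the stored lowercase form. A standalone sketch with illustrative helper names (`chunk`, `normalizeAddresses`) that are not part of this codebase:

```typescript
// Split an array into slices of at most `size` elements; used to keep
// SQL IN-lists bounded. Throws on non-positive sizes to fail fast.
function chunk<T> (items: T[], size: number): T[][] {
  if (!Number.isInteger(size) || size <= 0) {
    throw new Error('chunk size must be a positive integer')
  }
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size))
  }
  return out
}

// Normalize addresses the same way they are assumed to be stored (lowercase)
// and dedupe, so IN-list matching is not defeated by case or repeats.
function normalizeAddresses (addresses: string[]): string[] {
  return [...new Set(addresses.map(a => a.toLowerCase()))]
}
```

With these two helpers, the bulk update becomes one `updateMany` per chunk of `normalizeAddresses(input)`, which keeps query size bounded regardless of how many addresses a sync run touches.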
🧹 Nitpick comments (3)
services/addressService.ts (1)
289-299: LGTM: also clearing syncing on single-address updates
Setting `syncing: false` alongside `lastSynced` is consistent with the bulk path. Consider normalizing the lookup for robustness.

-      where: {
-        address: addressString
-      },
+      where: { address: addressString.toLowerCase() },

services/chronikService.ts (2)
331-338: Tighten typing for fetched page items
Use `Tx[]` instead of `any[]` to keep type-safety in the burst loop.

-      let pageTxs: any[] = []
+      let pageTxs: Tx[] = []
296-418: Generator design reads well; small note on addressesSynced semantics
Emitting `addressesSynced` as the last slice's addresses is fine if used only for post-slice bookkeeping. If stronger coupling is needed later (e.g., per-address progress), consider emitting the exact address set that produced the txs in that yield.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `services/addressService.ts` (1 hunks)
- `services/chronikService.ts` (9 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
services/chronikService.ts (7)
prisma-local/seeds/addresses.ts (2)
- `addresses` (1-122), `productionAddresses` (124-130)

constants/index.ts (6)
- `INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY` (282-282), `CHRONIK_FETCH_N_TXS_PER_PAGE` (280-280), `TX_EMIT_BATCH_SIZE` (283-283), `DB_COMMIT_BATCH_SIZE` (284-284), `NETWORK_IDS_FROM_SLUGS` (132-135), `MainNetworkSlugsType` (187-187)

services/transactionService.ts (3)
- `TransactionWithAddressAndPrices` (104-104), `createManyTransactions` (494-532), `connectAllTransactionsToPrices` (474-487)

services/addressService.ts (2)
- `setSyncingBatch` (316-327), `updateManyLastSynced` (275-287)

prisma-local/seeds/transactions.ts (1)
- `appendTxsToFile` (12-20)

ws-service/types.ts (1)
- `BroadcastTxData` (5-9)

services/triggerService.ts (1)
- `executeTriggersBatch` (500-607)
🪛 GitHub Actions: Pull Request Tests
services/chronikService.ts
[warning] 336-336: CHRONIK: page=0 failed: Invalid checksum for ecash:qqkv9wr69ry2p9l53lxp635va4h86wv435995w8p2h
🔇 Additional comments (4)
services/chronikService.ts (4)
109-113: Type addition looks good
`FetchedTxsBatch` shape is clear and minimal.
158-160: LGTM: lastSync seconds helper
Compact and avoids NaN on null.
954-959: Initialization handoff is clean
Propagating `setInitialized()` to both network clients avoids races in `waitForStart`.
1041-1043: Network partitioning for sync is correct
Filtering by `NETWORK_IDS_FROM_SLUGS[networkSlug]` before delegating is the right approach.
Actionable comments posted: 0
🧹 Nitpick comments (2)
prisma-local/migrations/20250910180000_transaction_addressid_timestamp_index/migration.sql (2)
2-4: Add online DDL options to avoid long locks on large Transaction tables
Use INPLACE and LOCK=NONE to minimize impact during deploys.

 ALTER TABLE `Transaction`
   ADD INDEX `Transaction_addressId_timestamp_idx`(`addressId`, `timestamp`),
-  DROP INDEX `Transaction_addressId_fkey`;
+  DROP INDEX `Transaction_addressId_fkey`,
+  ALGORITHM=INPLACE, LOCK=NONE;
3-3: Optional: Match index order to dominant sort (DESC) if applicable
If queries are predominantly `WHERE addressId = ? ORDER BY timestamp DESC`, consider defining the second key part as DESC on MySQL 8+.
Example:
ADD INDEX `Transaction_addressId_timestamp_idx`(`addressId`, `timestamp` DESC)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `prisma-local/migrations/20250910180000_transaction_addressid_timestamp_index/migration.sql` (1 hunks)
- `prisma-local/schema.prisma` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- prisma-local/schema.prisma
🔇 Additional comments (2)
prisma-local/migrations/20250910180000_transaction_addressid_timestamp_index/migration.sql (2)
1-4: Composite (addressId, timestamp) index + removal of single-column index looks goodThis should improve lookups and ordering by timestamp while retaining leftmost-prefix coverage for FK needs on addressId. Nice cleanup of redundant index.
2-4: Confirm `Transaction_addressId_fkey` is an index, not a foreign key constraint
The mysql client isn't available here; please run these queries in your MySQL instance to verify that `Transaction_addressId_fkey` exists only as an index and that your new composite index covers its leftmost column before dropping it:

-- Check for FK constraint:
SELECT CONSTRAINT_NAME
FROM information_schema.REFERENTIAL_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = '<your_db>'
  AND TABLE_NAME = 'Transaction'
  AND CONSTRAINT_NAME = 'Transaction_addressId_fkey';

-- Check for index:
SELECT INDEX_NAME, COLUMN_NAME, SEQ_IN_INDEX
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = '<your_db>'
  AND TABLE_NAME = 'Transaction'
  AND INDEX_NAME = 'Transaction_addressId_fkey'
ORDER BY SEQ_IN_INDEX;

-- Verify composite index:
SELECT INDEX_NAME, COLUMN_NAME, SEQ_IN_INDEX
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = '<your_db>'
  AND TABLE_NAME = 'Transaction'
  AND INDEX_NAME = 'Transaction_addressId_timestamp_idx'
ORDER BY SEQ_IN_INDEX;
…et in some cases.
…putaddr Fix payment triggers with <outputaddresses> set
Actionable comments posted: 0
♻️ Duplicate comments (6)
services/triggerService.ts (6)
577-581: Correct task label for email batch.
Use "EMAIL TRIGGERS" to reflect the action.

-      await runAsyncInBatches(
-        `${currency} POST TRIGGERS`,
+      await runAsyncInBatches(
+        `${currency} EMAIL TRIGGERS`,
         emailUserRunners.map(run => async () => { emailResults.push(await run()) }),
         TRIGGER_EMAIL_CONCURRENCY
       )
13-29: Use `include` for addresses to ensure hydration (prevents undefined at runtime).
Switch `paybutton.addresses` from `select` to `include` so Prisma consistently returns the relation. This also aligns with previous feedback.

 const triggerWithPaybuttonAndUserInclude = {
   paybutton: {
     include: {
       user: true,
-      addresses: {
-        select: {
-          address: {
-            select: {
-              address: true
-            }
-          }
-        }
-      }
+      addresses: {
+        include: {
+          address: { select: { address: true } }
+        }
+      }
     }
   }
 }
371-374: Fix email acceptance for multiple recipients (prevents under-counting credits).
Normalize recipients and check membership accordingly.

-      data = { email: trigger.emails, responseData: resp.response }
-      accepted = Array.isArray(resp.accepted) && resp.accepted.includes(trigger.emails)
+      data = { email: trigger.emails, responseData: resp.response }
+      const recipients = trigger.emails.split(/[ ,;]+/).map(s => s.trim()).filter(Boolean)
+      accepted = Array.isArray(resp.accepted) && recipients.some(r => resp.accepted.includes(r))
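The recipient-normalization logic in the diff can be isolated and unit-tested on its own. A sketch under the same assumptions (splitting on spaces, commas, or semicolons; helper names are illustrative, not part of `triggerService.ts`):

```typescript
// Split a recipient string on spaces, commas, or semicolons, dropping blanks.
function parseRecipients (emails: string): string[] {
  return emails.split(/[ ,;]+/).map(s => s.trim()).filter(Boolean)
}

// An email counts as accepted if the mailer's `accepted` list contains at
// least one of the parsed recipients; non-array input is treated as rejected.
function wasAccepted (accepted: unknown, emails: string): boolean {
  if (!Array.isArray(accepted)) return false
  const recipients = parseRecipients(emails)
  return recipients.some(r => accepted.includes(r))
}
```

Keeping this pure makes the under-counting scenario trivially testable: a multi-recipient string with one accepted address must still count as accepted.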
542-547: Default preferred currency slug to 'usd' to avoid undefined access.
Prevents runtime crash when mapping is missing.

-    const quoteSlug = SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId]
+    const quoteSlug = SUPPORTED_QUOTES_FROM_ID[user.preferredCurrencyId] ?? 'usd'
278-286: Fix crash: t.paybutton.addresses may be undefined (seen in pipeline).
Guard before iterating. This addresses "TypeError: t.paybutton.addresses is not iterable".

-  for (const t of triggers as TriggerWithPaybuttonAndUser[]) {
-    for (const conn of t.paybutton.addresses) {
+  for (const t of triggers as TriggerWithPaybuttonAndUser[]) {
+    const addrConns = Array.isArray(t.paybutton?.addresses) ? t.paybutton.addresses : []
+    for (const conn of addrConns) {
       const addr = conn.address.address
       const arr = triggersByAddress.get(addr) ?? []
       arr.push(t)
       triggersByAddress.set(addr, arr)
     }
   }
327-335: Add SSRF and redirect guards before axios.post.
Validate URL scheme/host and disable redirects to reduce SSRF risk.

-    const params = buildPostParams(trigger, tx, currency, valueStr, address)
-    const parsed = parseTriggerPostData({ userId: trigger.paybutton.user.id, postData: trigger.postData, postDataParameters: params })
-    const resp = await axios.post(trigger.postURL, parsed, { timeout: config.triggerPOSTTimeout })
+    // Basic SSRF guard
+    const u = new URL(trigger.postURL)
+    if (!['http:', 'https:'].includes(u.protocol)) throw new Error('Invalid postURL protocol')
+    const host = u.hostname.toLowerCase()
+    const forbiddenHost =
+      /^(localhost|127\.)/.test(host) ||
+      host === '::1' ||
+      host === '0.0.0.0' ||
+      host === '169.254.169.254' ||
+      /^10\./.test(host) || /^192\.168\./.test(host) || /^172\.(1[6-9]|2\d|3[0-1])\./.test(host)
+    if (forbiddenHost) throw new Error('Forbidden postURL host')
+
+    const params = buildPostParams(trigger, tx, currency, valueStr, address)
+    const parsed = parseTriggerPostData({ userId: trigger.paybutton.user.id, postData: trigger.postData, postDataParameters: params })
+    const resp = await axios.post(trigger.postURL, parsed, { timeout: config.triggerPOSTTimeout, maxRedirects: 0 })
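The host checks in the diff above can live in a small pure helper, which also makes them testable. A sketch with an illustrative name (`isForbiddenPostUrl`); note that a string-level check like this does not resolve DNS, so a hostname pointing at a private IP would still pass — it is a first line of defense, not a complete SSRF fix:

```typescript
// Deny non-http(s) schemes, unparsable URLs, loopback, unspecified,
// cloud metadata, and RFC 1918 private ranges.
function isForbiddenPostUrl (rawUrl: string): boolean {
  let u: URL
  try {
    u = new URL(rawUrl)
  } catch {
    return true // unparsable URLs are rejected outright
  }
  if (!['http:', 'https:'].includes(u.protocol)) return true
  const host = u.hostname.toLowerCase()
  return (
    /^(localhost|127\.)/.test(host) ||
    host === '[::1]' || host === '::1' ||
    host === '0.0.0.0' ||
    host === '169.254.169.254' ||
    /^10\./.test(host) ||
    /^192\.168\./.test(host) ||
    /^172\.(1[6-9]|2\d|3[0-1])\./.test(host)
  )
}
```

Combined with `maxRedirects: 0` on the axios call, this also closes the redirect trick where a public URL 302s to an internal one.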
🧹 Nitpick comments (1)
services/triggerService.ts (1)
469-471: Throttle per-user burst concurrency to avoid spikes.
Current code fires up to `limit` concurrent requests per user in one go. Consider chunking to a small parallelism (e.g., 25).

-      const results = await Promise.all(batch.map(async t => await t.run()))
-      for (const ok of results) if (ok) accepted += 1
+      const PER_USER_PARALLEL = 25
+      for (let j = 0; j < batch.length; j += PER_USER_PARALLEL) {
+        const slice = batch.slice(j, j + PER_USER_PARALLEL)
+        const results = await Promise.all(slice.map(async t => await t.run()))
+        for (const ok of results) if (ok) accepted += 1
+      }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `services/triggerService.ts` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
services/triggerService.ts (6)
ws-service/types.ts (2)
- `BroadcastTxData` (5-9), `SimplifiedTransaction` (11-34)

utils/validators.ts (3)
- `OpReturnData` (415-419), `EMPTY_OP_RETURN` (421-425), `parseTriggerPostData` (262-289)

constants/mail.ts (4)
- `MAIL_FROM` (38-38), `MAIL_SUBJECT` (39-39), `MAIL_HTML_REPLACER` (51-65), `getMailerTransporter` (15-36)

constants/index.ts (6)
- `TRIGGER_LOG_BATCH_SIZE` (289-289), `RESPONSE_MESSAGES` (8-108), `NETWORK_TICKERS_FROM_ID` (190-193), `SUPPORTED_QUOTES_FROM_ID` (161-164), `TRIGGER_POST_CONCURRENCY` (286-286), `TRIGGER_EMAIL_CONCURRENCY` (287-287)

services/transactionService.ts (1)
- `getTransactionValue` (13-30)

utils/index.ts (1)
- `runAsyncInBatches` (240-247)
🪛 GitHub Actions: Pull Request Tests
services/triggerService.ts
[error] 280-280: TypeError: t.paybutton.addresses is not iterable. fetchTriggersGroupedByAddress attempted to iterate over t.paybutton.addresses (triggerService.ts:280).
🔇 Additional comments (3)
services/triggerService.ts (3)
290-311: LGTM: POST payload now includes outputAddresses.
This unblocks JSON templating in parseTriggerPostData.

384-444: LGTM: batched logs + transactional decrements with clamping.
Good separation of concerns and safe clamping to available balances.

238-243: LGTM: env flag short-circuit for single-address path.
Keeps behavior consistent with batch path.
Actionable comments posted: 1
♻️ Duplicate comments (2)
services/chronikService.ts (2)
587-587: Fix missing input/output addresses in broadcast calls.
The `broadcastIncomingTx` method now correctly computes `inputAddresses` and `outputAddresses`, but there are still calls missing these parameters in the sync logic.
This matches the previously identified issue where `broadcastIncomingTx` calls need to include the proper input/output addresses.
Also applies to: 611-625, 644-644
700-821: Major refactoring of sync logic introduces complexity.
The parallel sync implementation in `syncAddresses` is a significant improvement but introduces several concerns:
- Syncing flag management: The method clears syncing flags per address batch (line 726) but doesn't have proper cleanup in error cases
Memory management: The `toCommit` array could grow large before batching
Error handling: Partial failures might leave some addresses in inconsistent states
- Trigger execution: Uses new batch execution which is good for performance
The syncing flag cleanup should be moved to a finally block to ensure flags are always cleared:
   try {
     // ... existing sync logic ...
   } catch (err: any) {
     console.error(`${this.CHRONIK_MSG_PREFIX}: ERROR in parallel sync: ${err.message as string}`)
     addresses.forEach(a => {
       if ((perAddrCount.get(a.id) ?? 0) === 0) {
         failedAddressesWithErrors[a.address] = err.stack ?? String(err)
       }
     })
+  } finally {
+    // Ensure all addresses have syncing cleared
+    const allAddresses = addresses.map(a => a.address)
+    await setSyncingBatch(allAddresses, false).catch(err =>
+      console.error(`Failed to clear syncing flags: ${err.message}`)
+    )
   }
🧹 Nitpick comments (1)
services/chronikService.ts (1)
302-395: Tighten stopping condition, stop silently swallowing per-address errors, and emit address markers only after their txs
Stopping condition: simplify/remove the hasReachedStoppingCondition flag — after computing newestTs/oldestTs, break immediately when newestTs < lastSyncedTimestampSeconds or oldestTs < lastSyncedTimestampSeconds to make loop exits deterministic. (services/chronikService.ts ~353–363)
Error handling: per-address failures are logged then ignored (getPaginatedTxs catch + Promise.all worker.catch). Capture/propagate or aggregate these errors (or record metrics) instead of swallowing. (services/chronikService.ts ~345–374)
Batching/markers: addressesSynced is yielded before leftover chronikTxs are flushed (chronikTxs slice loop ~379–382; addressesSynced yield ~386) — emit addressesSynced only after all txs for that batch have been yielded.
Context: INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY is defined at constants/index.ts:282 (512).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
services/chronikService.ts(12 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
services/chronikService.ts (7)
prisma-local/seeds/addresses.ts (2)
- `addresses` (1-122), `productionAddresses` (124-130)

constants/index.ts (6)
- `INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY` (282-282), `CHRONIK_FETCH_N_TXS_PER_PAGE` (280-280), `TX_EMIT_BATCH_SIZE` (283-283), `DB_COMMIT_BATCH_SIZE` (284-284), `NETWORK_IDS_FROM_SLUGS` (132-135), `MainNetworkSlugsType` (187-187)

services/transactionService.ts (3)
- `TransactionWithAddressAndPrices` (104-104), `createManyTransactions` (494-532), `connectAllTransactionsToPrices` (474-487)

ws-service/types.ts (1)
- `BroadcastTxData` (5-9)

services/addressService.ts (2)
- `setSyncingBatch` (316-327), `updateManyLastSynced` (275-287)

prisma-local/seeds/transactions.ts (1)
- `appendTxsToFile` (12-20)

services/triggerService.ts (1)
- `executeTriggersBatch` (501-608)
🔇 Additional comments (7)
services/chronikService.ts (7)
4-4: Import organization looks good with new constants and utilities.
The imports correctly include the new concurrency constants and batch utilities needed for the parallel sync implementation.

109-114: Well-defined interfaces for batch processing.
The `ChronikTxWithAddress` and `FetchedTxsBatch` interfaces clearly define the data structures for the new parallel fetching mechanism.

159-162: Clean helper method for timestamp conversion.
The `getLastSyncTs` method properly handles null lastSynced values and converts to seconds for chronik comparison.

176-178: New initialization control method.
The `setInitialized` method provides explicit control over the initialization state, which is good for coordinating multi-network initialization.

931-931: Good initialization coordination.
The `setInitialized` method properly coordinates initialization across multiple network clients, ensuring consistent state management.
Also applies to: 940-940, 945-949

1032-1033: Correct network filtering in sync operations.
The network filtering logic correctly uses `NETWORK_IDS_FROM_SLUGS` for consistent network identification.

1043-1045: Enhanced API with network parameter.
The `getTransactionDetails` method now correctly accepts a `networkSlug` parameter for proper network delegation.
const rawByHash = new Map(commitPairs.map(p => [p.raw.txid, p.raw]))
const triggerBatch: BroadcastTxData[] = []
for (const createdTx of createdTxs) {
  const raw = rawByHash.get(createdTx.hash)
  if (raw == null) continue
  const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx)
  triggerBatch.push(bd)
}
if (runTriggers && triggerBatch.length > 0) {
  await executeTriggersBatch(triggerBatch, this.networkId)
}
Potential data inconsistency in trigger execution.
The trigger batch building logic has a potential issue where it tries to get raw transaction data from rawByHash after toCommit has been cleared (line 778). This could result in missing trigger executions.
Fix the data flow issue in the final flush section:
if (createdTxs.length > 0) {
- const rawByHash = new Map(toCommit.map(p => [p.raw.txid, p.raw]))
+ const rawByHash = new Map(toCommit.map(p => [p.row.hash, p.raw]))
const triggerBatch: BroadcastTxData[] = []
for (const createdTx of createdTxs) {
const raw = rawByHash.get(createdTx.hash)
if (raw == null) continue
const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx)
triggerBatch.push(bd)
}
if (runTriggers && triggerBatch.length > 0) {
await executeTriggersBatch(triggerBatch, this.networkId)
}
}
Also, move the rawByHash creation before clearing toCommit on line 778.
Also applies to: 786-796
🤖 Prompt for AI Agents
In services/chronikService.ts around lines 758 to 768, the trigger batch is
built using rawByHash after the toCommit collection is cleared later (line 778),
which can cause missing raw transaction lookups; move the creation of rawByHash
to before toCommit is cleared so it captures all raw entries, then build
triggerBatch from that map and call executeTriggersBatch; apply the same change
to the similar final-flush section around lines 786-796 so rawByHash is
constructed prior to any clearing of toCommit and used to populate triggerBatch
before running triggers.
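The underlying bug pattern is generic: any lookup structure derived from a mutable buffer must be snapshotted before that buffer is reset. A minimal sketch, where the `CommitPair` shape is an assumption for illustration, not the real type from `chronikService.ts`:

```typescript
interface RawTx { txid: string }
interface CommitPair { hash: string, raw: RawTx }

// Snapshot-derived lookups must be built BEFORE the shared buffer is reset;
// building the Map after clearing would always produce empty lookups.
function flushWithSnapshot (toCommit: CommitPair[]): Map<string, RawTx> {
  const rawByHash = new Map<string, RawTx>(
    toCommit.map(p => [p.hash, p.raw] as [string, RawTx])
  )
  toCommit.length = 0 // clear the shared buffer only after the snapshot
  return rawByHash
}
```

In the reviewed code the same ordering applies: copy `toCommit` (or build `rawByHash` from it) first, clear it second, then resolve raw transactions for the trigger batch from the snapshot.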
76269c4 to 023b909 (compare)
Actionable comments posted: 2
🧹 Nitpick comments (2)
services/chronikService.ts (2)
365-368: Unreachable condition; safe to remove.
This check can never run because pageTxs.length === 0 breaks earlier.

-        nextBurstBasePageIndex += 1
-        if (pageTxs.length === 0 && oldestTs < lastSyncedTimestampSeconds) {
-          hasReachedStoppingCondition = true
-        }
+        nextBurstBasePageIndex += 1
757-767: Group OldTx broadcasts per address to reduce WS churn.
Create one OldTx payload per address from createdTxs instead of one per tx: group createdTxs by createdTx.address.address and emit a single BroadcastTxData per address (adjust executeTriggersBatch / executeAddressTriggers if payload shape changes).
Call sites to update: services/chronikService.ts — lines ~585–589, 611–613 (method), 642–646, 761–765, 789–793.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `services/chronikService.ts` (12 hunks)
- `tests/unittests/triggerService.test.ts` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unittests/triggerService.test.ts
🧰 Additional context used
🧬 Code graph analysis (1)
services/chronikService.ts (7)
prisma-local/seeds/addresses.ts (2)
- `addresses` (1-122), `productionAddresses` (124-130)

constants/index.ts (6)
- `INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY` (282-282), `CHRONIK_FETCH_N_TXS_PER_PAGE` (280-280), `TX_EMIT_BATCH_SIZE` (283-283), `DB_COMMIT_BATCH_SIZE` (284-284), `NETWORK_IDS_FROM_SLUGS` (132-135), `MainNetworkSlugsType` (187-187)

services/transactionService.ts (2)
- `TransactionWithAddressAndPrices` (104-104), `createManyTransactions` (494-532)

ws-service/types.ts (1)
- `BroadcastTxData` (5-9)

services/addressService.ts (2)
- `setSyncingBatch` (316-327), `updateManyLastSynced` (275-287)

prisma-local/seeds/transactions.ts (1)
- `appendTxsToFile` (12-20)

services/triggerService.ts (1)
- `executeTriggersBatch` (501-608)
🔇 Additional comments (8)
services/chronikService.ts (8)
398-398: LGTM: page size constantized.
159-162: LGTM: lastSync seconds helper.
945-949: LGTM: explicit initialization propagation.
1031-1034: LGTM: per‑network address routing.
723-731: Don't clear syncing flags mid-run; accumulate finished addresses and clear once.
Clearing in-loop re-enables mempool processing before history is committed, defeating waitForSyncing. Record finished addresses and clear at the end.
Apply:
@@
-  if (batch.addressesSynced.length > 0) {
-    // marcador de slice => desmarca syncing
-    await setSyncingBatch(batch.addressesSynced, false)
-    continue
-  }
+  if (batch.addressesSynced.length > 0) {
+    // record finished addresses; do not clear syncing mid-run
+    batch.addressesSynced.forEach(a => finishedAddresses.add(a))
+    continue
+  }

And declare the accumulator:

@@
-  const perAddrCount = new Map<string, number>()
-  addresses.forEach(a => perAddrCount.set(a.id, 0))
+  const perAddrCount = new Map<string, number>()
+  addresses.forEach(a => perAddrCount.set(a.id, 0))
+  const finishedAddresses = new Set<string>()
757-767: Initial sync is broadcasting as NewTx; should be OldTx.
Initial history commits must be stamped OldTx to preserve consumer semantics.
Apply:
@@
-    for (const createdTx of createdTxs) {
+    for (const createdTx of createdTxs) {
       const raw = rawByHash.get(createdTx.hash)
       if (raw == null) continue
-      const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx)
+      const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx, 'OldTx')
       triggerBatch.push(bd)
     }
815-821: Always clear syncing before return.
Ensure flags are cleared even on error paths; removes dependency on mid-run clears.
@@
   const total = Object.values(successfulAddressesWithCount).reduce((p, c) => p + c, 0)
   console.log(`${this.CHRONIK_MSG_PREFIX} (PARALLEL) Finished syncing ${total} txs for ${addresses.length} addresses with ${failed.length} errors.`)
   console.timeEnd(`${this.CHRONIK_MSG_PREFIX} syncAddresses`)
+  await setSyncingBatch(addresses.map(a => a.address), false)
   return { failedAddressesWithErrors, successfulAddressesWithCount }
785-796: Final flush uses cleared buffer; triggers never get raw txs.
rawByHash is built from toCommit after it's emptied; raw lookups are always undefined.
Apply:
@@
-  if (toCommit.length > 0) {
-    const rows = toCommit.map(p => p.row)
+  if (toCommit.length > 0) {
+    const commitPairs = toCommit.slice()
+    const rows = commitPairs.map(p => p.row)
     const createdTxs = await createManyTransactions(rows)
     console.log(`${this.CHRONIK_MSG_PREFIX} committed FINAL — created=${createdTxs.length}`)
     toCommit = []
@@
-    if (createdTxs.length > 0) {
-      const rawByHash = new Map(toCommit.map(p => [p.raw.txid, p.raw]))
+    if (createdTxs.length > 0) {
+      const rawByHash = new Map(commitPairs.map(p => [p.raw.txid, p.raw]))
       const triggerBatch: BroadcastTxData[] = []
       for (const createdTx of createdTxs) {
         const raw = rawByHash.get(createdTx.hash)
         if (raw == null) continue
-        const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx)
+        const bd = this.broadcastIncomingTx(createdTx.address.address, raw, createdTx, 'OldTx')
         triggerBatch.push(bd)
       }
       if (runTriggers && triggerBatch.length > 0) {
         await executeTriggersBatch(triggerBatch, this.networkId)
       }
     }
private broadcastIncomingTx (addressString: string, chronikTx: Tx, createdTx: TransactionWithAddressAndPrices): BroadcastTxData {
  const broadcastTxData: BroadcastTxData = {} as BroadcastTxData
  broadcastTxData.address = addressString
  broadcastTxData.messageType = 'NewTx'
  const inputAddresses = this.getSortedInputAddresses(chronikTx)
  const outputAddresses = this.getSortedOutputAddresses(chronikTx)
  const newSimplifiedTransaction = getSimplifiedTrasaction(createdTx, inputAddresses, outputAddresses)
🛠️ Refactor suggestion
Parameterize broadcast type; reuse for both NewTx and OldTx.
Let callers pick the message type; default remains NewTx for mempool/blocks.
Apply:
- private broadcastIncomingTx (addressString: string, chronikTx: Tx, createdTx: TransactionWithAddressAndPrices): BroadcastTxData {
+ private broadcastIncomingTx (
+ addressString: string,
+ chronikTx: Tx,
+ createdTx: TransactionWithAddressAndPrices,
+ messageType: BroadcastTxData['messageType'] = 'NewTx'
+ ): BroadcastTxData {
const broadcastTxData: BroadcastTxData = {} as BroadcastTxData
broadcastTxData.address = addressString
- broadcastTxData.messageType = 'NewTx'
+ broadcastTxData.messageType = messageType
const inputAddresses = this.getSortedInputAddresses(chronikTx)
  const outputAddresses = this.getSortedOutputAddresses(chronikTx)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
private broadcastIncomingTx (addressString: string, chronikTx: Tx, createdTx: TransactionWithAddressAndPrices): BroadcastTxData {
  const broadcastTxData: BroadcastTxData = {} as BroadcastTxData
  broadcastTxData.address = addressString
  broadcastTxData.messageType = 'NewTx'
  const inputAddresses = this.getSortedInputAddresses(chronikTx)
  const outputAddresses = this.getSortedOutputAddresses(chronikTx)
  const newSimplifiedTransaction = getSimplifiedTrasaction(createdTx, inputAddresses, outputAddresses)

private broadcastIncomingTx(
  addressString: string,
  chronikTx: Tx,
  createdTx: TransactionWithAddressAndPrices,
  messageType: BroadcastTxData['messageType'] = 'NewTx'
): BroadcastTxData {
  const broadcastTxData: BroadcastTxData = {} as BroadcastTxData
  broadcastTxData.address = addressString
  broadcastTxData.messageType = messageType
  const inputAddresses = this.getSortedInputAddresses(chronikTx)
  const outputAddresses = this.getSortedOutputAddresses(chronikTx)
  const newSimplifiedTransaction = getSimplifiedTrasaction(createdTx, inputAddresses, outputAddresses)
🤖 Prompt for AI Agents
In services/chronikService.ts around lines 611-617, the broadcastIncomingTx
function hardcodes messageType to 'NewTx' but should be parameterized so callers
can reuse it for 'OldTx'; change the signature to accept an optional messageType
parameter typed as 'NewTx'|'OldTx' with a default of 'NewTx', set
broadcastTxData.messageType = messageType, and update any callers that need to
send 'OldTx' to pass that value; ensure the BroadcastTxData type usage remains
compatible and update any call sites/tests to pass the new argument where
appropriate.
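The suggested refactor is the standard default-parameter pattern: existing mempool/block call sites keep getting 'NewTx' while the initial-sync path can pass 'OldTx' explicitly. A tiny sketch with types simplified from the real `BroadcastTxData` (the stub shape is an assumption):

```typescript
type BroadcastMessageType = 'NewTx' | 'OldTx'

interface BroadcastStub { address: string, messageType: BroadcastMessageType }

// Default parameter: callers that omit the argument are unchanged,
// while the initial-sync path can opt into 'OldTx'.
function makeBroadcast (address: string, messageType: BroadcastMessageType = 'NewTx'): BroadcastStub {
  return { address, messageType }
}
```

Because the default preserves the old behavior, only the initial-sync call sites (and their tests) need to change when adopting the new parameter.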
const okAddresses = addresses.filter(a => !(a.address in failedAddressesWithErrors))
await updateManyLastSynced(okAddresses.map(a => a.address))
} catch (err: any) {
Update lastSynced only for addresses that actually finished.
Use finishedAddresses to avoid falsely advancing lastSynced on addresses that errored mid-run.
- const okAddresses = addresses.filter(a => !(a.address in failedAddressesWithErrors))
+ const okAddresses = addresses.filter(a =>
+ finishedAddresses.has(a.address) && !(a.address in failedAddressesWithErrors)
+ )
await updateManyLastSynced(okAddresses.map(a => a.address))
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const okAddresses = addresses.filter(a => !(a.address in failedAddressesWithErrors))
await updateManyLastSynced(okAddresses.map(a => a.address))
} catch (err: any) {

const okAddresses = addresses.filter(a =>
  finishedAddresses.has(a.address) && !(a.address in failedAddressesWithErrors)
)
await updateManyLastSynced(okAddresses.map(a => a.address))
} catch (err: any) {
🤖 Prompt for AI Agents
In services/chronikService.ts around lines 804 to 806, the code updates
lastSynced using okAddresses which may include addresses that errored mid-run;
replace that call to updateManyLastSynced so it uses finishedAddresses (i.e.,
await updateManyLastSynced(finishedAddresses.map(a => a.address))) to ensure
only addresses that actually completed are advanced; ensure finishedAddresses is
defined and contains the addresses that finished before making the call.
We agreed we'll just merge this as-is.
Related to #1049
Description
Parallelizes initial sync of transactions, making it way faster. This includes the parallelization of price connections, trigger execution, chronik txs fetch per address and almost everything else in the pipeline.
Furthermore:
- Separates `emailCredits` from `postCredits`
- `undefined`
- Relation between `PayButton` and `UserProfile` (yes, this didn't exist until now)
- `dev` user seeded trigger
- `Array.flatMap`
- Run `make dev` or with `make dev-from-dump` using a production DB to test how it performs with a big DB
- ... to see how long it took to sync all `ecash`/`bitcoincash` addresses (it took 1m27s in the example above)
- Check the `TriggerLog` table in the db to see that logs are behaving as expected. We don't fire triggers when using prod DB locally, so this can be tested when running the app locally with the seeded DB.
New Features
Improvements
Behavior Changes
Documentation