Added MegaETH and updated subgraphs #2372
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

The SIR adapter has been refactored to add MegaETH support with new configuration (chainId 4326, RPC, DEX parameters), rework price retrieval logic to use V3-style DEX pools, and change the data structure to return prices keyed by chain-address pairs. Active-chain filtering now drives execution, and the lookback window is set to 30 days.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/adaptors/sir/index.js (3)
340-342: Sequential subgraph calls for fee data will be slow with many vaults.
fetchFeesForVault is awaited inside a for loop (one HTTP round-trip per vault, serially). With a growing number of vaults across three chains, this becomes the dominant latency bottleneck. Consider batching vaults per chain into a single subgraph query (fetching fees for all vault IDs at once), or at minimum parallelizing with Promise.all (with concurrency control if needed).

♻️ Sketch: batch fee queries per chain
One approach — restructure the fee query to accept multiple vault IDs:
```graphql
query getMultiVaultFees($vaultIds: [Bytes!]!, $timestampThreshold: BigInt!) {
  fees(where: { vaultId_in: $vaultIds, timestamp_gte: $timestampThreshold }) {
    vaultId
    lpApy
    timestamp
  }
}
```

Then group vaults by chain, make one query per chain, and distribute fee records back to the corresponding vault.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 340 - 342, The loop is calling fetchFeesForVault serially (one await per vault) causing high latency; refactor to batch or parallelize fee retrieval by implementing a new multi-vault fetch and replacing per-vault calls: group vaults by chain, create a batched subgraph query (e.g., a getMultiVaultFees that accepts vaultIds and timestampThreshold), call that once per chain to fetch all fees, then map returned fee records back to each vault and pass them into computeFeesApy; alternatively, if batching is not feasible immediately, parallelize current calls using Promise.all with concurrency control (throttle) around fetchFeesForVault to avoid overwhelming the subgraph.
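The batching suggested above can be sketched in JavaScript. This is a rough sketch, not the adaptor's actual code: `querySubgraph`, the `fees` entity shape, and `chainCfg.subgraphUrl` are assumptions taken from the review thread, and the request helper is passed in as a parameter to keep the sketch self-contained.

```javascript
// Hedged sketch of the batched fee query suggested in the review.
// `querySubgraph(url, query, variables)`, the `fees` entity shape, and
// `chainCfg.subgraphUrl` are assumptions, not verified adaptor code.
async function fetchFeesForVaults(chainCfg, vaultIds, timestampThreshold, querySubgraph) {
  const query = `
    query getMultiVaultFees($vaultIds: [Bytes!]!, $timestampThreshold: BigInt!) {
      fees(where: { vaultId_in: $vaultIds, timestamp_gte: $timestampThreshold }) {
        vaultId
        lpApy
        timestamp
      }
    }`;
  const res = await querySubgraph(chainCfg.subgraphUrl, query, {
    vaultIds,
    timestampThreshold: String(timestampThreshold),
  });
  // Re-group the flat fee list by vault id so each vault gets its records back
  const feesByVault = {};
  for (const fee of res.fees || []) {
    (feesByVault[fee.vaultId] = feesByVault[fee.vaultId] || []).push(fee);
  }
  return feesByVault;
}
```

One query per chain replaces one query per vault; the caller then looks up `feesByVault[vault.id]` inside the existing loop instead of awaiting a network call.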
122-126: Potential precision loss when converting sqrtPriceX96 to Number.

sqrtPriceX96 is a uint160 (up to ~1.46 × 10⁴⁸), but Number only has ~15–17 significant digits. For extreme price ratios (very large or very small), this conversion can silently produce inaccurate prices. Consider using ethers BigNumber arithmetic or a library like bignumber.js for the division and squaring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 122 - 126, The conversion of sqrtPriceX96 and Q96 to JS Number loses precision; replace the Number(...) + Math.pow approach by performing the division and squaring with arbitrary-precision arithmetic: use ethers.BigNumber or a decimal library (e.g., bignumber.js or decimal.js) to compute ratio = sqrtPriceX96.div(Q96) or ratio = new BigNumber(sqrtPriceX96.toString()).div(Q96.toString()), then compute rawPrice = ratio.pow(2) (using the library's methods) and only convert to a JS Number if you validate it's within safe range; update the code around sqrtPriceX96, Q96 and rawPrice to use the chosen big-number API and return/emit a string or BigNumber if full precision is required.
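To illustrate the precision point, here is a minimal sketch (not the adaptor's code) that keeps the squaring in BigInt and converts to Number only at the end. The 10¹² scale factor is an arbitrary precision knob chosen for this sketch.

```javascript
// Minimal sketch: derive a V3-style pool price from sqrtPriceX96 using
// BigInt so precision is only lost in the final Number conversion.
// SCALE (10^12) is an arbitrary choice for this sketch.
const Q96 = 2n ** 96n;
const SCALE = 10n ** 12n;

function priceFromSqrtX96(sqrtPriceX96) {
  const sqrt = BigInt(sqrtPriceX96);
  // price = (sqrtPriceX96 / 2^96)^2, computed as sqrt^2 * SCALE / 2^192
  // entirely in BigInt before the lossy conversion
  const scaled = (sqrt * sqrt * SCALE) / (Q96 * Q96);
  return Number(scaled) / Number(SCALE);
}
```

A real implementation would still need to adjust for token decimal differences; this sketch only addresses the Q96 fixed-point conversion.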
138-140: Silent error swallowing makes debugging difficult.

Returning 0 is a safe fallback, but with zero logging here (and the summary notes that console.log was removed from error paths), operators won't have visibility into why SIR rewards APY drops to zero for a chain. Consider at least logging the chain key and error message at a warn level.

Suggested minimal logging
```diff
  } catch (error) {
+   console.error(`fetchSirPriceFromDex failed for ${chainCfg.key}: ${error.message}`);
    return 0;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 138 - 140, The catch block currently swallows errors (catch (error) { return 0; })—change it to log the failure at warn level before returning 0: use the module's existing logger (the same logger used elsewhere in src/adaptors/sir/index.js, e.g., logger.warn or processLogger.warn) to emit a message that includes the chain identifier (the variable used in this scope, e.g., chainKey or chain?.key) and the error message/stack, then return 0 as before; keep the log concise and use warn level so operators can see why SIR rewards fell to zero.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adaptors/sir/index.js`:
- Around line 5-10: ETHEREUM_SUBGRAPH_URL is incorrectly set to the same
endpoint as HYPER_SUBGRAPH_URL; update the ETHEREUM_SUBGRAPH_URL constant in
src/adaptors/sir/index.js to point to the Ethereum-specific subgraph (replace
"sir-hyperevm-yield-1/yield/gn" with the correct "sir-ethereum-yield-1/yield/gn"
or the proper Ethereum subgraph URL), ensuring ETHEREUM_SUBGRAPH_URL and
HYPER_SUBGRAPH_URL reference distinct, chain-correct endpoints.
- Line 349: The conversion to decimal using BigInt(vault.id) can throw if
vault.id lacks the "0x" prefix; update the logic around the vaultIdDecimal
assignment to defensively normalize vault.id (and handle null/undefined) by
ensuring a "0x" prefix before calling BigInt, or wrap the conversion in a
try/catch and retry with "0x" + vault.id on failure; refer to the vaultIdDecimal
variable and the vault.id input when making this change.
- Around line 328-333: The pricesByAddress map returned from utils.getPrices is
being used with raw addresses (collateralAddress derived from
vault.collateralToken.id and chainCfg.nativeWrapped) which loses chain context
and may cause cross-chain collisions; update the lookup to use chain-prefixed
keys consistent with getPrices (e.g., build keys like
`${chainIdOrName}:${address}` before indexing pricesByAddress) or change
utils.getPrices to return a nested map keyed by chain then address so you can
safely reference pricesByAddress[chain][address]; locate usages around
collateralAddress, nativePriceUsd, vault.collateralToken.id and
chainCfg.nativeWrapped and ensure all lookups preserve the same chain-aware key
format returned by getPrices.
```diff
- const vaultIdDecimal = parseInt(vault.id, 16);
+ const vaultIdDecimal = BigInt(vault.id).toString();
  const sirRewardsApy = computeSirRewardsApy(vault, sirPriceInCollateral, sirAddress);
```
Good switch from parseInt to BigInt for large vault IDs — but add a safety net for missing 0x prefix.
BigInt("0x1a") works, but BigInt("1a") throws a SyntaxError. If any subgraph ever returns an ID without the 0x prefix, this crashes the entire adaptor. A small guard avoids that:
🛡️ Suggested defensive conversion
```diff
- const vaultIdDecimal = BigInt(vault.id).toString();
+ const vaultIdDecimal = BigInt(
+   vault.id.startsWith("0x") ? vault.id : `0x${vault.id}`
+ ).toString();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const vaultIdDecimal = BigInt(
  vault.id.startsWith("0x") ? vault.id : `0x${vault.id}`
).toString();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adaptors/sir/index.js` at line 349, The conversion to decimal using
BigInt(vault.id) can throw if vault.id lacks the "0x" prefix; update the logic
around the vaultIdDecimal assignment to defensively normalize vault.id (and
handle null/undefined) by ensuring a "0x" prefix before calling BigInt, or wrap
the conversion in a try/catch and retry with "0x" + vault.id on failure; refer
to the vaultIdDecimal variable and the vault.id input when making this change.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/adaptors/sir/index.js (1)
338-347: ⚠️ Potential issue | 🔴 Critical: Price lookups are completely broken — adaptor will return zero pools.

fetchPrices() (line 304) returns { pricesByChainAddress } with chain-prefixed keys like "ethereum:0xabc...", but lines 344 and 347 access prices.pricesByAddress (wrong property name) with raw addresses (missing chain prefix). This means:

- prices.pricesByAddress is undefined → every .pricesByAddress[...] lookup returns undefined → falls back to 0
- All TVLs compute to 0 → every vault is skipped at line 353
- The adaptor returns an empty array — no pools are yielded at all
🐛 Proposed fix
```diff
  const collateralAddress = vault.collateralToken.id.toLowerCase();
- const collateralPriceUsd = prices.pricesByAddress[collateralAddress] || 0;
+ const collateralPriceUsd = prices.pricesByChainAddress[`${chainCfg.priceKey}:${collateralAddress}`] || 0;
  const nativePriceUsd =
-   prices.pricesByAddress[chainCfg.nativeWrapped.toLowerCase()] || 0;
+   prices.pricesByChainAddress[`${chainCfg.priceKey}:${chainCfg.nativeWrapped.toLowerCase()}`] || 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 338 - 347, The price lookups in the allVaults loop are using the wrong property and unprefixed addresses: replace usages of prices.pricesByAddress when computing collateralPriceUsd and nativePriceUsd with prices.pricesByChainAddress and look up keys using the chain-prefixed form (e.g., `${chainKey}:${address.toLowerCase()}`) so that collateralPriceUsd = prices.pricesByChainAddress[`${chainKey}:${vault.collateralToken.id.toLowerCase()}`] || 0 and nativePriceUsd = prices.pricesByChainAddress[`${chainKey}:${chainCfg.nativeWrapped.toLowerCase()}`] || 0; update any related references in this block (the for-loop over allVaults, variables collateralPriceUsd and nativePriceUsd) to use the corrected lookup.
🧹 Nitpick comments (3)
src/adaptors/sir/index.js (3)
293-302: Consider batching all price lookups into a single getPrices call.

Currently, utils.getPrices is called once per chain (sequentially). Since coin identifiers are already chain-prefixed (ethereum:0x..., megaeth:0x...), you can flatten all coins into a single array and make one call, reducing API round-trips.

💡 Suggested optimization
```diff
- for (const [pk, coinSet] of coinsByPriceKey.entries()) {
-   const res = await utils.getPrices(Array.from(coinSet));
-
-   // res.pricesByAddress is flat, so re-key it back to pk scoped keys
-   for (const coin of coinSet) {
-     const [, address] = coin.split(":");
-     const p = res.pricesByAddress[address] || 0;
-     pricesByChainAddress[`${pk}:${address}`] = p;
-   }
- }
+ const allCoins = [];
+ for (const coinSet of coinsByPriceKey.values()) {
+   allCoins.push(...coinSet);
+ }
+ const res = await utils.getPrices(Array.from(new Set(allCoins)));
+ for (const coin of allCoins) {
+   const [, address] = coin.split(":");
+   const p = res.pricesByAddress[address] || 0;
+   pricesByChainAddress[coin] = p;
+ }
```

Note: If you batch this way, the cross-chain collision risk in res.pricesByAddress (a flat address-only map) reappears. You'd need to use res.pricesBySymbol or res.coins (the full chain:address keyed map) instead to safely resolve per-chain prices.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 293 - 302, The loop calls utils.getPrices per pk; instead flatten all coin identifiers from coinsByPriceKey (each coin already "chain:address") into one array, call utils.getPrices once, then populate pricesByChainAddress by reading the chain-prefixed entries from the single result (use res.coins or res.pricesBySymbol instead of res.pricesByAddress to avoid address-only collisions) and map back to `${pk}:${address}` keys; update references to coinsByPriceKey, utils.getPrices, pricesByChainAddress, and res.pricesByAddress/res.coins accordingly.
138-140: Silent error swallowing hinders debugging.

When this catch returns 0 without logging, RPC failures, ABI mismatches, or misconfigured factory addresses become invisible. Consider logging a warning so issues can be diagnosed in production.

💡 Suggested improvement
```diff
  } catch (error) {
+   console.error(`[SIR] fetchSirPriceFromDex failed for ${chainCfg.key}:`, error.message);
    return 0;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 138 - 140, The catch block in src/adaptors/sir/index.js currently swallows errors by returning 0; update that catch (the one that returns 0) to log the error before returning the fallback so failures are visible in production—use the module's logger (e.g., logger.warn or processLogger.warn) or console.warn to emit a message including the error.message and error.stack (or the whole error object) and context (function name or input values), then return 0 as the fallback. Ensure you reference the same catch/return path so behavior stays the same except now errors are logged.
338-356: Sequential fee queries could become a bottleneck.

Each vault issues its own await fetchFeesForVault(...) sequentially (line 355). With many vaults across three chains, this serializes all subgraph calls. Consider batching fee queries per chain in a single GraphQL request (querying all vault IDs at once), or using bounded concurrency (e.g., Promise.all with chunking) to reduce total wall time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 338 - 356, The loop over allVaults calls await fetchFeesForVault(...) sequentially (inside the for...of that references chainCfg and vault), which serializes subgraph calls; update this to batch or parallelize: group vault IDs by chainKey (from chainCfg) and replace per-vault fetchFeesForVault calls with a single batch fetch (e.g., create or reuse a fetchFeesForVaultBatch(chainCfg, vaultIds, timestampThreshold) that queries multiple vault IDs per chain) or implement bounded concurrency (use Promise.all over chunks or a p-limit/p-map style limiter) to call fetchFeesForVault for many vaults in parallel, then map results back to each vault before calling computeFeesApy; ensure you preserve use of timestampThreshold and that tvlUsd filtering (computeTvlUsd) still skips zero/invalid vaults before fetching fees.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/adaptors/sir/index.js`:
- Line 363: Guard against missing or invalid hex prefix before calling BigInt on
vault.id: check that vault.id is a non-empty string, normalize it to a hex
string by ensuring it starts with "0x" (e.g., if not, prepend "0x"), then pass
that normalized value to BigInt when computing vaultIdDecimal; also handle
null/undefined vault.id by throwing or returning a clear error/skip path. Update
the logic around the vaultIdDecimal assignment (the expression using vault.id
and the constant named vaultIdDecimal) to perform these validations and
normalization before converting to decimal.
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/adaptors/sir/index.js (4)
291-311: Minor: the native-wrapped loop iterates all vaults instead of unique chains, and getPrices calls are sequential.

Lines 291-297 loop over allVaults to add each chain's native wrapped token, but the first loop already grouped by priceKey. The native token could be added in the first loop (or you could iterate coinsByPriceKey.keys() / a deduplicated chain list). The Set deduplication makes this correct but slightly wasteful.

Also, the getPrices calls at line 302 are sequential per chain. With 3+ chains, Promise.all would cut latency:

♻️ Suggested parallel fetch
```diff
- for (const [pk, coinSet] of coinsByPriceKey.entries()) {
-   const res = await utils.getPrices(Array.from(coinSet));
-   const pricesByAddress = (res && res.pricesByAddress) || {};
-
-   for (const coin of coinSet) {
-     const parts = coin.split(":");
-     const address = parts.slice(1).join(":").toLowerCase();
-     const p = pricesByAddress[address] || 0;
-     pricesByChainAddress[`${pk}:${address}`] = p;
-   }
- }
+ const entries = Array.from(coinsByPriceKey.entries());
+ const results = await Promise.all(
+   entries.map(([, coinSet]) => utils.getPrices(Array.from(coinSet)))
+ );
+ entries.forEach(([pk, coinSet], idx) => {
+   const pricesByAddress = (results[idx] && results[idx].pricesByAddress) || {};
+   for (const coin of coinSet) {
+     const parts = coin.split(":");
+     const address = parts.slice(1).join(":").toLowerCase();
+     pricesByChainAddress[`${pk}:${address}`] = pricesByAddress[address] || 0;
+   }
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 291 - 311, The loop over allVaults redundantly adds nativeWrapped entries per vault instead of once per unique chain and the getPrices calls are done sequentially; fix by adding each chain's nativeWrapped when you initially populate coinsByPriceKey (use chainCfg.priceKey and chainCfg.nativeWrapped there) or iterate the deduplicated coinsByPriceKey.keys() instead of allVaults, and replace the sequential await in the for-loop over coinsByPriceKey with a parallel fetch: map each coinSet to a utils.getPrices promise, await Promise.all, then aggregate each result into pricesByChainAddress (referencing coinsByPriceKey, utils.getPrices, and pricesByChainAddress to locate where to change).
128-130: Precision loss when converting sqrtPriceX96 to Number.

sqrtPriceX96 can be up to ~2¹⁶⁰, far exceeding Number.MAX_SAFE_INTEGER (2⁵³). Converting via Number(sqrtPriceX96.toString()) silently truncates precision, which can produce an inaccurate SIR price — especially for token pairs with extreme price ratios.

For a yield display adaptor this is often tolerable, but if accuracy matters you can keep the computation in BigNumber or use BigInt arithmetic up to the final division.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 128 - 130, The conversion to Number loses precision for very large sqrtPriceX96; update the calculation that produces rawPrice to perform the arithmetic using BigInt or a BigNumber library instead of Number so you don't truncate sqrtPriceX96 or Q96. Locate the variables sqrtPriceX96, Q96 and rawPrice in src/adaptors/sir/index.js and compute the squared ratio in high-precision (e.g., use BigInt or an existing BigNumber instance, do the division and squaring in that type, then convert to Number only at the very end if needed), ensuring you preserve precision until the final result.
346-369: Sequential fetchFeesForVault inside the loop is an O(n) waterfall of HTTP requests.

Each vault triggers a separate subgraph query (line 368), executed one at a time. With dozens of vaults across 3 chains, this becomes the dominant latency bottleneck of the adaptor. Consider batching fee queries per chain (fetching all vault fees in a single subgraph query using an id_in filter) or at minimum using Promise.all with a concurrency limit.

♻️ Example: batch fee queries per chain
You could pre-fetch fees for all vaults of a chain in a single query before entering the per-vault loop:
```diff
+ // Pre-fetch fees for all vaults, grouped by chain
+ const feesByChainVault = {};
+ for (const chainCfg of activeChainCfgs) {
+   const chainVaults = allVaults
+     .filter(({ chainCfg: c }) => c.key === chainCfg.key)
+     .map(({ vault }) => vault.id);
+   // Batch query: fees for all vaults in one request
+   const query = `
+     query getAllFees($vaultIds: [Bytes!]!, $ts: BigInt!) {
+       fees(where: { vaultId_in: $vaultIds, timestamp_gte: $ts }) {
+         vaultId
+         lpApy
+         timestamp
+       }
+     }
+   `;
+   const { fees } = await querySubgraph(chainCfg.subgraphUrl, query, {
+     vaultIds: chainVaults,
+     ts: String(timestampThreshold),
+   });
+   for (const fee of (fees || [])) {
+     const key = `${chainCfg.key}:${fee.vaultId}`;
+     if (!feesByChainVault[key]) feesByChainVault[key] = [];
+     feesByChainVault[key].push(fee);
+   }
+ }
```

Then in the loop replace the per-vault fetch:

```diff
- const fees = await fetchFeesForVault(chainCfg, vault.id, timestampThreshold);
+ const fees = feesByChainVault[`${chainKey}:${vault.id}`] || [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 346 - 369, The loop over allVaults calls fetchFeesForVault sequentially, causing a waterfall of subgraph queries; fix by batching or parallelizing per-chain fee fetches: group vaults by chainKey (use chainCfg.key/chainCfg.priceKey and vault.id), perform one bulk fetch per chain (add or use a new fetchFeesForChain that queries fees with id_in for all vault ids) to build a map of vaultId->fees, then inside the per-vault loop replace the await fetchFeesForVault(...) call with a synchronous lookup from that map and computeFeesApy(fees); if batching per-chain is not feasible, use Promise.all with a concurrency limiter (e.g., p-limit) to call fetchFeesForVault in parallel instead of awaiting each iteration.
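If per-chain batching is not feasible, the bounded-concurrency fallback mentioned in the comment can be sketched without a dependency such as p-limit. This is a hedged sketch: `worker` stands in for a function like fetchFeesForVault, and the chunk size of 5 is an arbitrary assumption.

```javascript
// Chunked Promise.all as a simple concurrency bound. Each chunk waits
// for its slowest item (coarser than a true limiter like p-limit), but
// it caps in-flight requests and preserves result order.
async function mapWithConcurrency(items, worker, limit = 5) {
  const results = [];
  for (let i = 0; i < items.length; i += limit) {
    const chunk = items.slice(i, i + limit);
    results.push(...(await Promise.all(chunk.map(worker))));
  }
  return results;
}
```

Usage would look like `await mapWithConcurrency(allVaults, ({ chainCfg, vault }) => fetchFeesForVault(chainCfg, vault.id, timestampThreshold))`, with the results mapped back to vaults by index.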
145-147: Silent error swallowing hides RPC/contract failures.

Returning 0 without any logging means misconfigured RPC URLs, incorrect factory addresses, or network outages will silently zero out SIR reward APY with no trace in the logs for debugging.

♻️ Suggested: add a warning log
```diff
  } catch (error) {
+   console.error(`fetchSirPriceFromDex failed for ${chainCfg.key}:`, error.message);
    return 0;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 145 - 147, The catch block that currently does "catch (error) { return 0; }" silently swallows all errors; update that block to log a warning including the caught error and relevant context before returning 0. In the catch for the SIR APY routine (the catch around the code that uses RPC/factory addresses in src/adaptors/sir/index.js), call your module logger (or console.warn/console.error if no logger exists) with a message that includes the error (error.message or error.stack) and contextual identifiers such as rpcUrl, factoryAddress, network or the function name, then return 0 as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
---
Duplicate comments:
In `@src/adaptors/sir/index.js`:
- Line 380: The code calls BigInt(vault.id) directly (see vaultIdDecimal) which
throws if vault.id contains hex letters but lacks a "0x" prefix; fix by
normalizing vault.id before conversion: compute a local normalizedId (check
vault.id via startsWith('0x') and whether it contains hex letters like
/[a-fA-F]/) and if it contains hex letters but no "0x" prefix, prepend "0x" to
normalizedId, then use BigInt(normalizedId).toString() to set vaultIdDecimal;
update the code around vaultIdDecimal to use normalizedId instead of vault.id.
---
Nitpick comments:
In `@src/adaptors/sir/index.js`:
- Around line 291-311: The loop over allVaults redundantly adds nativeWrapped
entries per vault instead of once per unique chain and the getPrices calls are
done sequentially; fix by adding each chain's nativeWrapped when you initially
populate coinsByPriceKey (use chainCfg.priceKey and chainCfg.nativeWrapped
there) or iterate the deduplicated coinsByPriceKey.keys() instead of allVaults,
and replace the sequential await in the for-loop over coinsByPriceKey with a
parallel fetch: map each coinSet to a utils.getPrices promise, await
Promise.all, then aggregate each result into pricesByChainAddress (referencing
coinsByPriceKey, utils.getPrices, and pricesByChainAddress to locate where to
change).
- Around line 128-130: The conversion to Number loses precision for very large
sqrtPriceX96; update the calculation that produces rawPrice to perform the
arithmetic using BigInt or a BigNumber library instead of Number so you don't
truncate sqrtPriceX96 or Q96. Locate the variables sqrtPriceX96, Q96 and
rawPrice in src/adaptors/sir/index.js and compute the squared ratio in
high-precision (e.g., use BigInt or an existing BigNumber instance, do the
division and squaring in that type, then convert to Number only at the very end
if needed), ensuring you preserve precision until the final result.
- Around line 346-369: The loop over allVaults calls fetchFeesForVault
sequentially, causing a waterfall of subgraph queries; fix by batching or
parallelizing per-chain fee fetches: group vaults by chainKey (use
chainCfg.key/chainCfg.priceKey and vault.id), perform one bulk fetch per chain
(add or use a new fetchFeesForChain that queries fees with id_in for all vault
ids) to build a map of vaultId->fees, then inside the per-vault loop replace the
await fetchFeesForVault(...) call with a synchronous lookup from that map and
computeFeesApy(fees); if batching per-chain is not feasible, use Promise.all
with a concurrency limiter (e.g., p-limit) to call fetchFeesForVault in parallel
instead of awaiting each iteration.
- Around line 145-147: The catch block that currently does "catch (error) {
return 0; }" silently swallows all errors; update that block to log a warning
including the caught error and relevant context before returning 0. In the catch
for the SIR APY routine (the catch around the code that uses RPC/factory
addresses in src/adaptors/sir/index.js), call your module logger (or
console.warn/console.error if no logger exists) with a message that includes the
error (error.message or error.stack) and contextual identifiers such as rpcUrl,
factoryAddress, network or the function name, then return 0 as before.
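The Promise.all-with-a-concurrency-limiter option mentioned in the prompts above can be sketched without adding the p-limit dependency. `createLimiter` is an illustrative helper, not something that exists in this adapter:

```javascript
// Minimal promise concurrency limiter (illustrative alternative to p-limit).
// Wraps each task so that at most `maxConcurrent` run at once.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    // Promise.resolve().then(fn) also catches synchronous throws from fn
    Promise.resolve()
      .then(fn)
      .then(resolve, reject)
      .finally(() => {
        active--;
        next();
      });
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
}

// Hypothetical usage against the adapter's fetchFeesForVault:
// const limit = createLimiter(5);
// const allFees = await Promise.all(
//   allVaults.map(({ chainCfg, vault }) =>
//     limit(() => fetchFeesForVault(chainCfg, vault.id, timestampThreshold))
//   )
// );
```

Promise.all still preserves input order, so results line up with `allVaults` while in-flight subgraph requests stay capped.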
```js
          vault.collateralToken.symbol || "UNKNOWN"
        )}-${utils.formatSymbol(vault.debtToken.symbol || "UNKNOWN")}`;

        const poolMeta = `Leverage ratio: ${1 + 2 ** Number(vault.leverageTier)}`;
```
poolMeta will show "Leverage ratio: NaN" if leverageTier is missing or non-numeric.
Number(undefined) → NaN, so 2 ** NaN → NaN. A simple fallback avoids broken display text:
🛡️ Suggested guard

```diff
-        const poolMeta = `Leverage ratio: ${1 + 2 ** Number(vault.leverageTier)}`;
+        const tier = Number(vault.leverageTier);
+        const poolMeta = Number.isFinite(tier)
+          ? `Leverage ratio: ${1 + 2 ** tier}`
+          : `Leverage ratio: unknown`;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-        const poolMeta = `Leverage ratio: ${1 + 2 ** Number(vault.leverageTier)}`;
+        const tier = Number(vault.leverageTier);
+        const poolMeta = Number.isFinite(tier)
+          ? `Leverage ratio: ${1 + 2 ** tier}`
+          : `Leverage ratio: unknown`;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adaptors/sir/index.js` at line 387, The poolMeta string uses 1 + 2 **
Number(vault.leverageTier) which yields "NaN" if vault.leverageTier is missing
or non-numeric; fix by parsing and validating the tier first (e.g., const
tierNum = Number(vault.leverageTier); const tier = Number.isFinite(tierNum) ?
tierNum : 0) and then build poolMeta with 1 + 2 ** tier so the display never
becomes NaN and falls back to a sensible default. Reference: poolMeta and
vault.leverageTier.
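To make the failure mode concrete, here is a runnable sketch of the guard; `formatLeverage` and the minimal vault objects are illustrative stand-ins, not the adapter's real shapes:

```javascript
// Guarded version of the poolMeta computation discussed above
function formatLeverage(vault) {
  const tier = Number(vault.leverageTier);
  return Number.isFinite(tier)
    ? `Leverage ratio: ${1 + 2 ** tier}`
    : `Leverage ratio: unknown`;
}

// Unguarded template shows the broken display text
console.log(`Leverage ratio: ${1 + 2 ** Number(undefined)}`); // "Leverage ratio: NaN"

console.log(formatLeverage({ leverageTier: "-2" })); // "Leverage ratio: 1.25"
console.log(formatLeverage({})); // "Leverage ratio: unknown"
```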
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adaptors/.claude/settings.local.json`:
- Around line 1-17: Remove the developer-local Claude settings file
(settings.local.json) from version control and prevent it from being committed
again: add an appropriate .gitignore entry (e.g., settings.local.json or
.claude/settings.local.json) at the repository root or in the .claude directory,
run git rm --cached on the tracked settings.local.json to untrack it, and commit
the .gitignore and removal; ensure no other local-only config files are tracked
in the same directory.
```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(npm install:*)",
      "Bash(npm_config_adapter=sir npm run test:*)",
      "Bash(node:*)",
      "Bash(cat:*)",
      "Bash(git checkout:*)",
      "Bash(curl:*)",
      "Bash(npm_config_adapter=sir npm_config_fast=true npm run test)",
      "Bash(npx jest:*)"
    ],
    "deny": [],
    "ask": []
  }
}
```
Local AI tool config should not be committed to the repository.
src/adaptors/.claude/settings.local.json is a developer-local Claude AI tool settings file. It encodes personal Bash permission preferences and should be excluded via .gitignore rather than tracked in version control. Committing it leaks developer workflow context and risks unintended enforcement of these permissions if the file is loaded in a shared or CI environment.
Add a .gitignore entry at the repository root or in the src/adaptors/.claude/ directory:
```diff
+# Claude AI local settings
+.claude/settings.local.json
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adaptors/.claude/settings.local.json` around lines 1 - 17, Remove the
developer-local Claude settings file (settings.local.json) from version control
and prevent it from being committed again: add an appropriate .gitignore entry
(e.g., settings.local.json or .claude/settings.local.json) at the repository
root or in the .claude directory, run git rm --cached on the tracked
settings.local.json to untrack it, and commit the .gitignore and removal; ensure
no other local-only config files are tracked in the same directory.
0xkr3p left a comment
hey @Codonyat, thanks for the PR! I think you may have accidentally added a bunch of unnecessary files in your last commit, pls revert
has MegaETH been added to the TVL Adapter yet? If not, can you pls add
pls ensure the pool values do not change for existing pools, case must be exactly the same for these too otherwise the historical data is lost
Nope, I forgot about the TVL. Thanks, will add it now and fix this. 🙏

once this has been reviewed and merged by the team, i will approve and merge the yield adapter. have you confirmed the existing pool values are the same?

Yes
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/adaptors/sir/index.js (2)
348-369: O(n) sequential subgraph calls for fee fetching — consider pre-fetching.
`fetchFeesForVault` is awaited inside the `for` loop, issuing one subgraph request per vault. With many vaults across three chains this could be the dominant latency contributor. Pre-fetching all fees before the loop (either by parallelizing the calls with `Promise.all` or by extending the subgraph query to include fee events directly) would collapse the latency from O(vaults) to O(chains).
♻️ Proposed refactor — parallel pre-fetch
```diff
+  // Pre-fetch all fees in parallel before entering the vault loop
+  const feesMap = new Map();
+  await Promise.all(
+    allVaults.map(async ({ chainCfg, vault }) => {
+      const fees = await fetchFeesForVault(chainCfg, vault.id, timestampThreshold);
+      feesMap.set(`${chainCfg.key}:${vault.id}`, fees);
+    })
+  );
+
   for (const { chainCfg, vault } of allVaults) {
     // ... existing setup ...
-    const fees = await fetchFeesForVault(chainCfg, vault.id, timestampThreshold);
+    const fees = feesMap.get(`${chainCfg.key}:${vault.id}`) || [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 348 - 369, The code currently awaits fetchFeesForVault inside the for-loop (involving allVaults), causing O(vaults) sequential subgraph calls; change this by pre-fetching fees for all vaults before iterating: build an array of promises by mapping allVaults to fetchFeesForVault(chainCfg, vault.id, timestampThreshold), await Promise.all to get fees results, index them by vault.id (or by the same order), then inside the loop replace the awaited call with the pre-fetched fees and pass them to computeFeesApy; alternatively, if feasible extend the subgraph query to return fee events per vault so fetchFeesForVault becomes unnecessary — in either case keep references to fetchFeesForVault and computeFeesApy and preserve timestampThreshold when pre-fetching.
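The "extend the subgraph query" alternative can be sketched as one batched request per chain. The `getMultiVaultFees` query shape, the `vaultId_in` filter, and the field names below are assumptions about the subgraph schema, not verified against it:

```javascript
// One fee query per chain instead of one per vault (assumed schema)
const MULTI_VAULT_FEES_QUERY = `
  query getMultiVaultFees($vaultIds: [Bytes!]!, $timestampThreshold: BigInt!) {
    fees(where: { vaultId_in: $vaultIds, timestamp_gte: $timestampThreshold }) {
      vaultId
      lpApy
      timestamp
    }
  }`;

// Group vault ids by chain so each chain needs a single round-trip
function groupVaultIdsByChain(allVaults) {
  const idsByChain = new Map();
  for (const { chainCfg, vault } of allVaults) {
    const ids = idsByChain.get(chainCfg.key) || [];
    ids.push(vault.id);
    idsByChain.set(chainCfg.key, ids);
  }
  return idsByChain;
}

// Distribute the flat fee list back into per-vault buckets for computeFeesApy
function groupFeesByVault(fees) {
  const byVault = new Map();
  for (const fee of fees) {
    const bucket = byVault.get(fee.vaultId) || [];
    bucket.push(fee);
    byVault.set(fee.vaultId, bucket);
  }
  return byVault;
}
```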
301-311: Parallelize per-chain `getPrices` calls.
The `await` inside the `for...of` loop makes the price fetches sequential. Since the calls are independent, `Promise.all` reduces wall-clock time.
♻️ Proposed refactor
awaitinside thefor...ofloop makes the price fetches sequential. Since the calls are independent,Promise.allreduces wall-clock time.♻️ Proposed refactor
- for (const [pk, coinSet] of coinsByPriceKey.entries()) { - const res = await utils.getPrices(Array.from(coinSet)); - const pricesByAddress = (res && res.pricesByAddress) || {}; - - for (const coin of coinSet) { - const parts = coin.split(":"); - const address = parts.slice(1).join(":").toLowerCase(); - const p = pricesByAddress[address] || 0; - pricesByChainAddress[`${pk}:${address}`] = p; - } - } + await Promise.all( + Array.from(coinsByPriceKey.entries()).map(async ([pk, coinSet]) => { + const res = await utils.getPrices(Array.from(coinSet)); + const pricesByAddress = (res && res.pricesByAddress) || {}; + for (const coin of coinSet) { + const parts = coin.split(":"); + const address = parts.slice(1).join(":").toLowerCase(); + const p = pricesByAddress[address] || 0; + pricesByChainAddress[`${pk}:${address}`] = p; + } + }) + );🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sir/index.js` around lines 301 - 311, The loop over coinsByPriceKey is making sequential calls to utils.getPrices; instead, build an array/map of promises for each entry (referencing coinsByPriceKey, pk, and coinSet) by calling utils.getPrices(Array.from(coinSet)) for each pk, then await Promise.all on those promises and iterate the results to populate pricesByChainAddress using the same address extraction logic (parts, address, p) so the getPrices calls run in parallel and preserve the existing pricesByChainAddress assignment.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adaptors/sir/index.js`:
- Around line 126-130: The code converts sqrtPriceX96 and Q96 to Number which
loses precision; fix by doing the ratio with BigInt fixed-point math instead:
use sqrtPriceX96.toBigInt() and compute numerator = sqrtPriceX96BigInt ** 2 and
denom = (1n << 96n) ** 2 (or Q96.toBigInt() ** 2), then compute a scaled
fixed-point value like scaled = (numerator * 10n ** 18n) / denom and derive
rawPrice = Number(scaled) / 1e18 so nativePerSir uses a high-precision ratio;
update references to sqrtPriceX96, Q96, rawPrice and nativePerSir accordingly.
---
Duplicate comments:
In `@src/adaptors/sir/index.js`:
- Line 387: poolMeta currently computes using Number(vault.leverageTier) which
yields NaN when leverageTier is absent; update the logic around the poolMeta
assignment (the poolMeta variable that references vault.leverageTier) to check
for a missing/invalid leverageTier first and handle it deterministically—e.g.,
if vault.leverageTier is null/undefined or not a finite number set poolMeta to a
safe fallback like "Leverage ratio: N/A" (or use a default tier like 0),
otherwise compute the ratio using a validated numeric value (parse/Number and
ensure finite) before doing 2 ** tier.
- Line 380: The conversion line using BigInt(vault.id) (variable/assignment
vaultIdDecimal) must guard against non-prefixed hex strings; change it to ensure
vault.id is prefixed with "0x" before calling BigInt (e.g., if typeof vault.id
=== "string" and !vault.id.startsWith("0x") then use "0x"+vault.id), then call
BigInt on the prefixed value and toString(); keep the same vaultIdDecimal name
and ensure this normalization happens immediately before the BigInt conversion
so non-prefixed IDs do not throw a SyntaxError.
---
Nitpick comments:
In `@src/adaptors/sir/index.js`:
- Around line 348-369: The code currently awaits fetchFeesForVault inside the
for-loop (involving allVaults), causing O(vaults) sequential subgraph calls;
change this by pre-fetching fees for all vaults before iterating: build an array
of promises by mapping allVaults to fetchFeesForVault(chainCfg, vault.id,
timestampThreshold), await Promise.all to get fees results, index them by
vault.id (or by the same order), then inside the loop replace the awaited call
with the pre-fetched fees and pass them to computeFeesApy; alternatively, if
feasible extend the subgraph query to return fee events per vault so
fetchFeesForVault becomes unnecessary — in either case keep references to
fetchFeesForVault and computeFeesApy and preserve timestampThreshold when
pre-fetching.
- Around line 301-311: The loop over coinsByPriceKey is making sequential calls
to utils.getPrices; instead, build an array/map of promises for each entry
(referencing coinsByPriceKey, pk, and coinSet) by calling
utils.getPrices(Array.from(coinSet)) for each pk, then await Promise.all on
those promises and iterate the results to populate pricesByChainAddress using
the same address extraction logic (parts, address, p) so the getPrices calls run
in parallel and preserve the existing pricesByChainAddress assignment.
```js
    const Q96 = ethers.BigNumber.from(2).pow(96);

    // Calculate price from sqrtPriceX96
    // price = (sqrtPriceX96 / 2^96)^2
    // This gives token1/token0 ratio in raw terms
    const sqrtPriceX96Num = Number(sqrtPriceX96.toString());
    const Q96Num = Number(Q96.toString());
    const rawPrice = Math.pow(sqrtPriceX96Num / Q96Num, 2);
```
Number() conversion of sqrtPriceX96 loses precision for large values.
Both sqrtPriceX96 and Q96 (2⁹⁶ ≈ 7.9×10²⁸) exceed JavaScript's safe-integer limit of 2⁵³. Converting them to Number via .toString() introduces floating-point error that compounds when squaring the ratio. For extreme price ratios (SIR very cheap or very expensive vs. the native token), the resulting nativePerSir could be noticeably off.
A more precise alternative using BigInt arithmetic before converting to float:
🛡️ Higher-precision alternative
```diff
-    const sqrtPriceX96Num = Number(sqrtPriceX96.toString());
-    const Q96Num = Number(Q96.toString());
-    const rawPrice = Math.pow(sqrtPriceX96Num / Q96Num, 2);
+    // Compute price as a BigInt-scaled integer, then convert to float
+    const PRECISION = BigInt(1e18.toString()); // 18 decimal places of precision
+    const sqrtPriceBig = BigInt(sqrtPriceX96.toString());
+    const Q96Big = BigInt(Q96.toString());
+    const scaledSqrt = (sqrtPriceBig * PRECISION) / Q96Big;
+    const sqrtRatio = Number(scaledSqrt.toString()) / 1e18;
+    const rawPrice = sqrtRatio * sqrtRatio;
```

🤖 Prompt for AI Agents
+ const rawPrice = sqrtRatio * sqrtRatio;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adaptors/sir/index.js` around lines 126 - 130, The code converts
sqrtPriceX96 and Q96 to Number which loses precision; fix by doing the ratio
with BigInt fixed-point math instead: use sqrtPriceX96.toBigInt() and compute
numerator = sqrtPriceX96BigInt ** 2 and denom = (1n << 96n) ** 2 (or
Q96.toBigInt() ** 2), then compute a scaled fixed-point value like scaled =
(numerator * 10n ** 18n) / denom and derive rawPrice = Number(scaled) / 1e18 so
nativePerSir uses a high-precision ratio; update references to sqrtPriceX96,
Q96, rawPrice and nativePerSir accordingly.
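A runnable illustration of the precision point, under the assumption that sqrtPriceX96 is available as a BigInt (real values come from the pool's slot0); `priceFromSqrtX96` is a sketch, not the adapter's actual function:

```javascript
// Values near 2^96 collide when converted to Number: the low-order bit is
// silently dropped because doubles carry only ~53 bits of mantissa.
const a = 2n ** 96n;
const b = a + 1n;
console.log(Number(a) === Number(b)); // true: the +1 is lost

// BigInt fixed-point path: square exactly, then scale down with 18 decimals
// of precision before the single final conversion to float.
function priceFromSqrtX96(sqrtPriceX96) {
  const Q192 = 1n << 192n; // (2^96)^2
  const scaled = (sqrtPriceX96 * sqrtPriceX96 * 10n ** 18n) / Q192;
  return Number(scaled) / 1e18;
}

// sqrtPriceX96 = 2 * 2^96 encodes a token1/token0 ratio of 4
console.log(priceFromSqrtX96(2n * (1n << 96n))); // 4
```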
Summary by CodeRabbit
New Features
Improvements