📝 Walkthrough

A new Hyperswap V2 APY adapter module has been added that queries a subgraph for liquidity pools, aggregates volume data over 7 days, computes APY metrics based on trading fees and TVL, and exports formatted pool information with underlying tokens and metadata.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
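The walkthrough says the adapter computes APY from trading fees and TVL over a 7-day volume window. A minimal sketch of that calculation, assuming a Uniswap-V2-style 0.3% swap fee and simple (non-compounding) annualization; the fee rate and function name are illustrative, not taken from the adapter:

```javascript
// Hypothetical sketch: fee-based APY from 7-day volume and TVL.
// FEE_RATE (0.3%) is an assumed V2-style swap fee, not a confirmed value.
const FEE_RATE = 0.003;

function feeApyPct(weeklyVolumeUSD, tvlUsd) {
  if (!tvlUsd || tvlUsd <= 0) return 0;
  const weeklyFees = weeklyVolumeUSD * FEE_RATE; // fees earned by LPs over 7 days
  // Annualize the 7-day fee yield and express it as a percentage.
  return (weeklyFees / tvlUsd) * (365 / 7) * 100;
}

// $700k of 7-day volume on a $1M TVL pool is roughly a 10.95% fee APY.
console.log(feeApyPct(700_000, 1_000_000));
```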
🚥 Pre-merge checks: 2 passed ✅, 1 failed ❌ (1 warning)
The hyperswap-v2 adapter exports pools:

Test Suites: 1 passed, 1 total
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/adaptors/hyperswap-v2/index.js`:
- Around lines 41-54: the pairDayDatas query currently uses a single large pairAddress_in with first: 1000, which silently truncates results and risks timeouts. Update the code that invokes pairDayDataQuery to (1) split pairAddresses into batches of ~50–200 addresses, (2) perform keyset pagination using the date field (do NOT use skip), i.e. request pages in a loop passing a date cursor (date_gt when ordering asc, date_lt when ordering desc) and continue fetching while the returned array length equals the page size, and (3) accumulate results per batch before moving on to the next address batch. Reference the pairDayDataQuery GraphQL constant and the caller function that builds and executes this query to implement the batching and date-based pagination.
const pairDayDataQuery = gql`
  query getPairDayData($pairAddresses: [String!], $startTime: Int!) {
    pairDayDatas(
      first: 1000
      orderBy: date
      orderDirection: desc
      where: { pairAddress_in: $pairAddresses, date_gt: $startTime }
    ) {
      pairAddress
      dailyVolumeUSD
      date
    }
  }
`;
🧩 Analysis chain
🏁 Script executed:
# First, let's see the actual code at the specified lines
cat -n src/adaptors/hyperswap-v2/index.js | sed -n '35,135p'

Repository: DefiLlama/yield-server
Length of output: 3074
🌐 Web query:
Goldsky Hyperswap v2 subgraph pairDayDatas query skip pagination limits
💡 Result:
On Goldsky, Hyperswap v2 subgraphs use Graph Node query semantics, so you’ll hit the usual pagination constraints:
- Per-request page size cap: first is effectively capped at 1000 entities per query (common across The Graph / Graph Node deployments), so you must paginate across multiple requests. [1], [2]
- skip hard limit: many Graph Node deployments enforce a maximum skip (commonly you'll see errors like "skip must be between 0 and 5000" when you go past it). This limit is enforced specifically because large offsets perform badly; Graph Node even added a configurable global skip limit (GRAPH_GRAPHQL_MAX_SKIP) for this purpose. [3], [4]
What to do instead of skip for pairDayDatas
Use keyset pagination (cursor-like) with orderBy + where:
Option A (recommended): paginate by date
query PairDayDatas($pair: String!, $lastDate: Int!) {
pairDayDatas(
first: 1000
orderBy: date
orderDirection: asc
where: { pair: $pair, date_gt: $lastDate }
) {
id
date
dailyVolumeUSD
reserveUSD
}
}

Loop: start with lastDate = 0, then set lastDate to the last returned date on each page.
Option B: paginate by id
Same idea, but orderBy: id and where: { id_gt: $lastId }.
This avoids the skip ceiling and is the approach Graph Node recommends for large result sets. [1], [4]
Sources: [1] https://thegraph.com/docs/en/subgraphs/querying/graphql-api/ • [2] https://docs.uniswap.org/contracts/v2/reference/API/queries • [3] https://www.reddit.com/r/thegraph/comments/nx97nk • [4] graphprotocol/graph-node#1912
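The loop described above (and its id-based variant in Option B) can be sketched as a generic keyset-pagination helper; fetchPage and the row shape here are hypothetical stand-ins for the actual subgraph request:

```javascript
// Generic keyset pagination over a date-ordered result set.
// fetchPage(cursor, pageSize) is assumed to return rows ordered ascending by
// date and filtered with date_gt: cursor (e.g. a GraphQL request under the hood).
async function paginateByDate(fetchPage, pageSize = 1000) {
  const all = [];
  let cursor = 0; // start before the first possible date
  for (;;) {
    const rows = await fetchPage(cursor, pageSize);
    all.push(...rows);
    if (rows.length < pageSize) break; // a short page means no more data
    cursor = rows[rows.length - 1].date; // advance the cursor past this page
  }
  return all;
}
```

Because the cursor is the last date seen rather than an offset, this never hits the skip ceiling regardless of how many pages there are.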
🏁 Script executed:
# Check for other GraphQL queries in the file to see how they handle pagination
cat -n src/adaptors/hyperswap-v2/index.js | head -40

Repository: DefiLlama/yield-server
Length of output: 1106
🏁 Script executed:
# Search for other uses of date_gt or similar keyset patterns in hyperswap-v2
rg -A 5 "date_gt|id_gt|orderBy" src/adaptors/hyperswap-v2/

Repository: DefiLlama/yield-server
Length of output: 860
🌐 Web query:
Graph Node pairAddress_in list size limits Uniswap subgraph batch size
💡 Result:
- Hard limit you will hit first: on The Graph (including Uniswap subgraphs), the maximum first per entity query is 1000 (the default is often 100 if you omit first). So you can't fetch more than 1000 pairs (or swaps, etc.) in a single request; use pagination (skip, or preferably cursor-style where available). [1] [2]
- pairAddress_in / _in filter list size: The Graph's GraphQL API supports _in / _not_in filters, but there isn't a clearly documented fixed maximum number of values you can put in an _in list; in practice you'll be constrained by request size and query complexity/timeouts rather than an advertised "N addresses max". [3]
- Practical batching guidance (for Uniswap subgraphs): keep pairAddress_in batches small enough to avoid gateway/node timeouts, typically ~50–200 addresses per query, then loop batches; separately, keep first ≤ 1000 for the returned entity list. (If you need "all pairs", page in chunks of 1000.) [1]
Sources
[1] Uniswap docs noting The Graph limits returns to 1000 entities per query: https://docs.uniswap.org/contracts/v2/reference/API/queries
[2] Ethereum StackExchange answer summarizing default first=100 and max first=1000 on The Graph: https://ethereum.stackexchange.com/questions/114818/what-is-the-default-query-limit-on-the-graph
[3] The Graph GraphQL API docs listing _in / _not_in filters (but no stated list-size max): https://thegraph.com/docs/en/subgraphs/querying/graphql-api/
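The ~50–200 address guidance above amounts to chunking the pair list before querying. A small illustrative helper; the default batch size of 150 is an assumption to tune against the endpoint's timeout and complexity budget:

```javascript
// Split a list of pair addresses into fixed-size batches for pairAddress_in.
// The default of 150 is an assumed middle of the ~50-200 guidance above.
function chunk(items, size = 150) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// 400 addresses at a batch size of 150 yields 3 batches (150, 150, 100).
console.log(chunk(Array.from({ length: 400 }, (_, i) => `0x${i}`)).length);
```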
Batch pairDayDatas requests to avoid silent volume truncation; use date-based keyset pagination, not skip.
pairDayDatas(first: 1000) with pairAddress_in: $pairAddresses silently truncates results in two ways:
- Results truncation: only first 1000 records per batch are returned; no pagination fetches the rest.
- Query timeout risk: passing many addresses in pairAddress_in without batching can exceed query complexity limits.
Graph Node subgraphs (Goldsky included) discourage skip pagination due to hard limits (commonly ~5000) and poor performance on large offsets. Instead, use keyset pagination via date_gt, which is already available in the query. Batch addresses into groups of 50–200 per request to stay within timeout/complexity budgets.
🔧 Suggested fix (batching + date-based pagination)
const pairDayDataQuery = gql`
- query getPairDayData($pairAddresses: [String!], $startTime: Int!) {
+ query getPairDayData($pairAddresses: [String!], $startTime: Int!, $first: Int!) {
pairDayDatas(
- first: 1000
+ first: $first
orderBy: date
-    orderDirection: desc
+    orderDirection: asc
where: { pairAddress_in: $pairAddresses, date_gt: $startTime }
) {
pairAddress
dailyVolumeUSD
date
}
}
`;
async function fetchPairDayData(pairAddresses) {
const startTime = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
+ const first = 1000;
+ const batchSize = 150; // tune to API limits
+ const volumesByPair = {};
+ const lastDayVolumeByPair = {};
+ const batches = [];
+ for (let i = 0; i < pairAddresses.length; i += batchSize) {
+ batches.push(pairAddresses.slice(i, i + batchSize));
+ }
try {
- const result = await request(SUBGRAPH_URL, pairDayDataQuery, {
- pairAddresses,
- startTime,
- });
-
- const volumesByPair = {};
- const lastDayVolumeByPair = {};
-
- const pairDayDatas = result.pairDayDatas || [];
-
- for (const dayData of pairDayDatas) {
- const pairAddr = dayData.pairAddress.toLowerCase();
- const volume = parseFloat(dayData.dailyVolumeUSD) || 0;
-
- if (!volumesByPair[pairAddr]) {
- volumesByPair[pairAddr] = 0;
- }
- volumesByPair[pairAddr] += volume;
-
- if (
- !lastDayVolumeByPair[pairAddr] ||
- dayData.date > lastDayVolumeByPair[pairAddr].date
- ) {
- lastDayVolumeByPair[pairAddr] = {
- date: dayData.date,
- volume: volume,
- };
- }
+ for (const batch of batches) {
+ let lastDate = startTime;
+ while (true) {
+ const result = await request(SUBGRAPH_URL, pairDayDataQuery, {
+ pairAddresses: batch,
+ startTime: lastDate,
+ first,
+ });
+
+ const pairDayDatas = result.pairDayDatas || [];
+ if (pairDayDatas.length === 0) break;
+
+ for (const dayData of pairDayDatas) {
+ const pairAddr = dayData.pairAddress.toLowerCase();
+ const volume = parseFloat(dayData.dailyVolumeUSD) || 0;
+
+ if (!volumesByPair[pairAddr]) volumesByPair[pairAddr] = 0;
+ volumesByPair[pairAddr] += volume;
+
+ if (
+ !lastDayVolumeByPair[pairAddr] ||
+ dayData.date > lastDayVolumeByPair[pairAddr].date
+ ) {
+ lastDayVolumeByPair[pairAddr] = {
+ date: dayData.date,
+ volume,
+ };
+ }
+ lastDate = Math.max(lastDate, dayData.date);
+ }
+
+ if (pairDayDatas.length < first) break;
+ }
}
return {
weeklyVolumes: volumesByPair,
lastDayVolumes: Object.fromEntries(
Object.entries(lastDayVolumeByPair).map(([k, v]) => [k, v.volume])
),
};
} catch (error) {
console.error('Error fetching pair day data:', error);
return { weeklyVolumes: {}, lastDayVolumes: {} };
}
}
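As a sanity check on the suggested fix, the per-pair accumulation it performs (summing dailyVolumeUSD per pair and keeping the most recent day's volume) can be exercised in isolation; this standalone version mirrors the diff's logic but runs on in-memory rows only:

```javascript
// Standalone sketch of the aggregation inside the suggested fetchPairDayData:
// sum dailyVolumeUSD per pair, and track the latest day's volume per pair.
function aggregatePairDayDatas(rows) {
  const weeklyVolumes = {};
  const lastDay = {};
  for (const row of rows) {
    const pair = row.pairAddress.toLowerCase();
    const volume = parseFloat(row.dailyVolumeUSD) || 0;
    weeklyVolumes[pair] = (weeklyVolumes[pair] || 0) + volume;
    if (!lastDay[pair] || row.date > lastDay[pair].date) {
      lastDay[pair] = { date: row.date, volume };
    }
  }
  return {
    weeklyVolumes,
    lastDayVolumes: Object.fromEntries(
      Object.entries(lastDay).map(([k, v]) => [k, v.volume])
    ),
  };
}
```

Lowercasing the pair address on every row means records arriving from different batches or pages still merge into one key per pool.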
Summary
Adds hyperswap-v2 adapter
Resources: