Merged
201 changes: 201 additions & 0 deletions src/adaptors/hyperswap-v2/index.js
@@ -0,0 +1,201 @@
const { request, gql } = require('graphql-request');
const utils = require('../utils');

const PROJECT = 'hyperswap-v2';
const CHAIN = 'hyperevm';
const MIN_TVL_USD = 1000;

const SUBGRAPH_URL =
'https://api.goldsky.com/api/public/project_cm97l77ib0cz601wlgi9wb0ec/subgraphs/hyperswap-v2/1.0.5/gn';

const FEE_TIER = 3000;

const pairsQuery = gql`
  query getPairs($first: Int!, $skip: Int!) {
    pairs(
      first: $first
      skip: $skip
      orderBy: reserveUSD
      orderDirection: desc
      where: { reserveUSD_gt: 1000 }
    ) {
      id
      token0 {
        id
        symbol
        decimals
      }
      token1 {
        id
        symbol
        decimals
      }
      reserve0
      reserve1
      reserveUSD
      volumeUSD
    }
  }
`;

const pairDayDataQuery = gql`
  query getPairDayData($pairAddresses: [String!], $startTime: Int!) {
    pairDayDatas(
      first: 1000
      orderBy: date
      orderDirection: desc
      where: { pairAddress_in: $pairAddresses, date_gt: $startTime }
    ) {
      pairAddress
      dailyVolumeUSD
      date
    }
  }
`;
Comment on lines +41 to +54
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's see the actual code at the specified lines
cat -n src/adaptors/hyperswap-v2/index.js | sed -n '35,135p'

Repository: DefiLlama/yield-server

Length of output: 3074


🌐 Web query:

Goldsky Hyperswap v2 subgraph pairDayDatas query skip pagination limits

💡 Result:

On Goldsky, Hyperswap v2 subgraphs use Graph Node query semantics, so you’ll hit the usual pagination constraints:

  • Per-request page size cap: first is effectively capped at 1000 entities per query (common across The Graph / Graph Node deployments). You must paginate in multiple requests. [1], [2]
  • skip hard limit: many Graph Node deployments enforce a maximum skip (commonly you’ll see errors like “skip must be between 0 and 5000” when you go past it). This limit is enforced specifically because large offsets perform badly; Graph Node even added a configurable global skip limit (GRAPH_GRAPHQL_MAX_SKIP) for this purpose. [3], [4]

What to do instead of skip for pairDayDatas

Use keyset pagination (cursor-like) with orderBy + where:

Option A (recommended): paginate by date

query PairDayDatas($pair: String!, $lastDate: Int!) {
  pairDayDatas(
    first: 1000
    orderBy: date
    orderDirection: asc
    where: { pair: $pair, date_gt: $lastDate }
  ) {
    id
    date
    dailyVolumeUSD
    reserveUSD
  }
}

Loop: start with lastDate = 0, then set lastDate to the last returned date each page.

Option B: paginate by id
Same idea, but orderBy: id and where: { id_gt: $lastId }.

This avoids the skip ceiling and is the approach Graph Node recommends for large result sets. [1], [4]

Sources: [1] https://thegraph.com/docs/en/subgraphs/querying/graphql-api/ • [2] https://docs.uniswap.org/contracts/v2/reference/API/queries • [3] https://www.reddit.com/r/thegraph/comments/nx97nk • [4] graphprotocol/graph-node#1912
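The cursor loop described above ("start with lastDate = 0, then set lastDate to the last returned date each page") can be sketched in plain JavaScript. `fetchPage` here is a hypothetical wrapper around the GraphQL request, not part of the adapter:

```javascript
// Sketch of date-based keyset pagination, assuming a hypothetical
// fetchPage(lastDate) that returns up to `pageSize` rows sorted by date asc.
async function fetchAllDayDatas(fetchPage, pageSize = 1000) {
  const all = [];
  let lastDate = 0;
  while (true) {
    const page = await fetchPage(lastDate);
    if (page.length === 0) break;
    all.push(...page);
    // Advance the cursor to the last date seen; the next request filters
    // with date_gt, so no skip offset is ever needed.
    lastDate = page[page.length - 1].date;
    if (page.length < pageSize) break;
  }
  return all;
}
```

A short page (fewer than `pageSize` rows) signals the final page, which avoids one extra empty request in the common case.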


🏁 Script executed:

# Check for other GraphQL queries in the file to see how they handle pagination
cat -n src/adaptors/hyperswap-v2/index.js | head -40



🏁 Script executed:

# Search for other uses of date_gt or similar keyset patterns in hyperswap-v2
rg -A 5 "date_gt|id_gt|orderBy" src/adaptors/hyperswap-v2/



🌐 Web query:

Graph Node pairAddress_in list size limits Uniswap subgraph batch size

💡 Result:

  • Hard limit you will hit first: on The Graph (including Uniswap subgraphs), the maximum first per entity query is 1000 (default is often 100 if you omit first). So you can’t fetch more than 1000 pairs (or swaps, etc.) in a single request—use pagination (skip, or preferably cursor-style where available). [1] [2]
  • pairAddress_in / _in filter list size: The Graph’s GraphQL API supports _in / _not_in filters, but there isn’t a clearly documented fixed maximum number of values you can put in an _in list; in practice you’ll be constrained by request size + query complexity/timeouts rather than an advertised “N addresses max”. [3]
  • Practical batching guidance (for Uniswap subgraphs): keep pairAddress_in batches small enough to avoid gateway/node timeouts—typically ~50–200 addresses per query, then loop batches; separately, keep first1000 for the returned entity list. (If you need “all pairs”, page in chunks of 1000.) [1]

Sources
[1] Uniswap docs noting The Graph limits returns to 1000 entities per query: https://docs.uniswap.org/contracts/v2/reference/API/queries
[2] Ethereum StackExchange answer summarizing default first=100 and max first=1000 on The Graph: https://ethereum.stackexchange.com/questions/114818/what-is-the-default-query-limit-on-the-graph
[3] The Graph GraphQL API docs listing _in / _not_in filters (but no stated list-size max): https://thegraph.com/docs/en/subgraphs/querying/graphql-api/


Batch pairDayDatas requests to avoid silent volume truncation; use date-based keyset pagination, not skip.

pairDayDatas(first: 1000) with pairAddress_in: $pairAddresses silently truncates results in two ways:

  1. Results truncation: only the first 1000 records are returned, and nothing paginates to fetch the rest.
  2. Query timeout risk: passing many addresses in pairAddress_in without batching can exceed query complexity limits and time out.

Graph Node subgraphs (Goldsky included) discourage skip pagination due to hard limits (commonly ~5000) and poor performance on large offsets. Instead, use keyset pagination via date_gt, which is already available in the query. Batch addresses into groups of 50–200 per request to stay within timeout/complexity budgets.

🔧 Suggested fix (batching + date-based pagination)
 const pairDayDataQuery = gql`
-  query getPairDayData($pairAddresses: [String!], $startTime: Int!) {
+  query getPairDayData($pairAddresses: [String!], $startTime: Int!, $first: Int!) {
     pairDayDatas(
-      first: 1000
+      first: $first
       orderBy: date
-      orderDirection: desc
+      orderDirection: asc
       where: { pairAddress_in: $pairAddresses, date_gt: $startTime }
     ) {
       pairAddress
       dailyVolumeUSD
       date
     }
   }
 `;
 
 async function fetchPairDayData(pairAddresses) {
   const startTime = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
+  const first = 1000;
+  const batchSize = 150; // tune to API limits
+  const volumesByPair = {};
+  const lastDayVolumeByPair = {};
 
+  const batches = [];
+  for (let i = 0; i < pairAddresses.length; i += batchSize) {
+    batches.push(pairAddresses.slice(i, i + batchSize));
+  }
 
   try {
-    const result = await request(SUBGRAPH_URL, pairDayDataQuery, {
-      pairAddresses,
-      startTime,
-    });
-
-    const volumesByPair = {};
-    const lastDayVolumeByPair = {};
-
-    const pairDayDatas = result.pairDayDatas || [];
-    
-    for (const dayData of pairDayDatas) {
-      const pairAddr = dayData.pairAddress.toLowerCase();
-      const volume = parseFloat(dayData.dailyVolumeUSD) || 0;
-
-      if (!volumesByPair[pairAddr]) {
-        volumesByPair[pairAddr] = 0;
-      }
-      volumesByPair[pairAddr] += volume;
-
-      if (
-        !lastDayVolumeByPair[pairAddr] ||
-        dayData.date > lastDayVolumeByPair[pairAddr].date
-      ) {
-        lastDayVolumeByPair[pairAddr] = {
-          date: dayData.date,
-          volume: volume,
-        };
-      }
+    for (const batch of batches) {
+      let lastDate = startTime;
+      while (true) {
+        const result = await request(SUBGRAPH_URL, pairDayDataQuery, {
+          pairAddresses: batch,
+          startTime: lastDate,
+          first,
+        });
+
+        const pairDayDatas = result.pairDayDatas || [];
+        if (pairDayDatas.length === 0) break;
+
+        for (const dayData of pairDayDatas) {
+          const pairAddr = dayData.pairAddress.toLowerCase();
+          const volume = parseFloat(dayData.dailyVolumeUSD) || 0;
+
+          if (!volumesByPair[pairAddr]) volumesByPair[pairAddr] = 0;
+          volumesByPair[pairAddr] += volume;
+
+          if (
+            !lastDayVolumeByPair[pairAddr] ||
+            dayData.date > lastDayVolumeByPair[pairAddr].date
+          ) {
+            lastDayVolumeByPair[pairAddr] = {
+              date: dayData.date,
+              volume,
+            };
+          }
+          lastDate = Math.max(lastDate, dayData.date);
+        }
+
+        if (pairDayDatas.length < first) break;
+      }
     }
 
     return {
       weeklyVolumes: volumesByPair,
       lastDayVolumes: Object.fromEntries(
         Object.entries(lastDayVolumeByPair).map(([k, v]) => [k, v.volume])
       ),
     };
   } catch (error) {
     console.error('Error fetching pair day data:', error);
     return { weeklyVolumes: {}, lastDayVolumes: {} };
   }
 }
🤖 Prompt for AI Agents
In `@src/adaptors/hyperswap-v2/index.js` around lines 41 - 54, The pairDayDatas
query currently uses a single large pairAddress_in and first:1000 which silently
truncates results and risks timeouts; update the code that invokes
pairDayDataQuery to (1) split pairAddresses into batches of ~50–200 addresses,
(2) perform keyset pagination using the date field (do NOT use skip)—i.e.
request pages in a loop passing a date cursor (use date_gt when ordering asc, or
use date_lt when ordering desc) and continue fetching while the returned array
length == pageSize, and (3) accumulate results per batch before moving to the
next address batch; reference the pairDayDataQuery GraphQL constant and the
caller function (the function that builds/executes this query) to implement
batching and date-based pagination.


async function fetchAllPairs() {
  let allPairs = [];
  let skip = 0;
  const first = 1000;

  while (true) {
    try {
      const data = await request(SUBGRAPH_URL, pairsQuery, {
        first,
        skip,
      });
      const pairs = data.pairs;

      if (pairs.length === 0) break;

      allPairs = allPairs.concat(pairs);

      if (pairs.length < first) break;

      skip += first;
    } catch (error) {
      console.error('Error fetching pairs from subgraph:', error);
      throw error;
    }
  }

  return allPairs;
}

async function fetchPairDayData(pairAddresses) {
  // Get data from the last 7 days
  const startTime = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;

  try {
    const result = await request(SUBGRAPH_URL, pairDayDataQuery, {
      pairAddresses,
      startTime,
    });

    const volumesByPair = {};
    const lastDayVolumeByPair = {};

    const pairDayDatas = result.pairDayDatas || [];

    for (const dayData of pairDayDatas) {
      const pairAddr = dayData.pairAddress.toLowerCase();
      const volume = parseFloat(dayData.dailyVolumeUSD) || 0;

      if (!volumesByPair[pairAddr]) {
        volumesByPair[pairAddr] = 0;
      }
      volumesByPair[pairAddr] += volume;

      if (
        !lastDayVolumeByPair[pairAddr] ||
        dayData.date > lastDayVolumeByPair[pairAddr].date
      ) {
        lastDayVolumeByPair[pairAddr] = {
          date: dayData.date,
          volume: volume,
        };
      }
    }

    return {
      weeklyVolumes: volumesByPair,
      lastDayVolumes: Object.fromEntries(
        Object.entries(lastDayVolumeByPair).map(([k, v]) => [k, v.volume])
      ),
    };
  } catch (error) {
    console.error('Error fetching pair day data:', error);
    return { weeklyVolumes: {}, lastDayVolumes: {} };
  }
}

function calculateApyBase(volumeUSD1d, tvlUSD) {
  if (!tvlUSD || tvlUSD <= 0) return 0;
  if (!volumeUSD1d || volumeUSD1d <= 0) return 0;

  // Fee is 0.3% of volume (FEE_TIER / 1e6)
  const feeUSD1d = (volumeUSD1d * FEE_TIER) / 1e6;
  const apyBase = ((feeUSD1d * 365) / tvlUSD) * 100;
  return apyBase;
}
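As a sanity check on the fee formula above, here is a worked example with assumed illustrative numbers (a $10,000 daily volume on a $100,000 TVL pool at the 0.3% fee tier; none of these figures come from the adapter):

```javascript
// Worked check of the fee-APR formula, with assumed example numbers.
const FEE_TIER = 3000; // 0.3% expressed in parts per million, as in the adapter
const volumeUSD1d = 10_000; // assumed daily volume
const tvlUSD = 100_000; // assumed pool TVL

const feeUSD1d = (volumeUSD1d * FEE_TIER) / 1e6; // $30/day in LP fees
const apyBase = ((feeUSD1d * 365) / tvlUSD) * 100; // 10.95% annualised
```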

async function apy() {
  try {
    const pairs = await fetchAllPairs();

    const pairAddresses = pairs.map((p) => p.id.toLowerCase());

    const { weeklyVolumes, lastDayVolumes } =
      await fetchPairDayData(pairAddresses);

    const formattedPools = pairs
      .map((pair) => {
        const tvlUSD = Number(pair.reserveUSD) || 0;

        if (tvlUSD < MIN_TVL_USD) return null;

        const pairId = pair.id.toLowerCase();
        const volumeUSD1d = lastDayVolumes[pairId] || 0;
        const volumeUSD7d = weeklyVolumes[pairId] || 0;

        const apyBase = calculateApyBase(volumeUSD1d, tvlUSD);

        // 7-day APY calculation (annualised from weekly)
        const apyBase7d =
          volumeUSD7d > 0
            ? ((volumeUSD7d * FEE_TIER) / 1e6 / tvlUSD) * 52 * 100
            : null;

        return {
          pool: pairId,
          chain: utils.formatChain(CHAIN),
          project: PROJECT,
          symbol: utils.formatSymbol(
            `${pair.token0.symbol}-${pair.token1.symbol}`
          ),
          tvlUsd: tvlUSD,
          apyBase: apyBase || 0,
          apyBase7d: apyBase7d,
          underlyingTokens: [
            pair.token0.id.toLowerCase(),
            pair.token1.id.toLowerCase(),
          ],
          url: `https://app.hyperswap.exchange/#/add/v2/${pair.token0.id}/${pair.token1.id}`,
          volumeUsd1d: volumeUSD1d,
          volumeUsd7d: volumeUSD7d,
        };
      })
      .filter((pool) => pool !== null);

    return formattedPools.filter((p) => utils.keepFinite(p));
  } catch (error) {
    console.error('Error in HyperSwap V2 adapter:', error);
    throw error;
  }
}

module.exports = {
  timetravel: false,
  apy,
  url: 'https://app.hyperswap.exchange',
};