v0.38.0

Released by @encalypto on 23 Apr 03:47

Breaking changes

  • Removal of GraphQL subscriptions – the WebSocket server and every Subscription‑related code path were deleted. Applications that need live updates must switch to polling or an external event system; a minimal polling sketch follows this list. (#5836)
  • Removal of Cosmos support – all Cosmos chain and runtime code was removed. (#5833)
  • GRAPH_STORE_LAST_ROLLUP_FROM_POI deleted – this setting is no longer needed and will be ignored. (#5936)
  • No dependency on pg_stat_statements – Graph Node no longer executes pg_stat_statements_reset() after running database migrations and therefore no longer cares whether that extension is installed. (#5926)
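
  For applications migrating off subscriptions, a minimal polling sketch: it watches the subgraph's _meta field (part of the standard subgraph GraphQL API) and re-runs a query when the indexed head block advances. The endpoint URL and the tokens query are illustrative assumptions, not part of this release.

      # Poll the subgraph's indexed head block and re-query on change.
      # Endpoint and query are placeholders; adjust for your deployment.
      ENDPOINT=http://localhost:8000/subgraphs/name/example/my-subgraph
      LAST=0
      while true; do
        BLOCK=$(curl -s "$ENDPOINT" -H 'content-type: application/json' \
          -d '{"query":"{ _meta { block { number } } }"}' \
          | jq '.data._meta.block.number')
        if [ "$BLOCK" != "$LAST" ]; then
          LAST=$BLOCK
          # Re-run the query that used to be a subscription.
          curl -s "$ENDPOINT" -H 'content-type: application/json' \
            -d '{"query":"{ tokens(first: 5) { id } }"}'
        fi
        sleep 5
      done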

New features & improvements

  1. Faster grafting and copying

    • A single graft/copy operation can now copy multiple tables in parallel. The number of parallel operations per graft/copy can be configured with GRAPH_STORE_BATCH_WORKERS, which defaults to 1. The total number of parallel operations is also capped by fdw_pool_size, which defaults to 5 and can be set in graph-node.toml; see the configuration sketch after this item.
    • A large number of parallel grafts/copies can be blocked by the size of tokio's blocking thread pool. To avoid that, the size of the pool can now be configured with GRAPH_MAX_BLOCKING_THREADS. It defaults to 512 (tokio's default), but setting it to a larger number like 2048 should be safe. (#5948)
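
    A sketch of how these knobs fit together, assuming a shard named primary; the connection string is a placeholder, and the exact TOML layout should be checked against the configuration documentation.

        # Environment variables (names from this release; values illustrative)
        export GRAPH_STORE_BATCH_WORKERS=4      # parallel table copies per graft/copy
        export GRAPH_MAX_BLOCKING_THREADS=2048  # tokio blocking pool; default 512

        # graph-node.toml: fdw_pool_size caps cross-shard parallelism (default 5)
        [store.primary]
        connection = "postgresql://graph:graph@primary-db/graph"
        pool_size = 10
        fdw_pool_size = 10
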
  2. Better control of graft/copy batches

    • To avoid table bloat, especially in subgraphs.subgraph_deployment, graft/copy splits the work into smaller batches, each of which should take about GRAPH_STORE_BATCH_TARGET_DURATION. Sometimes the estimate of how long a batch will take is far off. To guard against that, the new setting GRAPH_STORE_BATCH_TIMEOUT sets a timeout, unlimited by default; when set, batches that exceed it are aborted and restarted with a much smaller size. (3c183731)
    • The number of rows fetched from the source shard in a single operation during cross-shard grafts/copies can be controlled through GRAPH_STORE_FDW_FETCH_SIZE. Its default has been lowered from 10,000 to 1,000, as larger sizes have been shown to be ineffective and in some cases actually slow the graft/copy down (illustrative settings follow this item). (#5924)
    • Deployments being copied are excluded from pruning. (#5893)
    • Failed deployments can now be copied. (#5893)
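
    Illustrative batch-control settings; the values and units shown here are assumptions, so check the environment-variable documentation for exact semantics.

        export GRAPH_STORE_BATCH_TARGET_DURATION=180  # target per batch (assumed seconds)
        export GRAPH_STORE_BATCH_TIMEOUT=600          # abort batches running longer than this
        export GRAPH_STORE_FDW_FETCH_SIZE=1000        # rows per fetch; the new default
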
  3. Composable subgraphs

    • Mutable entities are no longer allowed in composed subgraphs; see the schema sketch after this item. (#5909)
    • Graft chains are validated for incompatible spec versions. (#5911)
    • The new env var GRAPH_ETHEREUM_FORCE_RPC_FOR_BLOCK_PTRS makes Graph Node prefer RPC over Firehose when resolving block pointers. (#5876)
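
    A hypothetical source-subgraph entity that satisfies the immutability requirement; the type and field names are illustrative, but @entity(immutable: true) is the standard subgraph schema syntax.

        # GraphQL schema: entities used by a composed subgraph must be immutable
        type SwapEvent @entity(immutable: true) {
          id: Bytes!
          pool: Bytes!
          amount: BigInt!
        }
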
  4. Aggregations

    • Aggregate entities can be ordered by any field, not only timestamp / id, as in the query sketch below. (#5829)
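
    A hypothetical query ordering an aggregation by a volume field; the TokenStats entity, its fields, and the _collection field naming are assumptions modeled on the aggregations documentation.

        {
          tokenStats_collection(
            interval: hour
            orderBy: totalVolume
            orderDirection: desc
            first: 10
          ) {
            id
            timestamp
            totalVolume
          }
        }
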
  5. Start‑up hardening

    • Database setup now uses a single lock on the primary to prevent race conditions when multiple nodes start together. (#5926)
    • Shards with pool size 0 are filtered out at boot. (#5926)
  6. Graphman quality‑of‑life

    • Load management is disabled while running graphman commands. (c765ce7a)
    • graphman copy create errors include the source deployment ID. (701f77d2)
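
    An illustrative invocation; the deployment hash, shard, and node names are placeholders, so consult graphman copy create --help for the exact arguments.

        graphman --config config.toml copy create \
          QmExampleSourceDeploymentHash shard_b index_node_1
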
  7. Combined views of sharded tables in the primary

    • The primary now has a namespace sharded containing views that combine sharded tables such as subgraph_deployment into one view across shards. (#5820)
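
    A sketch of querying the combined view on the primary; the column names are assumptions based on the existing subgraphs.subgraph_deployment table.

        -- Runs on the primary; the view unions the table from every shard
        SELECT deployment, failed, latest_ethereum_block_number
        FROM sharded.subgraph_deployment
        LIMIT 10;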

Fixes (selected)

  • Aggregate indexes were sometimes created twice when postponed. (4be64c16)
  • Duplicate remove operations in a write batch no longer cause failures. (17360f56)
  • Incorrect hashing when grafting from subgraphs with specVersion < 0.0.6. (#5917)
  • Firehose TLS configuration corrected. (36ad6a24)
  • Numerous small fixes in estimation of graft/copy batch size, namespace mapping, copy status display, and error messages.

Contributors

@lutter, @zorancv, @incrypto32, @filipeazevedo, @encalypto, @shiyasmohd, and many others – thank you for your work on this release.


Full changelog: v0.37.0...v0.38.0