v0.38.0
Breaking changes
- **Removal of GraphQL subscriptions** – the WebSocket server and every `Subscription`-related code path were deleted. Applications that need live updates must switch to polling or an external event system (see the polling sketch below). (#5836)
- **Removal of Cosmos support** – all Cosmos chain and runtime code was removed. (#5833)
- **`GRAPH_STORE_LAST_ROLLUP_FROM_POI` deleted** – this setting is no longer needed and will be ignored. (#5936)
- **No dependency on `pg_stat_statements`** – Graph Node no longer executes `pg_stat_statements_reset()` after running database migrations and is therefore agnostic to whether that extension is installed. (#5926)
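As a rough, hypothetical sketch of the polling alternative, the loop below re-issues an ordinary GraphQL query against Graph Node's HTTP query endpoint; the endpoint URL, subgraph name, query, and interval are placeholders to adapt, not part of this release.

```bash
# Poll the regular GraphQL HTTP endpoint instead of using a subscription.
# URL, subgraph name, query, and the 5-second interval are placeholders.
QUERY_URL="http://localhost:8000/subgraphs/name/example/my-subgraph"

while true; do
  curl -s -X POST "$QUERY_URL" \
    -H 'Content-Type: application/json' \
    --data '{"query": "{ _meta { block { number } } }"}'
  echo
  sleep 5
done
```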
New features & improvements
- **Faster grafting and copying**
  - A single graft/copy operation can now copy multiple tables in parallel. The number of parallel operations per graft/copy is configured with `GRAPH_STORE_BATCH_WORKERS`, which defaults to 1. The total number of parallel operations is also limited by `fdw_pool_size`, which defaults to 5 and can be set in `graph-node.toml` (see the configuration sketch after this list).
  - A large number of parallel grafts/copies might get blocked by the size of tokio's blocking thread pool. To avoid that, the size of the pool can now be configured with `GRAPH_MAX_BLOCKING_THREADS`. It defaults to 512 (tokio's default), but setting it to a larger value such as 2048 should be safe. (#5948)
- **Better control of graft/copy batches**
  - To avoid table bloat, especially in `subgraphs.subgraph_deployment`, graft/copy splits the work up into smaller batches which should each take `GRAPH_STORE_BATCH_TARGET_DURATION`. Sometimes the estimate of how long a batch will take goes horribly wrong. To guard against that, the new setting `GRAPH_STORE_BATCH_TIMEOUT` sets a timeout, which is unlimited by default. When set, batches that take longer than this are aborted and restarted with a much smaller size (see the batching sketch after this list). (3c183731)
  - The number of rows fetched from the source shard in a single operation for cross-shard grafts/copies can be controlled through `GRAPH_STORE_FDW_FETCH_SIZE`. Its default has been lowered from 10,000 to 1,000, as larger sizes have proven ineffective and in some cases actually slow the graft/copy down. (#5924)
  - Deployments being copied are excluded from pruning. (#5893)
  - Failed deployments can now be copied. (#5893)
- **Composable subgraphs**
- **Aggregations**
  - Aggregate entities can be ordered by any field, not only `timestamp`/`id`. (#5829)
- **Start-up hardening**
- **Graphman quality-of-life**
- **Combined views of sharded tables in the primary**
  - The primary now has a namespace `sharded` that contains views combining sharded tables such as `subgraph_deployment` into one view across all shards (see the query sketch after this list). (#5820)
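A minimal configuration sketch for the parallel graft/copy settings above, assuming Graph Node is started from a shell; the values are illustrative, not recommendations.

```bash
# Illustrative values: copy up to 4 tables of a graft/copy in parallel and
# enlarge tokio's blocking thread pool so many concurrent copies don't stall.
export GRAPH_STORE_BATCH_WORKERS=4
export GRAPH_MAX_BLOCKING_THREADS=2048
# Effective parallelism is also capped by fdw_pool_size (default 5), which is
# set in graph-node.toml rather than through an environment variable.
```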
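A similarly hedged sketch for the batching knobs; the timeout is assumed here to be given in seconds, like other store duration settings, so check the environment-variable documentation for the exact format before relying on it.

```bash
# Illustrative values: abort and retry with a much smaller batch if a single
# graft/copy batch runs longer than ~10 minutes (value assumed to be seconds),
# and state the new, lower cross-shard fetch size explicitly.
export GRAPH_STORE_BATCH_TIMEOUT=600
export GRAPH_STORE_FDW_FETCH_SIZE=1000
```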
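One possible use of the new `sharded` namespace, assuming the views mirror the names of the underlying tables (for example `sharded.subgraph_deployment`) and that `psql` can reach the primary; the connection string is a placeholder.

```bash
# Count deployments across all shards with a single query against the primary.
# The connection string is a placeholder; the view name assumes the sharded
# namespace mirrors the underlying table names.
psql "postgresql://graph:graph@primary-host:5432/graph-node" \
  -c 'SELECT count(*) FROM sharded.subgraph_deployment;'
```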
Fixes (selected)
- Aggregate indexes were sometimes created twice when postponed. (4be64c16)
- Duplicate `remove` operations in a write batch no longer cause failures. (17360f56)
- Incorrect hashing when grafting from subgraphs with `specVersion` < 0.0.6 has been fixed. (#5917)
- Firehose TLS configuration corrected. (36ad6a24)
- Numerous small fixes in estimation of graft/copy batch size, namespace mapping, copy status display, and error messages.
Contributors
@lutter, @zorancv, @incrypto32, @filipeazevedo, @encalypto, @shiyasmohd, and many others – thank you for your work on this release.
Full changelog: v0.37.0...v0.38.0