
Commit e4d5ba1 ("clarify I"), parent 8a5173e

1 file changed (+9, -11 lines): public/content/developers/docs/data-and-analytics/index.md
We’ve seen a ton of change in the indexing layer over the last year, with two …

- Forks-as-a-Service
- Fork any contract and add events and calculations, and then pull this data from a new “forked” RPC/data service.
- Some of the main providers for this are [shadow.xyz](https://www.shadow.xyz/), [ghostlogs](https://ghostlogs.xyz/), and [smlXL](https://smlxl.io/).
- ilemi gave his thoughts on [shortcomings and difficulties](https://twitter.com/andrewhong5297/status/1732230186966413484) on this approach.

- Rollups-as-a-Service (RaaS):
- The big theme of the year has been rollups, with Coinbase kicking it off by launching a rollup (Base) on the Optimism Stack (OP Stack) earlier this year.
- Teams are building products specifically for running the nodes and sequencer(s) for your own rollup. We’ve already seen dozens of rollups launch.
- New startups like [Conduit](https://conduit.xyz/), [Caldera](https://caldera.xyz/), and [Astria](https://www.astria.org/) are offering full-stack rollup services. Quicknode and Alchemy have launched similar RaaS offerings.

Alchemy and Quicknode have expanded further into crypto native infra and data engineering infra. Alchemy launched [account abstraction](https://www.alchemy.com/account-abstraction-infrastructure) and [subgraph services](https://www.alchemy.com/subgraphs). Quicknode has been busy with [alerts](https://www.quicknode.com/quickalerts), [data streaming](https://www.quicknode.com/quickstreams), and [rollup services](https://blog.quicknode.com/introducing-quicknode-custom-chains).

We should see our first “intents” clients/services soon. Intents are part of the modular stack: essentially transactions handled outside the mempool that have extra preferences attached. UniswapX and Cowswap both operate limit order intent pools, and should both release clients within the year. Account abstraction bundlers like [stackup](https://www.stackup.sh/) and [biconomy](https://www.biconomy.io/) should venture into intents as well. It’s unclear if data providers like Alchemy will index these “intents” clients, or if it will be like MEV, where we have specialized providers like [Blocknative](https://www.blocknative.com/) and [Bloxroute](https://bloxroute.com/).

Another up-and-coming type of provider is the “all-in-one” service, which combines indexing, querying, and defining. There are a few products here, such as [indexing.co](https://www.indexing.co/) and [spec.dev](https://spec.dev/); they are not included in the landscape since they are still nascent.

### Explore: Quickly look into addresses, transactions, protocols, and chains

Outside of etherscan, we now have this plethora of explorers to choose from:

- MEV explorers like [Eigenphi](https://eigenphi.io/mev/ethereum/txr) for transactions, [mevboost.pics](https://mevboost.pics/) for bundles, and [beaconcha.in](https://beaconcha.in/) for blocks
- Nansen launched a 2.0 of their token and wallet tracking product, with cool new features like “smart segments”

The dashboard layer hasn’t changed much. It’s still the wild west here. If you spend a day on Twitter, you’ll see charts from dozens of different platforms covering similar data but all with slight differences or twists. Verification and attribution are becoming a bigger issue, especially now with the large growth in both teams and chains.

Marketing-specific address explorers are on the rise. Teams like spindl and bello will lead the way here. Cross-chain explorers (and pre-chain ones covering MEV/intents) will see expanded development.

Across platforms, wallets are still poorly labelled and tracked, and it’s getting worse with intents and account abstraction. We don’t mean static labels like “Coinbase” but more dynamic ones like “Experienced Contract Deployer”. Some teams are trying to tackle this, such as [walletlabels](https://walletlabels.xyz/), [onceupon context](https://github.com/Once-Upon/context), and [syve.ai](https://www.syve.ai/). It will also improve naturally alongside web3 social, which is growing mainly on Farcaster.

### Query: Raw, decoded, and abstracted data that can be queried
7474

7575
Most SQL query engines are cloud-based, so you can use an in-browser IDE to query against raw and aggregated data (like nf.trades/dex.trades). These also allow for definition of great tables such as NFT wash trading filters. All these products come with their own APIs to grab results from your queries.
7676

7777
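
As a concrete sketch, pulling results out of one of these query engines usually comes down to a single authenticated HTTP call. The snippet below sketches this against Dune's v1 REST API (the endpoint shape and `X-Dune-API-Key` header follow Dune's docs; the query ID used later would be a saved query of your own):

```python
import json
import urllib.request

DUNE_API = "https://api.dune.com/api/v1"

def results_url(query_id: int) -> str:
    # Endpoint shape from Dune's v1 REST API (latest results for a saved query).
    return f"{DUNE_API}/query/{query_id}/results"

def fetch_query_results(query_id: int, api_key: str) -> dict:
    # Authentication is a single API-key header per Dune's docs.
    req = urllib.request.Request(
        results_url(query_id),
        headers={"X-Dune-API-Key": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # Rows live under result["result"]["rows"] in the JSON response.
        return json.load(resp)
```

Other engines differ in URL and auth details, but the pattern (run or reference a saved SQL query, then fetch JSON rows over HTTP) is broadly the same.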
GraphQL APIs here let you define your own schemas (in TypeScript or SQL) and then generate a GraphQL endpoint by running the full blockchain history through your schema.
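
Querying such an endpoint is just a JSON POST with a `query` field. A minimal stdlib-only sketch, where the `swaps` entity and the endpoint are hypothetical examples of what a subgraph schema might expose:

```python
import json
import urllib.request

def build_graphql_payload(query, variables=None):
    # A GraphQL HTTP request body is JSON with "query" and optional "variables".
    return json.dumps({"query": query, "variables": variables or {}}).encode()

# Hypothetical schema: a subgraph exposing `swaps` entities.
SWAPS_QUERY = """
{
  swaps(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    timestamp
  }
}
"""

def query_subgraph(endpoint, query):
    # POST the payload and unwrap the standard GraphQL "data" envelope.
    req = urllib.request.Request(
        endpoint,
        data=build_graphql_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["data"]
```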

For predefined APIs (where you query prebuilt schemas), there are a ton of niche data providers that are not included in the chart, covering domains like mempool, NFT, governance, orderbook, prices, and more.


Holistically, every platform has gotten a lot more efficient with its infra (meaning your queries run faster). Most platforms have explored advanced methods of getting data out, like ODBC connectors, data streams, S3 Parquet sharing, and direct BigQuery/Snowflake transfers.

Recent changes to existing products:

1. Query engines like [Dune](https://dune.com/) and [Flipside](https://flipsidecrypto.xyz/) have accepted that there is more data than can possibly be ingested in custom data pipelines, and have launched products that allow users to bring in that data instead. Flipside launched livequery (query an API in SQL) and Dune launched uploads/syncs (upload a CSV or API, or sync your database to Dune directly).
2. Our favorite decentralized data child, [The Graph](https://thegraph.com/), has had to really beef up its infra to avoid losing market share to centralized subgraph players like goldsky and satsuma (alchemy). They’ve partnered closely with [StreamingFast](https://www.streamingfast.io/), separating out the “reader” and “relayer” of data and introducing [substreams](https://thegraph.com/docs/en/substreams/), which let you write Rust-based subgraphs across chains.
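
To make the "bring your own data" direction from point 1 concrete, a CSV upload to one of these engines is a single authenticated POST. The sketch below builds such a request against Dune's CSV upload endpoint (endpoint path and field names are our best understanding of Dune's API docs; the table name and data are made-up placeholders):

```python
import json
import urllib.request

def upload_csv_request(api_key, table_name, csv_text, description=""):
    # Builds (but does not send) a request for Dune's CSV upload endpoint.
    # The endpoint path and JSON field names are assumptions from Dune's docs.
    body = json.dumps({
        "table_name": table_name,
        "description": description,
        "data": csv_text,
    }).encode()
    return urllib.request.Request(
        "https://api.dune.com/api/v1/table/upload/csv",
        data=body,
        headers={"X-Dune-API-Key": api_key, "Content-Type": "application/json"},
    )

# Hypothetical usage: push a small daily-metrics CSV up as a queryable table.
req = upload_csv_request("MY_KEY", "my_rollup_metrics", "day,txs\n2023-12-01,42\n")
```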

No provider here is truly set up for the rollup world yet, either in terms of scaling ingestion or fixing cross-chain schemas/transformations. And by “not ready”, we mean not ready for the case of 500 rollups launching in a week. Dune has launched a [rollup ingestion product](https://dune.com/product/dune-catalyst) to start making this easier, especially if you use an existing RaaS provider like Conduit.

LLM query/dashboard products like Dune AI will start to gain stronger traction in certain domains, such as wallet analysis or token analysis. Label datasets will play a strong part in enabling this.

### Define and Store: Create and store aggregations that rely on heavy data transformations

Raw data is great, but to get to better metrics, you need to be able to standardize and aggregate data across contracts of different protocols. Once you have aggregations, you can create new metrics and labels that enhance everyone’s analysis. We’ve only included products that have active contribution from both inside and outside the platform’s team, and are publicly accessible.

The collaborative layer of data definition has not really evolved over the last year; product teams and engineering are barely keeping up as is. To give a sense of the growth rate here, in the month of December 2023: