
eth_getLogs is not returning logs after PR pull/695 #761

Open
niklabh opened this issue Jul 5, 2022 · 10 comments
Labels
bug Something isn't working

Comments

@niklabh

niklabh commented Jul 5, 2022

Description

eth_getLogs has stopped working after PR #695; it returns an empty array for new queries.

Steps to Reproduce

For the Astar network parachain:

  1. Request eth_getLogs:
    curl -X POST --data '{"id": 1, "jsonrpc":"2.0","method": "eth_getLogs","params": [{"address":"0x841ce48f9446c8e281d3f1444cb859b4a6d0738c", "topics":[], "fromBlock": "0x14D7F5", "toBlock": "0x14D7F7"}]}' https://astar-rpc.dwellir.com -H "Content-Type: application/json"

  2. Check that an empty result is returned:
    {"jsonrpc":"2.0","result":[],"id":1}

  3. Nodes which were rolled back to before PR #695 (upgrade polkadot to v0.9.22 and integrate jsonrpsee) return the correct result:
    curl -X POST --data '{"id": 1, "jsonrpc":"2.0","method": "eth_getLogs","params": [{"address":"0x841ce48f9446c8e281d3f1444cb859b4a6d0738c", "topics":[], "fromBlock": "0x14D7F5", "toBlock": "0x14D7F7"}]}' https://astar.api.onfinality.io/public-ws -H "Content-Type: application/json"
    {"jsonrpc":"2.0","result":[{"address":"0x841ce48f9446c8e281d3f1444cb859b4a6d0738c","blockHash":"0x7afa51ec6ef05e41cd4cbe8879f4ecc6a765f7892e547f6341b4b1392381ea96","blockNumber":"0x14d7f6","data":"0x5cd11dbc09c77658576c5c498fb2f4c69d23fdba95df1c42bc494a9f3afc0bab00000000000000000000000027ee985d1e446ec71c277c89cd877ec4eeaa236c00000000000000000000000027ee985d1e446ec71c277c89cd877ec4eeaa236c0000000000000000000000006a2d262d56735dba19dd70682b39f6be9a931d980000000000000000000000000000000000000000000000000000000005ebb0b60000000000000000000000000000000000000000000000000000000000000038e446e1edeb0820082749ec71be560d667c964744fa39cf179ee1f6b8b6826cb8","logIndex":"0x10","removed":false,"topics":["0x79fa08de5149d912dce8e5e8da7a7c17ccdf23dd5d3bfe196802e6eb86347c7c"],"transactionHash":"0xb92e1c21bbeb31f6854456a1529faa3dcea79de446ecfdc92118aca3aed9b40f","transactionIndex":"0x6","transactionLogIndex":"0x1"}],"id":1}

  4. Logs were created for the transaction: https://blockscout.com/astar/tx/0xb92e1c21bbeb31f6854456a1529faa3dcea79de446ecfdc92118aca3aed9b40f/logs

Expected vs. Actual Behavior

Expected: Logs are returned.

Actual: No logs are returned.

Logs, Errors or Screenshots

Screenshot from 2022-07-05 11-22-40

niklabh added the bug label on Jul 5, 2022
niklabh changed the title from "eth_getLogs is not returning logs after PR https://github.com/paritytech/frontier/pull/695" to "eth_getLogs is not returning logs after PR pull/695" on Jul 5, 2022
@Dinonard
Contributor

Dinonard commented Jul 5, 2022

@niklabh the issue cannot be reproduced on local development nodes; logs are visible just fine.

@Dinonard
Contributor

Dinonard commented Jul 5, 2022

@tgmichel could it be due to this PR: #626?

This issue started happening for us after we upgraded from polkadot-v0.9.19 to polkadot-v0.9.24, which is when we included the linked PR. We can fetch the logs just fine if we specify blockHash.
I've looked at the code but can't see what might cause this problem.

@crystalin
Contributor

@Dinonard how can we test it on your network (steps to compile, steps to sync)... so we can troubleshoot it?

@tgmichel
Contributor

tgmichel commented Jul 5, 2022

You have some options to further debug this and confirm it is a schema-cache-related issue by exploring the cache contents in the offchain db:

  • If you happen to have access to the FrontierDb subcommand you can run frontier-template-node frontier-db read meta --key :ethereum_schema_cache --base-path=.. --pruning=archive --chain=.. on a synced node. This will give you details on what the background task might have cached. That'd be ideal for debugging this and checking if there is missing data.
  • Another option, if you have the possibility of modifying a client and running it over your synced network, is to just log the schema_cache contents when eth_getLogs is called, in https://github.com/paritytech/frontier/blob/175e42fc47cb6cd2772cccb7ac3ff59fd2d1a4dd/client/rpc/src/eth/filter.rs#L465. That should also be enough to see what's currently in the cache.

On my side, I did some checks against your RPC.

If I run this command, I get some logs:

curl -X POST --data '{"id": 1, "jsonrpc":"2.0","method": "eth_getLogs","params": [{"address": "0x30ad33583dd0bc9b58e23f34d9c189a136b93a57","topics":[], "fromBlock": "0x11ce8d", "toBlock": "0x11ce8d"}]}' https://astar-rpc.dwellir.com -H "Content-Type: application/json"

Block number 0x11ce8d is substrate block hash 0x3cae9a9cd37d21545ff462fd35ac1dbc3e01d84786df0f55213ca507640f4a10.

If I query the ethereum storage schema at that height:

rpc.state.queryStorageAt(0x3a657468657265756d5f736368656d61, 0x3cae9a9cd37d21545ff462fd35ac1dbc3e01d84786df0f55213ca507640f4a10)

0x03

This means the curl above was using (cached) schema version 3, which is the same schema version as Astar's tip, and thus the same version used by your original curl command that returns an empty result. If the schema is not available in your offchain db cache (because there is a bug in the background task and it didn't map it correctly), it would theoretically not be possible to retrieve any log for this schema.
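
For reference, the raw byte returned by that storage query maps onto Frontier's schema enum. A tiny illustrative sketch of the interpretation (assuming the usual SCALE layout of fp_storage::EthereumStorageSchema, where Undefined/V1/V2/V3 encode as 0x00-0x03; not actual Frontier code):

    // Illustrative helper: interpret the raw byte read from the well-known
    // `:ethereum_schema` storage key. The 0x03 read at the block hash above
    // therefore means schema V3.
    fn schema_from_raw(raw: u8) -> &'static str {
        match raw {
            0x01 => "V1",
            0x02 => "V2",
            0x03 => "V3",
            _ => "Undefined",
        }
    }

    fn main() {
        assert_eq!(schema_from_raw(0x03), "V3");
    }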

@Dinonard
Contributor

Dinonard commented Jul 5, 2022

@crystalin does this help?

The commands that Nikhil put into the issue description can be used to test.

EDIT: @tgmichel I will check your reply now (or tomorrow :) ). Thank you!

@tgmichel
Contributor

tgmichel commented Jul 6, 2022

Indeed, PR #626 has a major flaw. Using import_notification_stream for this is just wrong, because while the node is major syncing there are no notifications, so the cache is not written. This is why the tests pass locally: there we are naturally reacting to block import notifications. For this to work we should use a combination of notifications + a timer, like the MappingSync worker does (rough sketch below).
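
A rough sketch of what I mean (illustrative only, not the actual MappingSync code; the task body, names, and the tokio runtime are assumptions):

    // Sketch: drive the cache task from either a block import notification or
    // a periodic timer tick, so it still makes progress while the node is
    // major syncing and no import notifications are emitted.
    use std::time::Duration;
    use futures::StreamExt;

    async fn schema_cache_task(
        mut import_notifications: impl futures::Stream<Item = u64> + Unpin,
    ) {
        let mut tick = tokio::time::interval(Duration::from_secs(6));
        loop {
            tokio::select! {
                // Fires for every imported block once the node is at the tip.
                maybe_block = import_notifications.next() => {
                    if maybe_block.is_none() { break; }
                }
                // Fires regardless, covering major sync when notifications stop.
                _ = tick.tick() => {}
            }
            // (Hypothetical) catch the offchain schema cache up to the current
            // best block here, whichever branch woke us up.
        }
    }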

For the moment I'm proposing to just remove this cache and in the future come back to it if needed:
#763

@Dinonard
Contributor

Dinonard commented Jul 6, 2022

@tgmichel I have logs:

Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(1): called with filter: Filter { from_block: Some(Num(1366005)), to_block: Some(Num(1366007)), block_hash: None, address: Some(Single(0x841ce48f9446c8e281d3f1444cb859b4a6d0738c)), topics: Some(Single(Some(Multiple([])))) }
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(2): for range from(1366005) to(1366007)
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(3): schema loaded from cache: {0: V1}
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(4): default schema is Some(V1)
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(5_1366005): for schema V1 and block number BlockId::Number(1366005), read block None
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(5_1366006): for schema V1 and block number BlockId::Number(1366006), read block None
Jul 06 10:52:26 Ubuntu-2004-focal-64-minimal astar-collator[2629193]: 2022-07-06 10:52:26 >>eth_getLogs(5_1366007): for schema V1 and block number BlockId::Number(1366007), read block None

PR with modifications: https://github.com/AstarNetwork/frontier/pull/23/files
The V1 schema is used, and I guess schema_v1_override.rs and its self.query_storage::<ethereum::BlockV0> call fail because BlockV0 cannot be decoded from what is actually stored.
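
To illustrate the failure mode (stand-in types, not the real ethereum::BlockV0/BlockV2; the point is only that SCALE-decoding a newer layout as the older type fails):

    // The newer block encoding can contain transaction variants the old enum
    // does not know about, so decoding it as the old type returns an error
    // and the override ends up with no block (hence `read block None` above).
    use parity_scale_codec::{Decode, Encode};

    #[derive(Encode, Decode, Debug)]
    enum OldTx {
        Legacy(u64),
    }

    #[derive(Encode, Decode, Debug)]
    enum NewTx {
        Legacy(u64),
        Eip1559(u64),
    }

    fn main() {
        let raw = NewTx::Eip1559(7).encode();
        // Variant index 1 does not exist in `OldTx`, so decoding fails.
        assert!(OldTx::decode(&mut &raw[..]).is_err());
    }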

Thank you for investigating and your help! 🙏
We will apply your fix to our fork.

@Dinonard
Contributor

Dinonard commented Jul 6, 2022

Just to confirm that with your fix, the aforementioned calls work as expected. 👍

@tgmichel
Contributor

tgmichel commented Jul 7, 2022

Awesome, glad it solved it for you. To be honest, with the current design there is not much room for optimizing eth_getLogs, and I'd be happy to take a radically different approach for it, as it's kind of silly to expect performant requests from sequentially scanning an embedded key/value DB.

A few weeks ago I opened #721 so we can explore alternatives, but so far nobody has chimed in. If you have some time / thoughts regarding this, that'd be great.

@Dinonard
Contributor

Dinonard commented Jul 8, 2022

I haven't really worked with that code so I'm not sure how beneficial my comments would be :)

However, I wonder: why not just add a concrete storage item to pallet-ethereum where all schema-related info would be stored, e.g. starting from which block which schema is used?
If that was stored as a single value of a map-like type, getLogs would only need to read the DB once to have all the necessary information (a rough sketch of this lookup is below).

Of course, this isn't future-proof since it's not scalable, but I guess we don't expect a new schema to come every few weeks.
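
Something like the following (a plain-Rust sketch with illustrative names and hypothetical activation blocks, not actual pallet-ethereum code):

    // Sketch: keep one sorted list of (first active block, schema) pairs in a
    // single storage value; resolving the schema for any block is then one DB
    // read plus a lookup, instead of per-block storage queries.
    #[derive(Clone, Copy, Debug, PartialEq)]
    enum Schema {
        V1,
        V2,
        V3,
    }

    fn schema_at(history: &[(u32, Schema)], block: u32) -> Option<Schema> {
        // `history` is sorted by the block at which each schema became active.
        history
            .iter()
            .rev()
            .find(|(start, _)| *start <= block)
            .map(|(_, schema)| *schema)
    }

    fn main() {
        // Hypothetical history; real activation blocks differ per network.
        let history = [(0, Schema::V1), (500_000, Schema::V2), (1_200_000, Schema::V3)];
        assert_eq!(schema_at(&history, 1_366_006), Some(Schema::V3));
    }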
