forked from ethereum/go-ethereum
[pull] master from ethereum:master #30
Open — pull wants to merge 4,582 commits into scope-demo:master from ethereum:master
Here we are modifying the port mapping logic so that existing port mappings are only removed when they were previously created by geth. The AddAnyPortMapping functionality has been adapted to work consistently across the IGDv1 and IGDv2 backends.

Co-authored-by: lightclient <lightclient@protonmail.com>

Add GetHeaderByNumber and GetReceiptsByNumber to allow more efficient API request filling from Era files.
Signed-off-by: jsvisa <delweng@gmail.com>
This PR improves error handling in the remotedb package by fixing two issues:

1. In the `Has` method, we now properly propagate errors instead of silently returning false. This makes the behavior more predictable and helps clients better understand when there are connection issues.
2. In the `New` constructor, we add a nil check for the client parameter to prevent potential panics. This follows Go best practices for constructor functions.

These changes make the code more robust and follow Go's error handling idioms without requiring any changes to other parts of the codebase.

Changes:
- Modified `Has` method to return errors instead of silently returning false
- Added nil check in `New` constructor
- Fixed field name in constructor to match struct definition
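The two fixes above can be sketched as follows. This is an illustrative, self-contained mock, not the actual remotedb code: the `Client` interface and the `"db_has"` RPC method name are assumptions for the sketch.

```go
package main

import (
	"errors"
	"fmt"
)

// Client is a stand-in for the RPC client used by remotedb;
// the real type in go-ethereum differs in detail.
type Client interface {
	Call(result interface{}, method string, args ...interface{}) error
}

// Database wraps a remote key-value store behind an RPC client.
type Database struct {
	client Client
}

// New returns a remote database backed by client.
// The nil check prevents a later panic on first use.
func New(client Client) (*Database, error) {
	if client == nil {
		return nil, errors.New("remotedb: client must not be nil")
	}
	return &Database{client: client}, nil
}

// Has reports whether key is present. Errors are propagated to the
// caller instead of being collapsed into `false`.
func (db *Database) Has(key []byte) (bool, error) {
	var ok bool
	// "db_has" is a hypothetical method name for this sketch.
	if err := db.client.Call(&ok, "db_has", key); err != nil {
		return false, err // previously: plain `return false`, hiding the failure
	}
	return ok, nil
}

func main() {
	if _, err := New(nil); err != nil {
		fmt.Println("constructor rejected nil client:", err)
	}
}
```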
This pull request introduces two constraints in the blobPool:

(a) If the sender has a pending authorization or delegation, only one in-flight executable transaction can be cached.
(b) If the authority address in a SetCode transaction is already reserved by the blobPool, the transaction will be rejected.

These constraints mitigate an attack where an attacker spams the pool with numerous blob transactions, evicts other transactions, and then cancels all pending blob transactions by draining the sender's funds if they have a delegation.

Note, because there is no exclusive lock held between different subpools when processing transactions, it's entirely possible that a SetCode transaction and blob transactions with conflicting senders and authorities are accepted simultaneously. I think that is acceptable, as it is very hard to exploit.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
During my benchmarks on Holesky, around 10% of all CPU time was spent in PUSH2:

```
ROUTINE ======================== github.com/ethereum/go-ethereum/core/vm.newFrontierInstructionSet.makePush.func1 in github.com/ethereum/go-ethereum/core/vm/instructions.go
  16.38s     20.35s (flat, cum) 10.31% of Total
   740ms      740ms    976: return func(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
       .          .    977:     var (
    40ms       40ms    978:         codeLen = len(scope.Contract.Code)
   970ms      970ms    979:         start   = min(codeLen, int(*pc+1))
   200ms      200ms    980:         end     = min(codeLen, start+pushByteSize)
       .          .    981:     )
   670ms      2.39s    982:     a := new(uint256.Int).SetBytes(scope.Contract.Code[start:end])
       .          .    983:
       .          .    984:     // Missing bytes: pushByteSize - len(pushData)
   410ms      410ms    985:     if missing := pushByteSize - (end - start); missing > 0 {
       .          .    986:         a.Lsh(a, uint(8*missing))
       .          .    987:     }
  12.69s     14.94s    988:     scope.Stack.push2(*a)
    10ms       10ms    989:     *pc += size
   650ms      650ms    990:     return nil, nil
       .          .    991: }
       .          .    992: }
```

Which is quite crazy. We already have a handwritten encoder for PUSH1; this PR adds one for PUSH2. PUSH2 is the second most used opcode, as shown here: https://gist.github.com/shemnon/fb9b292a103abb02d98d64df6fbd35c8, since solidity uses it quite heavily. It's used ~20 times as much as PUSH20 and PUSH32.

# Benchmarks

```
BenchmarkPush/makePush-14     94196547    12.27 ns/op    0 B/op    0 allocs/op
BenchmarkPush/push-14        429976924    2.829 ns/op    0 B/op    0 allocs/op
```

---------

Co-authored-by: jwasinger <j-wasinger@hotmail.com>
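The idea of the specialization can be sketched as below: read the two immediate bytes directly instead of going through the generic slice/`min`/`SetBytes` path that `makePush` uses for arbitrary sizes. This is a minimal illustration assuming a `uint64` result; the real handler works on the interpreter's 256-bit stack and its signature differs.

```go
package main

import "fmt"

// opPush2 sketches a hand-specialized PUSH2 handler. Missing immediate
// bytes past the end of the code are treated as zero, matching the
// generic implementation's zero-padding behavior.
func opPush2(pc *uint64, code []byte) uint64 {
	var v uint64
	if i := *pc + 1; i < uint64(len(code)) {
		v = uint64(code[i]) << 8 // high immediate byte
	}
	if i := *pc + 2; i < uint64(len(code)) {
		v |= uint64(code[i]) // low immediate byte
	}
	*pc += 2 // skip the immediate data; the main loop steps past the opcode
	return v
}

func main() {
	code := []byte{0x61, 0x12, 0x34} // PUSH2 0x1234
	pc := uint64(0)
	fmt.Printf("0x%04x\n", opPush2(&pc, code)) // 0x1234
}
```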
Make UPnP more robust:

- Once a random port was mapped, we try to stick to it even if a UPnP refresh fails. Previously we were immediately moving back to try the default port, leading to frequent ENR changes.
- We were deleting port mappings before refresh as a possible workaround. This created issues in some UPnP servers. The UPnP (and PMP) specification is explicit about the refresh requirements, and delete is clearly not needed (see #30265 (comment)). From now on we only delete when closing.
- We were trying to add port mappings only once, and then moved on to random ports. Now we insist a bit more, so that a simple failed request won't lead to ENR changes.

Fixes #31418

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
…ode (#31567) The `Sync(..)` function's comment explicitly says not to rely on it in production code, but it is used in `Clear(..)`, so a similar mention should be added there.
…1373) This PR proposes a change to the authorizations' validation introduced in commit cdb66c8. These changes make the expected behavior independent of the order of admission of authorizations, improving the predictability of the resulting state and the usability of the system.

The current implementation's behavior depends on the transaction submission order. This issue relates to authorities and the sender of a transaction, and can be reproduced while respecting the normal nonce rules, via the two following cases:

**First case**
- Given an empty pool.
- Submit transaction `{ from: B, auths [ A ] }`: is accepted.
- Submit transaction `{ from: A }`: is accepted; it becomes the one in-flight transaction allowed.

**Second case**
- Given an empty pool.
- Submit transaction `{ from: A }`: is accepted.
- Submit transaction `{ from: B, auths [ A ] }`: is rejected, since there is already a queued/pending transaction from A.

The expected behavior is that both sequences of events lead to the same sets of accepted and rejected transactions.

**Proposed changes**

The queued/pending transactions issued from any authority of the transaction being validated have to be counted, allowing one transaction from accounts submitting an authorization.

- Notice that the expected behavior was explicitly forbidden in the case "reject-delegation-from-pending-account"; I believe this behavior conflicts with the definition of the limitation, and it is removed in this PR. The expected behavior is tested in "accept-authorization-from-sender-of-one-inflight-tx".
- Replacement tests have been separated to improve readability of the acceptance test.
- The test "allow-more-than-one-tx-from-replaced-authority" has been extended with one extra transaction, since the system would always have accepted one transaction (but not two).
- The test "accept-one-inflight-tx-of-delegated-account" is extended to clean up state, avoiding leaking the delegation used into the other tests. Additionally, the replacement check is moved out to be tested in its own test case.

**Expected behavior**

The expected behavior of the authorizations' validation shall be as follows. Notice that replacement shall be allowed, and behavior shall remain coherent with the table, according to the replaced transaction.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
…#31468) Fixes #31169

The TestTransactionForgotten test was flaky due to real time dependencies. This PR:

- Replaces real time with a mock clock for deterministic timing control
- Adds precise state checks at timeout boundaries
- Verifies underpriced cache states and cleanup
- Improves test reliability by controlling transaction timestamps
- Adds checks for transaction re-enqueueing behavior

The changes ensure consistent test behavior without timing-related flakiness.

---------

Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
This fix allows Trezor to support the full 32-bit chainId in geth, with the next version of firmware.

- Trezor returns only the signature parity for the `chainId > 2147483630` case.
- For the `chainId == 2147483630` case, Trezor returns `MAX_UINT32` or `0`, but it doesn't matter (`2147483630 * 2 + 35` = `4294967295` = `MAX_UINT32`).

| chainId | returned signature_v | compatibility |
|---------|----------------------|---------------|
| 0 < chainId <= 255 | chainId * 2 + 35 + v | no issue (firmware `1.6.2` for Trezor One) |
| 255 < chainId <= 2147483630 | chainId * 2 + 35 + v | ***fixed***, firmware `1.6.3` |
| chainId > 2147483630 | v | firmware `1.6.3` |

Please see also the full 32-bit chainId support for Trezor:

- Trezor One: trezor/trezor-mcu#399 ***merged***
- Trezor Model T: trezor/trezor-core#311 ***merged***

---------

Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
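The table above can be turned into a small normalization step on the host side: when the device sends only the parity bit, the full EIP-155 `v` is rebuilt as `chainId*2 + 35 + parity`. This is an illustrative sketch of that arithmetic, not the actual accounts/usbwallet code; the function name is hypothetical.

```go
package main

import "fmt"

// Above this chain id, the device's 32-bit signature_v field would
// overflow, so Trezor returns only the parity bit (0 or 1).
const maxTrezorChainID = 2147483630

// normalizeV reconstructs the EIP-155 v value from what the device
// returned, per the compatibility table in the PR description.
func normalizeV(chainID, returned uint64) uint64 {
	if chainID > maxTrezorChainID {
		// Device sent parity only: rebuild the full v.
		return chainID*2 + 35 + returned
	}
	// Device already sent chainId*2 + 35 + parity.
	return returned
}

func main() {
	fmt.Println(normalizeV(1, 37))         // mainnet, passthrough: 37
	fmt.Println(normalizeV(3000000000, 1)) // parity only: 3000000000*2 + 35 + 1 = 6000000036
}
```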
…rtion to prevent race between txpool head reset and promotion of txs that will be subsequently added (#31595)

Before this change, the following would result in numerous test failures:

```
> go test -run=Eth2AssembleBlock -c
> stress ./catalyst.test
```

The reason is that after creating/inserting the test chain, there is a race between the txpool head reset and the promotion of txs added from tests. Ensuring that the txpool state is up to date with the head of the chain before proceeding fixes these flaky tests.
When we instantiate a sub-logger via `go-ethereum/internal/testlog/logger.With`, we copy the reference to the `bufHandler` from the parent logger. However, internally, `go-ethereum/internal/testlog/logger.With` calls `log/slog/Logger.With`, which creates a new handler instance (via `internal/bufHandler.WithAttrs`). This PR modifies sub-logger instantiation to use the newly-instantiated handler, instead of copying the reference from the parent instance.

The type cast from `slog.Handler` to `*bufHandler` in `internal/testlog/Logger.With` is safe here because an `internal/testlog/Logger` can only be instantiated with a `*bufHandler` as the underlying handler type. Note that I've also removed a pre-existing method that broke the above assumption; however, this method is not used in our codebase. I'm not sure if the assumption holds for forks of geth (e.g. optimism has modified the testlogger somewhat, allowing test loggers to accept arbitrary handler types), but it seems okay to break API compatibility given that this is in the `internal` package.

closes #31533
The submodule was accidentally updated to another commit by f64aa6e.
Our previous success metrics counted a success even if a peer disconnected right after connecting. The new metrics only count peers that stayed connected for at least 1 min. The 1 min limit is an arbitrary choice. We do not use this for decision logic, only for statistics.
As of now, Geth disconnects peers only on protocol error or timeout, meaning once connection slots are filled, the peerset is largely fixed. As mentioned in #31321, Geth should occasionally disconnect peers to ensure some churn. What/when to disconnect could depend on:

- the state of geth (e.g. sync or not)
- current number of peers
- peer level metrics

This PR adds a very slow churn using a random drop.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
I added the history mode configuration in eth/ethconfig initially, since it seemed like the logical place. But it turns out we need access to the intended pruning setting at a deeper level, and it actually needs to be integrated with the blockchain startup procedure.

With this change applied, if a node previously had its history pruned and is subsequently restarted **without** the `--history.chain postmerge` flag, the `BlockChain` initialization code will now verify the freezer tail against the known pruning point of the predefined network and will restore the pruning status. Note that this logic is quite restrictive: we allow a non-zero tail only for known networks, and only for the specific pruning point that is defined.
Thank you, @holiman, for being an integral part of Go-Ethereum and for your invaluable contributions over the years. This will always be your home and you're welcome back anytime!
Our metrics related to dial errors were off. The original error was not wrapped, so the caller function had no chance of picking it up. Therefore the most common error, "TooManyPeers", was not correctly counted. The metrics were originally introduced in #27621.

I was thinking of various possible solutions:

- The one proposed here wraps both the new error and the original error. It is not a pattern we use in other parts of the code, but it works, and is maybe the smallest possible change.
- Alternatively, I could write a proper `errProtoHandshakeError` with its own wrapped error.
- Finally, I'm not even sure we need `errProtoHandshakeError`; maybe we could just pass up the original error.

---------

Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR fixes a bug in the map renderer that sometimes used an obsolete block log value pointer to initialize the iterator for rendering from a snapshot. This bug was triggered by chain reorgs and sometimes caused indexing errors and invalid search results. A few other conditions that were not yet reported to cause issues, but could potentially be unsafe in some corner cases, are also made safer. A new unit test is added that reproduced the bug but passes with the new fixes.

Fixes #31593

Might also fix #31589; this issue has not been reproduced yet, but it appears to be related to a log index database corruption around a specific block, similarly to the other issue.

Note that running this branch resets and regenerates the log index database. For this purpose a `Version` field has been added to `rawdb.FilterMapsRange`, which will also make this easier in the future if a breaking database change is needed or the existing index is considered potentially broken due to a bug, like in this case.
Log `key` in hexadecimal string format.
Fix the issue after initial snap sync with `gcmode=archive` enabled:

```
NewPayload: inserting block failed error="history indexing is out of order, last: null, requested: 1"
```

---------

Signed-off-by: Delweng <delweng@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
The address filter was never checked against a maximum limit, which can be somewhat abusive for API nodes. This PR adds a limit similar to topics.

## Description (AI generated)

This pull request introduces a new validation to enforce a maximum limit on the number of addresses allowed in filter criteria for Ethereum logs. It includes updates to the `FilterAPI` and `EventSystem` logic, as well as corresponding test cases to ensure the new constraint is properly enforced.

### Core functionality changes

- Added a new constant, `maxAddresses`, set to 100, to define the maximum allowable addresses in a filter.
- Introduced a new error, `errExceedMaxAddresses`, to handle cases where the number of addresses exceeds the limit.
- Updated the `GetLogs` method in `FilterAPI` to validate the number of addresses against `maxAddresses`.
- Modified the `UnmarshalJSON` method to return an error if the number of addresses in the input JSON exceeds `maxAddresses`.
- Added similar validation to the `SubscribeLogs` method in `EventSystem`.

### Test updates

- Added a test in `TestUnmarshalJSONNewFilterArgs` to verify that exceeding the maximum number of addresses triggers the `errExceedMaxAddresses` error.
- Updated `TestInvalidLogFilterCreation` to include a test case for an invalid filter with more than `maxAddresses` addresses.
- Updated `TestInvalidGetLogsRequest` to test for invalid log requests with excessive addresses.

These changes ensure that the system enforces a reasonable limit on the number of addresses in filter criteria, improving robustness and preventing potential performance issues.

---------

Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
## Summary

This PR resolves issue #31929 by reducing log noise generated by the log indexer after `debug_setHead` operations.

## Problem Description

When `debug_setHead` is called to rewind the blockchain, blocks are removed from the database. However, the log indexer's `ChainView` objects may still hold references to these deleted blocks. When `extendNonCanonical()` attempts to access these missing headers, it results in:

1. **Repeated ERROR logs**: `Header not found number=X hash=0x...`
2. **Log noise** that can mask other important errors
3. **User confusion** about whether this indicates a real problem

## Root Cause Analysis

The issue occurs because:

- `debug_setHead` removes blocks from the blockchain database
- The log indexer's `ChainView` may still reference deleted block hashes
- `extendNonCanonical()` in `core/filtermaps/chain_view.go` tries to fetch these missing headers
- The existing `return false` logic properly handles the error, but logs at ERROR level

## Solution

This is a **logging improvement only**, with no functional logic changes.

### Changes Made

1. **Log level**: changed from `ERROR` to `DEBUG`
2. **Log message**: enhanced with descriptive context about chain view extension
3. **Comments**: added explanation for when this situation occurs
4. **Behavior**: maintains existing error handling (`return false` was already present)

### Code Changes

```go
// Before
log.Error("Header not found", "number", number, "hash", hash)
return false

// After
// Header not found - this can happen after debug_setHead operations
// where blocks have been deleted. Return false to indicate the chain view
// is no longer valid rather than logging repeated errors.
log.Debug("Header not found during chain view extension", "number", number, "hash", hash)
return false
```

## Testing

### Automated Tests

- ✅ All existing filtermaps tests pass: `go test ./core/filtermaps -v`
- ✅ No regressions in related functionality

### Manual Verification

1. **Before fix**: started geth in dev mode, generated blocks, called `debug_setHead(3)` → **5 repeated ERROR logs**
2. **After fix**: same scenario → **4 DEBUG logs, no ERROR noise**

### Test Environment

```bash
# Setup test environment
rm -rf ./dev-test-data
./build/bin/geth --dev --datadir ./dev-test-data --http --http.api debug,eth,net,web3 --verbosity 4

# Generate test blocks and trigger issue
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"debug_setHead","params":["0x3"],"id":1}' http://localhost:8545
```

## Related Issues

- Fixes #31929

## Additional Context

This issue was reported as spurious error messages appearing after `debug_setHead` operations. The investigation revealed that while the error handling was functionally correct, the ERROR log level was inappropriate for this expected scenario in development/debugging workflows. The fix maintains full compatibility while significantly improving the debugging experience for developers using `debug_setHead`.

---------

Co-authored-by: Sun Tae, Kim <38067691+humblefirm@users.noreply.github.com>
Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
Sorry for not fully fixing this in #31761; the log format of the unindexing message is now cleaned up to make it consistent with the indexing message.
It should be `newPayloadV4 must only be called for prague payloads` for the V4 payload error
Use `make(map, len(txpool))` to preallocate the map for the txpool content, to avoid the map growing in the loop.
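The optimization above can be illustrated with a minimal sketch: passing the final element count as the capacity hint to `make` avoids repeated grow-and-rehash cycles while the map is filled. The `tx` type and `contentFrom` function are simplified stand-ins, not geth's real txpool types.

```go
package main

import "fmt"

type tx struct{ nonce uint64 }

// contentFrom builds a per-account view of the pool. The capacity hint
// in make sizes the map up front, so filling it in the loop does not
// trigger incremental growth.
func contentFrom(pending map[string][]tx) map[string][]tx {
	out := make(map[string][]tx, len(pending)) // preallocate to final size
	for addr, txs := range pending {
		out[addr] = txs
	}
	return out
}

func main() {
	pool := map[string][]tx{
		"0xa": {{nonce: 1}},
		"0xb": {{nonce: 2}, {nonce: 3}},
	}
	fmt.Println(len(contentFrom(pool))) // 2
}
```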
Similar to #31856, remove the unavailable shh and swarm modules from the console.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request refines the filtermap implementation, defining key APIs for map and epoch calculations to improve readability. This pull request doesn't change any logic, it's a pure cleanup. --------- Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
geth cmd: `geth --dev --dev.period 5`; call `debug.setHead` to roll back several blocks. If the `debug.setHead` call is delayed, it will trigger a panic with a small probability, due to dereferencing the nil pointer `fcResponse.PayloadID`.

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
1. Fix the error return format. **todo**: ~~`bindtype` needs more complex logic to fix it.~~

```go
if err != nil {
    return nil, err
}
if err == nil {
    return obj, nil
}
```

2. ~~Return a pointer-type object to avoid copying the whole struct content.~~
3. Give the panic decision to the user.
4. Fix the empty line at the end of the function. **TODO**: ~~fix some related test cases.~~

---------

Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
This PR changes the trace test to block level, aiming for better execution performance. --------- Co-authored-by: zsfelfoldi <zsfelfoldi@gmail.com>
Improves the SSTORE gas calculation a bit. Previously we would pull up the state object twice. This is okay for existing objects, since they are cached; however, non-existing objects are not cached, so we needed to go through all 128 diff layers as well as the disk layer twice, just for the gas calculation.

```
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/vm
cpu: AMD Ryzen 9 5900X 12-Core Processor
                │ /tmp/old.txt │            /tmp/new.txt             │
                │    sec/op    │   sec/op     vs base                │
Interpreter-24    1118.0n ± 2%   602.8n ± 1%  -46.09% (p=0.000 n=10)
```

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
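The essence of the change, resolving the state object once instead of twice, can be sketched with a toy model. Everything here (types, function names, the lookup counter) is illustrative; the real code reads current and original storage values through the StateDB, and the expensive part is the account resolution for non-existing accounts.

```go
package main

import "fmt"

// stateDB is a toy stand-in; lookups stand for walking up to 128 diff
// layers plus the disk layer, which is expensive and uncached for
// accounts that do not exist.
type stateDB struct {
	accounts map[string]*account
	lookups  int
}

type account struct{ storage map[string]string }

func (db *stateDB) getAccount(addr string) *account {
	db.lookups++
	return db.accounts[addr]
}

// sstoreGasOld models the old path: two independent resolutions.
func sstoreGasOld(db *stateDB, addr, key string) (cur, orig string) {
	if a := db.getAccount(addr); a != nil {
		cur = a.storage[key]
	}
	if a := db.getAccount(addr); a != nil {
		orig = a.storage[key]
	}
	return
}

// sstoreGasNew models the new path: resolve once, reuse the result.
func sstoreGasNew(db *stateDB, addr, key string) (cur, orig string) {
	if a := db.getAccount(addr); a != nil {
		cur, orig = a.storage[key], a.storage[key]
	}
	return
}

func main() {
	db := &stateDB{accounts: map[string]*account{}}
	sstoreGasOld(db, "0xmissing", "k")
	old := db.lookups
	sstoreGasNew(db, "0xmissing", "k")
	fmt.Println(old, db.lookups-old) // 2 1
}
```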
Some of the flags were deprecated, so hide them in the help message, and move the `--vmodule` and `--logjson` flags to the DeprecatedCategory.
This is a follow-up PR after #32128; it seems I missed adding `--txlookuplimit` as hidden. In the meantime, I also add the other deprecated flags to the output of `show-deprecated-flags`.
Created by pull[bot] (v2.0.0-alpha.2)
Can you help keep this open source service alive? 💖 Please sponsor : )