
[pull] master from ethereum:master #30


Open

wants to merge 4,410 commits into base: master from ethereum:master

Conversation


@pull pull bot commented Feb 28, 2020

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

@pull pull bot added the ⤵️ pull label Feb 28, 2020
@pull pull bot added the merge-conflict Resolve conflicts manually label Mar 27, 2020
s1na and others added 26 commits February 5, 2025 13:58
Here we add some more changes for live tracing API v1.1:

- Hook `OnSystemCallStartV2` was introduced with `VMContext` as parameter.
- Hook `OnBlockHashRead` was introduced.
- `GetCodeHash` was added to the state interface.
- The new `WrapWithJournal` construction helps with tracking EVM reverts in the tracer.
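
A rough sketch of wiring up the two new hooks in a live tracer (hook names are taken from the list above; exact signatures are paraphrased, the authoritative definitions live in `core/tracing`):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
)

// newHooks wires up the two new v1.1 hooks in a minimal live tracer.
func newHooks() *tracing.Hooks {
	return &tracing.Hooks{
		// Fired when the EVM reads a block hash, e.g. via the BLOCKHASH opcode.
		OnBlockHashRead: func(blockNumber uint64, hash common.Hash) {
			fmt.Printf("block hash read: %d -> %s\n", blockNumber, hash)
		},
		// V2 variant of the system-call hook, now receiving the VM context.
		OnSystemCallStartV2: func(vm *tracing.VMContext) {
			fmt.Println("system call at block", vm.BlockNumber)
		},
	}
}
```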

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
After recent changes in Geth (removing TD):

39638c8#diff-d70a44d4b7a0e84fe9dcca25d368f626ae6c9bc0b8fe9690074ba92d298bcc0d

Non-Geth clients are failing many devp2p tests with an error:
`peering failed: status exchange failed: wrong TD in status: have 1 want 0`

Right now only Geth is passing them; all other clients are affected by
this change. I think there should be no validation of TD when checking the `Status`
message in hive tests. Geth now reports 0 (and the hive tests require 0) while
all other clients report the actual TD. On real networks there is no TD
validation when peering anyway.
Agreed on the following fork dates for Holesky and Sepolia during ACDC 150:

Holesky slot: 3710976	(Mon, Feb 24 at 21:55:12 UTC)
Sepolia slot: 7118848	(Wed, Mar 5 at 07:29:36 UTC)
This removes the method `TestingTTDBlock` introduced by #30744. It was
added to make the beacon consensus engine aware of the merge block in
tests without relying on the total difficulty. However, tracking the
merge block this way is very annoying. We usually configure forks in the
`ChainConfig`, but the method is on the consensus engine, which isn't
always created in the same place. By sidestepping the `ChainConfig` we
don't get the usual fork-order checking, so it's possible to enable the
merge before the London fork, for example. This in turn can lead to very
hard-to-debug outputs and validation errors.

So here I'm changing the consensus engine to check the
`MergeNetsplitBlock` instead. Alternatively, we assume a network is
merged if it has a `TerminalTotalDifficulty` of zero, which is a very
common configuration in tests.
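
A minimal sketch of the resulting check (the helper name is hypothetical; the real logic sits in the beacon consensus engine):

```go
package main

import "github.com/ethereum/go-ethereum/params"

// isPostMerge reports whether the chain is considered merged at the given
// block number: either the merge netsplit block has been reached, or the
// network is merged from genesis, signalled by TerminalTotalDifficulty == 0.
func isPostMerge(cfg *params.ChainConfig, number uint64) bool {
	if cfg.MergeNetsplitBlock != nil {
		return cfg.MergeNetsplitBlock.Uint64() <= number
	}
	return cfg.TerminalTotalDifficulty != nil && cfg.TerminalTotalDifficulty.Sign() == 0
}
```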
The new SetCode transaction type introduces some additional complexity
when handling the transaction pool.

This complexity stems from two new account behaviors:

1. The balance and nonce of an account can change during regular
   transaction execution *when they have a deployed delegation*.
2. The nonce and code of an account can change without any EVM execution
   at all. This is the "set code" mechanism introduced by EIP-7702.

The first issue has already been considered extensively during the design
of ERC-4337, and we're relatively confident in the solution of simply
limiting the number of in-flight pending transactions an account can have
to one. This puts a reasonable bound on transaction cancellation. Normally,
cancelling costs 21,000 gas. Now it's possible to cancel for around the cost
of warming the account and sending value (`2,600 + 9,000 = 11,600` gas),
i.e. roughly 45% cheaper.

The second issue is more novel and needs further consideration.
Since authorizations are not bound to a specific transaction, we
cannot drop transactions with conflicting authorizations. Otherwise,
it might be possible to cherry-pick authorizations from txs and front-run
them with different txs at much lower fee amounts, effectively DoSing
the authority. Fortunately, conflicting authorizations do not affect the
underlying validity of the transaction, so we can simply accept both.
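
A hedged sketch of the first rule; the helper names here are illustrative stand-ins for the pool's real lookups, not the actual validation code:

```go
package main

import (
	"errors"

	"github.com/ethereum/go-ethereum/common"
)

// checkDelegatedSender enforces the one-in-flight rule: an account whose code
// is a 7702 delegation may have its balance/nonce changed by other txs, so it
// is limited to a single pending transaction at a time.
func checkDelegatedSender(sender common.Address,
	hasDelegation func(common.Address) bool,
	pendingCount func(common.Address) int) error {
	if hasDelegation(sender) && pendingCount(sender) >= 1 {
		return errors.New("delegated account already has an in-flight transaction")
	}
	return nil
}
```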

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
Fixes an error when the block is not found in debug methods.
This fixes an error where executing `evm run --dump ...` omits preimages
from the dump (because the statedb used for execution is a copy of
another instance).
closes #31072

BLST released their newest version, which includes a fix for Go 1.24:
https://github.com/supranational/blst/releases/tag/v0.3.14

I went through all commits between v0.3.13 and v0.3.14 as a sanity check.
This is to prevent a crash on startup with a custom genesis configuration.
With this change in place, upgrading a chain created by geth v1.14.x and
below will now print an error instead of crashing:

    Fatal: Failed to register the Ethereum service: invalid chain configuration: missing entry for fork "cancun" in blobSchedule

Arguably this is not great; it should just auto-upgrade the config.
We'll address this in a follow-up PR for geth v1.15.2.
This PR addresses a flaw in the freezer table upgrade path.

In v1.15.0, freezer table v2 was introduced, including an additional 
field (`flushOffset`) maintained in the metadata file. To ensure 
backward compatibility, an upgrade path was implemented for legacy
freezer tables by setting `flushOffset` to the size of the index file.

However, if the freezer table is opened in read-only mode, this file 
write operation is rejected, causing Geth to shut down entirely.

Given that invalid items in the freezer index file can be detected and 
truncated, all items in freezer v0 index files are guaranteed to be
complete. Therefore, when operating in read-only mode, it is safe to
use the freezer data without performing an upgrade.
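
A sketch of the decision this PR introduces (the type and field names are simplified stand-ins, not the real freezer code):

```go
package main

// tableMeta is a simplified stand-in for the freezer table metadata.
type tableMeta struct {
	version     uint16
	flushOffset int64
	persist     func() error // writes the metadata file to disk
}

// upgradeMeta brings a legacy (v0) table up to the new format. In read-only
// mode the flushOffset is only set in memory: v0 index items are guaranteed
// complete, so skipping the disk write is safe and avoids the startup failure.
func upgradeMeta(meta *tableMeta, indexSize int64, readonly bool) error {
	if meta.version != 0 {
		return nil // already upgraded
	}
	meta.flushOffset = indexSize
	if readonly {
		return nil // no disk write in read-only mode
	}
	return meta.persist()
}
```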
Currently, when calculating a block's bloom, we loop through all the
receipt logs and hash each one. However, after a transaction has gone
through applyTransaction, the receipt's bloom has already been computed
from its logs, so the block's bloom can be obtained by simply ORing
the receipts' blooms together.
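The idea in miniature (sketch only; the actual change lives in `core/types`):

```go
package main

import "github.com/ethereum/go-ethereum/core/types"

// mergedBloom folds the already-computed per-receipt blooms together with a
// byte-wise OR, avoiding re-hashing every log of every receipt.
func mergedBloom(receipts types.Receipts) types.Bloom {
	var bloom types.Bloom
	for _, receipt := range receipts {
		for i := range bloom {
			bloom[i] |= receipt.Bloom[i]
		}
	}
	return bloom
}
```

The benchmarks below compare the old per-log hashing with this merge approach.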
```
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/core/types
cpu: Apple M1 Pro
BenchmarkCreateBloom
BenchmarkCreateBloom/small
BenchmarkCreateBloom/small-10             810922              1481 ns/op             104 B/op          5 allocs/op
BenchmarkCreateBloom/large
BenchmarkCreateBloom/large-10               8173            143764 ns/op            9614 B/op        401 allocs/op
BenchmarkCreateBloom/small-mergebloom
BenchmarkCreateBloom/small-mergebloom-10                 5178918               232.0 ns/op             0 B/op          0 allocs/op
BenchmarkCreateBloom/large-mergebloom
BenchmarkCreateBloom/large-mergebloom-10                   54110             22207 ns/op               0 B/op          0 allocs/op
```

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
This fixes a regression introduced in #31153 where we didn't consider
mainnet to be in PoS, causing #31190.
The problem is that `params.MainnetChainConfig` does not define a
`MergeNetsplitBlock`, so it isn't considered to be in PoS in
`CalcDifficulty`.
This fixes an issue where a nat.Interface unmarshaled from the TOML
config file could not be re-marshaled to TOML correctly.

Fixes #31183
We somehow forgot to add this in #30302, so discv5 and DNS have actually
been disabled since then.

Fixes #31168
This PR removes the assumption that the stacktrie and trie have the
same ordering. This was hit by the fuzzers on oss-fuzz.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
fjl and others added 30 commits April 25, 2025 13:27
TruncatePending shows up bright red on our nodes, because it iterates
the pending map and recomputes each list's length multiple times.
I don't know why this is so expensive, but around 20% of our time is
spent here, which is super weird.

```
//PR: BenchmarkTruncatePending-24    	   17498	     69397 ns/op	   32872 B/op	       3 allocs/op
//Master: BenchmarkTruncatePending-24    	    9960	    123954 ns/op	   32872 B/op	       3 allocs/op
```

```
benchmark                       old ns/op     new ns/op     delta
BenchmarkTruncatePending-24     123954        69397         -44.01%

benchmark                       old allocs     new allocs     delta
BenchmarkTruncatePending-24     3              3              +0.00%

benchmark                       old bytes     new bytes     delta
BenchmarkTruncatePending-24     32872         32872         +0.00%
```
This simple PR is a 44% improvement over the old state.


``` 
ROUTINE ======================== github.com/ethereum/go-ethereum/core/txpool/legacypool.(*LegacyPool).truncatePending in github.com/ethereum/go-ethereum/core/txpool/legacypool/legacypool.go
     1.96s     18.02s (flat, cum) 19.57% of Total
         .          .   1495:func (pool *LegacyPool) truncatePending() {
         .          .   1496:	pending := uint64(0)
      60ms      2.99s   1497:	for _, list := range pool.pending {
     250ms      5.48s   1498:		pending += uint64(list.Len())
         .          .   1499:	}
         .          .   1500:	if pending <= pool.config.GlobalSlots {
         .          .   1501:		return
         .          .   1502:	}
         .          .   1503:
         .          .   1504:	pendingBeforeCap := pending
         .          .   1505:	// Assemble a spam order to penalize large transactors first
         .      510ms   1506:	spammers := prque.New[int64, common.Address](nil)
     140ms      2.50s   1507:	for addr, list := range pool.pending {
         .          .   1508:		// Only evict transactions from high rollers
      50ms      5.08s   1509:		if uint64(list.Len()) > pool.config.AccountSlots {
         .          .   1510:			spammers.Push(addr, int64(list.Len()))
         .          .   1511:		}
         .          .   1512:	}
         .          .   1513:	// Gradually drop transactions from offenders
         .          .   1514:	offenders := []common.Address{}
```
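
One plausible shape of the fix is to fuse the two passes so each list length is computed once (a sketch of the loop inside `truncatePending`, not the exact diff):

```go
// Single pass: total the pending txs and collect spammers at the same time,
// instead of iterating pool.pending twice and calling list.Len() repeatedly.
pending := uint64(0)
spammers := prque.New[int64, common.Address](nil)
for addr, list := range pool.pending {
	length := uint64(list.Len()) // computed once per list
	pending += length
	// Only evict transactions from high rollers.
	if length > pool.config.AccountSlots {
		spammers.Push(addr, int64(length))
	}
}
if pending <= pool.config.GlobalSlots {
	return
}
```

The benchmark driving the numbers above: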

```go
// Benchmarks the speed of batch transaction insertion in case of multiple accounts.
func BenchmarkTruncatePending(b *testing.B) {
	// Generate a batch of transactions to enqueue into the pool
	pool, _ := setupPool()
	defer pool.Close()
	b.ReportAllocs()
	batches := make(types.Transactions, 4096+1024+1)
	for i := range len(batches) {
		key, _ := crypto.GenerateKey()
		account := crypto.PubkeyToAddress(key.PublicKey)
		pool.currentState.AddBalance(account, uint256.NewInt(1000000), tracing.BalanceChangeUnspecified)
		tx := transaction(uint64(0), 100000, key)
		batches[i] = tx
	}
	for _, tx := range batches {
		pool.addRemotesSync([]*types.Transaction{tx})
	}
	b.ResetTimer()
	// benchmark truncating the pending
	for range b.N {
		pool.truncatePending()
	}
}
```
This PR adds a check for an edge case which can theoretically happen in
the range prover. Right now, we check that a key does not overwrite a
previous one by checking that the keys are increasing. However, if keys
are of different lengths, it is possible to create a key which is
increasing _and_ overwrites the previous key. Example: `0xaabbcc`
followed by `0xaabbccdd`.

This cannot happen in go-ethereum, which always uses fixed-size paths
for accounts and storage slots in the trie, but it might happen if
the range prover is used without guaranteed fixed-size keys.

This PR also adds some test cases for the expected errors.
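
A minimal illustration of the extra check (the function name is hypothetical):

```go
package main

import (
	"bytes"
	"errors"
)

// checkKeyOrder rejects a key that is not strictly larger than its
// predecessor, and also the variable-length edge case described above:
// 0xaabbccdd sorts after 0xaabbcc yet collides with it, because one key
// is a prefix of the other.
func checkKeyOrder(prev, cur []byte) error {
	if bytes.Compare(cur, prev) <= 0 {
		return errors.New("keys not in strictly increasing order")
	}
	if bytes.HasPrefix(cur, prev) {
		return errors.New("key overwrites previous key")
	}
	return nil
}
```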
This PR applies the config overrides to the new config as well;
otherwise they would not be applied to predefined configs, making
shadow forks impossible.

To test:
```
>  ./build/bin/geth --override.prague 123 --dev --datadir /tmp/geth
INFO [04-28|21:20:47.009]  - Prague:                      @123
> ./build/bin/geth --override.prague 321 --dev --datadir /tmp/geth
INFO [04-28|21:23:59.760]  - Prague:                      @321
```
For PeerDAS, we need to compute cell proofs. Both ckzg and gokzg support
computing these cell proofs.
This PR does the following:

- Updates the go-kzg library from "github.com/crate-crypto/go-kzg-4844"
to "github.com/crate-crypto/go-eth-kzg", which will be the new upstream
for go-kzg moving forward
- Updates ckzg from v1.0.0 to v2.0.1 and switches to /v2
- Updates the trusted setup to contain the G1 points in both Lagrange
and monomial form
- Exposes `ComputeCells` to compute the cell proofs
This changes the filtermaps to only pull up the raw receipts, not the
derived receipts, which saves a lot of allocations.

During normal execution this will reduce the allocations of the whole
geth node by ~15%.
Since the block hash is not returned for pending blocks, ethclient cannot unmarshal into an RPC block. This makes the hash optional on the RPC block and computes it locally for pending blocks to correctly key the tx sender cache.


https://github.com/ethereum/go-ethereum/blob/a82303f4e3cedcebe31540a53dde4f24fc93da80/internal/ethapi/api.go#L500-L504

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Add tests for GetBlockHeaders that verify the client does not disconnect when unlikely block numbers are requested, e.g. max uint64.
---------

Co-authored-by: lightclient <lightclient@protonmail.com>
The functions `rpcRequest` and `batchRpcRequest` call `baseRpcRequest`,
and `resp.Body` is closed later in `baseRpcRequest` by `t.Cleanup`:

```go
func baseRpcRequest(t *testing.T, url, bodyStr string, extraHeaders ...string) *http.Response {
        // ......
	t.Cleanup(func() { resp.Body.Close() })
	return resp
}
```
This PR improves gas estimation for data-heavy transactions that hit the floor data gas cost (EIP-7623).
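
For context, a sketch of the EIP-7623 floor the estimator has to respect: calldata is priced in "tokens" (a zero byte counts as one token, a nonzero byte as four), and the transaction must pay at least `21000 + 10 * tokens` gas:

```go
package main

// floorDataGas computes the EIP-7623 minimum gas for a transaction's calldata:
// 21000 base cost plus 10 gas per token, where a zero byte is one token and a
// nonzero byte is four.
func floorDataGas(data []byte) uint64 {
	var tokens uint64
	for _, b := range data {
		if b == 0 {
			tokens++
		} else {
			tokens += 4
		}
	}
	return 21_000 + 10*tokens
}
```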
This PR fixes the out-of-range block number logic of `getBlockLvPointer`
which sometimes caused searches to fail if the head was updated in the
wrong moment. This logic ensures that querying the pointer of a future
block returns the pointer after the last fully indexed block (instead of
failing) and therefore an async range update will not cause the search
to fail. Earlier this behaviour only worked when `headIndexed` was true
and `headDelimiter` pointed to the end of the indexed range. Now it also
works for an unfinished index.

This logic is also moved from `FilterMaps.getBlockLvPointer` to
`FilterMapsMatcherBackend.GetBlockLvPointer` because it is only required
by the search anyways. `FilterMaps.getBlockLvPointer` now only returns a
pointer for existing blocks, consistently with how it is used in the
indexer/renderer.

Note that this unhandled case has been present in the code for a long
time but went unnoticed because either of two previously fixed bugs
prevented it from being triggered: the incorrectly positive
`tempRange.headIndexed` (fixed in
#31680), though it caused other
problems, prevented this one from being triggered, since with a positive
`headIndexed` no database read happened in `getBlockLvPointer`.
Also, the unnecessary `indexLock` in `synced()` (fixed in
#31708) usually prevented
the search from seeing the temp range and therefore avoided noticeable
issues.
This PR fixes an initialization bug that in some cases caused the map
renderer to leave the last, partially rendered map as is and resume
rendering from the next map. At initialization we check whether the
existing rendered maps are consistent with the current chain view and
revert them if necessary. Until now this happened through an ugly, hacky
solution: a "limited" chain view that was supposed to trigger a rollback
of some maps in the renderer logic if necessary. This whole setup worked
under assumptions that were no longer true. As a result it always
tried to revert the last map, but it also did not shorten the indexed
range; it only set `headIndexed` to false, which indicated to the renderer
logic that the last map was fully populated (which it wasn't).
Now an explicit rollback of any unusable (reorged) maps happens at
startup, which also means that no hacky chain view is necessary; as soon
as the new `FilterMaps` is returned, the indexed range and view are
consistent with each other.

In the first commit an extra check is also added to `writeFinishedMaps`,
so that if there is ever again a bug that would result in a gapped index,
it will not break the db by writing the incomplete data. Instead
it will return an indexing error, which causes the indexer to revert to
unindexed mode and print an error log instantly. Hopefully this will
never happen again, but in order to test this safeguard I
manually triggered the bug with only the first commit applied, which
caused an indexing error as expected. With the second commit (the
actual fix) added, the same operation succeeded without any issues.

Note that the database version is also bumped in this PR in order to
enforce a full reindexing as any existing database might be potentially
broken.

Fixes #31729
`DefaultBlobSchedule` is actually used downstream to calculate blob fees
(e.g.,
[src](https://github.com/ethereum-optimism/optimism/blob/601a380e47853c2922ea1f8944cda05f0eac16f4/op-service/eth/blob.go#L301));
this PR makes it explicit that these params are for production Ethereum
networks rather than test chains.
Fixes #31732.

This logic was removed in the recent refactoring in the txindexer to
handle history cutoff (#31393). It was first introduced in this PR:
#28908.

I have tested it and it works as an alternative to #31745.

This PR packs a few changes to the flow of fetching txs from the API:

- It caches the indexer tail after each run is over to avoid hitting the
db all the time as was done originally in #28908.

- Changes `backend.GetTransaction` so it no longer returns an error while
the tx indexer is in progress, shifting the responsibility to the caller
to check the progress. The reason is that in most cases we check the
txpool for the tx anyway; if it was indeed a pending tx, we can avoid
the indexer progress check.

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>

Updates the log entries in `core/filtermaps/indexer.go` to remove double
quotes around keys like "first block" and "last block", changing them to
`firstblock` and `lastblock`. This brings them in line with the general
logging style used across the codebase, where log keys are unquoted
single words.

For example, the log:
`  INFO [...] "first block"=..., "last block"=...`

Is now rendered as:
`  INFO [...] firstblock=..., lastblock=...`

This change improves readability and maintains consistency with logs
such as:
`  INFO [...] number=2 sealhash=... uncles=0 txs=0 ...`

No functional behavior is changed — this is purely a formatting cleanup
for better developer experience.
Issue statement: when the user requests eth_simulateV1 to return full
transaction objects, these objects always had an empty `from` field. The
reason is that we lose the sender when translating the message into a
types.Transaction, which is later serialized.

I did think of an alternative but opted to stick with this approach as it
keeps complexity at the edge. The alternative would be to pass down a
signer object to RPCMarshal* methods and define a custom signer which
keeps the senders in its state and doesn't attempt the signature
recovery.
This change adds a limit for RPC method names to prevent potential abuse
where large method names could lead to large response sizes.

The limit is enforced in:
- handleCall for regular RPC method calls
- handleSubscribe for subscription method calls

Added tests in websocket_test.go to verify the length limit
functionality for both regular method calls and subscriptions.
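
The guard is conceptually simple; a sketch (the constant and its value here are hypothetical, not the limit the PR actually chose):

```go
package main

import "errors"

const maxMethodNameLength = 128 // hypothetical value, not the PR's actual limit

// validateMethodName rejects oversized method names up front, so a huge bogus
// name is never echoed back in a "method not found" error response.
func validateMethodName(method string) error {
	if len(method) > maxMethodNameLength {
		return errors.New("method name too long")
	}
	return nil
}
```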

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
Fixes the debug_standardTraceBlockToFile and
debug_standardTraceBadBlockToFile methods, which were
outputting empty files.

---------

Co-authored-by: maskpp <maskpp266@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
```
go get github.com/cockroachdb/pebble@v1.1.5
go mod tidy
```

Co-authored-by: lightclient <lightclient@protonmail.com>
This fixes an issue where blocks containing CL requests triggered an
error in the engine API. The encoding of requests used base64 instead of
hex.
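
The underlying Go behavior: `encoding/json` marshals a plain `[]byte` as base64, while geth's `hexutil.Bytes` marshals as 0x-prefixed hex, which is what the engine API expects. A small demonstration:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	data := []byte{0x01, 0x02}

	// Plain []byte: encoding/json emits base64.
	raw, _ := json.Marshal(struct {
		Requests []byte `json:"requests"`
	}{data})
	fmt.Println(string(raw)) // {"requests":"AQI="}

	// hexutil.Bytes: emits 0x-prefixed hex, as the engine API expects.
	fixed, _ := json.Marshal(struct {
		Requests hexutil.Bytes `json:"requests"`
	}{data})
	fmt.Println(string(fixed)) // {"requests":"0x0102"}
}
```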
Add metrics detailing the reasons we reject inbound connections, and the
reasons these connections fail during the handshake.