33 changes: 33 additions & 0 deletions .github/CONTRIBUTING.md
@@ -33,3 +33,36 @@ Please make sure your contributions adhere to our coding guidelines:
Before you submit a feature request, please check and make sure that it isn't
possible through some other means.

## Mocks

Mocks are auto-generated using [mockgen](https://pkg.go.dev/go.uber.org/mock/mockgen) and `//go:generate` commands in the code.

* To **re-generate all mocks**, use the command below from the root of the project:

```sh
go generate -run "go.uber.org/mock/mockgen" ./...
```

* To **add** an interface that needs a corresponding mock generated:
  * if the file `mocks_generate_test.go` exists in the package where the interface is located, either:
    * modify its existing `//go:generate go run go.uber.org/mock/mockgen` line so it also generates a mock for your interface (preferred); or
    * add another `//go:generate go run go.uber.org/mock/mockgen` line if your interface requires specific mock generation settings.
  * if the file `mocks_generate_test.go` does not exist in the package where the interface is located, create it with the following content (adapt as needed):

```go
// Copyright (C) 2025-2025, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

package mypackage

//go:generate go run go.uber.org/mock/mockgen -package=${GOPACKAGE} -destination=mocks_test.go . YourInterface
```

Notes:
1. Ideally, generate all mocks into a `mocks_test.go` file in the package that uses them, and do not export mocks to other packages. This reduces package dependencies, avoids polluting production code, and encourages locally defined, narrow interfaces.
1. Prefer reflect mode over source mode when generating mocks, unless you need a mock for an unexported interface, which should be rare.
* To **remove** an interface from mock generation:
  1. Edit the `mocks_generate_test.go` file in the directory where the interface is defined.
  1. If the `//go:generate` mockgen command line:
     * generates a mock file for multiple interfaces, remove your interface from the line (see the example directive below);
     * generates a mock file only for your interface, remove the entire line. If the file is then empty, remove `mocks_generate_test.go` as well.
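
For illustration, a `mocks_generate_test.go` directive that covers several interfaces might look like the sketch below (the interface names `ClientBackend` and `BlockFetcher` are hypothetical). Removing an interface then simply means deleting it from the comma-separated list:

```go
// Copyright (C) 2025-2025, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

package mypackage

// A single reflect-mode directive can generate mocks for several interfaces at once.
//go:generate go run go.uber.org/mock/mockgen -package=${GOPACKAGE} -destination=mocks_test.go . ClientBackend,BlockFetcher
```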
14 changes: 11 additions & 3 deletions .github/workflows/ci.yml
@@ -32,7 +32,6 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: "~1.22.8"
check-latest: true
- name: change metalgo dep
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
@@ -74,7 +73,6 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: "~1.22.8"
check-latest: true
- name: change avalanchego dep
if: ${{ github.event_name == 'workflow_dispatch' }}
run: |
@@ -86,6 +84,17 @@
run: echo "TIMEOUT=1200s" >> "$GITHUB_ENV"
- run: go mod download
shell: bash
- name: go mod tidy
run: |
go mod tidy
git diff --exit-code
- name: Mocks are up to date
shell: bash
run: |
grep -lr -E '^// Code generated by MockGen\. DO NOT EDIT\.$' . | xargs -r rm
go generate -run "go.uber.org/mock/mockgen" ./...
git add --intent-to-add --all
git diff --exit-code
- run: ./scripts/build.sh evm
shell: bash
- run: ./scripts/build_test.sh
@@ -113,7 +122,6 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: "~1.22.8"
check-latest: true
- name: Run e2e tests
run: E2E_SERIAL=1 ./scripts/tests.e2e.sh
shell: bash
1 change: 0 additions & 1 deletion .github/workflows/sync-subnet-evm-branch.yml
@@ -17,4 +17,3 @@ jobs:
- uses: actions/setup-go@v5
with:
go-version: "~1.22.8"
check-latest: true
20 changes: 20 additions & 0 deletions RELEASES.md
@@ -1,5 +1,25 @@
# Release Notes

## Pending Release

## [v0.14.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1)

- Removed the deprecated `ExportKey`, `ExportAVAX`, `Export`, `ImportKey`, `ImportAVAX`, and `Import` APIs
- IMPORTANT: `eth_getProof` calls for historical state will be rejected by default.
  - On archive nodes (`"pruning-enabled": false`): queries for historical proofs for state older than approximately 24 hours preceding the last accepted block will be rejected by default. This can be adjusted with the new option `historical-proof-query-window`, which defines the number of blocks before the last accepted block that remain eligible for state proof queries; set it to `0` to accept queries for any block number (previous behavior).
  - On pruning nodes: queries for proofs past the tip buffer (32 blocks) will be rejected. This supports the move to a path-based storage scheme, which does not support historical state proofs.
- Removed the `eth_getAssetBalance` API that was used to query ANT balances (deprecated since v0.10.0)
- Removed the legacy gossip handler and metrics (deprecated since v0.10.0)
- Refactored `trie_prefetcher.go` to be structurally similar to [upstream](https://github.com/ethereum/go-ethereum/tree/v1.13.14).

## [v0.14.0](https://github.com/ava-labs/coreth/releases/tag/v0.14.0)

- Minor version update to correspond to avalanchego v1.12.0 / Etna.
- Remove unused historical opcodes CALLEX, BALANCEMC
- Remove unused pre-AP2 handling of genesis contract
- Fix to track tx size in block building
- Test fixes
- Update go version to 1.22

## [v0.13.8](https://github.com/ava-labs/coreth/releases/tag/v0.13.8)
- Update geth dependency to v1.13.14
- eupgrade: lowering the base fee to 1 nAVAX
20 changes: 4 additions & 16 deletions core/blockchain.go
@@ -1288,16 +1288,6 @@ func (bc *BlockChain) insertBlock(block *types.Block, writes bool) error {
blockContentValidationTimer.Inc(time.Since(substart).Milliseconds())

// No validation errors for the block
var activeState *state.StateDB
defer func() {
// The chain importer is starting and stopping trie prefetchers. If a bad
// block or other error is hit however, an early return may not properly
// terminate the background threads. This defer ensures that we clean up
// and dangling prefetcher, without deferring each and holding on live refs.
if activeState != nil {
activeState.StopPrefetcher()
}
}()

// Retrieve the parent block to determine which root to build state on
substart = time.Now()
@@ -1316,8 +1306,8 @@ func (bc *BlockChain) insertBlock(block *types.Block, writes bool) error {
blockStateInitTimer.Inc(time.Since(substart).Milliseconds())

// Enable prefetching to pull in trie node paths while processing transactions
statedb.StartPrefetcher("chain", bc.cacheConfig.TriePrefetcherParallelism)
activeState = statedb
statedb.StartPrefetcher("chain", state.WithConcurrentWorkers(bc.cacheConfig.TriePrefetcherParallelism))
defer statedb.StopPrefetcher()

// Process block using the parent state as reference point
pstart := time.Now()
Expand Down Expand Up @@ -1675,10 +1665,8 @@ func (bc *BlockChain) reprocessBlock(parent *types.Block, current *types.Block)
}

// Enable prefetching to pull in trie node paths while processing transactions
statedb.StartPrefetcher("chain", bc.cacheConfig.TriePrefetcherParallelism)
defer func() {
statedb.StopPrefetcher()
}()
statedb.StartPrefetcher("chain", state.WithConcurrentWorkers(bc.cacheConfig.TriePrefetcherParallelism))
defer statedb.StopPrefetcher()

// Process previously stored block
receipts, _, usedGas, err := bc.processor.Process(current, parent.Header(), statedb, vm.Config{})
4 changes: 3 additions & 1 deletion core/state/snapshot/snapshot.go
@@ -891,6 +891,8 @@ func (t *Tree) disklayer() *diskLayer {
case *diskLayer:
return layer
case *diffLayer:
layer.lock.RLock()
defer layer.lock.RUnlock()
return layer.origin
default:
panic(fmt.Sprintf("%T: undefined layer", snap))
@@ -922,7 +924,7 @@ func (t *Tree) generating() (bool, error) {
return layer.genMarker != nil, nil
}

// DiskRoot is a external helper function to return the disk layer root.
// DiskRoot is an external helper function to return the disk layer root.
func (t *Tree) DiskRoot() common.Hash {
t.lock.Lock()
defer t.lock.Unlock()
49 changes: 27 additions & 22 deletions core/state/statedb.go
@@ -29,6 +29,7 @@ package state

import (
"fmt"
"maps"
"math/big"
"sort"
"time"
@@ -42,6 +43,7 @@ import (
"github.com/MetalBlockchain/coreth/trie"
"github.com/MetalBlockchain/coreth/trie/trienode"
"github.com/MetalBlockchain/coreth/trie/triestate"
"github.com/MetalBlockchain/coreth/utils"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
@@ -201,16 +203,33 @@ func NewWithSnapshot(root common.Hash, db Database, snap snapshot.Snapshot) (*St
return sdb, nil
}

type workerPool struct {
*utils.BoundedWorkers
}

func (wp *workerPool) Done() {
// Done is guaranteed to only be called after all work is already complete,
// so Wait()ing is redundant, but it also releases resources.
wp.BoundedWorkers.Wait()
}

func WithConcurrentWorkers(prefetchers int) PrefetcherOption {
pool := &workerPool{
BoundedWorkers: utils.NewBoundedWorkers(prefetchers),
}
return WithWorkerPools(func() WorkerPool { return pool })
}

// StartPrefetcher initializes a new trie prefetcher to pull in nodes from the
// state trie concurrently while the state is mutated so that when we reach the
// commit phase, most of the needed data is already hot.
func (s *StateDB) StartPrefetcher(namespace string, maxConcurrency int) {
func (s *StateDB) StartPrefetcher(namespace string, opts ...PrefetcherOption) {
if s.prefetcher != nil {
s.prefetcher.close()
s.prefetcher = nil
}
if s.snap != nil {
s.prefetcher = newTriePrefetcher(s.db, s.originalRoot, namespace, maxConcurrency)
s.prefetcher = newTriePrefetcher(s.db, s.originalRoot, namespace, opts...)
}
}

@@ -802,18 +821,18 @@ func (s *StateDB) Copy() *StateDB {
db: s.db,
trie: s.db.CopyTrie(s.trie),
originalRoot: s.originalRoot,
accounts: make(map[common.Hash][]byte),
storages: make(map[common.Hash]map[common.Hash][]byte),
accountsOrigin: make(map[common.Address][]byte),
storagesOrigin: make(map[common.Address]map[common.Hash][]byte),
accounts: copySet(s.accounts),
storages: copy2DSet(s.storages),
accountsOrigin: copySet(s.accountsOrigin),
storagesOrigin: copy2DSet(s.storagesOrigin),
stateObjects: make(map[common.Address]*stateObject, len(s.journal.dirties)),
stateObjectsPending: make(map[common.Address]struct{}, len(s.stateObjectsPending)),
stateObjectsDirty: make(map[common.Address]struct{}, len(s.journal.dirties)),
stateObjectsDestruct: make(map[common.Address]*types.StateAccount, len(s.stateObjectsDestruct)),
stateObjectsDestruct: maps.Clone(s.stateObjectsDestruct),
refund: s.refund,
logs: make(map[common.Hash][]*types.Log, len(s.logs)),
logSize: s.logSize,
preimages: make(map[common.Hash][]byte, len(s.preimages)),
preimages: maps.Clone(s.preimages),
journal: newJournal(),
hasher: crypto.NewKeccakState(),

@@ -855,16 +874,6 @@ func (s *StateDB) Copy() *StateDB {
}
state.stateObjectsDirty[addr] = struct{}{}
}
// Deep copy the destruction markers.
for addr, value := range s.stateObjectsDestruct {
state.stateObjectsDestruct[addr] = value
}
// Deep copy the state changes made in the scope of block
// along with their original values.
state.accounts = copySet(s.accounts)
state.storages = copy2DSet(s.storages)
state.accountsOrigin = copySet(state.accountsOrigin)
state.storagesOrigin = copy2DSet(state.storagesOrigin)

// Deep copy the logs occurred in the scope of block
for hash, logs := range s.logs {
@@ -875,10 +884,6 @@ func (s *StateDB) Copy() *StateDB {
}
state.logs[hash] = cpy
}
// Deep copy the preimages occurred in the scope of block
for hash, preimage := range s.preimages {
state.preimages[hash] = preimage
}
// Do we need to copy the access list and transient storage?
// In practice: No. At the start of a transaction, these two lists are empty.
// In practice, we only ever copy state _between_ transactions/blocks, never
Expand Down
Loading
Loading
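
As a brief illustration of the option-based prefetcher API introduced in this file, here is a minimal usage sketch. It assumes the `core/state` import path shown below and an arbitrary worker count of 16; the production call sites in `core/blockchain.go` pass `bc.cacheConfig.TriePrefetcherParallelism` instead:

```go
package example

import "github.com/MetalBlockchain/coreth/core/state"

// processWithPrefetch sketches the new PrefetcherOption-based API: start a
// trie prefetcher bounded to a fixed number of concurrent workers and make
// sure it is stopped once processing of the block has finished.
func processWithPrefetch(statedb *state.StateDB) {
	// 16 workers is an arbitrary example value.
	statedb.StartPrefetcher("example", state.WithConcurrentWorkers(16))
	defer statedb.StopPrefetcher()

	// ... apply the block's transactions against statedb here ...
}
```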