Dev 0.10.2 #1 (Merged)
Conversation
This functionality is needed in the new path-based storage scheme, but it can be implemented in a separate PR. When an account is deleted, all of its storage slots should be wiped from disk as well. In the hash-based storage scheme they are simply left on disk, but in the new scheme they will be iterated and marked as deleted. So why is the NodeBlob API needed in this scenario? Because when a node is marked as deleted, its previous value must also be recorded in order to construct the reverse diff.
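A hedged sketch of how this could look when wiping a deleted account's storage. `NodeIterator` follows the trie iterator interface, `NodeBlob` is the API discussed here, and the `record` callback standing in for reverse-diff construction is a hypothetical placeholder:

```go
package main

import "github.com/ethereum/go-ethereum/trie"

// wipeStorage walks a (deleted) account's storage trie and records each
// node's previous RLP blob so a reverse diff can be constructed later.
// The record callback is a hypothetical placeholder, not a real API.
func wipeStorage(t *trie.Trie, record func(blob []byte)) {
	it := t.NodeIterator(nil)
	for it.Next(true) {
		if it.Leaf() {
			continue // value positions are skipped in this sketch
		}
		blob := it.NodeBlob() // previous encoding, needed for the reverse diff
		if len(blob) == 0 {
			continue // embedded nodes may have no standalone blob
		}
		record(blob)
	}
}
```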
…(ethereum#24392) This PR adds an additional API called `NewBatchWithSize` for the db batcher. It turns out that leveldb batch memory allocation is very inefficient: the allocation step of a leveldb Batch is too small when the batch size is large, so building a leveldb batch of 100MB can take a few seconds. Luckily, leveldb also offers another API called MakeBatch which can pre-allocate the memory area, so if the approximate size of the batch is known in advance, it can be used here. This is needed by the new state scheme PR, which commits a batch of trie nodes in a single write. The feature is implemented in a separate PR.
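A minimal sketch of the underlying leveldb mechanism: goleveldb's `MakeBatch` pre-allocates the batch buffer, so building a large batch avoids repeated grow-and-copy steps. The path and sizes here are illustrative:

```go
package main

import "github.com/syndtr/goleveldb/leveldb"

func main() {
	db, err := leveldb.OpenFile("/tmp/example-db", nil) // illustrative path
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Pre-allocate roughly the final batch size instead of growing from a
	// tiny initial buffer, which is what made 100MB batches take seconds.
	batch := leveldb.MakeBatch(100 * 1024 * 1024)
	batch.Put([]byte("key"), []byte("value"))
	if err := db.Write(batch, nil); err != nil {
		panic(err)
	}
}
```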
trie: implement NodeBlob API for trie iterator
This Ubuntu release has reached EOL and Launchpad does not accept uploads for it anymore.
This change adds a code generator tool for creating EncodeRLP method implementations. The generated methods behave identically to the reflection-based encoder, but run faster because there is no reflection overhead.

Package rlp now provides the EncoderBuffer type for incremental encoding. This is used by generated code, but the new methods can also be useful for hand-written encoders. There is also experimental support for generating DecodeRLP, and some new methods have been added to the existing Stream type to support this. Creating decoders with rlpgen is not recommended at this time because the generated methods produce very poor error reporting.

More detail about package rlp changes:

* rlp: externalize struct field processing / validation. This adds a new package, rlp/internal/rlpstruct, in preparation for the RLP encoder generator. I think the struct field rules are subtle enough to warrant extracting them into their own package, even though it means a bunch of adapter code is needed for converting to/from rlpstruct.Type.
* rlp: add more decoder methods (for rlpgen). This adds new methods on rlp.Stream: Uint64, Uint32, Uint16, Uint8, BigInt; ReadBytes for decoding into []byte; and MoreDataInList, useful for optional list elements.
* rlp: expose encoder buffer (for rlpgen). This exposes the internal encoder buffer type for use in EncodeRLP implementations. The new EncoderBuffer type is a sort of opaque handle for a pointer to encBuffer. It is implemented this way to ensure the global encBuffer pool is handled correctly.
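A hedged sketch of a hand-written EncodeRLP using the new EncoderBuffer type; `MyRecord` is an invented example type, while the EncoderBuffer methods (List, ListEnd, WriteUint64, WriteBytes, Flush) are the ones this change exposes:

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// MyRecord is an invented example type.
type MyRecord struct {
	Number uint64
	Data   []byte
}

// EncodeRLP writes the record incrementally, without reflection.
func (r *MyRecord) EncodeRLP(w io.Writer) error {
	buf := rlp.NewEncoderBuffer(w)
	idx := buf.List() // open the outer list
	buf.WriteUint64(r.Number)
	buf.WriteBytes(r.Data)
	buf.ListEnd(idx)   // close the list
	return buf.Flush() // flush the buffered encoding to w
}

func main() {
	var out bytes.Buffer
	r := &MyRecord{Number: 7, Data: []byte("abc")}
	if err := rlp.Encode(&out, r); err != nil { // dispatches to EncodeRLP
		panic(err)
	}
	fmt.Printf("%x\n", out.Bytes())
}
```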
Also specify EOL dates of all listed releases.
…torage (ethereum#24420) This change makes use of the new code generator rlp/rlpgen to improve the performance of RLP encoding for Header and StateAccount. It also speeds up encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.

The change is much less transparent than I wanted it to be, because Header and StateAccount now have an EncodeRLP method defined with a pointer receiver. It used to be possible to encode non-pointer values of these types, but the new method prevents that, and attempting to encode unaddressable values (even as part of another value) will return an error. The error can be surprising and may pop up in places that previously didn't expect any errors. To make things work, I also needed to update all code paths (mostly in unit tests) that lead to encoding of non-pointer values, and pass a pointer instead.

Benchmark results:

    name                             old time/op  new time/op  delta
    EncodeRLP/legacy-header-8        328ns ± 0%   237ns ± 1%   -27.63%  (p=0.000 n=8+8)
    EncodeRLP/london-header-8        353ns ± 0%   247ns ± 1%   -30.06%  (p=0.000 n=8+8)
    EncodeRLP/receipt-for-storage-8  237ns ± 0%   123ns ± 0%   -47.86%  (p=0.000 n=8+7)
    EncodeRLP/receipt-full-8         297ns ± 0%   301ns ± 1%   +1.39%   (p=0.000 n=8+8)

    name                             old speed      new speed      delta
    EncodeRLP/legacy-header-8        1.66GB/s ± 0%  2.29GB/s ± 1%  +38.19%  (p=0.000 n=8+8)
    EncodeRLP/london-header-8        1.55GB/s ± 0%  2.22GB/s ± 1%  +42.99%  (p=0.000 n=8+8)
    EncodeRLP/receipt-for-storage-8  38.0MB/s ± 0%  64.8MB/s ± 0%  +70.48%  (p=0.000 n=8+7)
    EncodeRLP/receipt-full-8         910MB/s ± 0%   897MB/s ± 1%   -1.37%   (p=0.000 n=8+8)

    name                             old alloc/op  new alloc/op  delta
    EncodeRLP/legacy-header-8        0.00B         0.00B         ~  (all equal)
    EncodeRLP/london-header-8        0.00B         0.00B         ~  (all equal)
    EncodeRLP/receipt-for-storage-8  64.0B ± 0%    0.0B          -100.00%  (p=0.000 n=8+8)
    EncodeRLP/receipt-full-8         320B ± 0%     320B ± 0%     ~  (all equal)
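To illustrate the behavioural change in a hedged sketch: since EncodeRLP is now defined on *types.Header, callers must encode a pointer; encoding the bare value returns an error where it previously worked:

```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	h := types.Header{Number: big.NewInt(1)}

	// Before this change both forms worked; now the non-pointer form
	// errors because EncodeRLP has a pointer receiver and the value is
	// not addressable once boxed into an interface.
	if _, err := rlp.EncodeToBytes(h); err != nil {
		// expected after this change
	}

	enc, err := rlp.EncodeToBytes(&h) // pass a pointer instead
	if err != nil {
		panic(err)
	}
	_ = enc
}
```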
rlpgen outputs calls to this method for values of type string.
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: seven <seven@nodereal.io>
…Azure/azure-sdk-for-go/sdk/storage/azblob. (ethereum#24473)
* go.mod: update Azure/azure-storage-blob-go from v0.7.0 to v0.14.0 (relates to ethereum#24396).
* internal/build: fix for the breaking changes in Azure/azure-storage-blob-go v0.14.0.
* internal/build: switch the azure sdk from Azure/azure-storage-blob-go to Azure/azure-sdk-for-go/sdk/storage/azblob.
* internal/build: refactor appending BlobItems.
* internal/build: fix the azure blobstore client to include the container id.
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
…(ethereum#24407) This replaces the simple selector parser in signer/fourbyte with one that can actually handle most types. The new parser is added in accounts/abi so that it is also usable elsewhere.
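A hedged sketch of calling the new parser; the exact entry point and result shape (a ParseSelector function returning a marker with the method name and inputs) are assumptions based on this PR's description, not a verified API:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Assumed API: ParseSelector decomposes a human-readable selector
	// into its name and argument types.
	sel, err := abi.ParseSelector("transfer(address,uint256)")
	if err != nil {
		panic(err)
	}
	fmt.Println(sel.Name, len(sel.Inputs)) // "transfer 2" under the assumed shape
}
```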
* rpc, node: refactor request validation and add JWT validation
* node, rpc: fix error message, ignore engine API in RegisterAPIs
* node: make the authenticated port configurable
* eth/catalyst: enable an unauthenticated version of the engine API
* node: rework obtainjwtsecret (backport later)
* cmd/geth: add auth port flag
* node: happy lint, happy life
* node: refactor the authenticated API; modifies the authentication mechanism to use default values
* node: trim spaces and newlines away from the secret
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
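As a hedged illustration of the client side of this change: the authenticated endpoint expects a JWT signed with the node's shared secret using HS256 and a fresh `iat` claim. A minimal sketch using github.com/golang-jwt/jwt/v4; the secret value here is a placeholder, in practice it comes from the node's JWT secret file:

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

func main() {
	secret := make([]byte, 32) // placeholder: use the node's real 32-byte secret

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"iat": time.Now().Unix(), // the server checks the issued-at claim is recent
	})
	signed, err := token.SignedString(secret)
	if err != nil {
		panic(err)
	}
	fmt.Println("Authorization: Bearer " + signed)
}
```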
* eth, cmd: allow FdLimit to be set via config file or command line (ethereum#24148)
* eth/ethconfig: format code
* cmd, eth/ethconfig: simplify the fdlimit argument, disallow toml
* cmd/utils: make the fdlimit setting nicer in the logs
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
When using -buildmode=shared, R15 is clobbered by a global variable access; use a different register instead. Fixes: ethereum#24439
…(ethereum#24256) Co-authored-by: Felix Lange <fjl@twurst.com>
This change speeds up trie hashing and all other activities that require RLP encoding of trie nodes by approximately 20%. The speedup is achieved by avoiding reflection overhead during node encoding. The interface type trie.node now contains a method 'encode' that works with rlp.EncoderBuffer. Management of EncoderBuffers is left to calling code. trie.hasher, which is pooled to avoid allocations, now maintains an EncoderBuffer. This means memory resources related to trie node encoding are tied to the hasher pool. Co-authored-by: Felix Lange <fjl@twurst.com>
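A minimal sketch (not the verbatim go-ethereum code) of the pattern described: each trie node type serializes itself into a caller-managed rlp.EncoderBuffer, avoiding reflection; the shortNode shape is a simplified stand-in:

```go
package trie

import "github.com/ethereum/go-ethereum/rlp"

// node is the interface all trie node types implement; encode writes the
// node into a caller-managed EncoderBuffer instead of going through reflection.
type node interface {
	encode(w rlp.EncoderBuffer)
}

// shortNode is a simplified stand-in for the real trie node type.
type shortNode struct {
	Key []byte
	Val node
}

func (n *shortNode) encode(w rlp.EncoderBuffer) {
	offset := w.List()  // open an RLP list
	w.WriteBytes(n.Key) // compact-encoded key
	n.Val.encode(w)     // the child serializes itself into the same buffer
	w.ListEnd(offset)   // close the list
}
```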
…thereum#24522) The default listening address "localhost" is not sufficient when running geth in Docker.
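A hedged sketch of the programmatic equivalent, using the public node.Config API: bind the HTTP-RPC server to all interfaces so it is reachable from outside the container:

```go
package main

import "github.com/ethereum/go-ethereum/node"

func main() {
	cfg := node.DefaultConfig
	cfg.HTTPHost = "0.0.0.0" // listen on all interfaces inside the container

	stack, err := node.New(&cfg)
	if err != nil {
		panic(err)
	}
	defer stack.Close()
}
```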
… generation (ethereum#24811)
* core/state/snapshot: check dangling storages when generating snapshot
* core/state/snapshot: polish
* core/state/snapshot: wipe the last part of the dangling storages
* core/state/snapshot: fix and add tests
* core/state/snapshot: fix comment
* README: remove mentions of fast sync (ethereum#24656) Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* core, cmd: expose dangling storage detector for wider usage
* core/state/snapshot: rename variable
* core, ethdb: use global iterators for snapshot generation
* core/state/snapshot: polish
* cmd, core/state/snapshot: polish
* core/state/snapshot: polish
* Update core/state/snapshot/generate.go Co-authored-by: Martin Holst Swende <martin@swende.se>
* ethdb: extend db test suite and fix memorydb iterator
* ethdb/dbtest: rollback changes
* ethdb/memorydb: simplify iteration
* core/state/snapshot: update dangling counter
* core/state/snapshot: release iterators
* core/state/snapshot: update metrics
* core/state/snapshot: update time metrics
* metrics/influxdb: temp solution to present counter meaningfully, remove it
* add debug log, revert later
* core/state/snapshot: fix iterator panic
* all: customized snapshot iterator for backward iteration
* core, ethdb: polish
* core/state/snapshot: remove debug log
* core/state/snapshot: address comments from peter
* core/state/snapshot: reopen the iterator at the next position
* ethdb, core/state/snapshot: address comment from peter
* core/state/snapshot: reopen exhausted iterators
Co-authored-by: Tbnoapi <63448616+nuoomnoy02@users.noreply.github.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR also comes with two fixes in the Goja tracer and one small behavioural change:

1. I had handled errors in the native Go functions by panicking. My oversight was that Goja only handles panics that carry a goja.Value as argument. The difference is that panic(goja.Value) allows JS to catch the exception, whereas Interrupt(error) doesn't.
2. There was a race in how I handled Stop.

Because of 1, some of the methods that simply returned nil on error (like memory.slice) now throw an exception.
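A hedged sketch of the first fix: a native Go callback registered on a goja.Runtime panics with a goja.Value so the JS side can catch it with try/catch. Everything except the goja API itself (ToValue, FunctionCall, Argument) is invented for illustration:

```go
package jsext

import "github.com/dop251/goja"

// newSlice returns a native callback that throws a catchable JS exception
// on invalid input instead of silently returning nil.
func newSlice(vm *goja.Runtime, mem []byte) func(goja.FunctionCall) goja.Value {
	return func(call goja.FunctionCall) goja.Value {
		start := call.Argument(0).ToInteger()
		end := call.Argument(1).ToInteger()
		if start < 0 || end > int64(len(mem)) || start > end {
			// Panicking with a goja.Value makes this catchable in JS;
			// panicking with a plain Go error (or calling Interrupt) would not be.
			panic(vm.ToValue("slice: bounds out of range"))
		}
		return vm.ToValue(mem[start:end])
	}
}
```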
…r's signature (ethereum#24941)
* signer/core: always pad clique header extra-data with space for the sealer's signature
* capitalize comment
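A simplified, hedged sketch of the padding step described: reserve crypto.SignatureLength (65) trailing bytes in the extra-data field for the sealer's signature. The helper name is hypothetical and the real signer logic handles more cases:

```go
package clique

import (
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

// padCliqueExtra is a hypothetical helper: ensure the extra-data field has
// room at the end for the 65-byte sealer signature before signing.
func padCliqueExtra(header *types.Header) {
	if n := len(header.Extra); n < crypto.SignatureLength {
		header.Extra = append(header.Extra, make([]byte, crypto.SignatureLength-n)...)
	}
}
```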
This should fully resolve dependency conflict issues in modules that also depend on btcsuite/btcd v0.22.0.
This upgrade is necessary to silence a Dependabot warning.
Make TriesInMemory configurable through CLI config
…k time of Wemix Chain. (about one year, 31536000)
eth/ethconfig: Update 'TxLookupLimit' default value.
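For context: 365 days × 24 h × 3,600 s = 31,536,000 seconds, so assuming Wemix's roughly one-second block interval, this default retains about one year of transaction index. A minimal sketch of setting it programmatically; the field name follows eth/ethconfig:

```go
package main

import "github.com/ethereum/go-ethereum/eth/ethconfig"

func main() {
	cfg := ethconfig.Defaults
	cfg.TxLookupLimit = 31536000 // index txs for roughly the most recent year of blocks
	_ = cfg                      // pass cfg to the eth service constructor in real use
}
```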