release: 2.6.0 #3455
Co-authored-by: Renovate Bot <bot@renovateapp.com>
Emit `round.missed` whenever an active delegate didn't forge any blocks during a round.
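A minimal sketch of what emitting `round.missed` could look like, using Node's generic `EventEmitter` (the names `DelegateRoundStats`, `producedBlocks`, and `checkMissedRounds` are hypothetical, not ARK core's actual API):

```typescript
import { EventEmitter } from "events";

// Hypothetical per-round stats for an active delegate.
interface DelegateRoundStats {
    publicKey: string;
    producedBlocks: number;
}

const emitter = new EventEmitter();

// After a round completes, emit `round.missed` for every active
// delegate that forged zero blocks during that round.
function checkMissedRounds(round: number, delegates: DelegateRoundStats[]): void {
    for (const delegate of delegates) {
        if (delegate.producedBlocks === 0) {
            emitter.emit("round.missed", { round, publicKey: delegate.publicKey });
        }
    }
}
```

Consumers (monitoring plugins, webhooks) can then subscribe to `round.missed` instead of polling delegate statistics.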
Prevents memory spikes during the bootstrap phase by loading transactions of the same type in batches instead of all at once into memory.
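The batching idea can be sketched as follows; `FetchPage` and `processInBatches` are illustrative stand-ins for the real query layer, not ARK core's actual interfaces:

```typescript
interface TransactionRow {
    id: string;
    type: number;
}

// Hypothetical paged query: returns up to `limit` rows starting at `offset`.
type FetchPage = (type: number, offset: number, limit: number) => TransactionRow[];

// Process all transactions of one type in fixed-size batches, so only
// `batchSize` rows are ever held in memory at once.
function processInBatches(
    fetchPage: FetchPage,
    type: number,
    batchSize: number,
    apply: (tx: TransactionRow) => void,
): number {
    let offset = 0;
    let total = 0;
    for (;;) {
        const batch = fetchPage(type, offset, batchSize);
        if (batch.length === 0) {
            break;
        }
        for (const tx of batch) {
            apply(tx);
        }
        total += batch.length;
        offset += batch.length;
    }
    return total;
}
```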
The ajv address schema calls `Base58.decodeCheck` to validate the network byte. But during transaction serialization the address is also decoded, so we end up decoding it twice. By performing the network byte validation during transaction serialization, we can drop the decode call from schema validation. This makes multipayment processing approx. 25% faster.
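The check itself is cheap once the decoded bytes are in hand: the first byte of a base58check-decoded address is the network byte. A hedged sketch (the helper `verifyNetworkByte` is hypothetical; the real serializer differs):

```typescript
// The first byte of a base58check-decoded address identifies the network
// (e.g. 0x17 for ARK mainnet). Since the serializer already holds the
// decoded bytes, validating here avoids a second Base58.decodeCheck call
// in the ajv schema.
function verifyNetworkByte(decodedAddress: Buffer, expectedNetworkByte: number): void {
    if (decodedAddress[0] !== expectedNetworkByte) {
        throw new Error(
            `Invalid network byte: got ${decodedAddress[0]}, expected ${expectedNetworkByte}`,
        );
    }
}
```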
findByAddress() returns a reference to the stored wallet object, so modifying its balance property also modifies what is stored in the wallet manager; there is no need to re-insert it. This speeds up the apply phase of the multi-payment bootstrap from ~9sec to ~6sec.
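The reference semantics this relies on can be shown in a few lines; `WalletManager`, `reindex`, and the `Wallet` shape below are simplified stand-ins for the real classes:

```typescript
interface Wallet {
    address: string;
    balance: number;
}

class WalletManager {
    private readonly byAddress: Map<string, Wallet> = new Map();

    public reindex(wallet: Wallet): void {
        this.byAddress.set(wallet.address, wallet);
    }

    // Returns the stored object itself, not a copy: mutating the returned
    // wallet mutates the manager's state directly, so callers need not
    // reindex after adjusting the balance.
    public findByAddress(address: string): Wallet {
        return this.byAddress.get(address)!;
    }
}
```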
On the mainnet database (9912558 blocks, 3171992 transactions):
* `SELECT * FROM transactions WHERE type = 6;` no index: 700ms, with index: 1ms
* `SELECT * FROM transactions WHERE type = 1;` no index: 650ms, with index: 14ms
* `SELECT * FROM transactions WHERE type = 3;` no index: 1280ms, with index: 480ms
* `SELECT * FROM transactions WHERE type = 0;` no index: 11500ms, with index: 11500ms (the index is not used)
* bootstrap: no index: 22sec, with index: 20sec
Disable the TransactionReader because it inflicts a minor performance regression. However, do not delete it entirely from the source code, because we may revisit it later when the database queries start returning far too many rows to the Node.js app at once. The proper solution is to execute the database query once and serve the results to the app in portions using cursors, rather than executing the same query multiple times and chopping it into portions using OFFSET/LIMIT.
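The difference between the two approaches can be sketched with a generator standing in for a server-side cursor (illustrative only; real code would use PostgreSQL cursors via the driver). OFFSET/LIMIT re-scans the query for every page, while a cursor runs the query once and streams rows out in fetch-sized portions:

```typescript
// Simulated server-side cursor: the "query" (here, the rows array) is
// evaluated once, and callers pull fetch-sized portions from it lazily.
function* cursorRead<T>(rows: T[], fetchSize: number): Generator<T[]> {
    for (let i = 0; i < rows.length; i += fetchSize) {
        yield rows.slice(i, i + fetchSize);
    }
}
```

With OFFSET/LIMIT, by contrast, every page issues a fresh query that the database must re-plan and partially re-scan, which is what makes repeated chopping expensive.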
* We support utf8 strings for the vendor field
* \u0000 (nul byte) is a valid utf8 string
* PostgreSQL cannot store \u0000 in a VARCHAR

It follows that we cannot use VARCHAR for storing the vendor field, so use bytea instead.
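The premise is easy to demonstrate: a NUL byte round-trips cleanly through a UTF-8 byte buffer (which is what a bytea column stores), whereas PostgreSQL rejects it in VARCHAR/TEXT values. A small illustration with a hypothetical vendor field value:

```typescript
// "\u0000" is a valid character in a JS string and encodes to a valid
// UTF-8 byte (0x00). Storing the raw bytes (bytea) preserves it, while
// PostgreSQL would reject the same value in a VARCHAR column.
const vendorField = "donation\u0000memo";
const asBytes = Buffer.from(vendorField, "utf8"); // what the bytea column stores
const roundTrip = asBytes.toString("utf8");       // NUL byte survives intact
```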
…l api attributes (#3437)
Codecov Report
@@            Coverage Diff            @@
##             master     #3455   +/-   ##
==========================================
  Coverage          ?    66.16%
==========================================
  Files             ?       439
  Lines             ?     12458
  Branches          ?      1708
==========================================
  Hits              ?      8243
  Misses            ?      4181
  Partials          ?        34
==========================================

Continue to review the full report at Codecov.
This pull request introduces 1 alert and fixes 14 when merging cdfde55 into 9ac0c72 (view on LGTM.com).