Testnet4 including PoW difficulty adjustment fix #29775
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage: For detailed information about the code coverage, see the test coverage report.

Reviews: See the guideline for information on the review process. If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts: Reviewers, this pull request conflicts with the following ones: If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first. |
Concept ACK |
When resetting a test chain, it is also important to consider the script interpreter coverage of the current chain. Test chains are (usually) the first place to go to test new script primitives and protocols, as well as consensus deployments. The existing chain thus serves as a test for consensus implementations, beyond basic unit test vectors. It would be good to think about how to preserve the test vectors in the chain. See also #11739 (comment). Or, if it is not needed, it would be good to say so. Maybe https://github.com/bitcoin-core/qa-assets/blob/main/unit_test_data/script_assets_test.json already covers a good portion? Moreover, testnet is the only public chain where anyone can submit a nonstandard transaction from their laptop. Recall that policy is enforced on all networks equally (see commit e1dc15d), so getting a non-mempool transaction into a block is only possible for a miner, or by cooperating with a miner. So if the difficulty hack is removed completely, anyone wishing to submit such a transaction would have to purchase and set up mining hardware, or find a miner willing to accept the transaction. I am not saying what the best approach is here, just that the effects should be considered and any change made intentionally. |
Probably we should support tracking both testnet3 and the new testnet4 for some time. Making the new code conditional on a different chain param that's only set for testnet4 would probably be the easiest way of accomplishing that? |
Force-pushed from 61e51d1 to 7fbd4c6
Pushed some improvements and addressed some feedback. I am experimenting with some of the proposals from the mailing list and so I added Andrew Poelstra's suggested difficulty adjustment with 6h/1M from here: https://groups.google.com/g/bitcoindev/c/9bL00vRj7OU/m/kFPaQCzmBwAJ
Updated the code to introduce T4. I am using the genesis block hash to distinguish between the two testnets. There may be cleaner solutions, but I think this is OK since it would only be temporary until T3 is removed.
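A rough stand-alone sketch of what that kind of exception amounts to (illustration only; the types and parameter names are made up here — the proposal is a 6 h delay with a difficulty floor of 1,000,000, versus testnet3's 20 min and difficulty 1):

```cpp
#include <cstdint>

// Simplified sketch of the relaxed-difficulty exception being discussed
// (not the actual patch): allow a lower-difficulty block only if no block
// has been found for `exception_delay` seconds, and never below
// `exception_difficulty`.
struct ExceptionParams {
    int64_t exception_delay;      // seconds without a block before the rule applies
    double exception_difficulty;  // floor difficulty while the rule applies
};

double NextBlockMinDifficulty(int64_t now, int64_t prev_block_time,
                              double current_difficulty,
                              const ExceptionParams& p) {
    if (now > prev_block_time + p.exception_delay) {
        return p.exception_difficulty;  // exception active: lowered floor
    }
    return current_difficulty;          // otherwise full network difficulty
}
```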
Is it really realistic that someone with just their CPU would be able to mine a block with their non-standard tx on the current testnet? If the bug isn't active currently they would need to wait for it to become active and that could take weeks, right? And when it becomes active I would imagine the miner who found the first block in the difficulty=1 series just blasts the network and the CPU miner still has no chance to get a block in between. We could revert #28354 for testnet4 if this is a feature that matters to users. Is it too much to ask that people use
Interesting thought. I think once there is consensus to do T4 we will find a creative solution for this. It would be cool to convert this coverage to fuzzing coverage somehow, but I am not sure if that's realistic or worth the effort. Otherwise, we could write a program that looks at all the different scripts that exist on T3 and replays them on T4; or, if we can compress them somehow, e.g. by filtering out everything that doesn't add coverage, we could turn it into a unit test that replays the interesting scripts. |
Force-pushed from 7fbd4c6 to aaed0d3
Since some people consider the blockstorms an interesting feature of Testnet3, it might be interesting to only raise the difficulty of the delayed block exception to 100,000 instead of 1,000,000. This would allow the network to return to the organic difficulty in fewer difficulty periods and slow down the blockstorms without removing the feature altogether. My understanding is that this would correspond to roughly a tenth of one S9 mining on the network, so if no one had mined for a while, a single S9 could restart the network with ~60 s blocks, but wouldn't churn out thousands of blocks per second. Only allowing lower difficulty blocks after 6 hours could easily make testnet useless for extended periods: if someone put several ASICs on testnet for a while, it might prevent other users from getting confirmations for up to 6 hours. I could see an increase from the twenty minute rule to maybe an hour, but more seems counter to why the rule was introduced in the first place. |
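For intuition about the numbers, the average hashrate needed to sustain one block per 600 s at difficulty D is roughly D * 2^32 / 600. A small stand-alone helper (back-of-the-envelope arithmetic only, not from the comment above):

```cpp
#include <cstdio>

// Rough estimate: hashes per second needed to find one block per 600 s
// at a given difficulty. One unit of difficulty corresponds to ~2^32 hashes.
int main() {
    const double difficulties[] = {1.0, 100000.0, 1000000.0};
    for (double d : difficulties) {
        double hashes_per_sec = d * 4294967296.0 / 600.0;
        std::printf("difficulty %10.0f -> ~%.3g H/s (~%.3g TH/s)\n",
                    d, hashes_per_sec, hashes_per_sec / 1e12);
    }
    return 0;
}
```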
Yes, I am not sure what would be the problem. All you have to do is to set the time +20min and mine a block on your laptop. If you don't want to try it yourself, you can come by to watch it on my laptop.
After a quick chat with @murchandamus, an alternative fix would be to require the pre-retarget block to have the "correct" difficulty, so that all retarget periods are organic. The +20min hack would remain to allow a CPU to mine a few blocks, if needed, however, a block storm would be naturally limited by the +120h cut-off rule. This would limit the block storms to small block "gusts", which seems good enough to make everyone happy? |
I spun up When running with If anyone wants to deploy a faucet, let me know and I'll send some coins... unless someone reorgs me. |
This seems too complicated for a testnet exception IMO. And it breaks the use case of someone testing being able to mine a block on-demand without actual mining hardware. Shouldn't it be enough to just fix the timewarp bug? |
I doubt many people do that. You can still set |
Two people raised the concern in this thread, so why would you doubt it? |
I missed that response, so if this is possible at any time with or without a block storm happening, I am not sure how the change here is making a difference? I will give it a try. |
"Many" is very relative but I think we probably would not see a market for trading testnet coins against bitcoin if that is something everyone can do as easily as setting a bitcore core node for example. |
If it depends on the difficulty being 1 rather than 1 million, that would make a difference. The two people who brought it up can definitely recompile, but maybe there's a better solution - maybe just a startup flag to override the minimum difficulty? |
I don't think consensus rules of remote nodes can be affected by a local startup flag (or re-compilation). If someone wanted to create a block locally only, they could use regtest. |
Post-merge light re-utACK Happy to see Followup suggestion: let's make I didn't test the new seeds generation script. |
There's usually a "timewarp" attack if
It doesn't depend on the 2015/2016 "hole" in bitcoin's measurement of nActualTimespan. The difficulty fix seems to still have a hole if nActualTimespan can be negative. Granted, the attack below requires 16 weeks. The fix is an improvement because the 3 conditions above usually require only 2.5 difficulty adjustment windows to get an infinite number of blocks (even without the 2015 "hole").

MTP and this fix are indirect ways of enforcing monotonic timestamps. Monotonicity comes from Leslie Lamport's 1978 paper, which applies to all concepts of "distributed consensus" regardless of the algorithm. He discusses the physical impossibility of travelling to the past (but not the future). In our case, this means a meaningful (non-negative) measurement of work. Height monotonicity, and even proof of proposed block ordering via the monotonicity of hash references, aren't enough because they aren't used to measure work the way timestamps are. Condition 2 is a problem because it subverts the math of the work measurement. Lamport also proves timestamp accuracy must be much smaller than the length of consensus rounds (a block). Bitcoin consensus allowing much greater error is what enables selfish mining (here's my suggested fix). Selfish miners have to assign a timestamp before they know when they need to release the block. Honest miners can exploit this lack of knowledge by enforcing better synchronization (accurate timestamps). The synchronization is relaxed if PoW indicates a partition has occurred.

To summarize this attack: the attacker does a private mine to assign future timestamps at the 4x timespan limit two times in a row (two 2016-block periods) while keeping the MTP held back. The timestamp at the end of the 2nd period and beginning of the 3rd period is 8 * 2016 * 600 seconds into the future (16 weeks). The 3rd period thereby has 1/16 of the original difficulty (16x the target). The attacker then assigns a past timestamp at the end of the 3rd period (which is allowed by the MTP being held back), which increases difficulty back to 1/4 of the original difficulty (this causes nActualTimespan to be negative). This allows the 4th period to have its first timestamp in the past and its last timestamp at the same 8 * 2016 * 600 future time as before, bringing difficulty back down to 1/16. He keeps repeating this pattern over the last 2 periods until his current time equals the 8 * 2016 * 600 future timestamp, and then he releases the resulting 41,500 blocks when he should have gotten only 8,064 blocks. Correction: in the "actual time" column I have +1/2 at the end of the equation in one place and +1/8 in two places that should be +T/2 and +T/8, which can be deduced from the context.

Zcash/Digishield (& maybe Grin copied them closely enough) is the only difficulty algorithm I know of that isn't subject to this attack despite fulfilling the 3 conditions, but that's because their difficulty window is set to the MTP, which is how monotonicity is indirectly enforced in the consensus. Setting MTP = 1 instead of 11 could be a BIP to enforce monotonicity. I'm just kidding. I don't want to get yelled at. |
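A small stand-alone check of the opening arithmetic of this scenario (based only on the description above; the 4x clamp refers to the existing limits on the measured retarget timespan):

```cpp
#include <cstdio>

// Two consecutive periods measured at the 4x timespan clamp each cut the
// difficulty by 4x and push the timestamps 4 * (2 weeks) further into the
// future, matching the 1/16 difficulty and 8 * 2016 * 600 s offset above.
int main() {
    const long target_timespan = 2016L * 600;           // one retarget period: 2 weeks in seconds
    const long clamped_timespan = 4 * target_timespan;  // upper clamp on the measured timespan
    double difficulty = 1.0;                             // relative to the starting difficulty
    long future_offset = 0;                              // how far timestamps are pushed forward
    for (int period = 0; period < 2; ++period) {
        difficulty /= 4.0;                 // measured timespan 4x too long -> difficulty drops 4x
        future_offset += clamped_timespan;
    }
    std::printf("difficulty after 2 periods: 1/%.0f of original\n", 1.0 / difficulty);
    std::printf("timestamp offset: %ld s (~%.0f weeks); 8 * 2016 * 600 = %ld s\n",
                future_offset, future_offset / 604800.0, 8 * 2016L * 600);
    return 0;
}
```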
@zawy12: I think this gets a bit off-topic here, perhaps it would be better to post about your timewarp scenario to the mailing list or Delving Bitcoin. |
@murchandamus I was only interested in letting those who call it a fix be aware that it's not a fix. |
Currently only contains 1 entry, blackie.c3-soft.com (which is not yet up), but this 1 entry will serve as the seed to find all the other servers as they appear. Note that Core only just recently merged the testnet4 chain to master, so there is no real rush to get other servers up right now, but we will be ready for it on the Fulcrum side as more servers appear. Core added testnet4 here: bitcoin/bitcoin#29775
Well, sounds like we could also just use |
@fjahr: Having thought a bit more about the attack scenario @zawy12 describes, it might be useful to additionally require that the first block of a difficulty period N has a lower timestamp than the last block of the difficulty period N. I’ll respond to Zawy on Delving, if the topic is posted there, or post there myself in the next few days to explain my thinking. |
As @murchandamus just said, preventing nActualTimespan from going negative could stop my attack above. An alternative fix to restricting nActualTimespan is for no block to be more than 2 hours and 80 minutes before its parent block. The 80 minutes is to provide protection from a dual Sybil attack on the 40-minute allowable error in peer time. Allowing & relying on peer time at all is another fundamental consensus error. I prefer monotonicity on every block & removal of timespan limits, but I know from the past that those are a "no go" with everyone, as is accurate timestamp enforcement. Restricting timestamps to less than 2 hr 80 minutes before the parent for every block is a weak form of monotonicity. Me, Greg Maxwell, Bram Cohen, Jacob Eliosoff, and Johnson Lau discussed the current type of fix in email 6 years ago after an August 2018 mailing list thread was initiated. GM opened the discussion but then fell silent ("It's not too important, and can be fixed quickly and easily if it happens"). J Lau suggested 1 day max in the past for the 1st block of each period and provided reasoning that it was a good method. Bram concurred and proposed 3 hours to reduce the max 7% manipulation Lau had calculated. I proposed doing it for every block; even though at that time I had not thought of the attack above, I felt something like it was possible. Bram said "maybe every block is a good idea" by his own reasoning. The others didn't object or acknowledge. Recounting this is to show we're not suddenly making things up. |
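A minimal self-contained sketch of the non-negative-timespan idea discussed above (illustration only, not a patch from this thread; names are made up):

```cpp
#include <cstdint>
#include <optional>

// Sketch: compute the retarget timespan, but treat a period whose last block
// is timestamped before its first block as invalid, instead of letting the
// existing 1/4 clamp silently absorb the negative value.
std::optional<int64_t> RetargetTimespanSketch(int64_t first_block_time,
                                              int64_t last_block_time,
                                              int64_t target_timespan) {
    int64_t actual = last_block_time - first_block_time;
    if (actual < 0) return std::nullopt;  // the added rule: no negative timespan
    // Existing-style clamping to [target/4, 4*target] would follow here.
    if (actual < target_timespan / 4) actual = target_timespan / 4;
    if (actual > target_timespan * 4) actual = target_timespan * 4;
    return actual;
}
```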
92c1d7d validation: Use MAX_TIMEWARP constant as testnet4 timewarp defense delta (Fabian Jahr)
4b2fad5 doc: Add release notes for 29775 (Fabian Jahr)
f7cc973 doc: Align deprecation warnings (Fabian Jahr)
1163b08 chainparams: Add initial minimum chain work for Testnet4 (Fabian Jahr)

Pull request description: This completes follow-ups left open in #29775.
- Adds release notes.
- Addresses the misalignment (#29775 (comment)) in deprecation warnings and hints at the intention to remove support for Testnet3.
- Adds initial minimum chainwork for Testnet4.
- Uses the `MAX_TIMEWARP` constant as the timewarp defense delta, equal to `MAX_FUTURE_BLOCK_TIME`.

ACKs for top commit:
Sjors: ACK 92c1d7d
achow101: ACK 92c1d7d
tdb3: re ACK 92c1d7d

Tree-SHA512: 7ebdac7809f96231f75ca62706af59cd1ed27f713a4c7be5e2ad69fae95832b146b3ea23c712fb03b412da1deda7e8a5dae55bb2bbd2dcfd9f926e85c2a72666
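A minimal sketch of what such a timewarp guard amounts to (illustration only; the merged rule lives in header validation and uses Bitcoin Core's own types):

```cpp
#include <cstdint>

// Sketch: the first block of a retarget period must not be timestamped more
// than `max_timewarp` seconds before its parent (the last block of the
// previous period). All other blocks are unaffected by this rule.
bool TimewarpGuardSketch(int64_t new_block_time, int64_t prev_block_time,
                         int new_block_height, int adjustment_interval /* 2016 */,
                         int64_t max_timewarp /* e.g. 2 * 60 * 60 */) {
    const bool first_of_period = (new_block_height % adjustment_interval) == 0;
    if (!first_of_period) return true;                        // rule only applies at the boundary
    return new_block_time >= prev_block_time - max_timewarp;  // bounded backdating
}
```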
99eeb51 [doc] mention bip94 support (glozow)

Pull request description: Followup to #29775, noticed while looking at #30604 and #30647. See the release process (https://github.com/bitcoin/bitcoin/blob/master/doc/release-process.md#before-every-major-and-minor-release).

ACKs for top commit:
maflcko: ACK 99eeb51
fjahr: ACK 99eeb51
tdb3: ACK 99eeb51

Tree-SHA512: 95838d3ace7e5d7b1a2481f2d7bd82902081713e6e89dbf21e0dad16d1cf5295e0c1cfda1f03af72304a5844743d24769f5fa04d4dc9f02f36462ef0ae82a552
Ran scripted-diff from 2d9d752. Follow-up to bitcoin#29775 which overlapped with work on bitcoin#30560 (the latter includes the scripted-diff commit).
…{"str"} 49f9b64 refactor: Testnet4 - Replace uint256S("str") -> uint256{"str"} (Hodlinator) Pull request description: Ran scripted-diff from 2d9d752: ``` sed -i --regexp-extended -e 's/\buint256S\("(0x)?([^"]{64})"\)/uint256{"\2"}/g' $(git grep -l uint256S) ``` Follow-up to Testnet4 introduction #29775 which overlapped with work on `uint256` `consteval` ctor #30560 (the latter includes the scripted-diff commit). Going forward `uint256{}` should be used for constants instead of `uint256S()`. ACKs for top commit: maflcko: review-ACK 49f9b64 🐮 fjahr: ACK 49f9b64 Tree-SHA512: 94fe5d9f1fb85e9ce5c3c4c5e4c31667e8cbb55ee691a4b5b3ae4172ccac38230281071023663965917f188b4c19bdf67afd4e3cdf69d89e97c65faea88af833
Sorry if this is the wrong spot for feedback. Post-merge, Testnet4 seems solid overall. One issue: difficulty appears to be rising exponentially, likely due to someone spoofing timestamps and pushing in 6 blocks every 20 minutes. I could be off here, but if difficulty keeps spiking, mining might become impractical, making reorgs easier. A possible fix: setting up a pool that builds on genuine 20-minute blocks (not spoofed ones), and having some miners point their hashpower to it. |
Not necessarily. There are many CPU-mined blocks, but the chainwork is not rising by that much. And CPU miners are not only producing more blocks than needed: they are also pushing timestamps forward. Which means that if it becomes too hard to mine, then:
For example: what you can currently see is just a lot of CPU blocks, mined by https://mempool.space/testnet4/address/tb1q3u8f5899ymkatx69h0n3sw0qpalgwdmrcj80dm. Also note that ASIC miners can always mine a single block and revert a lot of CPU-mined blocks in this way. And if a lot of fees accumulate, spread over many blocks, then sooner or later reorging hundreds of CPU-mined blocks and collecting the highest fees will be profitable enough to trigger such a reorg. Then CPU-mined blocks will be what they always should have been: just weak blocks, used to test things, to propagate transactions faster, and to vanish and be reorged when someone honestly starts mining at the network difficulty. Then maybe we will get closer to the desired outcome, where all testnet blocks are temporary and quickly become stale. Because in practice, all of that mining power should simply be contributed towards mainnet, and should only allow temporarily getting some coins, which would then be reorged, to strip any value that someone could assign to them and to crash any kind of trading that could happen on test coins.
Testnet4 guarantees that ASICs are needed. If no ASIC is present, then all CPU miners will be stuck after mining 2016 blocks. Which means that at least a single ASIC block per two weeks is guaranteed, no matter what. And if CPU miners are blocked by the raised difficulty, then it will take more than two weeks to mine a single ASIC block, and then the difficulty will start falling.
Note that if you use OP_SIZE on DER signatures, you can send coins to Proof of Work without using OP_CAT. In general, if you want to ensure a given reward for a given amount of Proof of Work, this is the way to go. By setting up a pool, you would just jump into that race, and the difficulty would skyrocket, because the network would then become even more similar to mainnet. You don't need pools on testnets. Mining pools are based on shares, and a share is just a regular block with lowered difficulty. And on testnet4 you already have that, because you can switch to CPU difficulty at any time and then just improve your network propagation. |
It’s a bit scary. Currently, testnet4 block generation is controlled. https://mempool.space/testnet4/address/tb1q2dsc94zq40nwnz27w5rxljwllutnwjtlxk44fz |
Re: the recent discussion here, see #31117 |
To supplement the ongoing conceptual discussion about a testnet reset I have drafted a move to v4 including a fix to the difficulty adjustment mechanism, which was part of the motivation that started the discussion.
Conceptual considerations:
The fix changes the CalculateNextWorkRequired function and uses the same logic used in GetNextWorkRequired to find the last previous block that was not mined with difficulty 1 under the exception. An alternative fix briefly mentioned on the mailing list by Jameson Lopp would be to "restrict the special testnet minimum difficulty rule so that it can't be triggered on the block right before a difficulty retarget". That would also fix the issue, but I find my suggestion here a bit more elegant.
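A minimal self-contained sketch of that walk-back idea (illustration only, with simplified stand-in types rather than the actual CBlockIndex/pow.cpp code):

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for a header chain; the real code uses CBlockIndex.
struct Block {
    int height;
    uint32_t nBits;  // compact difficulty target of this block
};

// Sketch: find the difficulty to base the retarget on, skipping blocks that
// used the min-difficulty exception (nBits == pow_limit_compact), so that a
// difficulty-1 block right before the retarget cannot reset the whole period.
uint32_t RetargetBaseBitsSketch(const std::vector<Block>& chain,
                                uint32_t pow_limit_compact,
                                int adjustment_interval /* 2016 */) {
    for (auto it = chain.rbegin(); it != chain.rend(); ++it) {
        const bool first_of_period = (it->height % adjustment_interval) == 0;
        const bool min_difficulty = (it->nBits == pow_limit_compact);
        if (!min_difficulty || first_of_period) {
            return it->nBits;  // last block not mined under the exception
        }
    }
    return pow_limit_compact;  // chain consists only of min-difficulty blocks
}
```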