
Aeon difficulty v9 #194

Merged (3 commits) on Sep 28, 2020

Conversation

@stoffu commented Sep 15, 2020

A while ago @iamsmooth told me privately over IRC that Bittrex has expressed a concern about occasional very slow block times observed on the Aeon network, which are due to large hashrate swings. What he suggested as a remedy was to remove the difficulty cutoff/sorting and to reduce the lag, so that the difficulty calculation responds to sudden hashrate changes (especially sudden declines) more quickly.

This PR introduces a new difficulty adjustment algorithm (cutting/sorting removed & lag reduced from 15 to 8) with the next v9 hardfork. Testnet fork height is set to 131111 (currently 131019 since July).
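For illustration only, here is a minimal sketch of the kind of sort-free, cut-free window calculation described above. It is not the PR's code (that lives in src/cryptonote_basic/difficulty.cpp): the constants, the plain 64-bit arithmetic, and the assumption that the caller has already applied the lag and passed in height-ordered vectors are all simplifications.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative constants only; the real values are defined in the Aeon source.
constexpr std::size_t   WINDOW         = 720;   // blocks in the averaging window
constexpr std::uint64_t TARGET_SECONDS = 240;   // Aeon target block time

// Sort-free, cut-free difficulty estimate over the last WINDOW blocks.
// Inputs are assumed to be ordered by height (oldest first) and to already
// exclude the lag blocks (8 in v9). Production code uses wider arithmetic to
// avoid overflow in total_work * TARGET_SECONDS.
std::uint64_t next_difficulty_v9_sketch(
    const std::vector<std::uint64_t>& timestamps,
    const std::vector<std::uint64_t>& cumulative_difficulties)
{
    const std::size_t total = timestamps.size();
    const std::size_t n = std::min(total, WINDOW);
    if (n <= 1)
        return 1;

    const std::size_t first = total - n;
    const std::size_t last  = total - 1;

    // Whole-window time span, taken in height order: no sorting, no outlier cut.
    const std::uint64_t time_span = timestamps[last] > timestamps[first]
                                        ? timestamps[last] - timestamps[first]
                                        : 1;

    // Work accumulated over the same span.
    const std::uint64_t total_work =
        cumulative_difficulties[last] - cumulative_difficulties[first];

    // next_diff = total_work * target / time_span, rounded up.
    return (total_work * TARGET_SECONDS + time_span - 1) / time_span;
}
```

The v1 scheme differs in that it first sorts the timestamps and trims a fixed number of entries from each end of the sorted list before taking the span, as discussed later in this thread.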

To confirm the effect, I've added some code to tests/difficulty/difficulty.cpp which simulates how the difficulty and block times change over time given a fixed schedule of underlying real hashrate. Here's an interactive plot https://stoffu.github.io/diff-chart/diff-variant/diff-variant.html comparing the original scheme (v1) and the new one (v9). The new version succeeds in reducing the slow block times at sudden hashrate declines (see the 6h average block time).

(Screenshot: simulation plot comparing difficulty and block times under v1 and v9)

@moneromooo-monero left a comment

Removing the sorting/cutting is likely to cause outlier timestamps to have an inordinate power to change the difficulty (timestamps in the future would increase diff a lot shortly after being mined).
I think that if you mine a block 2 hours into the future (this is the cutoff in monero, I'm assuming the same here), the amount of time between earliest and latest timestamps goes from ~2880 (720 * 4) minutes on average to 2880+120 minutes, +4%. A similar thing should happen when that block leaves the difficulty window, there'll be a quick diff drop (to 2880-120). I guess 4% is acceptable as an instant step.

(Resolved review comment on src/cryptonote_basic/difficulty.cpp)
@ghost commented Sep 15, 2020

Why not change the window size?

@stoffu (Author) commented Sep 15, 2020

@hsyia
Because the intention is only to make the algorithm respond to sudden spikes earlier. This is not the same as making the difficulty itself move faster to catch up with an underlying hashrate change, which would be achieved by reducing the window size. Actually, this proposal makes the movement slower due to the increased effective window size (from 660 to 720).

Take a closer look at the plot and see that the v9 curve starts to move earlier than v1 in order to respond to the hashrate change, but it takes more time to converge to the new difficulty level (in other words, to come back to the target block time).

@ghost commented Sep 15, 2020

@stoffu I see how it responds earlier now. That seems like a good solution without changing a lot. But if the goal is to reduce very slow block times, wouldn't it be better to reduce the window size? Or is the goal only to have a quicker response?

@iamsmooth commented Sep 15, 2020

@hsyia I think we want quicker response, so that block times can at least start to come down faster when the hash rate drops, but not necessarily the ability to introduce larger (potentially wild) swings faster. It's not actually that important (and maybe not that desirable) that the block times converge quickly right to the target time, just that they are able to move down from extremes more quickly.

@iamsmooth

Also, to clarify, my motivation for proposing to remove the sort and cut is not solely to decrease the delay in responding to hash rate changes, but also to simplify the algorithm, since it has not been demonstrated in any way (and has in fact been somewhat demonstrated to the contrary) that the sorting and outlier removal are beneficial. Outlier timestamps will indeed cause a small % change in the difficulty for one block, but that is very much within the noise of exponential-distribution randomness on the solve time of the next block, and will be reversed in the very next block. Over a series of several blocks (and certainly any significant fraction of the adjustment window) the effect is negligible. @moneromooo-monero

@ghost commented Sep 15, 2020

Perhaps we are seeing this lag because Aeon's block time is double Monero's, so the 720-block difficulty window is twice as big. One option may be to align closer with Monero's ~1-day window by halving the difficulty window to 360 blocks, although I understand the concern about wild, fast swings.

@stoffu (Author) commented Sep 16, 2020

@hsyia
As mentioned above, the term “lag” can mean two different things: 1) how early the difficulty starts moving once there’s a sudden hike or drop of hashrate, and 2) how fast the difficulty converges to the new level. The window size directly determines the latter. I believe we only need to address the former, while keeping the latter unchanged. So no, I don’t think we should reduce the window size to 360.

@stoffu (Author) commented Sep 16, 2020

@moneromooo-monero

I've updated the test to simulate what you mentioned. At height 12000 the timestamp is set 2 hours into the future, creating a sharp drop of difficulty at 12009. At height 12719 the timestamp is set to the median of the last 60 blocks, creating a sharp spike of difficulty at 12728. In both cases, the effect of manipulation is pretty transient.
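For readers following along in the test, here is a hedged sketch of how such manipulation events could be injected into the simulated timestamp stream. The names and structure are assumed for illustration; the actual logic is in the updated tests/difficulty/difficulty.cpp.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper (not from the PR): median of the last `count` timestamps.
std::uint64_t median_of_last(const std::vector<std::uint64_t>& timestamps, std::size_t count)
{
    const std::size_t n = std::min(count, timestamps.size());
    std::vector<std::uint64_t> tail(timestamps.end() - n, timestamps.end());
    std::sort(tail.begin(), tail.end());
    return tail[tail.size() / 2];
}

// Apply the two manipulation events described above to an honestly sampled
// timestamp before it is appended to the simulated chain.
std::uint64_t apply_manipulation(std::uint64_t height, std::uint64_t honest_timestamp,
                                 const std::vector<std::uint64_t>& timestamps)
{
    if (height == 12000)
        return honest_timestamp + 2 * 60 * 60;   // pushed 2 hours into the future
    if (height == 12719)
        return median_of_last(timestamps, 60);   // set to the median of the last 60 blocks
    return honest_timestamp;
}
```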

(Screenshot: simulated difficulty and block times around the manipulated heights)

@stoffu force-pushed the aeon-difficulty-v9 branch 5 times, most recently from da3df1f to 089b900 on September 16, 2020 09:44
@iamsmooth commented Sep 16, 2020

@stoffu Maybe I'm misreading the chart, but how does a 2 hour increase in block time result in a drop in difficulty from about 120k to about 105k? That seems like an excessive change to me, given the window size of 720 blocks (48 hours). Two extra hours should only decrease difficulty by about 4%, no?

@stoffu (Author) commented Sep 17, 2020

@iamsmooth
Thanks for pointing it out. That was because the block time hadn't yet come back up to the target (120s) around [11000,12000], so the time span used for the difficulty calculation was much smaller than 48 hours. I've modified the test so that the timestamp manipulation happens after the block time has stabilized for a sufficiently long time. Now the drop is from 135k to 124k, which is not too far from the expected 4%.

   

@ghost commented Sep 17, 2020

> @hsyia
> As mentioned above, the term “lag” can mean two different things: 1) how early the difficulty starts moving once there’s a sudden hike or drop of hashrate, and 2) how fast the difficulty converges to the new level. The window size directly determines the latter. I believe we only need to address the former, while keeping the latter unchanged. So no, I don’t think we should reduce the window size to 360.

Ok, thanks for the clarification.

@ghost commented Sep 17, 2020

@stoffu I see the new code to calculate the difficulty, and I see the verification of the results in the test_variant... but how did you compute the block time to begin with? Is there some pretend network hashrate data you use? Just curious to understand some of this better.

@stoffu (Author) commented Sep 18, 2020

@hsyia
Read generate_variant() where I define a schedule of underlying hashrate changes and a set of block time manipulation events. The actual block times are sampled from an exponential distribution whose expected value is set to difficulty / hashrate. It is well known that block generation is a Poisson process due to mining being memoryless, and the intervals between events in a Poisson process follow an exponential distribution (see e.g. http://r6.ca/blog/20180225T160548Z.html).

A somewhat tricky issue here is that block timestamps occasionally decrease (i.e. negative block times), which cannot be modeled by a Poisson process. To handle this, I collected positive and negative block times separately for the blocks in [1000k,1100k), and, observing that the negative samples (after negating their values) also seem to roughly follow another exponential distribution, fitted an exponential distribution to each set separately (after removing outliers) using SciPy. The fit for the positive part produced a scale parameter close to the expected 240 (the target block time in seconds). The fitted scale parameter for the negative part was 36.7, which is the value used in the code.
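A rough sketch of that sampling step, under stated assumptions: this is not the test's code (the real schedule and events live in generate_variant()), and backwards_probability is an assumed parameter standing in for whatever rate of decreasing timestamps the real test uses.

```cpp
#include <cstdint>
#include <random>

// Positive solve times follow an exponential law with mean difficulty / hashrate;
// with some small probability the next timestamp instead goes backwards, modeled
// here by a second exponential with the fitted scale of ~36.7 seconds.
std::int64_t sample_block_time(std::uint64_t difficulty, double hashrate,
                               double backwards_probability, std::mt19937_64& rng)
{
    std::bernoulli_distribution goes_backwards(backwards_probability);
    if (goes_backwards(rng))
    {
        // Negated sample: the new timestamp is earlier than the previous one.
        std::exponential_distribution<double> neg(1.0 / 36.7);
        return -static_cast<std::int64_t>(neg(rng));
    }
    // Honest case: memoryless mining, mean solve time = difficulty / hashrate.
    std::exponential_distribution<double> pos(hashrate / static_cast<double>(difficulty));
    return static_cast<std::int64_t>(pos(rng));
}
```

At equilibrium, difficulty / hashrate equals the 240-second target, which is consistent with the fitted positive scale coming out near 240.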

@ghost commented Sep 19, 2020

Ok, yeah, I was able to reproduce this; very interesting calculations! I thought this chart was a good visualization. It is a histogram of the log of the block times. This PR makes sense because the moment when the hashrate drops is the most critical time for a slow block. Responding faster makes for less extreme cases of slow blocks, with the trade-off of possibly a higher number of slower (but less extreme) blocks.
(Screenshot: histogram of the log of block times)
I also looked at the manipulations and found little impact. I was curious whether a continuously manipulated timestamp could have a strong impact, but I found that the two difficulty algorithms performed about the same.

For anyone who is not a C++ wizard, here is the spreadsheet I used: Google Sheets

@iamsmooth

Thanks for the feedback and analysis, @hsyia.

@aeonix merged commit 5652a4d into aeonix:master on Sep 28, 2020
@ghost commented Sep 29, 2020

My pleasure! An honest and transparent relationship with the community is so important when it comes to cryptocurrency, and having the devs communicate in a way that all Aeon holders understand will go a long way. I think we definitely accomplished that here. I will see if I can provide anything useful for the Aeon project, because I definitely agree with the ethos of keeping things lightweight and doing only what is necessary. That is something rare among cryptocurrencies but very important for longevity.

@ghost commented Aug 24, 2021

Yes, agreed. A simple moving average is the closest approximation to the Poisson parameter lambda. No sorting/cutting.
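(As a side note, one way to make that precise, using a standard maximum-likelihood argument rather than anything from this thread:)

```latex
% MLE of the rate \lambda from n observed inter-block times t_1, \dots, t_n
% (exponentially distributed, since mining is memoryless):
\hat{\lambda} = \frac{n}{\sum_{i=1}^{n} t_i},
\qquad
\widehat{1/\lambda} = \frac{1}{n}\sum_{i=1}^{n} t_i
% i.e. the plain, unsorted, uncut moving average of block times over the window.
```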

@ghost commented Oct 11, 2021

Interesting document I came across related to this discussion: cns-010.txt

@iamsmooth

The odd thing about the older algorithm is that it doesn't actually discard "outliers" in a meaningful sense: it discards low absolute-time outliers toward the beginning of the time window and high absolute-time outliers toward the end. A more interesting estimator might discard outliers by block time interval (including negative intervals) relative to difficulty.

Anyway, I think the current simple algorithm is good enough.
