EIP6914 discussion #3335
Comments
Note: potential non-substantive refactor #3311.
Hi guys, I am not sure about this, so don't hesitate to confirm or correct me. During an inactivity leak, validators that don't attest see their inactivity score increase, and it decreases when they do their job. Now suppose a validator requests a voluntary exit during the leak (or just before): it stays active for some epochs but doesn't attest, so its score increases. It then exits some epochs later, during the leak or not, and its score stops changing while still being non-zero. Since the score only decreases for eligible validators, the non-zero score of an exited validator is kept as-is. My question is: when a new validator registers and is assigned the index of that old validator that left with a score != 0, will the inactivity score be reset to 0? Thanks guys!
Yes, the inactivity score record is reset to zero at the moment the validator record is re-assigned: see consensus-specs/specs/altair/beacon-chain.md, line 540 at f7352d1.
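(A minimal sketch of what that reset could look like when an index is re-assigned, in consensus-spec-style Python. The helper name `reuse_validator_index` is hypothetical; only the idea that the reused slot's `inactivity_scores` entry goes back to zero comes from the spec line referenced above.)

```python
def reuse_validator_index(state: BeaconState, index: ValidatorIndex, validator: Validator) -> None:
    # Hypothetical sketch: overwrite the old record at ``index`` and reset the
    # per-validator inactivity score, so the incoming validator does not inherit
    # the old validator's leftover non-zero score.
    state.validators[index] = validator
    state.inactivity_scores[index] = uint64(0)
```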
Thanks!
Another complicated thing about this EIP is the `equivocating_indices` set in the fork-choice store: currently nothing clears a reused index out of it. If you have an idea how to do it, I think it's a good idea to add it to the consensus-specs soon. Otherwise, this EIP seems unviable to me.
@ppopth that's a great point indeed. Clearly clients should drop reused indices from `equivocating_indices` when re-assigning an index to a new deposit. I wonder how specific the spec should be in that regard. For example, add a more general hook, called whenever a valid imported block triggers the reuse of an index; clients will have to do some cache clean-up when a reuse event happens anyway:

```python
def on_reused_index(store: Store, index: ValidatorIndex) -> None:
    store.equivocating_indices.discard(index)
```

Or define a more specific mechanism, such as:

```python
def on_tick_per_epoch(store: Store, epoch: Epoch) -> None:
    head_state = store.block_states[get_head(store)]
    for index in set(store.equivocating_indices):
        if head_state.validators[index].withdrawable_epoch + SAFE_EPOCHS_TO_REUSE_INDEX == epoch:
            store.equivocating_indices.discard(index)
```

I would be in favor of the first approach, as I think the specific details can be ironed out later.
I think that will introduce a new attack vector. Let's say at the beginning of epoch 100001, epoch 100000 becomes justified, i.e. `store.justified_checkpoint` now points to epoch 100000. Now consider the `get_weight` function:

```python
def get_weight(store: Store, root: Root) -> Gwei:
    state = store.checkpoint_states[store.justified_checkpoint]
    unslashed_and_active_indices = [
        i for i in get_active_validator_indices(state, get_current_epoch(state))
        if not state.validators[i].slashed
    ]
    attestation_score = Gwei(sum(
        state.validators[i].effective_balance for i in unslashed_and_active_indices
        if (i in store.latest_messages
            and i not in store.equivocating_indices
            and get_ancestor(store, store.latest_messages[i].root, store.blocks[root].slot) == root)
    ))
    ...
```

Let's say I'm slashed, but not slashed in the current head block: I was slashed in some reorged block, or the slashing was only seen directly on the wire. You can notice that my effective balance will not be counted in `attestation_score` only because of the `i not in store.equivocating_indices` check, since the justified-checkpoint state does not see me as slashed; so if reusing my index removes me from `equivocating_indices`, my weight would be counted again.
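(For reference, a slashing seen only on the wire still reaches the fork choice through the `on_attester_slashing` handler, which is what populates `store.equivocating_indices`. The sketch below paraphrases that handler from memory, so check the current fork-choice spec for the exact text.)

```python
def on_attester_slashing(store: Store, attester_slashing: AttesterSlashing) -> None:
    # Run upon receiving a new AttesterSlashing, from within a block or directly on the wire.
    attestation_1 = attester_slashing.attestation_1
    attestation_2 = attester_slashing.attestation_2
    assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
    state = store.block_states[store.justified_checkpoint.root]
    assert is_valid_indexed_attestation(state, attestation_1)
    assert is_valid_indexed_attestation(state, attestation_2)

    indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
    for index in indices:
        store.equivocating_indices.add(index)
```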
Even if a validator is removed from `equivocating_indices`, its effective balance only counts in `get_weight` if it is still active in the justified-checkpoint state, and an index only becomes reusable long after the old validator has exited, so it cannot be active there.
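(The activeness check in question is the standard `is_active_validator` predicate used by `get_active_validator_indices`; it is reproduced here from memory for reference, so verify against the spec.)

```python
def is_active_validator(validator: Validator, epoch: Epoch) -> bool:
    # A validator only counts as active between activation and exit.
    return validator.activation_epoch <= epoch < validator.exit_epoch
```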
That's right, thanks. I forgot about the activeness check.
General discussion thread for EIP6914 -- reuse validator indices.
The bulk of the core was merged in #3307, but there are many minor discussion points and design decisions on the table.