Infinity pubkey and signature #11
Comments
@paulhauner we currently have this check in Go (lines 274 and 296 at a918073) so that we are in conformance with the spec. It will probably have to be added as a check in the rust wrapper too.
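As an aside for readers following along, here is a minimal sketch of what such a guard can look like. This is illustrative only, not the Prysm code linked above; it assumes the common ZCash-style compressed encoding, where the identity (infinity) point serializes to 0xc0 followed by zero bytes.

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
)

// infinityPubkey is the 48-byte compressed encoding of the identity point in
// G1 (min-pk convention): compression and infinity flag bits set (0xc0),
// followed by 47 zero bytes.
var infinityPubkey = append([]byte{0xc0}, make([]byte, 47)...)

// validatePubkeyBytes rejects the identity public key before it ever reaches
// signature verification.
func validatePubkeyBytes(pk []byte) error {
	if bytes.Equal(pk, infinityPubkey) {
		return errors.New("public key is the point at infinity")
	}
	return nil
}

func main() {
	fmt.Println(validatePubkeyBytes(infinityPubkey)) // prints the rejection error
}
```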
Thanks @nisdas. I'm also wondering if there's variation in the batch operations too. I'm writing some tests to compare milagro and blst; I'll update once I have them :)
Here is the thing: which qualities make a sequence of bytes a signature? Or, alternatively, can one call a sequence of bytes a signature if it has the same value for any message you sign with the corresponding secret key, and everybody can forge it at any time, at no cost? Do you see the dilemma? The question is what the library is supposed to do if you pass a non-signature. In essence, individual infinite public keys should be rejected, and one can wonder who should be responsible for doing so. Maybe everybody should, application and library... because if the application fails for whatever reason... no? Of course one can refer to the test vector(*), but does the specification actually say that an infinite public key is valid?

As for the discrepancy between Go and Rust: it's appropriate to resolve it...

As for non-individual, i.e. aggregated, signatures and public keys: it's formally reasonable to assume that an aggregated signature can be infinity, so they should be treated differently from individual ones. Currently they'll be mishandled in the min-sig case, and treated suboptimally in min-pk (everybody's favourite :-), and it will be addressed...

(*) An analogy would be the gcc test suite exercising patterns that are formally undefined by the language standard :-)
Just wanted to open exactly the same issue 😃
This is the key issue. We explicitly adopt the (draft) IETF BLS spec in Eth2: https://tools.ietf.org/html/draft-irtf-cfrg-bls-signature-02

I looked into it previously when this Eth2 spec test was added, and have just rechecked: the BLS signature spec says nothing about signing with the infinity/identity pubkey/privkey pair. That is, it is not special-cased. Thus, given that the spec does not treat it specially, we handle the infinity pubkey/signature like any other.

(We actually had an instance of an infinite/identity public key on the Eth2 Witti testnet. As per spec, all client implementations accepted this.)
At the same time, it also says that "SK MUST be indistinguishable from uniformly random modulo r and infeasible to guess." There is no full stop after "modulo r." And the context of "feasibility" is customarily "given that all other information is available [to the one who tries to guess]." In other words, it's not about a guess in isolation. With this in mind, can one actually argue that SK==0 would be infeasible to guess from the looks of the public key?

Well, one might be able to argue that it's up to the client to protect itself, and if it chooses not to, then it's the client's problem, not the network's. But is it actually? Recall that it means that anybody would be able to produce a "signature" in this client's name, and everything that was "signed" could be modified afterwards...
Generating the SK is out of scope for this discussion. This is about what your library should accept as a valid signature. In that respect, nothing rules out the infinity signature. Of course it's idiotic for a client to use SK=0, but it is not the signature verifier's role to enforce this. Are you going to check for and return false for SK==1, SK==2, and every other weak key as well?
The question I'm struggling with is a little bit more subtle. Yes, from a forging viewpoint SK==0 and SK==small_n are equally bad. But the trouble is that specifically SK==0 facilitates retrospective manipulation. The thing is that infinite signatures will simply be skipped over; all bets are off. And the question is whether that's good for the entire network, as opposed to a specific client. I mean, SK==1 is kind of "it's your money," while SK==0 is not necessarily... Or what am I missing? To clarify, it's not like I just refuse to fix it; I just want to see the issue deliberated.
## Issue Addressed

NA

## Proposed Changes

- Refactor the `bls` crate to support multiple BLS "backends" (e.g., milagro, blst, etc).
- Removes some duplicate, unused code in `common/rest_types/src/validator.rs`.
- Removes the old "upgrade legacy keypairs" functionality (these were unencrypted keys that haven't been supported for a few testnets, no one should be using them anymore).

## Additional Info

Most of the files changed are just inconsequential changes to function names.

## TODO

- [x] Optimization levels
- [x] Infinity point: supranational/blst#11
- [x] Ensure milagro *and* blst are tested via CI
- [x] What to do with unsafe code?
- [x] Test infinity point in signature sets
* Replace BLSPublicKey.isValid() with fromBytesCompressedValidate() method
* Make BLSPublicKey fromBytesCompressed/toBytesCompressed operating with Bytes48 type. SecretKey.toBytes() returns Bytes32
* Rename BLSPublicKey/BLSSignature.from/toBytes() methods to from/toSSZBytes() to be more explicit
* Hide constructors. Make PublicKey and Signature implementations to be lazily evaluated inside corresponding BLS wrappers
* Fix DLL binary attribute
* Validate BlstSignature when creating from bytes
* Add mikuli and blst specific infinity signature tests
* Add BlstPublicKey.fromBytesUncompressed for testing
* For BLSPublicKey/BLSSignature use bytes representation for hashCode() and equals() since we need ability to compare valid (from SSZ standpoint) 'empty' instances, but shouldn't decode them to real instances
* In case of fail fast when deserializing invalid signature bytes, just mark the signature invalid. This is to deal with empty signatures (valid from SSZ perspective)
* Fix reference tests running on Windows
* Add Mikuli fallback when couldn't load Blst
* Move SWIG wrapper to a separate project
* Update license for jblst
* Implement BlstSecretKey.destroy()
* Temporary handling infinite pubkey/signature as a special case until supranational/blst#11 is resolved
* Generate KeyPair from seed in an implementation independent way. Throw exception when instantiating SecretKey from non-valid BLS12381 scalar value
* Make BLSPublicKey.getPublicKey() package private
* Make BLSSignature.getSignature() package private
* Make BlstBLS12381.INSTANCE initialization more reliable and make it as Optional
* Move BLS constants to BLSConstants class. Make a clear separation of BLSPublicKey fromSSZBytes() and fromBytesCompressed() though they are equivalent with current SSZ implementation
* Make a distinction of BLSSignature from/toSSZBytes() and from/toBytesCompressed() though they are equivalent with the current SSZ implementation
I think that it is acceptable to have a secret key of zero, as all functions will handle it adequately. With sk = 0, the public key and the signature on any message are both the point at infinity, and the pairing check still passes. Hence signing, aggregation and verification all operate correctly with the point at infinity and secret key 0, so it does not need to be treated separately. It's also beneficial to be consistent between implementations for interoperability, and I believe ZCash, Apache and Herumi all allow a secret key of 0.
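To spell out why, here is a short derivation in the min-pk convention (where verification checks e(pk, H(m)) = e(g1, σ); the min-sig case is symmetric):

```latex
\mathrm{sk} = 0 \;\Rightarrow\; \mathrm{pk} = \mathrm{sk}\cdot g_1 = \mathcal{O}
\quad\text{and}\quad
\sigma = \mathrm{sk}\cdot H(m) = \mathcal{O} \ \text{for every } m,
\qquad
e(\mathrm{pk}, H(m)) = e(\mathcal{O}, H(m)) = 1_{G_T} = e(g_1, \mathcal{O}) = e(g_1, \sigma).
```

Aggregation is point addition, so adding the infinity signature to an aggregate leaves it unchanged, which is why aggregation also goes through in this case.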
Quoting Wikipedia: "A valid digital signature [...] gives a recipient very strong reason to believe that the message was created by a known sender, and that the message was not altered in transit." A signature produced with SK==0 has neither of these properties. It doesn't qualify as "a valid signature," hence it's only appropriate to reject it. As invalid.

And I can imagine that everybody simply kicks the can forward, assuming that no client application is dumb enough not to ensure that its SK!=0. But my question is: what if a client application does it on purpose, maliciously? Whose responsibility would it be to stop it?

Well, I can also imagine that the library can recognize that it can be oblivious to a signature being individual or aggregated, and therefore reckon that it's not in a formal position to judge whether an SK is 0, and thus assume that it's the application's direct responsibility to pass meaningful inputs to the library. Is this what I'm missing? However! Even if so, on a grand scale, I can't imagine that it would actually be appropriate for a network to tolerate SK==0 for interoperability reasons. Any reason, actually. And I would be very surprised if it can be shown that an infinite aggregated[!] signature would necessarily mean infinity on the other side of the pairing equation...
@dot-asm - you have a valid point of view. But this is simply not what the (draft) BLS signature standard says. It is important to us to be standard-conforming, and it would be preferable if that were reflected in the libraries that we use. In consensus systems like blockchains, agreement is of the essence.

For now, we have special-cased our integration of Blst so that we trivially allow the infinite signature, but that's extra code I'd rather not have in there. Other teams have done the same.

By all means take this conversation to the IETF standards body. It's not too late to get it changed. But the definition as it stands does not exclude the infinity pubkey or signature.

If you really hate this, how about a compile-time or run-time flag to enable the stricter behaviour?
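For context, here is a hypothetical sketch of the shape such a special-case shim can take (not Lighthouse's or Teku's actual code; `backendVerify` stands in for the underlying library call). With pk = O the only mathematically valid signature on any message is σ = O, so the shim accepts exactly that and defers everything else to the backend:

```go
package main

import (
	"bytes"
	"fmt"
)

// Compressed encodings of the identity point: 48 bytes for a G1 pubkey and
// 96 bytes for a G2 signature (min-pk convention).
var (
	infinityPubkey    = append([]byte{0xc0}, make([]byte, 47)...)
	infinitySignature = append([]byte{0xc0}, make([]byte, 95)...)
)

// verifyWithInfinityShim special-cases the identity pubkey the way the Eth2
// test vectors expect, and defers everything else to the real backend.
func verifyWithInfinityShim(pk, msg, sig []byte, backendVerify func(pk, msg, sig []byte) bool) bool {
	if bytes.Equal(pk, infinityPubkey) {
		return bytes.Equal(sig, infinitySignature)
	}
	return backendVerify(pk, msg, sig)
}

func main() {
	reject := func(pk, msg, sig []byte) bool { return false } // stand-in backend that rejects everything
	fmt.Println(verifyWithInfinityShim(infinityPubkey, []byte("any message"), infinitySignature, reject)) // true
}
```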
From my perspective it seems totally reasonable for implementations to refuse to accept signatures from preposterous keys. It seems much less reasonable for the Eth2 spec to require people to accept the identity point as a valid public key. I would certainly recommend revisiting that decision. (ping @JustinDrake?) It's true, as others have pointed out, that it is not so easy (:smile:) to exhaustively list all of the "bad" keys---but that does not mean that we cannot identify ones that surely ought to be considered bad! I'll ping the rest of the BLS sigs authors about this. Seems like we should add language to the draft to the effect that implementations MAY reject signatures made under weak public keys (e.g., the identity point). Come to think of it, it might be good to suggest (but probably not require) fixing the most significant bit of the secret key to 1. Has negligible effect on guessing entropy of sk, but makes it more obvious how to implement constant-time signing via double-and-add-always. (EDIT: I suppose I should point out, I'm one of the authors of the bls sigs draft. Also made small edits to text above.) |
My position is that the Eth2 BLS spec should be essentially the same as the IETF spec. As I see it, deviating from the IETF spec goes against the standardisation effort and could lead to consensus bugs (as well as complicating things like cross-chain interoperability). As I understand it, the current IETF draft does not special-case the infinity pubkey or signature.
Just in case, for reference: blst handles zero-padded scalars in a constant-time manner as it stands now. In other words, it doesn't reveal information about the factual bit-width of the secret key. All secret keys are treated as 255-bit values, without making any assumptions about the most significant bits.
Hm. This does not seem like a consensus issue, it seems like a security issue. The probability of an honestly generated sk = 0 is roughly 2^-255. By comparison, the probability that every Ethereum node experiences a cosmic ray--induced bit flip that causes everyone to spontaneously accept a bad block seems, from a back-of-the-envelope calculation, to be astronomically higher---better than 2^-160 [1]. So with overwhelming probability any node claiming to have sk = 0 is malicious, and probably the right thing to do is require nodes to reject malicious signatures, not accept them. Obviously you are far more of an expert on how things ought to be run than I am, so maybe I am missing something obvious, but I just do not see how "the spec allows it" should be interpreted as "all nodes must accept obviously preposterous signing keys." [1] Boeing says 1 upset per 10^13 hours, so let's call this 10^-15 in any 5-minute period which is roughly 2^-55. Roughly 2^13 Ethereum nodes gives 2^-68. Each node has roughly 2^40 bits in memory; generously assume memory is independent every clock cycle, so in 5 minutes there are about 2^80 bits that could be flipped, and one critical bit that says "yes" or "no" to a single block; this gives 2^-148. |
Eth2's security model is adversarial! We do not assume 100% of the validators to be honest. If someone can "maliciously" make an Eth2 deposit to the pubkey corresponding to sk = 0 to cause a fork between two Eth2 clients, such a deposit will happen with probability essentially 1.
Right, so my point is: the correct choice is to forbid sk == 0, not to require supporting it, because any node whose sk == 0 is malicious with probability essentially 1.
As I see it, the correct choice is to strictly follow the IETF standard for simplicity 😂 Every exceptional case that diverges from the IETF standard (e.g. explicitly disallowing sk == 0) complicates the story and ultimately makes it harder to reach intra-blockchain consensus and inter-blockchain interoperability. The simplest situation is for everyone to be compliant with the IETF standard (as opposed to Eth2 introducing its custom BLS spec with non-compliant edge cases).
I completely understand the concern with interop, and like I said in my first comment you know way more about this than I ever will. But allowing sk = 0 violates a bunch of implicit security properties that people are very likely to rely upon. For example, for an honestly-generated key it is infeasible for anyone other than the key holder to produce a valid signature on a message, whereas with sk = 0 anyone can. In any case, the right solution is to forbid sk = 0 in the spec. We'll plan to do this.
Are you suggesting making signature verification fail when the pubkey is the identity? If so, given that Eth2 phase 0 (which includes BLS) accepts the identity pubkey, changing the IETF standard this late in the game would create two BLS standards. cc @djrtwo @CarlBeek

There are other secret keys where equivocation is trivial, right? For example when sk == 1?
No, sk = 0 is special. sk = 1 (or any other easily guessable key) allows trivial forgery --- a valid signature is just sk·H(m), which anyone who guesses the key can compute; for sk = 1 that is simply H(m). The problem with sk = 0 is that the correct signature for every message is the same. This is the only secret value with this property.
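In symbols, writing σ_m for the signature on message m, the distinction being drawn here is:

```latex
\mathrm{sk} = 1:\quad \sigma_m = 1\cdot H(m) = H(m) \ \text{(anyone can compute it, but it still varies with } m\text{)};
\qquad
\mathrm{sk} = 0:\quad \sigma_m = 0\cdot H(m) = \mathcal{O} \ \text{for every } m.
```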
I was assuming step 1 would be to change KeyGen so that it never outputs sk = 0 (discussion). It could also make sense to specify that public key verification (e.g., for proof of possession) should reject the identity pubkey.
Well, I hope not. Wasn't the whole point of this discussion that strict adherence to the IETF spec was necessary?
FWIW, doing this is fine for Eth2. KeyGen is not part of consensus.
This would not be so great for Eth2. Pubkey verification is part of Eth2 consensus and currently we accept the identity pubkey, as per the current IETF draft standard.
The BLS part of Eth2 phase 0 (which is compliant with the current IETF draft standard) is frozen and is almost certainly not changing. If a modification to the IETF draft standard is made there will effectively be two standards: the current draft one, and the future modified one. It would be an unfortunate situation, a bit like Keccak/SHA3. |
It's not obvious to me how to resolve these two statements. In any case, sk = 0 looks like a big enough footgun that it should be forbidden in the IETF standard. If you are confident that Eth2 will never ever ever attempt to use the VRF-ish properties of BLS signatures, then maybe sk = 0 really isn't a problem for you. But it seems to me like in general people will do this, and even if we regard it as a mistake we should try to design the standard so that slight misuses are not fatal. Allowing sk = 0 goes against this principle. |
Eth2 phase 0 is using BLS as a VRF (see code here). The logic is to sign a single well-defined message (the epoch number) and, given BLS signature uniqueness (i.e. every (message, pubkey) pair has a unique signature), it's all good, even for sk = 0.
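In symbols, the uniqueness property being relied on here is that the reveal is a deterministic function of the key and the epoch:

```latex
\sigma_{\mathrm{reveal}} = \mathrm{sk}\cdot H(\mathrm{epoch}),
\qquad
\mathrm{sk} = 0 \;\Rightarrow\; \sigma_{\mathrm{reveal}} = \mathcal{O}\ \text{for every epoch, still one signature per (pubkey, message) pair.}
```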
One possible way forward is to disallow |
The issue is that the signatures of sk == 0 are all the same (the infinity point), regardless of the message.
What happens with the 0 point on aggregation?
How is that an issue for RANDAO in Eth2 phase 0 @kirk-baird? As I see it, all that is required is that for every (msg, pubkey) pair there's a unique signature. Using BLS as a VRF is fine even allowing sk == 0.
A validator using |
This is a good point, if |
A validator using |
For reference, this has now been merged into the BLS Specs.
When running the tests for conformance to BLSv04 I noticed that sk == 0 is still accepted when deserializing a secret key. Specifically, I'm referring to the behaviour of the secret-key deserialization routine when the input scalar is zero.
But as for accepting sk==0 in a more general sense: it's kind of a trick question. blst rejects signatures produced with such a key, and it doesn't generate such a key. And that's sufficient to claim compliance, because deserialization of a secret key falls outside the draft's scope. The draft does say sk can't be zero, but that's a reference to the alternative key generation[!] procedure rather than to deserialization.

With this in mind one can actually wonder whether secret key deserialization should perform any sanity checks at all. Indeed, a secret key is something you generate yourself, so it's never "rogue." So why would you rely on deserialization to vet it instead of ensuring that it's generated properly to begin with? Well, I can imagine that one might say it's a protection against corruption when pulling the key from permanent storage at a later occasion. But it's pretty weak protection, virtually none. The right thing to do would be to generate the public key and compare that to a previously saved copy. But that is obviously not the deserialization subroutine's business...
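A minimal sketch of the integrity check described in the last paragraph, assuming a hypothetical `publicKeyFromSecret` derivation supplied by the caller (this is not blst's API):

```go
// Package keycheck sketches the "re-derive and compare" integrity check.
package keycheck

import (
	"bytes"
	"errors"
)

// CheckLoadedKey guards against a corrupted secret key pulled from permanent
// storage: rather than relying on deserialization sanity checks, re-derive
// the public key from sk and compare it to the trusted copy saved at key
// generation time. publicKeyFromSecret is supplied by the caller.
func CheckLoadedKey(sk, trustedPk []byte, publicKeyFromSecret func([]byte) []byte) error {
	if !bytes.Equal(publicKeyFromSecret(sk), trustedPk) {
		return errors.New("stored secret key does not match the saved public key")
	}
	return nil
}
```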
Well, the question ought rather to be about handling test vectors with sk==0. But even from this viewpoint, which of the two possible ways of handling it gives you more confidence? a) Reject sk==0 upfront and fail the test without going beyond deserialization. b) Have deserialization accept sk==0, generate the signature, and have the signature rejected. I for one would be more comfortable with b).
I agree that we should never reach a case where sk == 0 is actually used. Though it would be a cheap check on deserialising secret keys, which would likely be infrequent, and it prevents us from continuing to execute with an invalid key, so I don't see the harm in rejecting it at deserialisation.
Assuming that we agree that it's appropriate to prevent the application from producing an infinite signature, as opposed to relying on everybody else to reject such a signature, we have to recognize one thing. It's not exclusively about what feels right or convenient in this (or any) specific context, but also about what would be the right thing to do in a more general sense. Maybe there is something else that would be appropriate to do. Or maybe this appropriate something else is the only thing one needs to do.

Consider the following. We seem to be in agreement that sk==0 [at sk object instantiation moment] can be viewed as a case of the application kind of "messing" with the library, right? Because an abnormal sk has to originate from outside the library. But as long as we are talking about "messing," why would we assume that the caller is so well behaved that it would call deserialization at all, or pay attention to the error code, or whatever? With this in mind, wouldn't it be appropriate to reject sk==0 in the sign procedure? (Just in case, yes, one can just as well argue that the sign procedure can perform the equivalent of blst_fr_check too.) However, formally speaking, before we take this approach, we have to recognize that the specification doesn't directly justify returning errors from CoreSign; instead it suggests that input is always sane. And that the option of failure in the sign procedure doesn't go against the spirit of the specification(*). Yes, one can just as well turn this argument around and use it to put the complete burden of checks on deserialization... Thoughts?

As a more concrete suggestion: what if deserialization zeroed the buffer in case of blst_fr_check failure and counted on the sign procedure to reject it? It would cover the case of not paying attention to the error code from deserialization...

(*) Just in case, in my mind it doesn't, but it might be just me.
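To make the concrete suggestion easier to picture, here is an illustrative sketch of that flow (not blst's actual API; the range check below uses math/big in place of a call like blst_fr_check): deserialization zeroes the key material on failure, and the sign procedure independently refuses a zero key, so a caller that ignores the error code is still stopped.

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// groupOrder is the BLS12-381 scalar field modulus r.
var groupOrder, _ = new(big.Int).SetString(
	"73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001", 16)

// deserializeSecretKey interprets in as a big-endian scalar and requires
// 0 < sk < r. On failure it hands back an all-zero buffer, so a caller that
// ignores the error still ends up holding sk == 0.
func deserializeSecretKey(in []byte) ([]byte, error) {
	sk := new(big.Int).SetBytes(in)
	if sk.Sign() == 0 || sk.Cmp(groupOrder) >= 0 {
		return make([]byte, 32), errors.New("secret key out of range")
	}
	out := make([]byte, 32)
	sk.FillBytes(out)
	return out, nil
}

// sign refuses a zero key even if the caller ignored the error above.
func sign(sk, msg []byte) ([]byte, error) {
	if new(big.Int).SetBytes(sk).Sign() == 0 {
		return nil, errors.New("refusing to sign with sk == 0")
	}
	// real signing (sk * H(msg)) elided
	return []byte("placeholder-signature"), nil
}

func main() {
	sk, err := deserializeSecretKey(make([]byte, 32)) // all-zero input
	fmt.Println(err)                                  // secret key out of range
	_, err = sign(sk, []byte("msg"))
	fmt.Println(err) // refusing to sign with sk == 0
}
```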
I think this is a good assumption to make; as you mention, the spec suggests that inputs to the sign procedure are sane. The library will handle this when generating secret keys using KeyGen. Otherwise, if you prefer that prerequisite checks are handled by the client rather than the library, we could have the check in our client instead.
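For reference, a rough sketch of the retry-until-nonzero KeyGen pattern the later drafts adopt ("BLS-SIG-KEYGEN-SALT-" is the draft's salt string; `deriveCandidate` here is a simplified placeholder, not the draft's exact HKDF construction):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// groupOrder is the BLS12-381 scalar field modulus r.
var groupOrder, _ = new(big.Int).SetString(
	"73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001", 16)

// deriveCandidate is a simplified placeholder for the draft's HKDF-based
// derivation: it hashes salt||ikm and reduces mod r. It is NOT the real
// construction, just enough to show the retry loop.
func deriveCandidate(salt, ikm []byte) *big.Int {
	h := sha256.Sum256(append(append([]byte{}, salt...), ikm...))
	return new(big.Int).Mod(new(big.Int).SetBytes(h[:]), groupOrder)
}

// keyGen retries with a re-hashed salt until the candidate is nonzero, so it
// can never return sk == 0.
func keyGen(ikm []byte) *big.Int {
	salt := []byte("BLS-SIG-KEYGEN-SALT-")
	for {
		if sk := deriveCandidate(salt, ikm); sk.Sign() != 0 {
			return sk
		}
		rehashed := sha256.Sum256(salt)
		salt = rehashed[:]
	}
}

func main() {
	fmt.Println(keyGen([]byte("at least 32 bytes of input keying material, please")))
}
```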
The Eth2 test vectors contain a test that requires the infinity signature to represent a valid signature by the infinity pubkey across any message (an infinity pubkey, an arbitrary message, an infinity signature, and `output: true`). The `output: true` here means that we should consider this pubkey/message/signature combination to be valid. Presently `blst` will return `output: false` for this test vector. I'm wondering if this project would expect to change their behaviour to match the Eth2 spec or if it plans to leave it as-is? Thank you for your time :)