Account for new fully-specified ECDSA and Ed448 COSEAlgorithmIdentifiers #2283
Conversation
Changes must be made for ESP384 and ESP512 as well, e.g. updating the packed attestations and adding the additional requirement that the uncompressed point form must be used for those algorithms, as you have done for ESP256. For example:
@selfissued can you review?
In 16.8 Packed Attestation with ES384 Credential, the private key should be changed to use ESP384 (-48) rather than ES384 (-35). In 16.9 Packed Attestation with ES512 Credential, the private key should be changed to use ESP512 (-49) rather than ES512 (-36). Likewise, the other uses of ES256 should be looked at to change to ESP256, for instance in all the other test vectors in 16 Test Vectors.
@selfissued Thanks, this is resolved in meta-PR #2290 which in turn depends on #2289. I'll mark this PR as blocked waiting on #2289 so we can update the test vectors along with the generator script. |
As stated, I'm pretty sure we need to add the requirement that the uncompressed points MUST be used for ESP384 and ESP512 just as we have done for ES256, ESP256, ES384, and ES512. |
@zacknewman Thanks, I had fixed this (thanks for pointing it out!) but apparently forgot to push it. Fixed now in 928c668. |
index.bs (Outdated)
1. Keys with algorithm ES384 (-35) MUST specify P-384 (2) as the [=crv=] parameter and MUST NOT use the compressed point form.
1. Keys with algorithm ESP384 (-48) MUST NOT use the compressed point form.
ECDSA with P384 is -35 I think.
-35 is ECDSA with SHA-384 and unspecified curve (this is why WebAuthn adds the restriction to the P-384 curve on line 4340). -48 is the new identifier that fully specifies ECDSA with SHA-384 and curve P-384.
If they are logically equivalent, I would actually advise against RPs using these new numbers.
Older platforms and older authenticators do not understand these numbers and will fail when they are specified.
These new numbers do not add any value IMO vs. the existing values, and they will confuse RPs and add support costs.
For example, one such unupgradable security key shows support for these algorithms in its getInfo: [{"alg": -7, "type": "public-key"}, {"alg": -8, "type": "public-key"}, {"alg": -35, "type": "public-key"}]. Newer clients may be able to translate the newer values to the older ones, but not every user agent and OS is upgradable.
Looking at earlier discussions in the issue: a fallback mechanism is easy to get wrong. We have spent years getting these values supported across the ecosystem by every RP, user agent, and authenticator.
Also, let's assume that -7 and -9 are equivalent (ECC-P256/SHA256) and that -35 and -1001 (some random number for discussion's sake) are equivalent (ECC-P384/SHA384).
Let's say
- Authenticator-1 supports -7 and -1001.
- Authenticator-2 supports -7 and -35.
An RP's preferred algorithm list of (-35, -7, -1001, -9) will result in Authenticator-1 picking ECC-P256/SHA256 and Authenticator-2 picking ECC-P384/SHA384, although technically both support ECC-P256/SHA256 and ECC-P384/SHA384.
Having two different values with the same meaning in a priority list can become tricky, and RPs need to be careful.
I am not seeing the benefit of using the newer values. I would recommend that WebAuthn specify that RPs should use the older values, as we have already reserved curves for those numbers in WebAuthn.
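The scenario above can be sketched in code. This is a hypothetical illustration (the -1001 identifier is a made-up number, as in the comment), assuming the common authenticator behavior of picking the first algorithm in the RP's preference list that it supports:

```typescript
// Hypothetical sketch: an authenticator picks the first algorithm in the
// RP's preference list that it supports. -1001 is a made-up identifier,
// as in the discussion above.
function pickAlgorithm(
  rpPreference: number[],
  supported: Set<number>,
): number | undefined {
  return rpPreference.find((alg) => supported.has(alg));
}

const rpPreference = [-35, -7, -1001, -9];

// Authenticator-1 supports -7 and -1001: it picks -7 (ECC-P256/SHA256).
const pick1 = pickAlgorithm(rpPreference, new Set([-7, -1001]));

// Authenticator-2 supports -7 and -35: it picks -35 (ECC-P384/SHA384),
// even though both authenticators support both suites.
const pick2 = pickAlgorithm(rpPreference, new Set([-7, -35]));
```

The two authenticators end up on different suites purely because of where the equivalent identifiers sit in the RP's list.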
I agree that there's little value in these particular identifiers (except the addition of Ed448) within the current context of WebAuthn, where we've explicitly patched the ambiguity in the polymorphic identifiers, but these new identifiers are defined for a wider context that covers more than just WebAuthn. In particular, they patch the gap between JOSE and COSE where, for example, the JOSE identifier "ES256" does fully specify "ECDSA with SHA-256 on the P-256 curve" but the "equivalent" COSE identifier -7 does not (evidently to the WebAuthn WG's surprise earlier) specify the curve, only "ECDSA with SHA-256". For other COSE use cases, especially new ones, it makes a lot of sense to be able to fully describe the crypto suite with a single identifier. So the identifiers definitely have merit in the wider context outside WebAuthn.
So in WebAuthn we are faced with a choice, then: should we support these new identifiers, or not?
If we do not, we'll have to declare that our COSEAlgorithmIdentifier parameter only supports a subset of COSEAlgorithmIdentifier values, and continue updating our exclusion list as new identifiers are added to the registry. And it would need to be an exclusion list rather than an allow list, because with an allow list there'd be no point in referring to an external registry in the first place. I don't think this is a good approach. It's already unfortunate that we add the restrictions against compressed point formats; I don't think we should add even more constraints on our interop with COSE.
If we do support the new identifiers (which WebAuthn L1 and L2 do implicitly!), then I agree that it's unfortunate to have values that are semantically equivalent (in our context) but need to be treated as distinct in order not to confuse implementations. But I think that's a lesser problem. Authenticators are incentivized to support both identifiers in case an RP only requests one (including old RPs that only request the old identifier), and RPs are likewise incentivized to request both identifiers in case an authenticator only supports one (including existing authenticators that only support the old identifier). But as you say, the old values are likely to continue to dominate for the foreseeable future, so it is unlikely that the new values will disadvantage RPs or authenticators that have not updated, or cannot update, to support them.
To summarize: the new COSE identifiers apply to a wider context than WebAuthn, and I think that making WebAuthn align more with COSE is better than making WebAuthn diverge further from COSE.
I understand that COSE is separate from WebAuthn; however, it appears to me that this new draft is very much tied to WebAuthn, hence why it is happening in concert. Yes, technically every RP needs to review the IANA COSE algorithms every day for eternity to decide if newer values have been added that they want to support; but I'm unsure if dedicated IDs would have been added without this draft. As much as the math person in me would love it if specs and registries were super rigorous, one has to accept the fact that they aren't, and in particular that there will be things like "de facto" requirements.
For example, I don't think it's a coincidence that there already is an explicit COSE ID for ECDSA with SHA-256 using curve secp256k1 and not for the much more common P-256. To me this is due to the fact that ES256 has the "de facto" requirement that crv is P-256. It's nice for specs to make this clear, like WebAuthn does; but in practice it likely doesn't matter much.
To play the game of hypotheticals: in an alternate universe, if COSE had never defined the somewhat absurd EdDSA ID but instead ED256 and ED512 IDs, where the former were defined as EdDSA using SHAKE256 and the latter as EdDSA using SHA-512, then, just as ES256 has the "de facto" requirement that crv is P-256, these IDs would also have had "de facto" requirements that crv is edwards448 and edwards25519 respectively. WebAuthn then would have simply explicitly added those requirements. In that universe, I don't think this draft would have been written, since the actual motive of adding support for Ed448 wouldn't have existed.
It appears to me the motive was originally "how do we add Ed448", which then changed to "well, since we're here, let's go ahead and also add specific IDs for P-256 and company", and I don't think that adds value. The only counter I can come up with is to my above claim:
> I'm unsure if dedicated IDs would be added without this draft
If one rejects that and is worried that a completely separate "process" were to occur that added these more specific IDs, then WebAuthn is in the unfortunate position of not being able to control this, which in turn would lead to some indeterminate amount of time between when such IDs were added to IANA and when the spec was updated to reflect them. By being in control ourselves, we can be sure there is no "gap" between the IANA registry and the spec.
I personally don't find that argument very convincing, due to the aforementioned "de facto" requirements and the fact that this prophylactic approach seemingly has no end. What if some process were to add ED256 and ED512 as mentioned? Shouldn't we add those too, just in case? After all, it is inconsistent to have IDs like ES256 which mandate the hashing algorithm but not the curve, so shouldn't the same exist for EdDSA? Conversely, perhaps a process adds an ECDSA ID to mirror EdDSA, so that COSE then has an ID that is both curve- and hash-agnostic. Where does this end?
Regardless, my library already supports these new IDs; so it's no skin off my back. I'm simply providing my two cents.
> Also, let's assume that -7 and -9 are equivalent (ECC-P256/SHA256) and that -35 and -1001 (some random number for discussion's sake) are equivalent (ECC-P384/SHA384).
> Let's say
> * Authenticator-1 supports -7 and -1001.
> * Authenticator-2 supports -7 and -35.
> An RP's preferred algorithm list of (-35, -7, -1001, -9) will result in Authenticator-1 picking ECC-P256/SHA256 and Authenticator-2 picking ECC-P384/SHA384, although technically both support ECC-P256/SHA256 and ECC-P384/SHA384.
> Having two different values with the same meaning in a priority list can become tricky, and RPs need to be careful.
> I am not seeing the benefit of using the newer values. I would recommend that WebAuthn specify that RPs should use the older values, as we have already reserved curves for those numbers in WebAuthn.
I've made my thoughts clear on this matter, and I believe this is a solution in search of a problem; however if the committee has determined this is how it is to be, then c'est la vie.
One approach to "solve" this problem is to mandate that user agents always add a legacy ID before/after a "new" ID, and add the "new" ID before/after a legacy ID iff the list of supported IDs from the RP contains a "new" ID. Terminology could optionally be added to avoid the duplicates that would arise from this; however, duplicates have always been a possibility.
In addition to this, the spec would require that if an RP supports a "new" ID, then they are obligated to also support the legacy ID, potentially also requiring that the two be treated interchangeably (e.g., if a self-attestation signature uses a legacy ID but the attested credential data uses the "new" ID, RPs are required to accept it, assuming the original list of supported IDs provided by the RP contained the "new" ID). We require that authenticators don't treat the IDs interchangeably, in the interest of compatibility.
Here there is consistent behavior, we remove the possibility of "weirdness" from the RP by punting the responsibility to user agents (which already has precedent), and things don't break for RPs that only support the legacy IDs.
As far as terminology is concerned, we should avoid words like "equivalent", since both formally and informally it's not quite correct to say they are equivalent. They're only "equivalent" for RPs that support the new IDs, but they're clearly not equivalent for authenticators or for RPs that don't support the new IDs.
To be clear, user agents perform this "algorithm" for each new ID and not for the mere existence of one (e.g., if an RP passes [-19, -7], then the user agent would modify it such that it becomes [-19, -8, -7] (i.e., user agents conservatively add the legacy ID for EdDSA but don't add the "new" ID for ESP256, since the RP only claimed support for the legacy ES256)).
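A minimal sketch of this proposed user-agent normalization, assuming the new-to-legacy pairings discussed in this thread (ESP256 -9 to ES256 -7, Ed25519 -19 to EdDSA -8, ESP384 -48 to ES384 -35, ESP512 -49 to ES512 -36). This is not spec text, just an illustration:

```typescript
// Sketch of the proposed normalization: for each "new" (fully-specified)
// ID the RP requests, the user agent inserts the legacy equivalent right
// after it. Legacy-only requests are left untouched.
const NEW_TO_LEGACY: Record<number, number> = {
  [-9]: -7,   // ESP256 -> ES256
  [-19]: -8,  // Ed25519 -> EdDSA
  [-48]: -35, // ESP384 -> ES384
  [-49]: -36, // ESP512 -> ES512
};

function normalize(rpAlgs: number[]): number[] {
  const out: number[] = [];
  for (const alg of rpAlgs) {
    out.push(alg);
    const legacy: number | undefined = NEW_TO_LEGACY[alg];
    // Skip the legacy ID if the RP already listed it or we already added it.
    if (legacy !== undefined && !rpAlgs.includes(legacy) && !out.includes(legacy)) {
      out.push(legacy);
    }
  }
  return out;
}

// [-19, -7] becomes [-19, -8, -7]: the legacy EdDSA ID is added, but no
// ESP256 (-9) is added, since the RP only claimed support for ES256 (-7).
const normalized = normalize([-19, -7]);
```

The de-duplication check preserves the RP's own ordering when it already lists a legacy ID explicitly.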
I understand the desire not to use the new algorithms if the existing ones are burned into devices. If I understand correctly, the existing webauthn behavior is that in cases where an algorithm worked with multiple curves, webauthn chose a curve and does not support all the curves for that algorithm, is that correct? In other words, in webauthn, -7 means ECDSA with SHA256 and P-256; this now has an official name, ESP256.
Just for my own clarity: are there any hardware devices out there that use -7 with a curve other than P-256? In other words, does webauthn make use of the "feature" that -7 works with P-384 and P-521, or does it just ignore that possibility and assume that -7 is always with P-256? (And by webauthn, I really mean devices that can't be updated, as opposed to the spec itself.)
Yes.
No, AFAIK from over the years.
We always assume that -7 is ECDSA with P-256. Similarly with the others (-8, -35, -36). We have put the below descriptions in the spec.
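The curve assumptions described above could be tabulated as follows. This is a sketch: the pairings restate restrictions the WebAuthn spec makes in prose, and the -8 entry reflects Ed25519 as assumed in practice rather than anything COSE itself mandates:

```typescript
// Sketch: the parameters WebAuthn assumes for each polymorphic COSE ID.
// Illustrative, not normative; the -8 curve is the "de facto" assumption.
const ASSUMED_PARAMS: Record<number, { sig: string; hash: string; crv: string }> = {
  [-7]:  { sig: "ECDSA", hash: "SHA-256", crv: "P-256" },
  [-8]:  { sig: "EdDSA", hash: "SHA-512", crv: "Ed25519" }, // assumed in practice
  [-35]: { sig: "ECDSA", hash: "SHA-384", crv: "P-384" },
  [-36]: { sig: "ECDSA", hash: "SHA-512", crv: "P-521" },
};
```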
@akshayku Ok, but I don't understand what part of this PR would mean more work for any implementations? We are not replacing the old IDs, only adding the new ones to the set of recommended arguments.
So I agree, we're not going to realistically eliminate the old IDs short of a complete ecosystem-wide migration to PQC/etc. But eliminating the old IDs was never a goal either, so I don't see how that is a problem? Note also that the new IDs will be available for RPs to request and for authenticators to implement regardless of whether we make any change to WebAuthn. The
Hm, maybe I should also include client implementers in the breakdown 🙂
Or is this outlook too naive?
Adding it to WebAuthn means that we are endorsing those new algorithms for WebAuthn. Some algorithm just being registered as a COSEAlgorithmIdentifier does not necessarily mean that it works in WebAuthn across browser/OS/authenticator. There are many more COSEAlgorithmIdentifiers that do not work for WebAuthn. And I think, realistically, the current state is that only the algorithms mentioned in the WebAuthn spec work in reality across the ecosystem. Actually, that is not even true across the ecosystem for -8.
Yes.
Yes, but they don't have to. There is no need, and this is extra work for authenticators for no benefit. And new code means new potential bugs. And this is, again, guidance for the authenticators. Maybe there is some authenticator vendor who has not read all this conversation and implements only the newer algorithms. And suddenly their released devices can't be fixed.
RPs will get it wrong. Let's take one such example. Let's say that these newer ECDSA algorithms get supported on certain combinations of browser/OS, and the RP didn't test all the platform/browser/OS combinations, just because they only tested the major OS/browser combinations. I have seen this happen over the years. Also, we have already established that these new algorithms will not be supported on all authenticators/platforms. So these RPs will have support costs for those unsupported combinations down the road.
Suddenly there is a situation where, for example, existing security keys do not work. Is the RP staffed to add those? Maybe someone at the RP says it is not worth it given that their large population may not have security keys (I have seen this happen too). Then the user who has gone out of their way to buy such a security key will be left stranded.
You can add more guidance for the RPs, but even with the current guidance, people get it wrong. Unless there is an issue with security/incompatibility/privacy, I don't think we should add new definitions that can cause those incompatibilities. Again, what is the need to introduce these new algorithms in WebAuthn? Agreed that the existing ones are not fully specified in the COSE registry, but we have already made it clear in our spec what those algorithms mean.
Provided that the RP does the right thing. We shouldn't even open the possibility for RPs to get it wrong, IMO. There is no benefit. For example, if an RP wants ECDSA with ECC P-256 with SHA256, they use -7 today. There is no need to redefine it as something else.
Again, this is extra work for authenticators for no benefit. All the cons of these proposals are because of the various combinations of browsers/OSes/authenticators. Some are upgradable, some are not. And I don't think using the existing identifiers is even a problem.
Yes. That is the ideal state. Newer RPs shouldn't be thinking differently from existing implementations that just work. We don't know how
Overall, there is no benefit to adding these new IDs for algorithms already defined in the WebAuthn spec.
@emlun, almost all of the points you made are true without this PR as well. Without this PR, RPs can still add support for these new IDs; so I don't see how there is benefit in mentioning them in the spec. Two, adding these IDs is more work even if they're only "recommendations" for user agents, authenticators, and RPs. That is not even debatable in WebAuthn Level 4 if they were to become required.
So if we accept that the legacy IDs will always need to be supported, and that RPs are always allowed to support IDs in the registry even if they're not explicitly mentioned in the spec (e.g., the new ones mentioned here; in fact the spec permits IDs outside of the registry), then what's to be gained other than a more complex spec that doesn't actually add anything useful?
Another way to view this is that with these new IDs there are now two equivalent IDs, from the perspective of WebAuthn, so we arbitrarily pick one (i.e., the legacy ID) to recommend. Fortunately, this "choice" doesn't require any changes to the spec; but if desired, one could add a note about why this "choice" is made. WebAuthn shouldn't make the list of recommended IDs too long; otherwise let's just add the entire registry to the list of recommendations. WebAuthn already "arbitrarily" makes choices (e.g., using the uncompressed P-256 point form), so there is nothing "weird" about the list of recommended IDs. If this were to go in, then you make a great point about
To reflect that the COSEAlgorithmIdentifier used in this test vector is in fact `EdDSA (-8)`, not the recently registered `Ed25519 (-19)`.
I was about to say there is no change in guidance for authenticators here, but on second thought I guess the changes to §6.5.1.1. Examples of credentialPublicKey Values Encoded in COSE_Key Format do qualify, as do the new/updated test vectors. Good point!
Without it, authenticators that choose to implement support for ESP256 (-9) would be free to emit ESP256 COSE keys in compressed format since that is not forbidden for ESP256. Would that be a problem? RPs likely won't expect the compressed format and would have to update their code to support those authenticators that use the compressed format. On the other hand I argued earlier that some incompatibility should be expected with new features, so maybe that's fine. Honestly I would prefer to drop the restriction against compressed format for
No, not either one - we should always recommend the legacy ID, the only question is whether we should also recommend the new one. I believe there is no harm in recommending the new ID and the old ID, and little or no harm in continuing to recommend only the old ID, but there certainly is harm in recommending the new ID instead of the old ID. So okay, I guess "little or no harm in continuing to recommend only the old ID" may be a signal that we should do just that.
We're not adding that requirement in L3.
Okay, I've been convinced that we should at least reduce the scope of this change to keep the old IDs the "preferred" ones in examples and test vectors. I've updated the PR with that (I also changed the Ed25519 test vector to use the seed
I could go either way on recommending that RPs request the new IDs (in addition to the respective old ones) in
For now I've also kept the restriction against compressed form for
There is no need to tell RPs that request the new IDs that they SHOULD also request the old ones, since the old IDs are always recommended (i.e., regardless of whether an RP requests the new IDs, they SHOULD request the old IDs); as a result, I think the PR should be changed to remove the recommendation in
There is harm in adding the new IDs. It adds complexity to the spec that doesn't need to be there. There is a reason that only 3 IDs are recommended for
I will say that removing the restriction for
There is no such difference between
Last, I think it's worse to remove the test vectors and examples for the new IDs but keep the recommended
This looks ready to merge.
COSE does not allow encoding
Only Edwards curve keys (currently) are allowed (and indeed required) to use OKP, and yes, RFC 8032 defines a single canonical encoding for EdDSA keys. Rather, it's the EC2 key type that has two variants: compressed or uncompressed y coordinate.
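The two EC2 variants can be sketched as COSE_Key maps. This is a hypothetical illustration with zero-filled placeholder coordinates; the labels follow RFC 9053, where y (label -3) is either the full coordinate bytes or a boolean sign bit:

```typescript
// Sketch of the two EC2 point encodings COSE allows. The coordinates are
// zero-filled placeholders, not a real key.
type CoseValue = number | boolean | Uint8Array;

function ec2Key(y: Uint8Array | boolean): Map<number, CoseValue> {
  return new Map<number, CoseValue>([
    [1, 2],                   // kty: EC2
    [3, -7],                  // alg: ES256
    [-1, 1],                  // crv: P-256
    [-2, new Uint8Array(32)], // x coordinate (placeholder)
    [-3, y],                  // y: full bytes = uncompressed, boolean = compressed
  ]);
}

// WebAuthn's restriction, in code form: reject the compressed encoding.
function usesCompressedForm(key: Map<number, CoseValue>): boolean {
  return typeof key.get(-3) === "boolean";
}
```

An uncompressed key (full y bytes) passes this check, while `ec2Key(true)` models the compressed sign-bit form that the spec text above forbids.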
I've now also reverted adding the new algs to recommendations and examples, per @akshayku's argument that if RPs should continue using the old values anyway, we should not recommend using the new ones. I've still kept the restriction against point compression for
Indeed. Thanks for the correction. I assumed the reason the OKP parameter for the public key was called
SHA: 3bcf9d5 Reason: push, by emlun Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fixes #2282.
This:
- `ES*` algorithms to the new `ESP*` algorithms.
- `'packed.EdDSA'` and link anchor `#sctn-test-vectors-packed-eddsa` instead of `'packed.Ed25519'` and `#sctn-test-vectors-packed-ed25519`, in case we want to add test vectors with COSE alg -19 (Ed25519) in the future.