NIP-46 signers compete with each other #1300

Open
alexgleason opened this issue Jun 11, 2024 · 10 comments

Comments

@alexgleason
Member

I'm having a niche but very real problem. I am running multiple NIP-46 signers for the same nsec, to authorize with different apps. The problem is that whichever signer responds first likely isn't the right one, and it sends back an "Unauthorized" message, resulting in an error.

So now I have to decide whether the signer is wrong or the client is wrong, and what to do about it, if anything. The only way I can picture getting around this is for each session to use a different relay, possibly a generated relay URL. That could be doable if we used client-initiated auth. But since we use bunker URIs, it's the signer's job to select the relay, leaving non-custodial solutions basically unable to solve this.
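To make the collision concrete, here is a minimal sketch (the pubkey value is a made-up placeholder) of why every signer sees every request: per NIP-46, clients p-tag the signer pubkey on kind 24133 request events, so two signers serving the same pubkey end up with identical relay subscriptions.

```typescript
// Sketch: two NIP-46 signers for the same nsec build identical
// relay subscription filters, so both receive every client request.

type Filter = { kinds: number[]; "#p": string[] };

// Hypothetical shared pubkey, for illustration only.
const userPubkey = "shared-user-pubkey";

function signerSubscription(targetPubkey: string): Filter {
  // NIP-46 requests are kind 24133 events p-tagging the signer's pubkey.
  return { kinds: [24133], "#p": [targetPubkey] };
}

// Two independent signers for the same key produce the same filter:
const signerA = signerSubscription(userPubkey);
const signerB = signerSubscription(userPubkey);

console.log(JSON.stringify(signerA) === JSON.stringify(signerB)); // true
```

Since the filters are indistinguishable, whichever signer answers first "wins", even if it never authorized that client.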

@staab
Member

staab commented Jun 11, 2024

p-tag the correct signer based on the address the user provides. I think that should solve the problem, unless I'm mixing things up. But I also think the architecture is overly complicated, when you really do want to select a particular signer.

@pablof7z
Member

Yeah, what @staab said: p-tag the correct signer that has authorized your client. You don't need to run the signer on the same pubkey as the target pubkey (the one you want to sign as).

@fiatjaf
Member

fiatjaf commented Jun 11, 2024

I have the same problem, but this is a problem that only developers or very powerful users will ever have so I think we shouldn't worry, just close Gossip and keep going with your life.

@alexgleason
Member Author

> you don't need to run signer on the same pubkey as the target pubkey (the one you want to sign as)

Are you saying that the pubkey in a Bunker URI is not guaranteed to be the user's actual pubkey?

@alexgleason
Member Author

> this is a problem that only developers or very powerful users will ever have

Soapbox is a NIP-46 signer, so simply having multiple Ditto sessions on different devices causes this problem. I worked around it by making Soapbox not throw an "Unauthorized" error and instead ignore those messages. But other signers like nak can interfere with my Ditto sessions.
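The workaround described above can be sketched like this (the session bookkeeping and names are assumptions, not Soapbox's actual code): instead of replying "Unauthorized" to a request it doesn't recognize, the signer silently drops it, leaving room for another signer on the same relay to answer.

```typescript
// Sketch of the "ignore instead of Unauthorized" workaround:
// drop requests from unknown sessions rather than erroring.

type Request = { id: string; method: string };

// Hypothetical set of session ids this signer has authorized.
const knownSessions = new Set<string>(["session-1"]);

function handleRequest(req: Request, session: string): string | null {
  if (!knownSessions.has(session)) {
    // Some other signer may own this session: stay silent.
    return null;
  }
  return JSON.stringify({ id: req.id, result: "ack" });
}

console.log(handleRequest({ id: "1", method: "connect" }, "session-2")); // null
console.log(handleRequest({ id: "2", method: "connect" }, "session-1")); // "ack" response
```

This only avoids false errors from the well-behaved signer; a signer that still replies "Unauthorized" (like stock nak in this scenario) can still race the correct one.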

@mikedilger
Contributor

> > you don't need to run signer on the same pubkey as the target pubkey (the one you want to sign as)
>
> Are you saying that the pubkey in a Bunker URI is not guaranteed to be the user's actual pubkey?

The remote signer pubkey is not necessarily the remote user pubkey, and IMHO the bunker URL is supposed to carry the remote signer pubkey, but not everybody was convinced that is correct, so it remains (IMHO) wrong in the NIP. I made them the same in Gossip so it didn't affect me, but it does cause this issue you raise, which apparently also affects fiatjaf.

@alexgleason
Member Author

Oh man. That's brutal. I guess I will have to change this: https://gitlab.com/soapbox-pub/ditto/-/blob/main/src/signers/ConnectSigner.ts?ref_type=heads#L59

Although I do see how it can fix the problem. Man this is complex.

@mikedilger
Contributor

Ok, I was wrong.

Looking closer, the signer has to be addressed by the key it is signing as because the commands don't include that key (except connect).

As it is written today you can't fix this. If two bunkers are signing on behalf of the same key through the same relay, they will both respond.

A more flexible architecture would assign a keypair to the bunker (we already do but only for signup) and address the bunker using its own keypair, and include the key you wish to sign as inside the command.

I was going to "fix" gossip but having dug into this, I have nothing to fix.
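The more flexible architecture proposed above (which is not how NIP-46 works today) can be sketched as follows, with made-up pubkey values and a made-up request shape: the bunker is addressed by its own keypair, and the key to sign as travels inside the command.

```typescript
// Sketch of the proposed architecture: route by the bunker's own
// pubkey, and name the target signing key inside the request payload.

type Rpc = { method: string; params: string[] };

// Hypothetical pubkeys for illustration.
const bunkerPubkey = "bunker-own-pubkey";
const userPubkey = "user-pubkey";

function buildRequest(method: string, signAs: string): { pTag: string; rpc: Rpc } {
  return {
    pTag: bunkerPubkey,                // routes to exactly one bunker
    rpc: { method, params: [signAs] }, // target key travels in the payload
  };
}

const req = buildRequest("sign_event", userPubkey);
console.log(req.pTag);          // "bunker-own-pubkey"
console.log(req.rpc.params[0]); // "user-pubkey"
```

Because each bunker has a distinct keypair, two bunkers serving the same user key would subscribe to different p-tags and never race each other.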

@vitorpamplona
Collaborator

Just allow a signer tag with the signer's public key on all requests. If no tag is present, all signers reply like today. If a signer tag is present, only that signer is supposed to reply; the others will ignore the request.

That's how we do it on NIP-55 when multiple Android signers are available in the same phone, with the same key.
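The optional-tag behavior described above could look something like this sketch (the tag name "signer" and its shape are assumptions; no such tag is specified in NIP-46 today):

```typescript
// Sketch of the optional signer tag: untagged requests keep today's
// behavior (everyone replies); tagged requests select one signer.

type SignedRequest = { tags: string[][] };

function shouldReply(req: SignedRequest, myPubkey: string): boolean {
  const signerTag = req.tags.find(([name]) => name === "signer");
  if (!signerTag) return true;      // no tag: legacy behavior, all signers reply
  return signerTag[1] === myPubkey; // tagged: only the named signer replies
}

console.log(shouldReply({ tags: [] }, "pk-a"));                   // true
console.log(shouldReply({ tags: [["signer", "pk-a"]] }, "pk-a")); // true
console.log(shouldReply({ tags: [["signer", "pk-b"]] }, "pk-a")); // false
```

Being backwards compatible, this would let updated clients and signers opt in without breaking existing pairs.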

@alexgleason
Member Author

> Ok, I was wrong.
>
> Looking closer, the signer has to be addressed by the key it is signing as because the commands don't include that key (except connect).
>
> As it is written today you can't fix this. If two bunkers are signing on behalf of the same key through the same relay, they will both respond.
>
> A more flexible architecture would assign a keypair to the bunker (we already do but only for signup) and address the bunker using its own keypair, and include the key you wish to sign as inside the command.
>
> I was going to "fix" gossip but having dug into this, I have nothing to fix.

I think it's still possible. The bunker would generate an ephemeral keypair, and use that pubkey in the bunker URI instead of the user's actual pubkey. The client would then p-tag the ephemeral pubkey in all its commands. The responses would include events signed by the actual pubkey.

The client would have to call get_public_key to get the actual user's pubkey. The problem is that most clients probably don't do this. They probably just pull the pubkey out of the bunker URI and assume it's the user's pubkey.

In the original design of NIP-46, the connect response did return the user's pubkey (now it returns the text "ack"), and get_public_key was included for a reason. In the current design, get_public_key seems useless, only because the current document doesn't specify that a bunker URI can contain a pubkey different from the user's. Basically something got lost in translation, because this is the way it always should have worked. It does pile even more complexity onto something already complex, though, and now we need two separate RPC calls (one to connect, then one to get_public_key) to work around this.
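The ephemeral-pubkey flow sketched above would look roughly like this (the URI, pubkey values, and helper names are all hypothetical): the client routes requests to the pubkey from the bunker URI, but learns the user's actual pubkey from get_public_key rather than from the URI.

```typescript
// Sketch of the ephemeral-pubkey flow: the bunker URI carries the
// bunker's ephemeral key, not the user's key.

// Hypothetical bunker URI; the host part is the bunker's ephemeral pubkey.
const bunkerUri = "bunker://ephemeral-pubkey?relay=wss%3A%2F%2Frelay.example";

function parseBunkerPubkey(uri: string): string {
  const match = uri.match(/^bunker:\/\/([^/?#]+)/);
  if (!match) throw new Error("not a bunker URI");
  return match[1];
}

// Stand-in for the two RPC calls a client must make in this design:
// 1. connect        -> "ack" (addressed to the ephemeral pubkey)
// 2. get_public_key -> the user's actual pubkey
async function connectAndGetUser(_remotePubkey: string): Promise<string> {
  return "user-actual-pubkey"; // hypothetical RPC response
}

const ephemeral = parseBunkerPubkey(bunkerUri);
connectAndGetUser(ephemeral).then((user) => {
  console.log(user !== ephemeral); // true: the URI pubkey is not the user pubkey
});
```

The catch noted above is that clients which shortcut by reading the pubkey straight out of the bunker URI would silently break under this scheme.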
