
Idea: statements of (dis-)trust #20

Open
jfinkhaeuser opened this issue Oct 8, 2024 · 4 comments
Labels
Future Scope This issue is not currently something within scope for the taskforce

Comments

@jfinkhaeuser

I would like to see the ability to propagate statements of trust. This could be in the form of an activity.

A statement of trust effectively identifies:

  1. The party making the statement (individual actor or [moderation team of] an instance?)
  2. The party about which the statement is made (similar to the above)
  3. A relatively free-form statement expressing a trust relationship.

The use case I have in mind is to help manage the complexity of FediBlocks #19 and/or Flags #14. Conceptually, if I trust a particular admin group to make good calls regarding CSAM block recommendations, I could announce this to other admins.

It follows that the statement in question can be structured data.

I write "can be" simply to leave the general mechanism open to other uses. (Side note: if we use vocabularies for all three, we have RDF triples.)
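To make this concrete, here is a minimal sketch of what such a structured statement could look like, modelled loosely on an ActivityStreams-style object. The `TrustStatement` type and the `scope` field are hypothetical illustrations, not part of any existing vocabulary; the names and URLs are placeholders.

```python
import json

# Hypothetical trust statement, loosely ActivityStreams-shaped.
# "TrustStatement" and "scope" are invented for illustration only.
statement = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "TrustStatement",  # hypothetical activity type
    "actor": "https://example.social/users/admin-team",  # party making the statement
    "object": "https://other.example/users/mod-group",   # party the statement is about
    "content": "Makes good calls on CSAM block recommendations",
    "scope": "block-recommendations",  # keeps the trust tightly scoped
}

# If actor, scope, and object all come from shared vocabularies,
# the statement collapses to an RDF-style (subject, predicate, object) triple:
triple = (statement["actor"], statement["scope"], statement["object"])

print(json.dumps(statement, indent=2))
print(triple)
```

The free-form `content` stays available for human readers, while the vocabulary-backed fields are what machines would reason over.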

Assume that, as a recipient of trust statements about moderation recommendations, I review those. I can then decide to issue a statement of my own to the effect of "origin X makes good calls regarding recommendations" (to simplify this, it could merely be a boost of the original statement, if it is an activity).

The upshot is to create a web of trust, with two major differences from the GPG-style version:

  1. Trust is not binary, and not leveled, but tightly scoped to specific statements. This avoids the problem of trust inflation.
  2. It does not, and should not, imply any specific action to be taken based on that statement.

Actual use case: as I wrote in #19, I do not think that automatically implementing block recommendations is a good idea. On the other hand, with Flags and recommendations being shared, administrators may find that the software they use can provide better user interfaces; but the flood of actionable items does not diminish.

Without going towards automation, statements of trust can be used to optimize the user experience for administrators. For example, one could group and/or prioritize trusted origins such that they can be acted upon with different processes. Known trusted sources might require only a single administrator to act upon the recommendation, while others might involve consensus seeking. Identifying the origin can also, for example, permit a different process for recommendations from a specialized organization with dedicated resources.
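As a rough sketch of that idea: an admin tool could route incoming recommendations to different review processes based on locally configured trust in their origins. The origins and policy names below are hypothetical; the point is that trust informs the process, not an automated action.

```python
# Hypothetical trust configuration: maps an origin to the review process
# its recommendations enter. Origins and policy names are made up.
TRUST_POLICY = {
    "https://trusted.example": "single-admin-approval",  # one admin may act alone
    "https://known.example": "consensus-review",         # needs team consensus
}
DEFAULT_POLICY = "full-review"  # untrusted origins get the slowest path


def route(recommendation: dict) -> str:
    """Return the review process a block recommendation should enter,
    based solely on how much its origin is trusted locally."""
    return TRUST_POLICY.get(recommendation["origin"], DEFAULT_POLICY)


queue = [
    {"origin": "https://trusted.example", "target": "spam.example"},
    {"origin": "https://unknown.example", "target": "bad.example"},
]
for rec in queue:
    print(rec["target"], "->", route(rec))
```

Note that nothing here blocks anyone automatically; the routing only decides which human process handles the item.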

The exact policy, of course, is up to the admin team. The point of this idea is to provide a basis for such choices.

@bumblefudge

As I mentioned in the other thread, "trust" is a little overdetermined here, as it slides between the human/social meaning of the term and the engineering term (each "upvote/downvote" is a trust signal that can be computed in real time on a social graph as an input to mechanical decisions like "allow Bob's message into Alice's inbox based on the current trust-and-proximity score"). But a more generic term might be "reputation system primitives", in that complex and nuanced queries can be run over a social graph that evolves over time, if the raw data points are specified narrowly enough to be almost invariant over time. There is a ton of prior art I can point you to if you're curious, which might be variously relevant to fediverse software design, but at a high level I think reputation is a can of worms that many people try to avoid, particularly at the standards/infrastructure layer... even in moderation discussions!

here's a detailed document about a "minimum viable reputation system" created for a decentralized/horizontal package manager in the cryptocurrency space, and here are a bunch of notes from essays, prototypes, and presentations on engineering reputation systems from a decentralized-identity design conference I've attended a few times. i'm not sure how much appetite there is to engineer a reputation system on top of the fediverse... but i think it's fair to assume that an implicit/closed-source/under-the-hood one will be developed by, um, at least one commercial fediverse platform (if not multiple of them as more enter the space), so I'm always game to kick the ball around if this is a thing any projects actually have budget/human-hours allocated to working on!

@ThisIsMissEm ThisIsMissEm added the Future Scope This issue is not currently something within scope for the taskforce label Oct 23, 2024
@jfinkhaeuser
Author

jfinkhaeuser commented Oct 23, 2024 via email

@bumblefudge

But it's a reputation-system primitive you're proposing at the protocol level, to get today's ad-hoc reputation system on firmer ground, non? I'm trying to understand what the options are process-wise, and whether anyone wants to work on it with you. Generally, anything at the protocol level has to be FEPd, prototyped, and taken to production by 2 or more implementations before it really comes into scope for the CG and/or normative changes to the protocol itself... so I'm just trying to suss out next steps here for ya.

@jfinkhaeuser
Author

jfinkhaeuser commented Oct 24, 2024 via email
