Idea: statements of (dis-)trust #20
As I mentioned in the other thread, "trust" is a little overdetermined here, as it slides between the human/social meaning of the term and the engineering term (each "upvote/downvote" is a trust signal that can be computed in realtime on a social graph as an input to mechanical decisions like "allow Bob's message into Alice's inbox based on current trust-and-proximity score"). But a more generic term might be "reputation system primitives", in that complex and nuanced queries can be run over a social graph that evolves over time if the raw data points are specified narrowly enough to be almost invariants over time. There is a ton of prior art I can point you to if you're curious, that might be variously relevant to fediverse software design, but at a high level I think reputation is a can of worms that many people try to avoid, particularly at the standards/infrastructure layer... even in moderation discussions!

Here's a [detailed document about a "minimum viable reputation system" created for a decentralized/horizontal package manager in the cryptocurrency space](https://chainagnostic.org/CAIPs/caip-261#specification), and here are [a bunch of notes](https://github.com/search?q=org%3Aweboftrustinfo%20reputation&type=code) from essays, prototypes, and presentations on engineering reputation systems from a [decentralized-identity design conference](https://www.weboftrust.info/) I've attended a few times. I'm not sure how much appetite there is to engineer a reputation system on top of the fediverse... but I think it's fair to assume that an implicit/closed-source/under-the-hood one will be developed by, um, at least one commercial fediverse platform (if not multiple of them as more enter the space), so I'm always game to kick the ball around if this is a thing any projects actually have budget/human-hours allocated to working on!
That is why I am deliberately not proposing a reputation system 🤷‍♂️
But it's a reputation-system primitive you're proposing at the protocol level, to get today's ad-hoc reputation system on firmer ground, non? I'm trying to understand what the options are process-wise, and if anyone wants to work on it with you. Generally anything at protocol level has to be FEPd, prototyped, and taken to production by 2 or more implementations before it really comes into scope for the CG and/or normative changes to the protocol itself... so I'm just trying to suss out next steps here for ya.
Well, yes and no.
No, I'm not proposing a reputation system primitive as such. In very general terms, what I'm proposing isn't so different from an RDF or semantic triple, conceptually. I've also mentioned elsewhere it could be signed (as a JWT claim), which makes it useful for federated/distributed/decentralized verification.
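To make the signing idea concrete, here is a minimal sketch. The thread mentions signing the statement as a JWT claim; for a self-contained illustration this uses a plain HMAC over a canonical JSON encoding instead of an actual JWS, and the field names ("subject", "verb", "object") are assumptions, since the proposal leaves the vocabulary open:

```python
import hashlib
import hmac
import json

# Hypothetical field names and values; the actual vocabulary is left open.
statement = {
    "subject": "https://example.social/users/alice",  # who makes the statement
    "verb": "trusts-moderation-of",                   # picked from some vocabulary
    "object": "https://other.example/users/bob",      # whom the statement is about
}

def sign_statement(statement, key):
    """Sign a canonical JSON encoding of the triple. HMAC stands in here
    for the JWT/JWS signature mentioned in the thread."""
    canonical = json.dumps(statement, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_statement(statement, key, signature):
    """A relying party recomputes the signature over the same canonical form."""
    return hmac.compare_digest(sign_statement(statement, key), signature)

sig = sign_statement(statement, b"shared-secret")
assert verify_statement(statement, b"shared-secret", sig)
```

The point is only that a narrowly specified triple has an obvious canonical form to sign, which is what makes it usable for federated verification.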
The verb could be picked from any vocabulary. In a fedi context, subject and object are most likely actors, instances or activities. Going the full RDF route here is going to invite more trouble than it solves, so let's pretend it's limited to this.
But, yes, more specifically I'm proposing that a semantic triple where subject/object are actors and the verb makes some trust statement is useful. Specifically, I think it's useful for scaling moderation beyond the flagging of stuff, towards recommendations between mods within or across instances. Pretty soon you'll reach a point where recommendations go beyond the individual relationship between mods. FediBlock is a perfect example of that.
Is having such a primitive a reputation system? No. Can you build one on top? Yes.
Should you? No... not in the sense I've seen them done.
Specifically, I've seen several obvious flaws in reputation systems:
- When the number of recommendations outweighs the quality in considering or presenting them
- When there is automatic consumption of recommendations
- When the chain of recommendations becomes too long (more than one step from origin to recipient, basically, but whatever)
- etc.
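The chain-length flaw above can be sketched as a simple guard: accept a recommendation only if it reaches you in at most one step from an origin you trust directly. The data shapes and names here are illustrative, not from any spec:

```python
# A recommendation path lists actors from the origin to the recipient,
# e.g. ["admin@a.example", "me@my.example"] is a direct (one-step) delivery.

def chain_length(recommendation_path, trusted_origins):
    """Return the number of forwarding steps from origin to recipient,
    or None if the origin is not directly trusted."""
    if not recommendation_path or recommendation_path[0] not in trusted_origins:
        return None
    return len(recommendation_path) - 1

def acceptable(recommendation_path, trusted_origins, max_steps=1):
    """Reject chains that are too long -- the flaw named in the list above."""
    steps = chain_length(recommendation_path, trusted_origins)
    return steps is not None and steps <= max_steps

trusted = {"admin@a.example"}
# Direct recommendation from a trusted admin: fine.
assert acceptable(["admin@a.example", "me@my.example"], trusted)
# Relayed twice, or originating from an untrusted actor: rejected.
assert not acceptable(["admin@a.example", "relay@c.example", "me@my.example"], trusted)
assert not acceptable(["x@b.example", "me@my.example"], trusted)
```

This is the kind of conservative consumption policy the list argues for, as opposed to transitively trusting friends-of-friends-of-friends.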
So, no, I am not aiming for any such system.
But I'd still like to have a better UX when someone I know tells me "ugh that dude is a creep", which will happen only if my instance can tell based on some data that this kind of statement is being made.
Beyond that, there are other uses for the primitive. It still doesn't mean you have to build a reputation system.
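The "that dude is a creep" UX could look something like this: surface distrust statements made by people I actually follow, with no scoring and no automatic action. The verb names and field layout are invented for illustration:

```python
# Hypothetical vocabulary terms; the proposal leaves the verb set open.
DISTRUST_VERBS = {"distrusts", "warns-about"}

def warnings_for(actor, statements, my_follows):
    """Return the people I follow who have issued a distrust statement
    about the given actor. No score is computed and nothing is blocked;
    the instance merely surfaces the statements so the UI can show them."""
    return [
        s["subject"]
        for s in statements
        if s["object"] == actor
        and s["verb"] in DISTRUST_VERBS
        and s["subject"] in my_follows
    ]

stmts = [
    {"subject": "friend@a.example",   "verb": "distrusts", "object": "creep@b.example"},
    {"subject": "stranger@c.example", "verb": "distrusts", "object": "creep@b.example"},
]
# Only the statement from someone I actually follow is surfaced.
print(warnings_for("creep@b.example", stmts, {"friend@a.example"}))
```

That is the whole mechanism: the primitive makes the statement legible to software, and the decision about what to do with it stays with the human.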
I would like to see the ability to propagate statements of trust. This could be in the form of an activity.
A statement of trust effectively identifies three things: who is making the statement (the origin), who or what the statement is about (the target), and what kind of trust is being expressed.
The use case I have in mind is to help manage the complexity of FediBlocks #19 and/or Flags #14. Conceptually, if I trust a particular admin group to make good calls regarding CSAM block recommendations, I could announce this to other admins.
It follows that the statement in question can be structured data, with fields for the origin, the target, and the kind of trust expressed. I write "can be" simply to leave the general mechanism open to other uses. (Side note: if we use vocabularies for all three, we have RDF triples.)
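A sketch of what such a structured statement could look like, written as a Python dict in ActivityStreams style. The type name "TrustStatement" and the verb IRI are invented for illustration; the proposal deliberately leaves the vocabulary open:

```python
# Hypothetical shape for the CSAM-block-recommendation use case above.
trust_statement = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "TrustStatement",                          # invented activity type
    "actor": "https://my.example/users/admin",         # origin of the statement
    "object": "https://mods.example/groups/csam-wg",   # who/what is trusted
    "verb": "https://example.org/ns#trustsBlockRecommendationsFrom",  # from some vocabulary
}

# Origin, verb, target: the three slots of an RDF-style triple.
triple = (
    trust_statement["actor"],
    trust_statement["verb"],
    trust_statement["object"],
)
print(triple)
```

Since it is shaped like an activity, it can be addressed, delivered, and boosted like any other activity, which matters for the propagation idea below.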
Assume that, as a recipient of trust statements about moderation recommendations, I review those. I can decide to myself issue a statement to the effect of "origin X makes good calls regarding recommendations" (to simplify this, it could merely be a boost of the original statement if it is an activity).
The upshot is to create a web of trust, with two major differences to the GPG-style version:
Actual use case: as I wrote in #19, I do not think that automatically implementing block recommendations is a good idea. On the other hand, with both Flags and recommendations being shared, administrators may find that the software they use can provide better user interfaces, but the flood of actionable items does not diminish.
Without going towards automation, statements of trust can be used to optimize the user experience for administrators. For example, one could group and/or prioritize trusted origins such that they can be acted upon with different processes: known trusted sources might require only a single administrator to act upon a recommendation, while others might involve consensus seeking. Identifying the origin can also permit, for example, a different process for recommendations from a specialized organization with dedicated resources.
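The grouping-and-prioritizing idea can be sketched as a small routing table. The tier names and processes are examples only; as the text says, the actual policy stays with the admin team:

```python
# Example processes per origin tier -- entirely up to the admin team.
PROCESS_BY_TIER = {
    "trusted": "single-admin-signoff",  # one admin may act directly
    "known": "consensus",               # needs agreement among admins
    "unknown": "manual-review-queue",   # full review, lowest priority
}

def route(recommendation, trusted, known):
    """Pick the handling process for a block recommendation based on
    how its origin is classified. Nothing is applied automatically."""
    origin = recommendation["origin"]
    if origin in trusted:
        tier = "trusted"
    elif origin in known:
        tier = "known"
    else:
        tier = "unknown"
    return PROCESS_BY_TIER[tier]

rec = {"origin": "csam-wg@mods.example", "target": "bad.example"}
# A directly trusted origin lands on the fast path.
print(route(rec, trusted={"csam-wg@mods.example"}, known=set()))
```

Note that the primitive only supplies the origin identification; everything after the routing decision is still a human process.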
The exact policy, of course, is up to the admin team. The point of this idea is to provide a basis for such choices.