Context: We never built an open reputation system for the internet. That was a mistake, and it is one of the reasons we have so much spam and fake news.
But now, as governance takes an ever more prominent role in technology, such as in the ever-growing list of decentralized projects (e.g. DAOs), we need to figure out how to give more power to “better” actors within a given community or context, and to disempower or keep out detractors and direct opponents. All without putting a centralized authority in place.
Proposal: Here is a simple but, I think, rather powerful proposal. We use an online discussion group as an example, but this is a generic protocol that should be applicable to many other applications that can use reputation scores of some kind.
-
Let’s call the participants in the reputation system Actors. As this is a decentralized, non-hierarchical system without a central power, there is only one class of Actor. In the discussion group example, each person participating in the discussion group is an Actor.
An Actor is a person, or an account, or a bot, or anything really that has some ability to act, and that can be uniquely identified with an identifier of some kind within the system. No connection to the “real” world is necessary, and it could be as simple as a public key. There is no need for proving that each Actor is a distinct person, or that a person controls only one Actor. In our example, all discussion group user names identify Actors.
-
The reputation system manages two numbers for each Actor, called the Reputation Score S and the Rating Tokens Balance R. It does this in a way that makes it impossible for those numbers to be changed outside of this protocol.
For example, these numbers could be managed by a smart contract on a blockchain which cannot be modified except through the outlined protocol.
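To make the state concrete, here is a minimal sketch of the per-Actor bookkeeping in Python. The names `Actor`, `Ledger`, and the method `add_actor` are illustrative assumptions, not part of the proposal; in practice this state would live in a smart contract that only the protocol's operations can modify.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    actor_id: str   # any unique identifier within the system, e.g. a public key
    S: float = 0.0  # Reputation Score
    R: float = 0.0  # Rating Tokens Balance

class Ledger:
    """Holds the per-Actor (S, R) pairs managed by the protocol."""
    def __init__(self):
        self.actors: dict[str, Actor] = {}

    def add_actor(self, actor_id: str, bootstrap_S: float = 0.0) -> Actor:
        # New Actors normally start at zero; bootstrap Actors may start higher.
        a = Actor(actor_id, S=bootstrap_S)
        self.actors[actor_id] = a
        return a
```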
-
The Reputation Score S is the current reputation of some Actor A, with respect to some subject. In the example discussion group, S might express the quality of content that A is contributing to the group.
If there is more than one reputation subject we care about, there will be an instance of the reputation system for each subject, even if it covers the same Actors. In the discussion group example, the reputation of contributing good primary content might be different from reputation for resolving heated disputes, for example, and would be tracked in a separate instance of the reputation system.
-
The Reputation Score S of any Actor automatically decreases over time. This means that Actors have a lower reputation if they were rated highly in the past than if they were rated highly recently.
There’s a parameter in the system, let’s call it αS, which reflects S’s rate of decay, such as 1% per month.
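The periodic decay step can be sketched as follows, assuming αS is expressed as a per-period fraction (e.g. 0.01 for 1% per month); the function name is illustrative.

```python
def decay_scores(scores: dict[str, float], alpha_S: float) -> dict[str, float]:
    """Reduce every Actor's Reputation Score S by the decay rate alpha_S."""
    return {actor: S * (1.0 - alpha_S) for actor, S in scores.items()}
```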
-
Actors rate each other, which means that they take actions, as a result of which the Reputation Score of another Actor changes. Actors cannot rate themselves.
It is out of scope for this proposal to discuss what specifically might cause an Actor to decide to rate another, and how. This tends to be specific to the community. For example, in a discussion group, ratings might often happen if somebody reads newly posted content and reacts to it; but it could also happen if somebody does not post new content because the community values community members who exercise restraint.
-
The Rating Tokens Balance R is the set of tokens an Actor A currently has at their disposal to rate other Actors. Each rating that A performs decreases their Rating Tokens Balance R, and increases the Reputation Score S of the rated Actor by the same amount.
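The rating operation described above can be sketched as a simple transfer, assuming the one-to-one token-for-reputation exchange stated in the text; function and parameter names are illustrative.

```python
def rate(R: dict[str, float], S: dict[str, float],
         rater: str, rated: str, tokens: float) -> None:
    """Spend `tokens` from the rater's Rating Tokens Balance to raise the
    rated Actor's Reputation Score by the same amount."""
    if rater == rated:
        raise ValueError("Actors cannot rate themselves")
    if tokens <= 0 or tokens > R[rater]:
        raise ValueError("invalid number of Rating Tokens")
    R[rater] -= tokens
    S[rated] += tokens
```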
-
Every Actor’s Rating Tokens Balance R gets replenished on a regular basis, such as monthly. The regular increase in R is proportional to the Actor’s current Reputation Score S.
In other words, Actors with high reputation have a high ability to rate other Actors. Actors with a low reputation, or zero reputation, have little or no ability to rate other Actors. This is a key security feature inhibiting the ability for bad actors to take over.
-
The Rating Token Balance R is capped to some maximum value Rmax, which is a percentage of the current reputation of the Actor.
This prevents passive accumulation of rating tokens that then could be unleashed all at once.
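The replenishment and the Rmax cap from the last two sections can be sketched together, assuming the cap is a fixed fraction `cap_frac` of the Actor's current Reputation Score; names are illustrative.

```python
def replenish(R: dict[str, float], S: dict[str, float],
              pool: float, cap_frac: float) -> None:
    """Distribute `pool` new Rating Tokens proportionally to Reputation
    Scores, then clamp each balance to Rmax = cap_frac * S."""
    total_S = sum(S.values())
    for actor in R:
        if total_S > 0:
            R[actor] += pool * S[actor] / total_S
        # Tokens above the cap are eliminated, preventing passive hoarding.
        R[actor] = min(R[actor], cap_frac * S[actor])
```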
-
The overall number of new Ratings Tokens that is injected into the system on a regular basis as replenishment is determined as a function of the desired average Reputation Score of Actors in the system. This enables Actors’ average Reputation Scores to be relatively constant over time, even as individual reputations increase and decrease, and Actors join and leave the system.
For example, suppose the desired average Reputation Score is 100 in a system with 1000 Actors, the monthly decay reduced the sum of all Reputation Scores by 1000, 10 new Actors joined over the month, and 1000 Rating Tokens were eliminated because of the cap. Then 3000 new Rating Tokens would be distributed to all Actors, proportionally to their then-current Reputation Scores: 1000 to offset the decay, 1000 to fund the 10 new Actors at the target average of 100, and 1000 to replace the tokens lost to the cap.
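The accounting in this example can be written as a small formula. The identity below (replace decay losses, fund new Actors at the target average, and replace tokens eliminated by the cap) is my reading of the intent; the function name is illustrative.

```python
def replenishment_pool(decay_loss: float, new_actors: int,
                       target_avg_S: float, cap_loss: float) -> float:
    """Total new Rating Tokens to inject this period so that the average
    Reputation Score stays near target_avg_S."""
    return decay_loss + new_actors * target_avg_S + cap_loss
```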
-
Optionally, the system may allow downvotes. In this case, the rater’s Rating Token Balance still decreases by the number of Rating Tokens spent, while the rated Actor’s Reputation also decreases. Downvotes may be more expensive than upvotes.
There appears to be a dispute among reputation experts on whether downvotes are a good idea or not. Some online services support them, some don’t, presumably for good reasons that depend on the nature of the community and the subject. Here, we can model this simply by introducing another coefficient between 0 and 1, which reflects the decrease in reputation of the downvoted Actor per Rating Token spent by the downvoting Actor. At 1, downvotes are as effective as upvotes; at 0, no amount of downvoting can actually reduce somebody’s score.
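A downvote can then be sketched like this, assuming a coefficient `beta` in [0, 1] as just described; names are illustrative.

```python
def downvote(R: dict[str, float], S: dict[str, float],
             rater: str, rated: str, tokens: float, beta: float) -> None:
    """Spend `tokens` from the rater's balance to reduce the rated Actor's
    Reputation Score by beta * tokens."""
    if rater == rated:
        raise ValueError("Actors cannot rate themselves")
    if tokens <= 0 or tokens > R[rater]:
        raise ValueError("invalid number of Rating Tokens")
    R[rater] -= tokens              # tokens are spent either way
    S[rated] -= beta * tokens       # beta = 1: full effect; beta = 0: no effect
```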
-
To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each gets allocated a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.
Some observations:
-
Once set up, this system can run autonomously. No oversight is required, other than perhaps adjusting some of the numeric parameters until enough real-world experience has been gained about what those parameters should be.
-
Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high Reputation Scores. Note that they can only acquire reputation by being good Actors in the eyes of already-good Actors. So in this respect the system favors the status quo and community consensus over facilitating revolution, which is probably desirable: we don’t want a reputation score for “verified truth” to be easily hijackable by “fake news”, for example.
-
Anybody creating many accounts, i.e. many Actors, has only a very limited ability to increase the total reputation they control across all of their Actors: each new Actor starts with zero reputation, and any Rating Tokens spent to rate them up come out of the creator’s own balances.
-
This system appears to be generally applicable. We discussed the example of rating “good” contributions to a discussion group, but it could also be applied to things such as “good governance”: Actors who consistently perform activities that others believe are good for governance would be rated higher, and their governance Reputation Score could then give them more votes in governance decisions (such as adjusting the free numeric parameters, or other governance activities of the community).
Known issues:
-
This system does not distinguish reputation for the desired value (like posting good content) from reputation for rating other Actors well (compare driving a car well with being able to judge others’ driving ability, as a driving instructor must; presumably there are some bad drivers who are good at judging others’ driving, and vice versa). This could probably be solved with two instances of the system that are suitably connected (details TBD).
-
There is no privacy in this system. (This may be a feature or a problem depending on where it is applied.) Everybody can see everybody else’s Reputation Score, and who rated them how.
-
If implemented on a typical blockchain, the financial incentives are backwards: it would cost money to rate somebody (a modifying operation on the blockchain), while obtaining somebody’s score (a read-only operation) would typically be free. However, rating somebody does not create an immediate benefit, while having access to ratings does. So a smart contract would have to be suitably wrapped to present the right incentive structure.
I would love your feedback.
This proposal probably should have a name. Because it can run autonomously, I’m going to call it Autorep. And this is version 0.5. I’ll create new versions when needed.
from Hacker News https://ift.tt/7D1Uz3O