I’m terribly frustrated with common social networks, and by extension the spread of misinformation and the spread of hate.
This is a great time to be an a-hole, or a blowhard, or to have a desire to spread a viral idea to the world if you have no scruples.
We are in the golden age of BS.
Social media companies love to say they are bringing the whole world together. Zuck thinks that the world magically gets better if more and more people can get in direct contact with each other.
Unfortunately this is a naïve view of things. On social media, the things that spread are the things that trigger a quick reaction. Subtlety, nuance, and complex subjects don’t do as well.
This is why a-holes and blowhards thrive in this era. Anything that shocks people gets to spread. Insults, controversy, sowing division: all are much “liked” on social media.
It is not a surprise that people resort to making their point as “triggering” as possible, so that it gets re-shared. This leads to oversimplification, or leaving out mitigating factors, or simply making things up.
I don’t like any of this, so I’ve been asking myself the question: how could social media be designed such that it is much harder for jerks to be jerks?
People Of The Net
At a fundamental level, this shouldn’t be too hard. Most humans are cordial to each other, and don’t start trash-talking to someone’s face for no reason.
The internet is different. People who don’t know each other get into debates about the direction of their country. This is the promise of social media: connect with anybody about meaningful things.
In the physical world, neighbors who know they aren’t on the same political side usually prefer to just skip those conversations and instead talk about the kids’ soccer match, or any other topic they can discuss easily.
How can we bring civility to tough discussions?
It should be possible to come up with purely objective “acceptable behavior” that every participant would always have to adhere to. A code of conduct that doesn’t tell you what you can or can’t say, but one that mandates decency.
Next, we need to figure out who decides whether a post or comment breaks the “acceptable behavior” rule. The typical approach is to use moderators. Personally, I don’t like this approach. Moderators are more powerful than other users, and that concentration of power usually leads to problems.
I would like an approach where every user is a moderator. This of course sets up a situation where people use moderation as a tool to abuse other users (moderation tools are always used against users).
To counter that, moderation itself should be moderated. If a user flags a post as “offensive” when it clearly is not, that contribution could be flagged as “bad moderation”.
Now of course we’re set up for an endless cascade of users flagging flags. How does it end?
I’m thinking that maybe it doesn’t matter thanks to another concept:
Web Of OK People
There could be a web-of-trust type of thing, but it would be a web-of-OK-people.
Basically, every user would have a list of people they consider to be “OK”, as in stand-up folks who aren’t going to throw feces around the room. A user would mark as OK-people anybody they know, either in person, or because they’ve witnessed their good behavior around the net for long enough to know they are stand-up folks.
Now when there is a controversy, any flag, moderation, or comment is more likely to be shown to you if it comes from one of your OK-people.
A score could be given to any post based on the reactions of your OK-people: if they “liked” it, the score goes up; if they flagged it, it goes down. If they flagged a flag, any negative effect of that flag would be removed.
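To make the idea concrete, here is a minimal sketch of that scoring rule. All the names (`post_score`, the reaction kinds, the weight values) are illustrative assumptions, not part of any real system: likes from OK-people add their weight, flags subtract it, and a flag whose author was themselves flagged by an OK-person is voided.

```python
def post_score(reactions, weights):
    """Score a post from the reactions of the viewer's OK-people.

    reactions: list of (user, kind, target) tuples, where kind is
    "like", "flag", or "flag_flag"; for a "flag_flag", target names
    the user whose flag is being countered (None otherwise).
    weights: how much the viewer trusts each user (strangers: 0).
    This is a hypothetical sketch, not a real platform's algorithm.
    """
    # Flags countered by a trusted flag-of-flag are voided entirely.
    voided = {target for user, kind, target in reactions
              if kind == "flag_flag" and weights.get(user, 0.0) > 0}
    score = 0.0
    for user, kind, target in reactions:
        w = weights.get(user, 0.0)  # strangers carry no weight
        if kind == "like":
            score += w
        elif kind == "flag" and user not in voided:
            score -= w
    return score
```

For example, with `weights = {"bob": 1.0, "carol": 0.5}`, a like from bob and a flag from carol would net 0.5; if bob also flagged carol’s flag, the flag is voided and the score is 1.0.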
Unfortunately, a user would need hundreds or thousands of OK-people just to get some coverage of the millions of posts in an active social network. Instead, the system would find who the OK-people themselves consider OK-people, and count those as OK-people too, but with less weight. Then it would go another step, and another, and another, until it amassed for each user a large list of OK to OK-ish people.
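That “another step and another” expansion can be sketched as a breadth-first walk over the OK-people graph, with trust decaying at each hop. The data structure, decay factor, and hop limit below are all assumptions for illustration:

```python
from collections import deque

def trust_weights(start_user, ok_lists, decay=0.5, max_hops=3):
    """Expand a user's OK-people list transitively.

    ok_lists maps each user to the set of people they marked OK
    (hypothetical representation). Direct OK-people get weight 1.0;
    each further hop multiplies the weight by `decay`, so more
    distant OK-ish people have less power.
    """
    weights = {}
    frontier = deque([(start_user, 0)])
    seen = {start_user}
    while frontier:
        user, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for friend in ok_lists.get(user, set()):
            if friend not in seen:
                seen.add(friend)
                weights[friend] = decay ** hops  # hop 0 -> 1.0, hop 1 -> 0.5, ...
                frontier.append((friend, hops + 1))
    return weights
```

With a chain alice → bob → carol → dave, alice would trust bob at 1.0, carol at 0.5, and dave at 0.25. The hop limit keeps the computation bounded, which also hints at why the computational cost at real-network scale is an open question.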
Sane Media Not Social Madness
With a web-of-OK-people weighing moderations and flags on content, the content a user sees can be curated for them.
While users of current social media can curate their feed by following good people, things get ugly when you look at replies to a post, or look at the comments section on YouTube for example (yikes!).
With the web-of-OK-people eliminating the replies by those who don’t measure up, social media could be a decent place to be again.
The community would essentially do personalized distributed censorship of unacceptable posts, and if a user thought the censorship was wrong, they could adjust who is in their list of OK-people so that it better matches their tastes.
An army of bots wouldn’t get very far. A bad actor could create thousands of accounts, but with no track record (or a bad one if any) and no personal connections, most bots won’t be considered OK-people by anybody, eliminating their influence.
Look, I don’t know if this would ever work. It might be an OK theory but a disaster in practice. I have no idea.
Wikipedia experimented with something like this at some point, but are no longer using it. I’ll need to research why.
There are questions around the computational demands of such a system. It’s not clear it’s even feasible at scale.
I can’t help but wonder if this would just create highly tuned filter bubbles. Users who want more diverse views could get that by adding OK-people who see things differently but aren’t jerks.
I worry as well about onboarding new users. How does a fresh new user get anywhere on such a system if they don’t already know people? There would have to be a patronage system to help get people started.
Alright, enough questions. There is bound to be a lot more to this than what I wrote, but it felt good to offload it.
This was day 13 of the #100DaysToOffload challenge.