Something broke with our approach to disagreements. We went from simply arguing with people who were wrong on the Internet to demanding that people be deplatformed because they’re wrong, according to people who seem to be right equally rarely. Thanks to the aggregation of content onto a few major platforms, a few people have the arbitrary ability to make other people vanish from public discourse. At this point, it’s impossible to tell who’s right, who’s wrong, and who’s been disappeared for having views deemed “unacceptable” by people who have no business making that determination. It’s starting to seem like the reason you can’t trust just anything you see online has moved from “anybody can post anything on the Internet” to “publishing anything too contrarian will get you kicked off.” That’s not good.
So what are we trying to do with all of these “removing from our platform” decisions, anyway? According to all the people demanding that sites, pages, and channels be taken down, it’s all to fight “misinformation,” which dictionary.com defines as “false information that is spread, regardless of whether there is intent to mislead.” You’d assume that people saying “misinformation” actually mean “disinformation,” but I rarely see that kind of distinction being made, or argued for. To be honest, it seems like the people who are so hellbent on taking things off the Internet don’t really care about differentiating between “honest mistake” and “deliberate lie” (and even “deliberate lie” seems like too low a bar, unless we’re all ready to agree that politicians and their supporters need to go where nobody has to listen to them).
On one hand, this is a content moderation problem, which is already hard. On the other, it’s a free speech issue. Before someone links it – yes, I’ve seen the XKCD comic. I’ve been meaning to address it for a while. There’s the law around the First Amendment and how it relates to private entities, which the comic explains very well (especially in terms of moderation). There’s also the principle of the First Amendment, letting people speak as a default position in society, which gets utterly disregarded in all of this. In general, unless content runs against clear, unambiguous rules about what’s allowed, it should be allowed.
The problem is that the rules around content seem to be getting less clear, more ambiguous, and (depending on who you ask) more opinionated about the only allowable perspective. Let’s skip “misinformation” for now and just look at Twitter’s new media policy. Based on the updated policy, do you think Twitter would have kept up the videos of:
- The George Floyd murder
- The Ahmaud Arbery murder
- The Kyle Rittenhouse shootings
If your answer to any of these was “No,” that’s a problem. If your answer was “No” to some and “Yes” to others, that’s even worse, because it means you think Twitter, ostensibly a platform, is altering what you see to fit what it believes. The YouTube channel Virtual Legality has a great analysis of the new Twitter CEO’s comments about Twitter’s role in public discourse, and what they mean in terms of Section 230 of the Communications Decency Act (the law that everyone wants to amend to make sites censor Republicans more, or less, depending on the political affiliation of the person complaining about it).
The problems with Section 230 fall into two categories (again, depending on the political affiliation of whoever’s complaining, and bear in mind I’m analyzing this despite not being a lawyer):
- Section 230 means social applications don’t have to police the content on their sites, allowing “bad people” to post their “misinformation” without consequence (never mind the fact that the whole point of Section 230 is that liability for content posted on a platform belongs to the poster).
- Section 230 means that platforms that act as a public town square can arbitrarily censor posts from prominent speakers of one political persuasion while leaving up posts that violate the same policies from those who are more politically favored by the companies (the whole point of Section 230 was to allow platforms to moderate content on their sites without assuming legal liability for everything posted there).
In general, I’m not supportive of efforts to repeal or significantly change Section 230 – that law puts responsibility for the things people post on the poster, which is where it belongs. I would like to see more formal definitions around when a site moves from “platform” to “publisher.” The only “real” change I’d like to see is a rule saying that to qualify for Section 230 protections, you have to extend those protections to others on your platform. The point of this is to target actions like penalizing sites for their comments sections (something Google has done more than once), or apps getting de-listed because of their content moderation policies.
I don’t really think people are actually opposed to Section 230. I think the opposition is to rules that are vague and seem to be enforced arbitrarily. It feels worse than it is because rules enforcement appears to be consistently inconsistent – some things get flagged as violations, even if it seems like you have to stretch some definitions to do it, and some things don’t, even though they seem clear cut. It’s easy to call that “{site X} being biased,” but it’s not like they have an algorithm doing this. These decisions are being made by people, enforcing rules that are ambiguous. That ambiguity seems more and more deliberate with every decision and new policy.
Nobody at these companies is calling it “ambiguity,” of course. They call it “leaving room for making case-by-case judgments, since no two situations are exactly alike, and something can be a violation in one instance but not another.” That uncertainty, by the way, is what we adults call “ambiguity.” Determining whether something falls afoul of site rules shouldn’t rely on human judgment – if you can’t say definitively whether something is allowed, there shouldn’t be a rule about it in the first place.
These problems existed before these applications decided they were going to take on “misinformation on their platforms” (with a little nudging from Congress), but that decision put them in an awkward spot. These sites are already viewed as untrustworthy, and now they’re being asked to rule on “truth,” something they’re fundamentally unqualified to do. So, what do these sites do? They outsource the determination of truth to major media outlets, an industry that barely over 1 in 3 people consider trustworthy. For the record, those outlets are fundamentally unqualified to determine what is and isn’t true, too. The only real arbiter of truth is time, yet we’re expecting software companies to resolve truth and veracity instantly, as millions of people per second post whatever pops into their heads.
We’re told that this is important because all the misinformation online is eroding trust in institutions, and in the democratic process itself. It seems more like we’re seeing a lot of pressure from government and traditional media to censor what everyone else is saying because trust in those institutions was already eroded, and they think that by kicking everyone who disagrees with them offline, they’ll get it back by virtue of being the only people allowed to set the public discourse.
The effect of these censorship efforts – and these have crossed the line from simple content moderation to censorship over disapproval of the ideas being expressed – is that the people demanding accounts be shut down are going to succeed in convincing people not to trust what they see on sites like Facebook and Twitter, and they’re going to succeed in associating posts on Facebook and Twitter with misinformation. What they’re not going to succeed at is convincing people that they can trust what they see once this crusade is over, if it ever is. As DHH noted, conversations that would trigger emotional responses are moving offline.
That may actually be the best thing for user-generated content and opinions. Instead of having major sites that everyone visits because all the content is aggregated at those handful of links, giving you a simple, common point of censorship without useful alternatives, we’re all posting to a variety of smaller, more focused locations, and then letting the aggregation happen at the consumption end – in our email inboxes or in an RSS feed. I’ve already argued for this on multiple occasions, but it seems relevant again. Multiple subject-specific instances for posting can serve the same purpose as consumption-side content filtering – letting me follow the individuals I want in the contexts where I find their opinions most interesting.
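To make that consumption-end aggregation a bit more concrete, here’s a minimal sketch of what it looks like in practice – assuming Python with the third-party feedparser library, and with placeholder feed URLs standing in for whatever smaller, subject-specific sites you actually follow:

```python
# Minimal sketch of consumption-side aggregation: pull a handful of small,
# subject-specific feeds into one chronological reading list on my machine,
# instead of letting a platform decide what I see.
# Assumes the third-party feedparser library (pip install feedparser);
# the URLs below are placeholders, not real recommendations.
import time

import feedparser

FEEDS = [
    "https://example-blog-one.com/rss.xml",
    "https://example-blog-two.com/feed",
    "https://example-newsletter.com/atom.xml",
]

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        published = entry.get("published_parsed")  # time.struct_time or None
        entries.append((
            time.mktime(published) if published else 0.0,
            entry.get("title", "(untitled)"),
            entry.get("link", ""),
        ))

# Newest first – the "aggregation" happens here, at the consumption end,
# not on somebody else's servers.
for _, title, link in sorted(entries, reverse=True):
    print(f"{title}\n  {link}\n")
```

The point isn’t the script itself – any RSS reader or email digest does the same job – it’s that the filtering and ordering happen on the reader’s side, where no single company controls what gets through.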
The desire to purge the Internet of “unacceptable views” is recent, and odd considering that plenty of idiotic ideas floated around on the Internet for years without us feeling the need to banish people from public discourse over them. A greatest hits collection would include:
- We never landed on the moon
- 9/11 was a false-flag operation
- Barack Obama wasn’t a natural born US citizen
- Vaccines cause autism
Every single one of these dimwitted ideas was banished from popular conversations online because once they entered the mainstream, they got drowned out by counterpoints and evidence pointing out how stupid they are. Given that we’ve been shutting down bad ideas without kicking people offline, and without official government or media pronouncements of veracity, it raises the question of why people feel the need to do it now. The only explanation I can come up with is that the institutions and organizations claiming to be endangered by all this have squandered the trust and goodwill they had, and now they’re resentful of anyone who seems to have it, especially when those people disagree with them. If we’re really going to commit to getting misinformation and bad actors offline, perhaps we should start with the people claiming the authority, and ability, to decree what is and isn’t misinformation.