Misinformation ‘experts’ are paving the way for more censorship

The Internet has always been filled with junk: porn, flamewars, and the dreaded misinformation. According to a new study in Nature, it's no more toxic today than in the past. Levels of rudeness and hostility online have stayed the same over 34 years and across practically every platform.

The only thing that has changed is the surface area. Not only are more people online, but more people are online all the time. The Internet has gone from a place where you could theoretically get your news to the place where, with few exceptions, most people get their news. It's understandable that people would feel more affected by the Internet and mistake that for a change in toxicity. The real change, as it turns out, is our time online.

According to the researchers behind the study, toxicity online is just human nature. In other words, the danger lies in the medium, how we communicate, rather than in what we say.

This raises a question: why do we focus on moderating toxic comments and misinformation, with entire industries of so-called experts, instead of trying to moderate our Internet usage?

A recent episode of 60 Minutes featured misinformation experts who argued that researchers are experiencing a chilling effect on social media platforms because of pushback from Republican policymakers, who feel that claims about misinformation are being used to silence conservative voices. The experts counter that conservatives do spread more false, misleading, or downright dangerous information.

One of their recommendations for solving that problem is a process called ‘pre-bunking’. In their words, pre-bunking is simply arming users with the tools to identify these posts. They fear that the public will struggle without this help; Republicans, rightfully, say that all the experts are doing is vilifying conservative positions.

The researchers interviewed on 60 Minutes framed the issue in a way that sounds like a plea for censorship. But this is a slippery slope: why should any speech protected by the First Amendment be censored, including by labelling? And why shouldn't we trust the public to use their best judgement?

A recent article from R Street points out another problem: defining what is and isn't misinformation in the first place isn't clear cut, something these experts seemed to take for granted in the 60 Minutes segment. How do you regulate something you can't define? As the Twitter Files showed us, politicisation and weaponisation of the term are very real issues too: conservative voices were suppressed.

But let's assume the term is clearly defined, as is the impact. Even then, handling misinformation is more complex than settling on the best content moderation policies.

Determining expertise has never been more difficult. In the 2000s, for example, you could advise people to avoid personal blogs or pseudonymous posters. Today, the rules around pseudonymity and blogging have changed. It's plausible that a Substack written under a pseudonym may be more reliable, and even more widely read, than a piece from a legacy publication. But how do you know which ones you can and can't trust?

While imperfect, X's Community Notes feature is a good solution. It allows users to add context and clarification to potentially misleading posts, providing a layer of fact-checking and nuance that can help readers navigate complex issues. By crowdsourcing this process, Community Notes taps into the X user base's collective knowledge and expertise, which can be more effective than relying on a centralised team of moderators or fact-checkers.
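For the curious, the ranking algorithm behind Community Notes is open source, and its core idea, ‘bridging’, can be sketched in a few lines. The toy Python below is a deliberately simplified illustration, not the production system: the data, the single viewpoint factor, and the hyperparameters are all invented. It fits a matrix factorization in which a note's intercept captures helpfulness only after a viewpoint factor has absorbed one-sided agreement, so a note praised across the divide scores higher than one praised by a single faction.

```python
import numpy as np

# Toy ratings: (user_id, note_id, rating), 1 = "helpful", 0 = "not helpful".
# Note 0 is rated helpful by everyone; note 1 only by users 0 and 1.
ratings = [
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),
]

n_users, n_notes, dim = 4, 2, 1
rng = np.random.default_rng(0)
mu = 0.0                                      # global intercept
user_b = np.zeros(n_users)                    # user intercepts (rater leniency)
note_b = np.zeros(n_notes)                    # note intercepts: the "helpfulness" score
user_f = rng.normal(0.0, 0.1, (n_users, dim))  # user viewpoint factors
note_f = rng.normal(0.0, 0.1, (n_notes, dim))  # note viewpoint factors
lr, reg = 0.05, 0.1

# Plain SGD on squared error with L2 regularization.
for _ in range(2000):
    for u, n, r in ratings:
        pred = mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n]
        err = r - pred
        mu += lr * err
        user_b[u] += lr * (err - reg * user_b[u])
        note_b[n] += lr * (err - reg * note_b[n])
        grad_u = err * note_f[n] - reg * user_f[u]
        grad_n = err * user_f[u] - reg * note_f[n]
        user_f[u] += lr * grad_u
        note_f[n] += lr * grad_n

# Expected outcome: note 0 ends with the higher intercept, because the split
# on note 1 is largely explained away by the viewpoint factors.
for n in range(n_notes):
    print(f"note {n}: helpfulness intercept = {note_b[n]:+.2f}")
```

What this illustrates is why the crowdsourcing here is more than majority rule: under a scheme like this, a note needs agreement from raters who normally disagree before it earns a high score.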

Of course, there are potential downsides to a crowdsourced approach. Users may have biases and agendas, and there's a risk that popular opinion could drown out dissenting voices or minority perspectives. It's also unclear how quality control works outside of the voting system.

But the alternative, relying on a small group of experts to determine what is and isn't true, is far more problematic.

As for the misinformation experts? They should publish their findings and create supplementary material that people can opt into, should they want to use it to educate themselves. Given the potential for bias (and their history), the problem comes when they become the sole adjudicators of what is and isn't true.

It's not that misinformation researchers should be silenced. But we should be careful when they are treated as the be-all and end-all of speech online.
