Archive for the ‘Censorship’ Category

Facebook attempts to walk the tightrope on censorship – Telecoms.com

Having criticized Twitter for poking the bear, Facebook seems to be adopting a more nuanced approach to policing its platform.

Twitter's decision to censor President Trump was an astounding mistake. Of course nobody, no matter how powerful, should be exempt from its policies, but if you're going to single out one of the most powerful people in the world, you had better make sure you have all your bases covered. Twitter didn't.

Facebook boss Mark Zuckerberg recognised Twitter's mistake immediately and announced during an interview with Fox News that Facebook "shouldn't be the arbiter of truth of everything people say online". Even his choice of news outlet was telling, as Fox seems to be the only one not despised by Trump. Zuckerberg was effectively saying "leave us out of this".

Twitter boss Jack Dorsey responded directly with a tweet thread, which at first attempted to isolate the decision to censor Trump to him alone, but then proceeded to talk in the first person plural.

Within a couple of days Zuckerberg posted further clarification of his position on, of course, Facebook. He noted the current violent public response to a man dying in US police custody served as a further reminder of the importance of getting these decisions right.

"Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician," wrote Zuckerberg. "We have been in touch with the White House today to explain these policies as well."

From that post we can see that Zuckerberg is still in favour of censorship, but sets the bar higher than Twitter and doesn't see the point in half measures. Worryingly for Zuckerberg, many Facebook employees have taken to Twitter to voice their displeasure at this policy, apparently demanding that Facebook does censor the President.

It's worth reflecting on the two forms of censorship Twitter has imposed on Trump. The first was simply to fact-check a claim he made about postal voting, appending a hyperlink to a statement from Twitter saying his claim was unsubstantiated, according to select US media outlets consistently hostile to Trump.

The second superimposed a warning label over the top of a Trump tweet which promised repercussions for rioting. The label reads: "This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public's interest for the Tweet to remain accessible." Note the capitalization of "Twitter Rules" and the clear admission that Twitter considers itself the arbiter of what is in the public interest. Clicking on the label reveals Trump's hidden tweet, which features the phrase "when the looting starts, the shooting starts".

That was apparently the bit that was interpreted as glorifying violence, and yet a subsequent Trump tweet, using exactly the same phrase, has not been subject to any censorious action by Twitter. That discrepancy alone (not to mention the fact that the labels don't survive the embedding process) illustrates the impossible position Twitter has put itself in. There are presumably millions of other examples of borderline glorifications of violence, let alone direct threats, that it has also let pass. Such inconsistent censoring can easily be viewed as simple bias, an attempt to tip the scales of public conversation in its favour.

For many people censorship is a simple matter of harm reduction. Why would anyone want to allow speech that could cause harm? The mistake they make is to view harm as an objective, absolute concept on which there is universal consensus. As Zuckerberg's post shows, the perception of harm is often highly subjective, and the threshold at which to censor harmful speech is entirely arbitrary.

There is clearly a lot of demand for extensive policing of internet speech nonetheless, but social media companies have to resist it if they want to be able to claim they're impartial; there's just no way to keep bias out of the censorship process. If they don't resist, they risk being designated as publishers and thus legally responsible for every piece of content they host. This would be calamitous for their entire business model, which makes it all the more baffling that Dorsey would so openly risk such an outcome.

Read the rest here:
Facebook attempts to walk the tightrope on censorship - Telecoms.com

The march of progressive censorship – Spectator.co.uk

It's official: criticising Black Lives Matter is now a sackable offence, even here in the British Isles, thousands of miles away from the social conflict currently embroiling the US. As protesters again fill the streets of a rainy London on Saturday, as part of a now internationalised backlash against the brutal police killing of George Floyd by Minneapolis police, those who criticise them do so at their peril, as two men have recently found out.

Stu Peters, a presenter on Manx Radio, has been suspended, pending an investigation, for an on-air exchange with a black caller. He said nothing racist; you can read the transcript for yourself. What he did was rubbish the idea of white privilege: "I've had no more privilege in my life than you have." And he questioned the wisdom of staging a protest on the Isle of Man against a killing in Minnesota: "You can demonstrate anywhere you like, but it doesn't make any sense to me."

For this, he has been taken off air. Manx Radio has even referred the exchange to the Isle of Man's Communications Commission to assess whether any broadcast codes have been broken. And for what? He took issue with the idea that skin colour confers privilege, regardless of any other consideration: a mad ideology whose adherents will readily say that white homeless people enjoy white privilege.

And he wondered out loud if a protest against US cops on a small island in the Irish Sea is, well, a bit pointless. If Peters has broken any code it is a very new and unwritten one, and he's not the only person to fall foul of it in recent days. Martin Shipton, chief reporter for the Western Mail, has been asked to step down as a judge of the Wales Book of the Year competition over some tweets he posted about the BLM protests in Cardiff. He said they were exercises in virtue-signalling and expressed concern about the effect they might have on the spread of Covid-19. He also got into some robust exchanges with people who told him that, as an old white man, he should just shut up.

How did we get here? In the space of just a few days, Black Lives Matter, its tenets and its adherents have become almost unquestionable. No one worth wasting breath on disagrees with the literal message of the movement. But those who dare criticise a lot of the identitarian ideological guff that unfortunately accompanies the movement now risk being treated as heretics. Even criticising these mass gatherings for breaking lockdown (remember when sitting too closely on a beach was a scoldable offence?) is treated as alarming evidence of non-conformity or perhaps even racism.

This is all a neat demonstration that censorship is not exclusively about state clampdowns. The suspension of Peters and the sacking of Shipton are examples of what John Stuart Mill called "the tyranny of the prevailing opinion and feeling": the tendency of society to impose, by means other than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them. If you express an opinion, even one as mild as "I support the sentiment, but I'm not sure these protests are a great idea", the resulting backlash can cost you your job or social status.

But this is also profoundly worrying, not only for free speech but also for the quality of our discussion about racism and how to defeat it. We are being compelled to have a conversation about race, but one in which any dissent from the most extreme and absurd positions (such as that Western society is still racist to the core and that dirt-poor white folk benefit from it, even if they don't realise it) is treated as suspect. This is a recipe for censorship, division and never-ending culture war, and nothing else.

See the original post:
The march of progressive censorship - Spectator.co.uk

The Trojan Horse in Trump's anti-Twitter executive order – Engadget

"The Order would circumvent the role of Congress and of the courts in enacting and interpreting [Section 230] ...and purport to empower multiple government agencies to pass judgment on companies content moderation practices," its lawsuit states. "The Order clouds the legal landscape in which the hosts of third-party content operate and puts them all on notice that content moderation decisions with which the government disagrees could produce penalties and retributive actions, including stripping them of Section 230s protections."

So yeah, here we go with Section 230 (again). If you're unfamiliar, Section 230 is what came out of the Internet Freedom and Family Empowerment Act, an amendment to update the Communications Act of 1934 for the internet era. Or rather, Ye Olde Internet Era, as 230 hails from 1996. It has a strange and storied history that's deeply entangled with a certain set of puritanical family values, entrenched in forcing broadcast art and communications to adhere to a specific worldview. It was folded into the Communications Decency Act of 1996, which had hoped to censor porn on the internet but instead ended up protecting free speech online, because it turns out that sexual expression is protected speech. Let's hope someone tells Facebook and Tumblr.

Anyway. Section 230 basically makes it so that platforms like Twitter and Facebook can have user-generated content (what we say on their platforms) without the companies getting hosed by a range of laws that would make them legally responsible for what we say and do. So if we say something stupid, and someone wants to sue, that's on us. Section 230 is such a surprisingly robust, pro-free speech thing that it is pretty much universally regarded as a core protection of free speech on the internet.


The "Executive Order on Preventing Online Censorship" is quite a twisty bit of doublespeak in that regard. Yet what it does -- or vaguely intends to do -- is pretty chilling.

The order wants the FCC -- currently run by the guy who killed net neutrality, Ajit Pai -- to come up with regulations that strip Section 230's liability protections from internet platforms for what's posted there. "In addition," explained Forbes, "the order also directs the FTC to consider taking action in cases of complaints received by the White House of political bias on social media, and then to take action for deceptive acts or practices in such cases. The order also asks the FTC to consider complaints against Twitter as violations of the law."

Further, there are directives for the Attorney General to seek regulation and enforcement against online platforms at the state level and with federal legislation.

The order is being described, dismissively, as being so vague as to be ridiculous. "Trump's executive order on social media is a silly distraction from a serious debate," said Sarah Miller, Executive Director of the Economic Liberties Project. "This executive order is basically a request to independent agencies, the Federal Communications Commission and the Federal Trade Commission, to act in some vague manner. The President cannot single-handedly change a law, he cannot order independent agencies to act, and his executive order reflects that."

Miller is someone I usually agree with, but definitely not on this. I think this minimizing stance, and articles saying "Forget Trump's Executive Order" ("Trump's executive order may not do much"), are pushing some dangerous thinking. Or, they are perspectives coming from people who were in no way affected by FOSTA-SESTA.

Because one horrible thing we learned about freedom of speech and internet companies is that it doesn't matter if the marching orders coming from lawmakers and the White House look like they won't do much. FOSTA was vague and sought to neuter Section 230, too. What matters is how companies like Facebook et al decide to change their policies and guess how to implement whatever will make them safe from legal consequences.

FOSTA, as you may recall, was implemented as overbroad, compulsory censorship, ultimately encouraging discriminatory practices against sex workers (or anyone perceived to be a sex worker) everywhere. Some companies, like Facebook, which lobbied for FOSTA, acted on the law before its ink had dried, as it was (apparently) eagerly seeking a way to punish and exclude users whose sexual morality and professions as performers were not in line with its puritanical values.

Sexual speech is protected speech, and yet companies like Facebook and Tumblr leveraged the similarly vague FOSTA to aggressively censor users who even just talked about sex. It gave bad actors like Facebook the juice to use its "Sexual Solicitation" policy to ban "sexual slang," "sex chat or conversations," "mentioning sexual roles, sexual preference, commonly sexualized areas of the body" and more.


I like to imagine where we might be if these companies had treated hate groups, Holocaust deniers, and violent extremists with the same zeal for censorship and eradication from platforms, had given them no place to organize and recruit, or to plan and network. I imagine this because it makes me very mad, and it shows me very clearly why these vague White House directives affecting online speech are harmful to both the internet and democratic society.

FOSTA forced countless communities out of places where they once could participate with society, and it stifled speech in ways we have yet to fully comprehend. Tumblr's censorship of gender expression communities and the resultant exodus is just one terrible example. People died in FOSTA's wake because of the ways it was interpreted and implemented.

To characterize the "Executive Order on Preventing Online Censorship" as just another of the Mad King's follies is to ignore previous disastrous lessons at our own peril. We must accept that everyone is going to be a bad actor and act accordingly. Companies like Facebook and toadies like Ajit Pai are proven bad actors. FOSTA, the last vague order to target Section 230, traded sex for Nazis. FOSTA killed the internet we loved. We must never ever forget that most internet companies and startups embraced it.

This is especially true in a moment of extreme change, and doubly so in one where the fire of accountability lights our path to survival.

Read more here:
The Trojan Horse in Trump's anti-Twitter executive order - Engadget

What Should We Do If YouTube Censors on Behalf of the Chinese Communist Party? – Reason

A funny thing has been happening on YouTube. For some reason, certain combinations of Chinese characters have been immediately removed from the platform within a few short seconds. No warning or reason was given to the Mandarin-speaking moderatees. And it's not like they had been foul or freaky in a foreign tongue. The Hanzi Which Shall Not Be Named were merely 共匪 ("communist bandit") and 五毛 ("fifty cents").

Huh? Why in the world would YouTube want to immediately take down those particular phrases? Well, according to YouTube, they didn't. It's an algorithmic mistake, you see. The company told The Verge that upon review, they discovered this odd insta-deleting was indeed an "error in [their] enforcement systems" that is currently being patched.

How strange that this automated fluke would tend towards the direction of the Chinese Communist Party's (CCP) preferences. These terms might seem random to Westerners, but they carry significant political weight in China.

It's not nice to call someone a bandit, whether they are a communist or not. But in the Chinese context, the term has a very particular meaning: it was used by Nationalist partisans led by the Republic of China's Chiang Kai-shek against the People's Republic of China's Mao Zedong and the reds. Today, it is considered a slur against the CCP and its patriotic supporters.

五毛 is a cleverer anti-CCP troll. It's basically calling a pro-CCP commenter a paid shill, albeit a cheap one. The joke is that human CCP NPCs get paid fifty cents for each pro-CCP post; ergo, the "fifty-cent army" or 五毛党.

One of the weirder things about this controversy is that those terms would not disappear from YouTube comments if you typed them out in English or in Pinyin, which is a phonetic way to write out Chinese characters. They would only get struck if they appeared in the original Hanzi.

Yet YouTube is already banned in China and can be a pain to access without a good VPN and strong desire to do so. It's not like YouTube would have a significant impact on domestic Chinese opinion anyway. It would be like if WeChat, a hugely popular Chinese messaging app that basically no non-expat Americans use, randomly started censoring terms like "MAGAtard" or "McCarthyite." What's the point?

Well, theoretically, this kind of censorship could be aimed at keeping diaspora citizens in line. Chinese immigrants have found success and fortune throughout Western democracies. The CCP may worry that their erstwhile assets could become a little too accustomed to such capitalist pig values as freedom of speech. If people can't be kept off YouTube, maybe the most memorable memes can.

But that is just a theory. We are asked to believe that YouTube's pesky algorithm just happened to accidentally disappear these very particular Chinese anti-government phrases since at least October of 2019. And although users had publicly flagged this "bug" on official YouTube help forums last fall, it did not become a major concern until the issue received attention in the press last week.

Do you believe YouTube? A lot of people don't. It's hard to know for sure what happened. Platform algorithms are necessarily opaque, and they can sometimes produce outcomes that even their designers struggle to understand. Not only are these algorithms trade secrets, they are an inherently secretive trade.

It is entirely possible that YouTube discretion had nothing to do with this seeming pro-CCP censorship. Perhaps the fifty-cent army flagged these phrases often enough that they became automatic triggers the algorithm would pull on sight. But it is also possible that someone at YouTube manually added these terms to a blocklist. Other platforms are known to have such lists. Right now, we don't know. Besides, the outcome is a problem either way.
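Whatever the cause, the behaviour users observed is consistent with naive substring matching against a blocklist of Hanzi entries, which would explain why English and Pinyin spellings of the same terms sailed through. The following is a minimal sketch of that mechanism only; the blocklist contents, function name, and matching logic are illustrative assumptions, not YouTube's actual implementation.

```python
# Hypothetical blocklist filter: exact substring matching on Hanzi entries.
# Because the list stores only the Chinese-character forms, romanized or
# translated spellings of the same terms never match.

BLOCKLIST = {"共匪", "五毛"}  # "communist bandit", "fifty cents"

def should_remove(comment: str) -> bool:
    """Return True if the comment contains any blocklisted substring."""
    return any(term in comment for term in BLOCKLIST)

# The Hanzi form trips the filter...
print(should_remove("他就是个五毛"))      # True
# ...but the same idea in Pinyin or English does not.
print(should_remove("he is just a wumao"))  # False
print(should_remove("fifty-cent army"))     # False
```

A filter like this would behave exactly as reported: instant, silent removal of comments containing the characters, with no sensitivity to language or context.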

There is a lot of smoke, but is there fire? How deep might such problems run? Someone needs to dig for answers.

The U.S. government could try. Pressure from politicians of both parties contributed to Google winding down its proposed "Dragonfly" Chinese search engine that would have been compliant with CCP censorship demands. Of course, this pressure campaign relied on someone with close knowledge of the Dragonfly project leaking documents to The Intercept in the first place. Without a whistleblower, there's no trail to follow.

Perhaps YouTube employees will take more public interest in such matters. They are in a good position to extract answers and concessions from their managers if there is more to this story than official statements indicate. It could be personally risky. But Alphabet employees have been willing to stick their necks out for ideals in the past. In addition to their opposition to Project Dragonfly, Googlers' demonstrations against the Project Maven proposal with the U.S. Department of Defense proved fruitful.

We can hope that YouTube will do the right thing. Yet hoping is an unsatisfactory option, and even the most public-minded company would struggle to consistently and ethically do battle against a set of politically mediated "enemies". Is that realistic or even desirable?

We are currently amidst a complex debate about what rights and responsibilities platforms may have in moderating user-submitted content. Few would dispute that Section 230, which shields internet actors from legal liability due to information provided by others, is responsible for the internet environment we inhabit today. The problem is that this internet environment is one where hostile foreign governments that operate reeducation camps for religious minorities may stealthily manipulate content and stifle dissent on U.S.-based platforms with impunity. Does anyone really think this is sustainable?

Laws and activism can only do so much. Censorship, whether by accident or pressure, will be a threat whenever a central party has the ability to censor. The cypherpunk activist John Gilmore noted that "the net interprets censorship as damage and routes around it." It may take time, but eventually we will arrive at a system where authorities could not censor distributed content even if they wanted to.

For those who oppose censorship, working towards solutions that further entrench governments or central platforms in content moderation misses the long-term answer. Instead, they should seek and support technological projects that make the problem irrelevant.

See the rest here:
What Should We Do If YouTube Censors on Behalf of the Chinese Communist Party? - Reason

Censorship row over report on UK BAME Covid-19 deaths – The Guardian

Concerns about censorship have been raised after third-party submissions were left out of the government-commissioned report on the disproportionate effects of Covid-19 on black, Asian and minority ethnic people.

Public Health England said it had engaged with more than 1,000 people during its inquiry. But the report, which has been criticised for failing to investigate the reasons for the disparities or make recommendations on how to address them, did not mention the consultations.

Anger has been compounded by a report in the Health Service Journal claiming that before publication the government removed a section detailing responses from third parties, many of whom highlighted structural racism.

The Muslim Council of Britain (MCB), which called in its written submission for specific measures to tackle "the culture of discrimination and racism [within the NHS]", said it had contacted PHE to ask why its evidence was not included.

Its secretary general, Harun Khan, said: "To choose to not discuss the overwhelming role structural racism and inequality has on mortality rates and to disregard the evidence compiled by community organisations, whilst simultaneously providing no recommendations or an action plan, despite this being the central purpose of the review, is entirely unacceptable. It beggars belief that a review asking why BAME communities are more at risk fails to give even a single answer."

The MCB is seeking further clarification from PHE as to why the report removed the submission from the MCB and others. "It is imperative that the full uncensored report is published with actionable policies and recommendations as suggested by community stakeholders, and a full Covid race equality strategy is introduced."

The report, which was published on Tuesday, found that BAME groups were up to twice as likely as white Britons to die if they contract Covid-19. But numerous studies had already established disproportionate mortality among BAME people, leaving many furious as to why PHE did not examine the reasons for the disparities or propose solutions.

Dr Zubaida Haque, the interim director of the Runnymede Trust, who attended a Zoom consultation relating to the review, said: "It's extraordinary, there's nothing about that in the document at all. What was the point of carrying out that consultation exercise? It's a partial review, in terms of the fact that it doesn't have any written recommendations or plan of action, and it's a partial review because it clearly hasn't taken on board any of the concerns of voluntary and grassroots organisations. In that sense it's very difficult to have confidence and trust in the review."

In a webinar on 22 May, Prof Kevin Fenton, the PHE regional director for London, who led the review, said the public health body had engaged "more than 1,000, almost coming up now to 1,500, individuals who have participated in briefings, lectures, discussions, listening sessions on this issue". The extensive exercise included steps being taken already "because we shouldn't be waiting to act when we know what to do", he said.

The British Medical Association's written evidence included the need to take account of socioeconomic factors. Its council chair, Dr Chaand Nagpaul, said: "It is further incredibly concerning, if true, to hear claims that parts of the review have not been published. We first pushed for this review two months ago and a number of concerns we have consistently raised are not reflected in the paper. While this review was being compiled, BAME workers were dying and will continue to do so unless the government engages in actions not words."

Neither PHE nor the Department for Health and Social Care responded to the Guardian's question as to whether a section of the report had been removed before its publication.

See more here:
Censorship row over report on UK BAME Covid-19 deaths - The Guardian