Archive for the ‘Censorship’ Category

Instagram Creators: Check If Your Posts Are 'Political' – The Markup


If you opened Instagram last week, you may have seen one of many tutorials on how to opt out of a setting that was quietly released in February: Instagram and Threads users will no longer be recommended political content from people they don't follow.

Instagram "won't proactively recommend content about politics," according to a blog post it issued Feb. 9. While the policy was launched without making headlines, it drew a spike of attention last week as Instagram users took to the platform to raise awareness about the change.

What counts as politics? The company's announcement defined political content as "potentially related to things like laws, elections, or social topics," and Instagram's help page adds content about governments to the list. But the most comprehensive definition is displayed where users can go to turn off the limits on political content: "Political content is likely to mention governments, elections, or social topics that affect a group of people and/or society at large."

While not every Instagram user will be able to review whether their content is considered political, and therefore no longer eligible for recommendation, professional users such as creators or businesses have the power to check. (If you can see Instagram's Insights analytics for your account, you have a professional account.)

On a desktop or mobile browser: You can go to Account Status directly.

On your Account Status page, you can check whether Instagram will no longer recommend something you've posted (such as content deemed political) by clicking through "What can't be recommended."

This is what Account Status looked like on The Markup's account today. So far, none of our recent posts have been flagged as political:

The Markup's account status on March 25, 2024.

Credit: The Markup

While all users have an Account Status page, only professional accounts have the "What can't be recommended" and "Monetization status" checks.

Help us figure out exactly what Instagram counts as political content. If, after checking the Account Status of your professional account, you see that one or more of your posts have been flagged as political, take a screenshot and send it to The Markup. You can DM us on Instagram directly @the.markup, or email it to us at maria@themarkup.org.

A Markup investigation published in February found that Instagram demoted nongraphic photos of soldiers, destroyed buildings, and military tanks from on the ground in Gaza. If you think you've been shadowbanned on Instagram, or if the app has notified you that it has removed your content or limited your account in some way, here's what you can do.

Excerpt from:
Instagram Creators: Check If Your Posts Are 'Political' - The Markup


Is Fighting Misinformation Censorship? The Supreme Court Will Decide. – The Journal. – WSJ Podcasts – The Wall Street Journal

This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.

Ryan Knutson: When the baseball star Hank Aaron died in 2021 at the age of 86, people took to social media to remember his legendary career. Some posted about his legacy as a civil rights icon. Others posted about his incredible swing and how he held the career home run record for more than three decades. But there was one tweet that caused a firestorm. It was from the politician Robert F. Kennedy Jr, who suggested that Aaron's death was caused by the COVID vaccine. He said, "Hank Aaron's tragic death is part of a wave of suspicious deaths among elderly, closely following administration of COVID vaccines." The Biden administration asked Twitter to remove Kennedy's tweet, which the company did. It was one of many posts the government asked social media sites to take down during the pandemic. Now, the administration's effort to go after what it saw as misinformation online is under the spotlight of the Supreme Court, in a case known as Murthy versus Missouri. It's one of the biggest tests of the First Amendment in years.

Jess Bravin: This is a case about what the plaintiffs call censorship and what the government calls guidance.

Ryan Knutson: That's our colleague Jess Bravin. He covers the Supreme Court and was listening as the justices heard oral arguments earlier this week. So what would you say is the central question at the heart of this case?

Jess Bravin: The central question is where is the line between expressing an opinion and censoring speech?

Ryan Knutson: Welcome to The Journal, our show about money, business, and power. I'm Ryan Knutson. It's Thursday, March 21st. Coming up on the show, should the government be allowed to ask social media platforms to remove content? The fight against misinformation online goes back years. But in 2021, as the pandemic was killing thousands of Americans each week, the issue took on new urgency. The newly elected Biden administration said bad information put people at risk. Officials reached out to social media companies and asked them to take action on posts they viewed as problematic.

Jess Bravin: There were several types of posts that officials objected to, but the most important one from the government's point of view was generating fear of vaccines. The government believed that vaccines and mass vaccination was the way to get the pandemic under control and that having millions and millions of people fearful of vaccines would be devastating to public health. And there were some very prominent people who had a different point of view and Robert F. Kennedy, Jr is of course one of them.

Ryan Knutson: Kennedy, who tweeted about Hank Aaron, has been a long time critic of vaccines. For the record, the medical examiner said Aaron died of natural causes. The Biden administration also asked social media sites to remove posts that said the virus was manmade, that criticized lockdowns, or that questioned the efficacy of masks.

Jess Bravin: The government would sometimes flag specific posts and point them out to their contacts at the social media platforms and say, "We think this one's a problem." They also liked to talk to the platforms about the algorithms they were using to identify problematic information and, "How are you sorting it? How are you filtering it? How are you finding it?" And this was public. I mean, there were news articles about it in 2021. It wasn't this was like some classified thing. The government's fairly open about complaining about bad information moving across social media platforms.

Ryan Knutson: But some people, Republicans in particular, didn't like what the government was doing. And in 2022, the attorneys general of Missouri and Louisiana, along with other individuals, sued the Biden administration. Vivek Murthy, the Surgeon General under Biden, was named as a lead defendant. The plaintiffs alleged the government's actions amounted to censorship. What was this case's path to the Supreme Court?

Jess Bravin: Well, the case was filed in a courthouse in Monroe, Louisiana where there is a Trump appointed judge who was expected to be very sympathetic to this argument. The attorneys general of Missouri and Louisiana asserted the right to protect the interests of the residents of those states, saying those residents, either their views might be suppressed by this illegal censorship, or alternatively their right to read or hear or learn things was being interfered with by this censorship.

Ryan Knutson: On July 4th last year, that judge ruled in favor of Louisiana and Missouri.

Jess Bravin: He issued a sweeping opinion calling this an Orwellian form of censorship that the government was imposing on Americans.

Ryan Knutson: The government appealed the ruling and eventually it made its way up to the Supreme Court this week.

Speaker 3: We'll hear argument first this morning in case 23-411, Murthy versus Missouri.

Ryan Knutson: Okay, so what were Louisiana and Missouri's main arguments in this case?

Jess Bravin: The Solicitor General of Louisiana who argued this case for all the plaintiffs said that, "Okay, the government has a right to express an opinion. It has a right to use the bully pulpit and say, 'Americans, don't listen to that foolish information or whatever,' but they don't have the right to say to a publisher or a platform, 'Take down that information. Take down that post.'" Their argument is that when the government takes that step, it crosses into coercion, and coercion of private speech is not permitted under the First Amendment.

Speaker 4: The government has no right to persuade platforms to violate Americans' Constitutional rights. And pressuring platforms in back rooms shielded from public view is not using the bully pulpit at all. That's just being a bully.

Ryan Knutson: I mean, did they have evidence to support that the government was being coercive or forcing them to do it?

Jess Bravin: Well, it's an implication. The implication is that the government has behind it the ability to take all kinds of serious steps against these private companies. And the theory of this case is that when White House officials or people in the Surgeon General's office or at the FBI call Facebook and say, "Take down these posts or don't let this known purveyor of disinformation continue to spread these dangerous theories about COVID," when they do that, they carry with them the implication of retaliation if there isn't compliance, because there could be an antitrust investigation, there could be the White House supporting legislation that would be bad for some of these companies. All those things lurk, at least in theory, in the background. The Louisiana argument, the argument of the plaintiffs, is that this was a kind of pervasive behind the scenes campaign that really left these platforms no choice but to comply.

Ryan Knutson: So what was the Biden Administration's defense?

Jess Bravin: The Biden administration said that, "What we did regarding these COVID posts is no different from what the government has done for decades and decades and decades."

Speaker 5: I think the idea that there'd be back and forth between the government and the media isn't unusual at all when the White House-

Jess Bravin: And government officials are not shy about telling the media when they think they got something wrong or asking them not to publish something or saying, "This person you're relying on is a known charlatan or is a foreign agent," or something like that, "and you shouldn't print that." So they say there are many, many times that you've heard government officials say publicly that they don't like certain things that were published or that TV networks shouldn't run certain shows or shouldn't propel certain storylines on the news or what have you.

Ryan Knutson: The government says it's done this in situations involving national security or war and that this kind of back and forth should be allowed because it's necessary to keep the public safe.

Jess Bravin: From the government's point of view, they have an obligation to protect the public and to prevent the spread of dangerous information that misleads people, and particularly in the context of the COVID pandemic, where public health depended on a critical mass of people obtaining vaccinations in order to stop the spread of this sometimes deadly disease, interfering with the vaccination program, based on completely unsupported theories, was a danger to the nation. It was an emergency. It was a literal public health emergency that required people to know what the actual risks were, and the government says they have to take steps to do that.

Ryan Knutson: Coming up, how the Supreme Court justices responded to these arguments. Our colleague Jess says that many of the justices seem receptive to the government's argument that there is and always has been a normal back and forth between officials and the press. What were you able to tell about how the Supreme Court justices who were hearing these arguments were responding to them?

Jess Bravin: It seemed to me that most of the justices found the plaintiffs' arguments problematic, for a number of reasons. Some of the justices seemed to have personal experience in dealing with the media. Justice Brett Kavanaugh, Justice Elena Kagan, and Chief Justice John Roberts all worked in the White House for one President of one party or another, and all of them seemed to recall their own interactions or the interactions of the press staff with the news media, and occasions where they reached out to complain about certain stories, complained about certain information that was being published, and urged reporters or editors not to publish it. And Justice Kavanaugh, for instance, he likened it, he said he had a national security analogy.

Justice Kavanaugh: Probably not uncommon for government officials to protest an upcoming story on surveillance or detention policy and say, "If you run that, it's going to harm the war effort and put Americans at risk."

Jess Bravin: And so they seemed to be thinking about, "Well, I used to complain all the time about stuff I didn't like being published and I didn't see any problem with it." And they seemed to believe it was just a feature of the way the government works and the way our democracy works.

Ryan Knutson: Were there camps that seemed to emerge among the justices on this issue or did it seem that they were more uniformly skeptical?

Jess Bravin: In this instance, it seemed that most of the court was leaning toward the government's view of these kinds of interactions being allowable. The only justice who appeared very skeptical of the Biden administration's position was Justice Samuel Alito. He said he looked at these kinds of emails and these communications and the tenor of the language used by government officials, and he said, "The White House is treating Facebook as a subordinate." It's basically asking, "Why haven't you shown us? Why are you hiding the ball from us?"

Justice Alito: They want to have regular meetings and they suggest rules that should be applied, and "Why don't you tell us everything that you're going to do so we can help you and we can look it over?" And I thought, "Wow, I cannot imagine federal officials taking that approach to the print media, our representatives over there."

Jess Bravin: And he said he couldn't imagine that that is the kind of interaction that the White House has with the New York Times or The Wall Street Journal or The Associated Press or other major news organizations and from his point of view, this was not like the traditional back and forth between the news media and the government. This was something that looked different.

Ryan Knutson: The ruling is expected to come by July. What will it mean for the future of misinformation on the internet if Louisiana wins or if the US government wins?

Jess Bravin: Well, if the US government wins, firstly, it depends on what the US government wants to do. I mean, who controls the US government is up to the voters this November. And so a lot of it depends on that. Were this challenge to succeed, I think that you will see a much more freewheeling internet, because one of the checks on what appears on social media will be gone. Or at least the government's ability to influence what appears on social media will be significantly reduced. Now, whether that has a good or bad effect obviously depends on where you stand.

Ryan Knutson: Murthy versus Missouri is one of several cases involving free speech and online content moderation that the Supreme Court is taking on this year. For example, last month the justices heard challenges to laws in Florida and Texas that seek to limit how much social media companies can moderate people's posts.

Jess Bravin: The other major cases involving free speech on the internet also come out of the same view by some people on the right that social media platforms are censoring their views, are keeping their ideas out of the public discourse. And this particularly came into focus when Facebook and Twitter blocked Donald Trump after they viewed his role in the January 6th attack on the US Capitol as violating their policies, or the things that he was tweeting and posting were violating their policies against inciting violence or unlawful conduct or what have you. So that really crystallized for some conservatives the idea that our opinions and our views and our perspective is being blocked by these social media platforms.

Ryan Knutson: Have all these cases had an impact on how social media platforms and also the government are approaching misinformation on their platforms and policing it this year?

Jess Bravin: Well, the government pulled back on some of these encounters because they are facing this type of legal assault. I think for the social media platforms, I mean, they are very powerful. They are ubiquitous for many Americans. And they are facing a range of pressure. I mean, at the same time that they face complaints that they're taking down too many posts, they're also facing complaints that they're allowing up too many dangerous posts. I mean, they are in a position, that they certainly worked hard to achieve, that makes them central to a lot of the discourse in the United States and therefore they get pressure from all sides.

Ryan Knutson: That's all for today, Thursday, March 21st. The Journal is a co-production of Spotify and The Wall Street Journal. Additional reporting in this episode by Jan Wolfe and Jacob Gershman. Thanks for listening. See you tomorrow.

Read this article:
Is Fighting Misinformation Censorship? The Supreme Court Will Decide. - The Journal. - WSJ Podcasts - The Wall Street Journal


Microsoft faces bipartisan criticism for alleged censorship on Bing in China – The Register


Continued here:
Microsoft faces bipartisan criticism for alleged censorship on Bing in China - The Register


Meta oversight board finds censoring of word ‘shaheed’ discriminatory – Middle East Eye

Meta's Oversight Board, the body in charge of content moderation decisions for the company's social media platforms, found that censoring the Arabic word shaheed has had a discriminatory impact on expression and news reporting.

In an investigation done at Meta's request, the board found that the company's highly restrictive approach regarding shaheed, the most censored word on Facebook and Instagram, has led to widespread and unnecessary censorship affecting the freedom of expression of millions of users.

Shaheed has several meanings but can roughly be translated to martyr in English. The board has found that Meta has struggled to grapple with the linguistic complexities and religious significance attached to that word.

As the word is also used as a loan word in other languages, many (mostly Muslim) non-Arabic speakers have had their posts censored on Meta's platforms.

Prior to the release of the board's advisory opinion, Human Rights Watch found that Meta was guilty of systemic censorship of Palestine content amidst the Gaza war, which it attributed to flawed Meta policies and their inconsistent and erroneous implementation, over-reliance on automated tools to moderate content, and undue government influence over content removals.

The company has also previously removed the accounts of several Palestinian and pro-Palestinian individuals and advocacy groups, which has led to activists accusing it of "taking a side" in the conflict.

"We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely. We aim to apply these policies fairly but doing so at scale brings global challenges," a Meta spokesperson told Middle East Eye in a statement.

The spokesperson added that Meta will review the board's feedback and respond within 60 days.

According to the board, the discriminatory and disproportionate impact Meta's restrictive policy has had on information sharing outweighs the company's concern over the word being used to promote terrorism.

Some examples listed include governments sharing a press release confirming the death of an individual, a human rights defender decrying the execution of an individual using the word shaheed, or even a user criticising the state of a local road named after an individual that includes the honorific term shaheed.


Meta would remove all of these posts, as it considers the term shaheed to be violating its policies.

"Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalise whole populations while not improving safety at all," said oversight board co-chair Helle Thorning-Schmidt.

"The reality is that communities worst hit by the current policy, such as those living in conflict zones like Gaza and Sudan, also live in contexts where censorship is rife," she added.

The board is especially concerned that Meta's approach impacts journalism and civic discourse, because media organisations and commentators might shy away from reporting on designated entities to avoid content removals.

With many users saying they have been censored on Facebook and Instagram during Israel's ongoing war in Gaza, the board saw it as important to tackle the targeting of posts containing the word shaheed.

The board concluded that Meta should end the blanket ban on shaheed when used in reference to people Meta designates as terrorists and instead focus on only removing posts that are linked to clear signs of violence (such as imagery of weapons) or when they break the company rules (for example, glorifying an individual designated as a terrorist).

Referring to the board's statement, Meta denies that it had a "blanket ban" in place, saying the word is only banned when it is used while also referencing a dangerous organisation or individual.

More:
Meta oversight board finds censoring of word 'shaheed' discriminatory - Middle East Eye


AAUW speaker warns of rise in book censorship, ‘similar to a pandemic’ – Los Altos Town Crier

The American Association of University Women Silicon Valley Branch (AAUW Silicon Valley) hosted a virtual discussion titled "School Book Banning: A Primer for Readers of All Ages" with Jennifer Lynn Wolf, senior lecturer at Stanford University's Graduate School of Education and a former high school English teacher.

The March 14 discussion had more than 60 attendees.

According to PEN America, book banning is defined as "any action taken against a book based on its content and as a result of parent or community challenges, administrative decisions, or in response to direct or threatened action by lawmakers or other governmental officials, that leads to a previously accessible book being either completely removed from availability to students, or where access to a book is restricted or diminished."

Wolf focused on the particulars of book banning in schools. She said that the current surge in book banning is "similar to a pandemic" in the number of attempts (531 from Jan. 1 to Aug. 31, 2023, for example) involving 3,923 titles.

This surge is not new; attempts to ban books go back to the early part of the 20th century. Wolf cited a case study of books being burned by the Nazis at the urging of the German Student Union in 1933. In the 21st century, the controversy over books began with the McMinn County School Board in Tennessee banning the children's graphic novel Maus, which described the terrors of the Nazi regime.

The audience was encouraged to learn that in 2023, California passed AB 1078, which prohibits book bans.

According to Wolf, the current school book banning movement is being driven by Moms for Liberty and has a great impact on both children and families.

She pointed out that the American Library Association tracks and challenges attempts to ban books nationwide.

Wolf offered this advice on how to protect the right to read: Read and gift banned books, use your public library, learn who's on your local school board and hold candidates' forums, and watch, listen to or read documentaries, podcasts or books on book banning.

In a question-and-answer session after her talk, one attendee said that San Jose's AAUW has already gone to board meetings of four school districts and learned that the true reason for book banning is to discredit public schools and to promote private parochial schools.

In response to another question, Wolf said that in her opinion it is impossible to learn and grow without some discomfort, so the fact that children do experience some unease through reading shouldn't be a reason to ban books.

Wolf concluded with the comment that currently there are more questions than answers about book banning, particularly with regard to who (parents, school boards, teachers, legislators, the courts, for example) should decide what children should learn and read.

Go here to see the original:
AAUW speaker warns of rise in book censorship, 'similar to a pandemic' - Los Altos Town Crier
