Archive for the ‘Censorship’ Category

Instagram Creators: Check If Your Posts Are Political – The Markup


If you opened Instagram last week, you may have seen one of many tutorials on how to opt out of a setting that was quietly rolled out in February: Instagram and Threads users will no longer be recommended political content from people they don't follow.

Instagram "won't proactively recommend content about politics," according to a blog post it issued Feb. 9. While the policy launched without making headlines, it drew a spike of attention last week as Instagram users took to the platform to raise awareness of the change.

What counts as politics? The company's announcement defined political content as "potentially related to things like laws, elections, or social topics," and Instagram's help page adds content about governments to the list. But the most comprehensive definition is displayed where users can go to turn off the limits on political content: "Political content is likely to mention governments, elections, or social topics that affect a group of people and/or society at large."

While not every Instagram user will be able to review whether their content is considered political (and therefore no longer eligible for recommendation), professional users such as creators or businesses have the power to check. (If you can see Instagram's Insights analytics for your account, you have a professional account.)

On a desktop or mobile browser: You can go to Account Status directly.

On your Account Status page, you can check whether Instagram will no longer recommend something you've posted (such as content deemed political) by clicking through "What can't be recommended."

This is what Account Status looked like on The Markup's account today. So far, none of our recent posts have been flagged as political:

The Markup's account status on March 25, 2024. Credit: The Markup

While all users have an Account Status page, only professional accounts have the "What can't be recommended" and "Monetization status" checks.

Help us figure out exactly what Instagram counts as political content. If, after checking the Account Status of your professional account, you see that one or more of your posts have been flagged as political, take a screenshot and send it to The Markup. You can DM us on Instagram at @the.markup, or email it to us at maria@themarkup.org.

A Markup investigation published in February found that Instagram demoted nongraphic photos of soldiers, destroyed buildings, and military tanks taken on the ground in Gaza. If you think you've been shadowbanned on Instagram, or if the app has notified you that it has removed your content or limited your account in some way, here's what you can do.


Meta oversight board finds censoring of word ‘shaheed’ discriminatory – Middle East Eye

Meta's Oversight Board, the body that reviews content moderation decisions for the company's social media platforms, found that censoring the Arabic word "shaheed" has had a discriminatory impact on expression and news reporting.

In an investigation conducted at Meta's request, the board found that the company's highly restrictive approach to "shaheed," the most censored word on Facebook and Instagram, has led to widespread and unnecessary censorship affecting the freedom of expression of millions of users.

"Shaheed" has several meanings but can be roughly translated as "martyr" in English. The board found that Meta has struggled to grapple with the linguistic complexity and religious significance attached to the word.

As the word is also used as a loanword in other languages, many (mostly Muslim) non-Arabic speakers have had their posts censored on Meta's platforms.

Prior to the release of the board's advisory opinion, Human Rights Watch found that Meta was guilty of systemic censorship of Palestine content amid the Gaza war, which it attributed to "flawed Meta policies and their inconsistent and erroneous implementation, over-reliance on automated tools to moderate content, and undue government influence over content removals."

The company has also previously removed the accounts of several Palestinian and pro-Palestinian individuals and advocacy groups, which has led to activists accusing it of "taking a side" in the conflict.

"We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely. We aim to apply these policies fairly but doing so at scale brings global challenges," a Meta spokesperson told Middle East Eye in a statement.

The spokesperson added that Meta will review the board's feedback and respond within 60 days.

According to the board, the discriminatory and disproportionate impact Meta's restrictive policy has had on information sharing outweighs the company's concern over the word being used to promote terrorism.

Examples the board listed include a government sharing a press release confirming the death of an individual, a human rights defender using the word "shaheed" while decrying the execution of an individual, and a user criticising the state of a local road whose name includes the honorific term "shaheed."


Meta would remove all of these posts, as it considers such uses of the term "shaheed" to violate its policies.

"Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalise whole populations while not improving safety at all," said Oversight Board co-chair Helle Thorning-Schmidt.

"The reality is that communities worst hit by the current policy, such as those living in conflict zones like Gaza and Sudan, also live in contexts where censorship is rife," she added.

The Board is especially concerned that Meta's approach impacts journalism and civic discourse, because media organisations and commentators might shy away from reporting on designated entities to avoid content removals.

With many users saying they have been censored on Facebook and Instagram during Israel's ongoing war in Gaza, the board saw it as important to tackle the targeting of posts containing the word "shaheed."

The board concluded that Meta should end the blanket ban on "shaheed" when used in reference to people Meta designates as terrorists, and should instead remove only posts that are linked to clear signs of violence (such as imagery of weapons) or that otherwise break the company's rules (for example, by glorifying an individual designated as a terrorist).

Meta disputes the board's characterisation, denying that it had a "blanket ban" in place and saying the word is banned only when it is used in reference to a designated dangerous organisation or individual.


AAUW speaker warns of rise in book censorship, ‘similar to a pandemic’ – Los Altos Town Crier

The American Association of University Women Silicon Valley Branch (AAUW Silicon Valley) hosted a virtual discussion titled "School Book Banning: A Primer for Readers of All Ages" with Jennifer Lynn Wolf, senior lecturer at Stanford University's Graduate School of Education and a former high school English teacher.

The March 14 discussion had more than 60 attendees.

According to PEN America, book banning is defined as "any action taken against a book based on its content and as a result of parent or community challenges, administrative decisions, or in response to direct or threatened action by lawmakers or other governmental officials, that leads to a previously accessible book being either completely removed from availability to students, or where access to a book is restricted or diminished."

Wolf focused on the particulars of book banning in schools. She said the current surge in book banning is "similar to a pandemic" in the number of attempts (531 from Jan. 1 to Aug. 31, 2023, for example), involving 3,923 titles.

The surge is not new; attempts to ban books go back to the early part of the 20th century. Wolf cited the burning of books by the Nazis at the urging of the German Student Union in 1933 as a case study. In the 21st century, the controversy flared anew when the McMinn County School Board in Tennessee banned the graphic novel Maus, which depicts the terrors of the Nazi regime.

The audience was encouraged to learn that in 2023, California passed AB 1078, which prohibits school boards from banning books because they include diverse perspectives.

According to Wolf, the current school book-banning movement is being driven by Moms for Liberty and has a great impact on both children and families.

She pointed out that the American Library Association tracks and challenges attempts to ban books nationwide.

Wolf offered this advice on how to protect the right to read: read and gift banned books, use your public library, learn who's on your local school board and hold candidate forums, and watch, listen to, or read documentaries, podcasts, or books on book banning.

In a question-and-answer session after her talk, one attendee said that San Jose's AAUW has already gone to board meetings of four school districts and learned that the true reason for book banning is to discredit public schools and to promote private parochial schools.

In response to another question, Wolf said that in her opinion it is impossible to learn and grow without some discomfort, so the fact that children experience some unease through reading shouldn't be a reason to ban books.

Wolf concluded with the comment that currently there are more questions than answers about book banning, particularly with regard to who (parents, school boards, teachers, legislators, the courts, for example) should decide what children should learn and read.


NSF paid universities to develop AI censorship tools for social media, House report alleges – The College Fix

"Used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others": report

The National Science Foundation is using taxpayer money to pay universities to create AI tools that can be used to censor Americans on various social media platforms, according to members of the House.

The University of Michigan, the University of Wisconsin-Madison, and MIT are among the universities cited in the interim report from the House Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government.

It details the foundation's funding of AI-powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny.

"NSF has been issuing multi-million-dollar grants to university and non-profit research teams for the purpose of developing AI-powered technologies that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others," states the report, released last month.

Funding for the projects began in 2021 and was issued through the NSF's Convergence Accelerator grant program, which was initially launched in 2019 to develop interdisciplinary solutions to major challenges of national and societal importance, such as those pertaining to AI and quantum technology, the report states.

In 2021, however, the NSF introduced "Track F: Trust & Authenticity in Communication Systems."

The NSF's 2021 Convergence Accelerator program solicitation stated that the goal of Track F projects was to develop "prototype(s) of novel research platforms forming integrated collection(s) of tools, techniques, and educational materials and programs to support increased citizen trust in public information of all sorts (health, climate, news, etc.), through more effectively preventing, mitigating, and adapting to critical threats in our communications systems."

Specifically, the grant solicitation singled out the threats posed by hackers and misinformation.

That September, the select subcommittee report notes, the NSF awarded 12 Track F teams $750,000 each (a total of $9 million) to develop and refine their project ideas and build partnerships. The following year, the NSF selected six of the 12 teams to receive an additional $5 million each for their respective projects, according to the report.

Projects from the University of Michigan, University of Wisconsin-Madison, MIT, and Meedan, a nonprofit that specializes in developing software to counter misinformation, are highlighted by the select subcommittee.

Collectively, these four projects received $13 million from the NSF, it states.
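For readers checking the math, the figures are internally consistent. Twelve initial awards of $750,000 account for the $9 million total, and one breakdown consistent with the $13 million figure for the four highlighted projects (our inference; the report itself does not itemize it) is that all four received the initial $750,000 award and two advanced to the additional $5 million phase:

$$12 \times \$750{,}000 = \$9{,}000{,}000$$
$$4 \times \$750{,}000 + 2 \times \$5{,}000{,}000 = \$13{,}000{,}000$$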

The University of Michigan intended to use the federal funding to develop its tool WiseDex, which could use AI technology to assess the veracity of content on social media and assist large social media platforms in deciding what content should be removed or otherwise censored, the report states.

The University of Wisconsin-Madison's Course Correct, which was featured in an article from The College Fix last year, was intended to aid reporters, public health organizations, election administration officials, and others in addressing so-called misinformation on topics such as U.S. elections and COVID-19 vaccine hesitancy.

MIT's Search Lit, as described in the select subcommittee's report, was developed as an intervention to help educate groups of Americans the researchers believed were most vulnerable to misinformation, such as conservatives, minorities, rural Americans, older adults, and military families.

Meedan, according to its website, used its funding to develop easy-to-use, mobile-friendly tools [that] will allow AAPI [Asian-American and Pacific Islander] community members to forward potentially harmful content to tiplines and discover relevant context explainers, fact-checks, media literacy materials, and other misinformation interventions.

According to the select committee's report, "Once empowered with taxpayer dollars, the pseudo-science researchers wield the resources and prestige bestowed upon them by the federal government against any entities that resist their censorship projects."

In some instances, the report states, if a social media company fails to act fast enough to change a policy or remove what the researchers perceive to be misinformation on its platform, disinformation researchers will issue blog posts or formal papers to generate a "communications moment" (i.e., negative press coverage) for the platform, seeking to coerce it into compliance with their demands.

Efforts were made via email to contact senior members of the three university research teams, as well as a representative from Meedan, regarding the portrayal of their work in the select subcommittee's report.

Paul Resnick, who serves as the WiseDex project director at the University of Michigan, referred The College Fix to the WiseDex website.

"Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content," states the site. "WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts [than humans can]. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation."

A video on the site presents the tool as a means to help social media sites flag posts that violate platform policies and subsequently attach warnings to or remove the posts. Posts portraying approved COVID-19 vaccines as potentially dangerous are used as an example.
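To make the general approach concrete, here is a minimal, hypothetical sketch of a "wisdom of crowds plus classifier" flagging pipeline of the kind the WiseDex site describes. This is not WiseDex's actual code; every name, weight, and threshold below is invented for illustration, and the "model" is a stand-in heuristic so the example runs end to end.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    crowd_ratings: list[float]  # independent rater scores, 0.0 (fine) to 1.0 (violates policy)

def crowd_score(post: Post) -> float:
    """Average the crowd ratings; more raters gives a more stable estimate."""
    return sum(post.crowd_ratings) / len(post.crowd_ratings)

def model_score(post: Post) -> float:
    """Stand-in for an ML classifier's policy-violation probability (hypothetical)."""
    return 0.9 if "miracle cure" in post.text.lower() else 0.1

def flag_for_review(post: Post, threshold: float = 0.7) -> bool:
    """Blend crowd and model signals; flag the post when the blend crosses a threshold."""
    blended = 0.5 * crowd_score(post) + 0.5 * model_score(post)
    return blended >= threshold

posts = [
    Post("This miracle cure works better than any vaccine!", [0.8, 0.9, 0.7]),
    Post("Here are the polling hours for Tuesday's election.", [0.1, 0.0, 0.2]),
]
for p in posts:
    print(flag_for_review(p), "-", p.text[:45])

In this toy version, flagged posts would go to human reviewers who decide whether to attach a warning or remove them, mirroring the workflow the video presents; a production system would replace the heuristic with a trained model and weight raters by reliability.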

Michael Wagner from the University of Wisconsin-Madison also responded to The Fix, writing, "It is interesting to be included in a report that claims to be about censorship when our project censors exactly no one."

According to the select subcommittee report, however, some of the researchers associated with Track F and similar projects privately acknowledged that efforts to combat misinformation were inherently political and a form of censorship.

Yet following negative coverage that depicted Track F projects as politically motivated and their products as government-funded censorship tools, the report notes, the NSF began discussing media and outreach strategy with grant recipients.

Notes from a pair of Track F media strategy planning sessions, included in Appendix B of the select subcommittee's report, recommended that researchers interacting with the media focus on the "pro-democracy" and non-ideological nature of their work, "Give examples of both sides," and "use sports metaphors."

The select subcommittee report also highlights discussions of keeping a media "blacklist," although at least one researcher from the University of Michigan objected, citing the potential optics.





EFF Opposes California Initiative That Would Cause Mass Censorship – EFF

In recent years, many proposed laws have purported to reduce harmful content on the internet, especially for kids. Some have good intentions. But the fact is, we can't censor our way to a healthier internet.

When it comes to online (or offline) content, people simply don't agree about what's harmful. And people make mistakes, even in content moderation systems with extensive human review and appropriate appeals. Those systems get worse when automated filters are brought into the mix, as increasingly happens when moderating content at the vast scale of the internet.

Recently, EFF weighed in against an especially vague and poorly written proposal: California Ballot Initiative 23-0035, written by Common Sense Media. It would allow plaintiffs to sue an online information provider for damages of up to $1 million if the provider violates its "responsibility of ordinary care and skill to a child."

We sent a public comment to California Attorney General Rob Bonta regarding the dangers of this wrongheaded proposal. While the AG's office does not typically take action for or against ballot initiatives at this stage of the process, we wanted to register our opposition to the initiative as early as we could.

Initiative 23-0035 would result in broad censorship via a flood of lawsuits claiming that all manner of content online is harmful to a single child. While it is possible for children (and adults) to be harmed online, Initiative 23-0035's vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults. Many online platforms will censor user content in order to avoid this legal risk.

People have different views of what is harmful in many areas of culture, politics, and life, and in any of them this ballot initiative could accordingly cause the removal of online content.

In addition, the proposed initiative would lead to mandatory age verification. It's wrong to force someone to show ID before they go online to search for information. It eliminates the right to speak or to find information anonymously, for both minors and adults.

This initiative, with its vague language, is arguably worse than the misnamed Kids Online Safety Act, a federal censorship bill that we are opposing. We hope the sponsors of this initiative choose not to move forward with this wrongheaded and unconstitutional proposal. If they do, we are prepared to oppose it.

You can read EFF's full letter to A.G. Bonta here.
