Archive for the ‘First Amendment’ Category

California worked with social companies to remove election misinformation – CALmatters

In summary

California worked with social media companies, created an internal database and developed threat levels to fight 2020 election misinformation.

One post on YouTube claimed a voter registered to vote under a fake name. A tweet alleged thousands of 2020 ballots were tossed out. Another tweet claimed a voter used an alias to vote in person.

These are just a few of two dozen social media posts deemed to be misinformation and removed from online platforms this year at the request of a newly formed cybersecurity team within the California Secretary of State's office.

The Office of Election Cybersecurity in the California Secretary of State's office monitored and tracked social media posts, decided if they were misinformation, stored the posts in an internal database coded by threat level, and on 31 different occasions requested posts be removed. In 24 cases, the social media companies agreed and either took down the posts or flagged them as misinformation, according to Jenna Dresner, senior public information officer for the Office of Election Cybersecurity.

"We don't take down posts, that is not our role to play," Dresner said. "We alert potential sources of misinformation to the social media companies and we let them make that call based on community standards they created."

Even with the new cybersecurity efforts, misinformation was still a primary cause of frustration for California's registrars of voters. A CalMatters survey of 54 of California's 58 counties found that registrars dealt with everything from false or misleading information coming from the White House to all sorts of preposterous claims posted to the internet.

As the state works with social media companies to quell speech it considers misinformation, First Amendment advocates and privacy experts say they are concerned about increased censorship of online discourse and the implications of a database that stores posts indefinitely.

The goal of the Office of Election Cybersecurity is to coordinate with county election officials to protect the integrity of the election process. Its duties also include monitoring and counteracting false or misleading online information regarding the electoral process and its integrity.

The office was established in 2018 because of foreign meddling in the 2016 election. With the passage of Assembly Bill 3075, the California state legislature established the Office of Election Cybersecurity with an annual budget of $2 million.

One of the first things the Office of Election Cybersecurity did was launch a 2018 voter education awareness campaign called VoteSure that encouraged voters to be on the lookout for misinformation. Initial monitoring was sparse: the office mostly followed hashtags and tracked narratives via a complaint database. Dresner centralized the monitoring when she joined the office in July, and created a formal tracking system.


In 2018, state officials also started developing relationships with federal intelligence agencies and reaching out to social media companies. The Office of Election Cybersecurity worked to fully understand what happened in the 2016 election and the extent of foreign interference, Dresner said. One of the federal agencies it began working with was the Cybersecurity and Infrastructure Security Agency, also a new agency formed in 2018, but one with a multi-billion-dollar budget and a national purview.

During the 2020 election, the office worked closely with CISA, the Stanford Internet Observatory, and other groups to measure the extent of misinformation facing Californians and Americans alike. Renée DiResta, research manager at the Stanford Internet Observatory, said that unlike the 2016 election, during which Americans saw disinformation generated and spread by foreign state actors, misinformation and conspiracy theories in 2020 were largely generated domestically.

"Besides the incident with Iran that pushed the Proud Boys emails, most of the other actions taken by state actors appear to have been broadly attributable because they were put out by their [state-owned] media," DiResta said.


She saw foreign state media outlets take American social media posts and livestreams, repurpose them and then amplify them on foreign state media outlets to give a perception of widespread chaos.

"Presenting us as a nation in chaos that can't get its election straight weakens the perception of the U.S. in the world abroad, which serves their broader interests," DiResta said. "So even if they have no particular political candidate that they wanted to get behind, putting out that the American election is in chaos is beneficial to them."

DiResta has been studying the effects of misinformation for five years and calls this period of cyberattacks a "warm war": something a few steps beyond the Cold War tactics of the U.S. and the former Soviet Union, but short of open armed conflict.

"An information war is not the same thing as a war, but you can find a dynamic that is taking shape of all different factions fighting each other on the internet to try and gain attention to move policy or to move politicians," DiResta said. "The introduction of foreign actors into that space took it up to a level that we hadn't seen before."

Those new levels of conflict are behind Californias decision to ramp up cybersecurity efforts to surveil the online posts of Californians.

Dresner is one of two people in the Office of Election Cybersecurity, which reports to Paula Valle, chief communications officer for the Secretary of States office.

Dresner defines misinformation as inaccurate information unintentionally spread.

That might include posts that either break a platforms community standards policy or posts that violate California election laws.

"If someone is offering to get paid to vote on a certain behalf, that would be an example," she said.

"Every sort of misinformation requires a different tactic (of response) and it is a sort of ongoing process to determine what that is," Dresner said. "There is no clear threshold; it is a fine line between opinion and misinformation."


Whether the posts are removed is up to the social media companies. Dresner said the state does not have access to private Facebook groups, direct messages or similar social posts and communication.

Instead, the Office of Election Cybersecurity monitors what is playing out in the public sphere. Staff use commonly available services that allow users to set parameters for search options and others that charge for the monitoring itself.

Twitter, for example, offers a tool called TweetDeck that allows users to view multiple columns of searches or feeds. To restrict a search column to a specific area, a user can enter what's called a geocode, which limits the search to that area.
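As a rough illustration of how such a geocoded search is assembled (the search terms, coordinates, and radius below are hypothetical examples, not anything from the Secretary of State's office), Twitter's search syntax accepts a `geocode:` operator of the form `latitude,longitude,radius`:

```python
def build_geocode_query(terms: str, lat: float, lon: float, radius: str) -> str:
    """Build a Twitter search query restricted to a geographic area.

    The geocode operator takes the form geocode:latitude,longitude,radius,
    where the radius ends in "mi" or "km".
    """
    return f"{terms} geocode:{lat},{lon},{radius}"

# Hypothetical example: search for a phrase within 25 miles of Sacramento
query = build_geocode_query("ballots thrown out", 38.5767, -121.4934, "25mi")
print(query)  # ballots thrown out geocode:38.5767,-121.4934,25mi
```

Pasting a query like this into a TweetDeck search column is one way a user could approximate the kind of geographically scoped monitoring described above.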

Dresner said her office uses what they call a Misinformation Tracker to collect screenshots of posts and then they report each to the respective social media platform.

The office stores the screenshots indefinitely in the Misinformation Tracker to maintain a paper trail.

Such indefinite storage and the ways in which the state is surveilling its residents concerns David Greene, civil liberties director for the nonprofit Electronic Frontier Foundation.

"I don't think the government should store any people's personal information any longer than it needs to; indefinite seems unnecessary," Greene said. "If there is some type of coordinated disinformation effort that poses a serious danger to the state, then I think they could retain it for investigative purposes, but you don't want to be keeping dossiers just for the possibility that something may be useful in the future."

Typically it is the federal government that removes content from websites, usually because it concerns instances of child abuse or what is known as Terrorist and Violent Extremist Content. Greene said he wasn't surprised California is surveilling misinformation, especially when it comes to election integrity, and he expects similar efforts surrounding coronavirus vaccinations. He just wants the state to be more transparent about what it is doing.

"To me this is something they should do publicly and not behind the scenes," Greene said. After all, California's data privacy laws do not prohibit the state from looking at publicly available information.

Dresner, for her part, said she doesn't think her office is violating the privacy of Californians.

"It is all public information and that is what we monitor, the public sphere," she said. "We aren't worried about what people are saying in the privacy of their own homes; we are worried about what they are putting out there for the world to see."

Katie Licari, a reporter at the UC Berkeley Graduate School of Journalism, contributed to this story.

This coverage is made possible through Votebeat, a nonpartisan reporting project covering local election integrity and voting access. In California, CalMatters is hosting the collaboration with the Fresno Bee, the Long Beach Post and the UC Berkeley Graduate School of Journalism.

Read the original here:
California worked with social companies to remove election misinformation - CALmatters

Section 230 Isn’t A Subsidy; It’s A Rule Of Civil Procedure – Techdirt

from the make-section-230-boring-again dept

The other day Senator Schatz tweeted, "Ask every Senator what Section 230 is. Don't ask them if they want to repeal it. Ask them to describe it."

It's a very fair point. Most of the political demands to repeal Section 230 betray a profound ignorance of what Section 230 does, why, or how. That disconnect between policy understanding and policy demands means that those demands to repeal the law will only create more problems while not actually solving any of the problems currently being complained about.

Unfortunately, however, Senator Schatz's next tweet revealed his own misunderstanding. [Update: per this tweet, it wasn't his misunderstanding his next tweet revealed but rather the misunderstanding of other Senators who have proposed other sorts of "reforms" he was taking issue with. Apologies to Senator Schatz for misstating.] "I have a bipartisan bill that proposes changes to 230, but repeal is absurd. The platforms are irresponsible, but we should not have a government panel handing out immunity like it's a hunting license. We must rein in big tech via 230 reform and antitrust law, not lazy stunts."

There's a lot to unpack in that tweet, including the bit about antitrust law, but commenting on that suggestion is for another post. The issue here is that no, Section 230 is nothing like the government "handing out immunity like a hunting license," and misstatements like that matter because they egg on "reform" efforts that will ruin rather than "reform" the statute, and in the process ruin plenty more that the Constitution and our better policy judgment requires us to protect.

The point of this post is thus to try to dispel all such misunderstandings, which tend to regard Section 230's statutory protection as some sort of tangible prize the government hands out selectively, when in reality it is nothing of the sort. On the contrary, it reads like a rule of civil procedure that, like any rule of civil procedure, is applicable to any potential defendant that meets its broadly articulated criteria.

For non-lawyers "rules of civil procedure" may sound arcane and technical, but the basic concept is simple. When people want to sue other people, these are the rules that govern how those lawsuits can proceed so that they can proceed fairly, for everyone. They speak to such things as who can sue whom, where someone can be sued, and, if a lawsuit is filed, whether and how it can go forward. They are the rules of the road for litigation, but they often serve as more than a general roadmap. In many cases they are the basis upon which courts may dispense with cases entirely. Lawsuits only sometimes end with rulings on the merits after both parties have fully presented their cases; just as often, if not more often, courts will evaluate whether the rules of civil procedure even allow a case to continue at all, and litigation frequently ends when courts decide that they don't.

Which is important because litigation is expensive, and the longer it goes on the more cost-prohibitive it becomes. And that's a huge problem, especially for defendants with good defenses, because even if those defenses should mean that they would eventually win the case, the crippling cost involved in staying in the litigation long enough for that defense to prevail might bankrupt them long before it ever could.

Such a result hardly seems fair, and we want our courts to be fair. They are supposed to be about administering justice, but there's nothing just about letting courts be used as tools to obliterate innocent defendants. One reason we have rules of civil procedure is to help lessen the danger that innocent defendants can be drained dry by unmeritorious litigation against them. And that is exactly what Section 230 is designed to do as well.

An important thing to remember is that most of what people complain about when they complain about Section 230 are things that the First Amendment allows to happen. The First Amendment is likely to insulate platforms from liability for their users' content, and it's also likely to insulate them from liability for their moderation decisions. Section 230 helps drive those points home explicitly for providers of "interactive computer services" (which, it should be noted, include far more than just "big tech" platforms; they also include much smaller and non-commercial ICS providers, and even individual people), but even if there were no Section 230, the First Amendment would still be there to do the job of protecting platforms in this way. At least in theory.

In practice, however, defendant platforms would first have to endure an onslaught of litigation and all its attendant costs before the First Amendment could provide any useful benefit, which would likely be too little, too late for most if not all of them. The purpose of Section 230 is therefore to make sure those First Amendment rights can be real, and meaningful, and something that every sort of interactive computer service provider can be confident in exercising without having to fear being crushed by unconstitutional litigation if they do.

What people calling for any change to Section 230 need to realize is how these changes will do nothing but open the floodgates to this sort of crushing litigation against so much that the Constitution is otherwise supposed to protect. It is a flood that will inevitably chill platforms by effectively denying them the protection their First Amendment rights were supposed to afford, and in the process also chill all the expressive user activity they currently feel safe to enable. It is not an outcome that any policymaker should be so eager to tempt; rather, it is something to studiously avoid. And the first step to avoiding it is to understand how these proposed changes will do nothing but invite it.


Filed Under: brian schatz, civil procedure, section 230, subsidy

Visit link:
Section 230 Isn't A Subsidy; It's A Rule Of Civil Procedure - Techdirt

Amherst residents weigh in on potential resolution opposing COVID-19 restrictions – Lynchburg News and Advance

"Science and facts should outweigh emotions and political decision-making," Witt, of Madison Heights, said. "The fact is social distancing is all we have. Wear a mask, wash your hands and stand 6 feet apart. It's not asking you to give up your First Amendment [rights]."

Witt said federal, state and local governments have provided financial assistance for struggling businesses and organizations. She added she feels she has a right to be healthy.

"I prefer a mask. It's all we have," Witt said.

Teresa Ray, a lifelong county resident, said everyone has made sacrifices during the pandemic to flatten the curve but restrictions have tightened, further damaging businesses and residents in the process, while cases increase.

"If masks work, shouldn't these case numbers decline?" Ray said, adding: "The public has been programmed [to view] new cases as death sentences."

Several speakers strongly opposed one of Northam's latest restrictions, a 10-person limit on gatherings, especially with the Christmas holiday approaching.

"It is our constitutional right to assemble, especially in our own homes," Ray said.

Ben Summers, a county resident in favor of the resolution, said he sees businesses negatively affected by the restrictions. He cited his asthma as a reason he opposes the mask requirement, which he feels is unconstitutional.

Read the original here:
Amherst residents weigh in on potential resolution opposing COVID-19 restrictions - Lynchburg News and Advance

The Year That Changed the Internet – The Atlantic

That enthusiasm didn't last, but mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm. During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year. Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate. These actions had a domino effect, as podcast platforms, on-demand fitness companies, and other websites banned QAnon postings. Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.

As if to make clear how far things had come since 2016, Facebook and Twitter both took unusually swift action to limit the spread of a New York Post article about Hunter Biden mere weeks before the election. By stepping in to limit the story's spread before it had even been evaluated by any third-party fact-checker, these gatekeepers trumped the editorial judgment of a major media outlet with their own.

Gone is the naive optimism of social-media platforms' early days, when, in keeping with an overly simplified and arguably self-serving understanding of the First Amendment tradition, executives routinely insisted that more speech was always the answer to troublesome speech. Our tech overlords have been doing some soul-searching. As Reddit CEO Steve Huffman said when doing a PR tour about an overhaul of his platform's policies in June, "I have to admit that I've struggled with balancing my values as an American, and around free speech and free expression, with my values and the company's values around common human decency."


Nothing symbolizes this shift as neatly as Facebook's decision in October (and Twitter's shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized stand for free expression at Georgetown University. The strong protection of even literal Nazism is the most famous emblem of America's free-speech exceptionalism. But one year and one pandemic later, Zuckerberg's thinking, and with it the policy of one of the biggest speech platforms in the world, had evolved.

The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines. This might seem an obvious move; the virus has killed more than 315,000 people in the U.S. alone, and widespread misinformation about vaccines could be one of the most harmful forms of online speech ever. But until now, Facebook, wary of political blowback, had refused to remove anti-vaccination content. The pandemic has shown that complete neutrality is impossible. Even though it's not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don't want to be seen as feeding people information that might kill them.

Continued here:
The Year That Changed the Internet - The Atlantic

FCC Commissioner Brendan Carr Again Misrepresents The Debate Over Section 230 – Above the Law

Late on Tuesday evening, FCC Commissioner Brendan Carr suddenly issued a weird and misleading anti-230 Twitter thread, claiming (falsely) that supporters of Section 230 (whom he incorrectly calls "Big Tech's lobbyists") routinely conflate statutory protections with First Amendment rights. Here's the thread in plain text, with my responses and corrections interjected.

The debate over Section 230 often produces more heat than light.

One reason: Big Techs lobbyists routinely conflate statutory protections with First Amendment rights.

I mean, what?!? This is like claiming day is night, up is down, or yellow is purple. There is one side of this debate that has regularly conflated Section 230 with the 1st Amendment: the people arguing against Section 230. Almost every complaint about Section 230 is actually a complaint about the 1st Amendment. I mean, the NY Times has had to run a correction saying "oops, we blamed 230 for this, but really it was the 1st Amendment" multiple times.

For instance, they argue that action on the Section 230 Petition would force websites to carry speech in violation of their First Amendment rights.

Not at all. NTIAs Petition expressly says that websites would retain their 1st Amendment right to remove content for any reason.

This may be the weirdest of all the tweets in the bunch. The NTIA Petition is asking the FCC, including Brendan Carr, to reinterpret Section 230, to suggest that Congress (including those who wrote the law) and dozens of courts have all been interpreting it wrong. Let me repeat that: the petition is asking Carr to reinterpret the law. And yet, here he is citing that request as his evidence that his reinterpretation won't implicate 1st Amendment rights? It's kind of like a judge pointing to the plaintiff's complaint as the binding legal precedent. It makes no sense at all.

Similarly, the claim that Section 230 reform would resurrect the Fairness Doctrine or mandate neutrality misses the mark.

The Petition is quite clear on this: It would not require any website to carry any sort of content at all.

Again, citing the petition makes no sense. The petition is asking Carr to reinterpret the law. It's the request. It has no legal weight or authority (in part because it's wrong on nearly everything).

What Section 230 reform *would do* is bring much needed clarity to the terms contained in the statutory text.

There has never, not once, been a complaint from judges or the authors of the law that the terms are unclear. There is no problem with clarity. There are just some people who are upset that some websites moderate in a way they dislike.

In other words, the question presented by the Section 230 Petition is not whether the First Amendment will continue to cover a take down decision (it will) but whether a particular take down *also* benefits from Section 230s statutory protections.

But that's not an open question. It's pretty damn well settled. It's not like there's a court split here. Every single court decision has agreed on this. There's no confusion. There's no disagreement. There's no lack of clarity. The law is very clear.

The answer to that question flows from the text of the statute and leaves a websites constitutional rights uninfringed.

Right. Which is why we've pointed out that all the people complaining about content moderation decisions aren't actually mad about 230, but are mad about the 1st Amendment. And this includes, wait for it, FCC Commissioner Brendan Carr, who just months ago said that we need to reform Section 230 to stop tech companies from biased moderation. Except that moderation (biased or not) is protected by the 1st Amendment.

So, Brendan Carr seems to be talking out of both sides of his mouth. To Trumpists he goes on Fox News and says that we need to reform Section 230 to change their moderation practices and force them to keep content they don't want online. But then he goes on Twitter and insists it's the other guys (the people who actually know the law) who want to conflate 230 with the 1st Amendment, and that changes to 230 won't stop companies from moderating speech. The very speech that Brendan Carr said we need to change 230 to force companies to host.

So which Brendan Carr is lying?

FCC Commissioner Brendan Carr Again Misrepresents The Debate Over Section 230

More Law-Related Stories From Techdirt:

Another Day, Another Antitrust Lawsuit For Google
DEA Ditches Location Data Vendor Currently Being Investigated By Congress
Yet Another Report Shows Asset Forfeiture Doesn't Reduce Crime Or Cripple Criminal Organizations

Link:
FCC Commissioner Brendan Carr Again Misrepresents The Debate Over Section 230 - Above the Law