Archive for the ‘First Amendment’ Category

So to Speak Podcast Transcript: The First Amendment at the Supreme Court – Foundation for Individual Rights and Expression

Note: This is an unedited rush transcript. Please check any quotations against the audio recording.

Nico Perrino: All right, folks. Welcome back to So to Speak, the free speech podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. Today, we are reviewing the 2024 Supreme Court term, looking at the First Amendment cases. Yesterday, we got NetChoice handed down from the court in a 9-0 decision. Joining us to discuss these cases is Robert McNamara, who is the Deputy Litigation Director for the Institute for Justice. Bob, welcome onto the show.

Bob McNamara: Thanks for having me.

Nico Perrino: To his right, we have Ronnie London, who is FIRE's General Counsel. Our listeners should be well acquainted with him. Ronnie, welcome back onto the show.

Ronnie London: Thanks, Nico.

Nico Perrino: And then to his right, we have Bob Corn-Revere, who is FIRE's Chief Counsel. Bob, welcome back onto the show.

Bob Corn-Revere: Always happy to be here, Nico.

Nico Perrino: So, we're gonna try and take this in reverse chronological order with one exception, and our listeners will get that soon because we've discussed some of these cases back on past podcasts. But we haven't discussed, for example, the NetChoice case. We haven't discussed Murthy. We haven't discussed Gonzalez v. Trevino. There's some cases we need to get to, and I'm gonna try to frontload them, and then we will get back to some of the cases that were decided earlier in the term.

I wanna remind folks that we are recording this 24 hours after NetChoice came down. So, we are still digesting the opinion; and, hopefully, during this conversation we can think aloud about it and its implications. Also, this coming Monday, July 8th, we will have a live webinar for FIRE's members to participate and ask questions of their own about this past Supreme Court term. Ronnie, I think you're participating in that. We also have Will Creeley, FIRE's Litigation Director or Legal Director, excuse me and Darpana Sheth, FIRE's Vice President of Litigation.

So, without further ado, let's move on to the NetChoice cases. We have two cases here, one stemming from a law in Florida, one stemming from a law in Texas. One decision from the court. It was 9-0 with some somewhat testy concurrences. They dismissed the cases or, rather, they vacated and remanded the cases because they say that these lower courts did not review the laws properly as facial challenges. Bob, I wanna start with you. What is a facial challenge, and why did the court say that these lower courts didn't look at them properly?

Bob Corn-Revere: Well, a facial challenge is one that challenges the law, basically, on its own terms and asks the court to conclude that it is unconstitutional either in all of its applications or, in the First Amendment context, in too many applications that would violate expressive rights. So, the court has permitted these kinds of challenges to go forward for many decades, but it is not the favored approach by the court.

And that's one of the things that all of the justices in the NetChoice decisions made clear except the majority, the six justices who formed the core of the opinion, basically said it might not be favored. But here's the process that you would use for doing a facial challenge. It sets a fairly high bar for that but, nonetheless, says that in this context, particularly in a First Amendment context, you can bring these challenges, and it sent the case back to the 11th and 5th Circuits to do a do-over.

Nico Perrino: To assess them properly.

Bob Corn-Revere: Yes.

Nico Perrino: So, again, a facial challenge is just a challenge to the law.

Bob Corn-Revere: You're challenging the law as a whole.

Nico Perrino: So, it's not, for example, how the law was applied in a specific circumstance to, say, Meta or well, I was about to say Instagram, but Instagram is owned by Meta or X, for example, or TikTok.

Bob Corn-Revere: That's right.

Ronnie London: Or would apply to a certain subset of applications.

Nico Perrino: So, the court sent these back down. But in the majority decision, you get a lot of stirring language about how the 5th Circuit and what was it the 11th Circuit should analyze this case from a First Amendment perspective.

Bob Corn-Revere: Well, that's right. The court, while it didn't decide the ultimate First Amendment issues and didn't decide whether or not these laws are constitutional, it did a serious course correction, particularly for the 5th Circuit, to make clear that when the courts do look at these issues anew that they do so under a clear set of governing First Amendment principles. So, it made a number of things quite clear, so clear that even the 5th Circuit can't get it wrong the next time around.

So, they include things like the First Amendment applies to new communications technologies. That is the rule even in the case where you have advancing technology, and you have new and novel applications. It made it quite clear that it applies to social media. It also made clear that when social media is making editorial choices, moderation choices, where they decide what information to post or not to post or to downgrade, that those are editorial choices, and editorial choices are protected by the First Amendment.

It also made clear something that the 5th Circuit got completely wrong the first time around, which is that the First Amendment's protection of speech applies to government actors, not to private actors. The 5th Circuit had concluded that the moderation choices by social media platforms were censorship in the same sense, really, as censorship by the government. And Justice Kagan's majority opinion made clear that, no, that gets it completely backwards and that when the court goes to reconsider these issues the next time around, it has to apply these First Amendment principles articulated by the majority.

Bob McNamara: And those principles, I think, are right, and it's nice to see the court articulate them. But NetChoice itself, I don't think, really accomplishes very much. And I think in part it doesn't accomplish things because both sides of the versus have kinda behaved in ways that made it hard for the court to resolve the case. The first is just the statutes that are being challenged are kind of a mess.

What actually happened is that legislators in Texas and Florida were mad at Twitter for what they viewed as censoring conservative voices. So, they tried to pass a law stopping Twitter from doing that. But the problem with legislating when you're extremely grumpy is that you don't do a very good job of defining what you're trying to legislate against.

Bob Corn-Revere: It's not just from legislators that are grumpy.

Bob McNamara: The problem with legislating, Bob this is why legislatures shouldn't do things. But, no, it kinda emerged at oral argument in this case. Everyone's talking about this as a regulation of Twitter and Facebook. But by their terms, these statutes seem to regulate Etsy. How does this apply to Etsy? The statute is such a mess. It makes it hard to do the sort of facial analysis that Bob was talking about.

Bob Corn-Revere: But that's more true of the Florida law than it is of the Texas law. The Texas law focused on social media platforms of a certain size. The Florida law applied to internet platforms that could include pretty much anything, and that's the issue that really did emerge and dominated the oral argument in the case. My concern with this is that the court's decision sending it back and saying, You have to have a thorough analysis of what the statute does, is that it tends to reward legislatures for this broad and sloppy legislation.

If they pass a law that says the State gets to regulate everything on the internet, then that places the burden on would-be plaintiffs to say, okay, let's look at the entire universe of things that this law can lawfully do and compare it to those things that regulate speech that the First Amendment prohibits the government from doing. That's a real burden for the plaintiffs where really the root problem is sloppiness by legislators.

Ronnie London: Or it puts the burden on the plaintiff to challenge the part of the law that it cares about. That's what I found a little bit well, I don't wanna say silly about this whole thing but the chiding of NetChoice for the way that they brought this case and for the way the parties and the courts addressed it below.

Look, we all know what the legislatures cared about and what the folks who were upset about being censored on Twitter or on any other social media platform cared about. They cared about the parts of the platform where there's moderation. Nobody's complaining about, hey, I can't get Gmail because of my political views. That's not happening. You don't see news stories about it. Everyone knew what this case was about, and they litigated it accordingly.

So, to get up to the Supreme Court and say, Well, look how broad this Florida statute is. You really have to do a much more complicated legitimate sweep of the legislation analysis before we can even think about a preliminary injunction all seems a bit much. I'm glad they said free speech-reinforcing things in the balance of the decision, but I do wonder what's gonna happen when it goes back.

Nico Perrino: And Bob, hopefully, clarified the Texas law, which essentially prohibits social media companies from engaging in viewpoint discrimination, and you can get some silly outcomes from that, right? If you ban speech supportive of Al Qaeda, you also have to ban speech that would be in opposition to Al Qaeda.

Ronnie London: Well, actually, you can't do either one of those things. Let's be clear. You can't discriminate based on viewpoint, which means you can't pick a topic and say, This topic is off limits. You have to allow things. For example, we know that being hateful or being scandalous or being immoral are all viewpoints that are protected by the First Amendment. We knew that from Tam and Brunetti from a few terms back. So, you can't just wholesale say, This type of speech is off, and claim to be viewpoint neutral.

Nico Perrino: Although they tried, right, by saying, We could bunch it into categories, and we would ban speech by categories.

Bob McNamara: The Texas Solicitor General's kinda saving construction of the Texas law was whoa, whoa, whoa, we're not saying that you have to host pro-Al Qaeda speech. You just have to ban the category. You have to say that there will be no speech about Al Qaeda, and I have no idea why he thought that was better. But that was, in fact, his defense of the law.

Ronnie London: Well, that's not viewpoint discriminatory at all then. Problem solved.

Bob McNamara: That was one of the answers that you're forced into at oral argument when you're trying to defend bad legislation.

Nico Perrino: In Florida, you have a slightly different piece of legislation. As you say, it's a little bit broader in that it applies to internet platforms. But it's more narrowly focused in that it also only applies to how these internet platforms treat news publications, political candidates, for example.

Bob McNamara: Exactly, exactly. It's both more broadly focused and more narrowly focused.

Bob McNamara: With content-based preferences thrown in to boot.

Bob McNamara: Exactly.

Nico Perrino: So, a couple of lines from the majority opinion. On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana. And this speaks to the animus behind the Texas and the Florida laws insofar as they didn't like how these platforms were moderating content. And, therefore, they tried to change the balance of what users could see on the platform.

Bob Corn-Revere: It also speaks to how the 5th Circuit erred in trying to paint decisions by social media platforms as censorship in the same sense as state censorship. What the court was saying is that this is really a problem because the First Amendment speaks to government action and not to private action. Correcting that load-bearing premise of the 5th Circuit decision really, I think, goes a long way.

Bob, that's the one point that I disagree with you on. I think this does a lot for future cases in terms of laying out the ground rules, setting the baseline constitutional principles that are going to apply so that, as I mentioned before, even the 5th Circuit can get it.

Bob McNamara: Oh, no, it's nice to see those basic principles reaffirmed, and I am glad to have a left-right coalition firmly saying that the government can't regulate the marketplace of ideas in order to make it more fair, and recognizing that that is the most dangerous justification for regulating speech because I think all of us realize the marketplace of ideas is unfair because if it were fair our ideas would've won by now.

So, obviously, it's biased against us. And kinda recognizing the dangerousness of that idea, which has had a lot of currency on the left and a lot of currency on the right, and to have it rejected by that kind of an ideological coalition, I think, is gratifying, and it's great to see.

Nico Perrino: Well, but if you look at the concurrence from what was it Alito or Thomas, they say that all the First Amendment analysis that's provided in Kagan's majority opinion is dicta. So, what can we take from it?

Bob Corn-Revere: Well, in a sense it is dicta in that it's not a binding ruling except for the fact that you have a majority of the court saying this is what the First Amendment requires. So, while the court, I suppose, when this case eventually comes back, as it inevitably will, could say, Sorry, we didn't mean it. That's not going to happen. You now have a majority of the court, a solid majority of the court, and as Bob says, a bipartisan or multi-partisan coalition of justices making these points. So, I think that is going to set the baseline going forward even if it isn't a precedent as in, say, striking down a particular rule.

Ronnie London: Here's a minority view. I don't think it is dicta. I think it is necessary to the outcome of the case because in order to say this was a facial challenge and you did not conduct the analysis appropriately, you have to discard some of the other challenges. For example, I have always worried throughout this case that somebody would wise up and go, you know, this stuff is preempted by Section 230. Why are we dicking around with reaching the constitutional question? Now, there's a footnote in the decision that explains

Bob Corn-Revere: Is dicking around a technical term?

Ronnie London: Well, that's the technical term.

Nico Perrino: We've got a lay audience on this podcast.

Bob McNamara: There's a Latin phrase for that.

Bob Corn-Revere: That's right. That's right.

Ronnie London: In any event, there was a footnote to that explaining why that was not advanced below. The court below in the 11th Circuit didn't pick it up. But that was always a potential outcome if you really wanted to dodge, not just dodge a little like they did here. But the other issue in this case is a compelled speech issue, and compelled speech doesn't use the same analysis as a facial challenge. If you look back, for example, at 303 Creative and the cases that it cites, what becomes clear is compelled speech is unconstitutional pretty much full stop.

If you go back through the court's compelled speech cases, you don't see them applying strict scrutiny. You see them say compelling speech is unconstitutional. Now, in order to have the facial challenge and the failure to conduct it properly be the grounds on which this decision is rendered and sending it back, you have to ignore the compelled speech aspect of it.

In order to do that, you have to have this explanation of what rules would apply and should've applied if you had conducted the analysis properly. I know I'm going out on a limb a little bit here by calling it necessary to the decision, but I don't think it's so obviously dicta that they could simply walk away from it when the case comes back up.

Nico Perrino: BCR?

Bob Corn-Revere: I think it's dicta plus. I think it is the court expressing what it believes the law to be. And as I said before, a majority of the court. You can argue over whether or not it was absolutely necessary to the decision. Ultimately, the court's analysis of what is required for a facial challenge was sufficient to make a decision and send it back to say that the lower courts hadn't performed that necessary analysis.

The court was clear about this, saying, We need to lay out the First Amendment principles so that the lower courts don't screw it up the next time, and called out the 5th Circuit three different times in the opinion to drive that point home. I would say it's more than dicta, maybe less than a ruled decision. But, nonetheless, I think it's going to guide lower courts going forward.

Nico Perrino: Well, the majority says that the 5th Circuit decision rested on a serious misunderstanding of First Amendment precedent and principle, but I'm not sure that Alito and Thomas are as convinced because they give credence to the common carrier argument that they say the majority decision just didn't grapple with, but they should have.

Bob Corn-Revere: They do. They're a minority of justices that take a different view. And this is something else that, I think, underscores a lot of the problem with court viewers. And that is, here you have a 9-0 decision, no dissents. And, yet, it's in effect a 5-1/2 to 3-1/2 decision setting out what rules should apply to these cases.

It also belies the usual political reporting that you'll get about this court saying it's this conservative supermajority of six justices. Well, that's just not the case in the First Amendment context. You see justices joining forces across ideological lines quite a bit based on the First Amendment principles that we all hope that the court will uphold.

Nico Perrino: Well, do you guys think the common carrier argument carries any water?

Bob McNamara: So, I think it's very difficult to make the common carrier argument for social media companies because part of what's inherent in being a common carrier is the notion of some kind of quasi-monopoly. You're a must-carry because you have this route. You have this power line, and you're some type of technical monopoly. That's the basis for common carrier status.

Bob Corn-Revere: And, usually, a government-granted monopoly. And for justices that rely so heavily on history and tradition, like Justice Thomas and Justice Alito, to then claim that social media companies are common carriers is nonsense.

Nico Perrino: So, common carriers, what you're saying there, Bob, is that they're like the phone companies or, to an extent, private water utilities or electrical companies.

Bob McNamara: Or a railroad is the classic example. You build a railroad line. No one's gonna build a railroad line next to yours. So, you have essentially a monopoly over the two cities that you've built your railroad line between, and that comes with certain regulatory obligations not to deny people service on your railroad line.

Ronnie London: Well, the other distinguishing feature of a common carrier is the offering of non-discriminatory service. Here, you have the mirror image of that. Everyone's up in arms because the service is being delivered on a discriminatory basis based on which viewpoints or what types of substance the social media platforms want to carry. It's a little bit ironic to say, Okay, wait. That's a problem. Let's make them common carriers.

Bob Corn-Revere: But not only that, if you go back historically to the origin of the common carrier doctrine, whether you're talking about railroads or stagecoaches or waterways, those things were adopted in the communications world only by analogy. When the Radio Act of 1927 and the Communications Act of 1934 were adopted, they basically were saying, Why don't we just borrow that concept of common carriage from the transportation world, and we'll apply it. But it applied only to specific phone service, point-to-point communication between people where the company had nothing to do with that communication.

It didn't apply to radio or, ultimately, to television or other communications media. Here you have a new medium that has never been subject to these kinds of rules and has never played by this non-discriminatory access model that Ronnie was talking about. So, to simply say, Poof, you're a common carrier because we want to impose obligations on you is contrary to history and is contrary to the traditions of the First Amendment.

Bob McNamara: I also think it belies a lack of imagination about how the world works. People invent things. The world is dynamic. Right now, you could say Twitter has a monopoly on Twitter, but there's no obstacle. And as we've seen, people do start new social media companies.

Bob Corn-Revere: Or they leave social media companies when they become dissatisfied with the new owner.

Ronnie London: Exactly, the marketplace seems to be operating.

Nico Perrino: Well, the argument, I think, on the other side would be we saw what happened with Donald Trump after January 6th. All these social media companies dumped him. But then the counterargument to that, of course, is you have platforms like Parler and Truth Social that started up.

Ronnie London: And he was never heard from again.

Nico Perrino: Justice Barrett also has a somewhat interesting concurrence in which she speculates that maybe these principles that underlie the First Amendment that the majority opinion says don't change with the advent of new technologies. Maybe they will change with artificial intelligence. Did you all see that?

Bob Corn-Revere: Well, she did try and draw some nuanced distinctions between direct editorial choices made on the part of platforms or where they use algorithms to simply implement the users' choices or use artificial intelligence to make those kinds of decisions. But she simply raised that as a question, which I think is appropriate in a matter where the court is not deciding the ultimate questions but saying that this is a complex and nuanced area, and we need to take some care in analyzing the facts before we make a pronouncement one way or the other.

I'm not wild about the idea that she left it open that even machine-based editorial choices, particularly machine-based editorial choices, might change the balance because, ultimately, it is the platform that is programming those algorithms to make those choices.

Ronnie London: At least until the singularity.

Bob Corn-Revere: That's right.

Nico Perrino: Well, to level-set I think what Texas and Florida were seeking to address with their laws were human-centric moderation decisions to de-platform

Bob Corn-Revere: I don't think they're distinguished.

Nico Perrino: No, I don't think they did distinguish. But I think the animus for the laws came from the deplatforming of former President Trump

Bob Corn-Revere: Who knows what goes on inside the minds of legislators?

Nico Perrino: from the New York Post story and the Hunter Biden laptop issue. But most platforms' moderation decisions happen algorithmically.

Bob Corn-Revere: They have to.

Nico Perrino: They have to. You have millions of pieces of content that need to be sorted through and can't be sorted through simply by humans. And then she has this question, But what if a platform's algorithm just presents automatically to each user whatever the algorithm thinks the user will like?

Ronnie London: Well, that's still a choice

Nico Perrino: It's still a choice, but it's also how most algorithms work because they see what you engage with. They see what you look at, and then they tweak the feed to present you with the content it thinks will be more engaging.

Bob Corn-Revere: That's one of the things that they do. What the legislatures were really concerned about is the fact that these platforms have terms of service where they define what kinds of communities they want to foster. They don't like certain kinds of speech. They don't like hate speech and, again, I'm generalizing or misinformation, various kinds of things that are supposed to be forbidden on those platforms.

Now, a lot of times these rules are enforced more in the breach than in the observance. It's hard when you've got billions of posts coming in all of the time where, in the case of YouTube, 500 hours of new content every minute.

Ronnie London: 500 million, I hope.

Bob Corn-Revere: Huh?

Ronnie London: 500 million.

Bob Corn-Revere: No, no, 500 hours per minute.

Ronnie London: Per minute.

Bob Corn-Revere: Per minute.

Nico Perrino: 500 million.


Ruling boosts social media free speech protections, some say – Roll Call

The Supreme Court's decision on two cases challenging social media content moderation policies could expand protections for tech platforms under the First Amendment umbrella even if Congress were to dilute other protections, according to legal experts closely watching the issue.

Companies posting user content on the internet, like Meta Platforms Inc., enjoy a broad shield under Section 230 of a 1996 law that protects tech against liability for such content. Lawmakers who want such platforms to rein in harmful content have threatened to revoke that section and force stricter moderation of what gets uploaded.

But the court's decision last week, which remanded the Florida and Texas cases to lower courts, opens the door to broader, more fundamental cover from the First Amendment, even as the ruling expressly avoids declaring social media posts to be free speech.

At the end of the day, a lot of the content that's protected by Section 230 would also be protected by the First Amendment, and that includes choices made by social media services to take down or not take down particular content, said Samir Jain, vice president of policy at the Center for Democracy & Technology.

What the court is saying here is that those are protected by the First Amendment, Jain said in an interview, referring to how companies moderate content on their platforms. And that would be true even if Section 230 didn't exist.

The Texas and Florida laws were part of the pushback to perceived censorship of conservative views by tech companies, including Meta, Google parent Alphabet Inc. and others. The laws required the platforms to offer a detailed explanation and appeals process when users or their content were blocked. Tech companies sued to end the laws.

The U.S. Court of Appeals for the 11th Circuit enjoined parts of the Florida law on First Amendment grounds, while the 5th Circuit upheld the Texas law but stayed its enforcement pending appeal. Both states appealed to the U.S. Supreme Court.

Justice Elena Kagan criticized the 5th Circuit decision that upheld the Texas law, writing for a six-justice majority that social media content moderation is free speech protected by the First Amendment to the Constitution.

Deciding on the third-party speech that will be included in or excluded from a compilation and then organizing and presenting the included items is expressive activity of its own, Kagan wrote.

Although privacy advocates opposed the Texas and Florida laws, some were alarmed by the majority opinion likening actions of social media companies to those made in newsrooms.

Kagan's views are disappointing, because it analogizes social media platforms to the editorial work of newspapers, said Fordham Law professor Zephyr Teachout, a senior adviser at the American Economic Liberties Project.

As we argued in our amicus brief, and as noted in today's concurring opinions, social media platforms are more like town squares, Teachout said in a statement. The First Amendment is not a shield for censorship and discrimination in the town square, and it shouldn't protect against discrimination and targeting by opaque algorithms.

Expanding First Amendment protections to include content moderation and curation by tech companies could potentially result in protections even in cases where no human judgment is involved, according to Tim Wu, a law professor at Columbia University who previously served as a senior White House official on tech policy.

The next phase in this struggle will presumably concern the regulation of artificial intelligence, Wu wrote in a July 2 op-ed in The New York Times. I fear that the First Amendment will be extended to protect machine speech at considerable human cost.

Algorithms that are currently used by tech companies to determine which posts and content are allowed and which are taken down are merely automated versions of human choices, Jain argued.

Jain offered the example of computer code screening and flagging users' posts for certain terms and phrases considered by the tech platforms to be hateful speech. Even though it's an algorithm, in some ways it's implementing human decisions, he said.

In the context of protecting Americans data privacy, some members of Congress have been mulling ways to curb the broad liability protections that tech companies enjoy because of Section 230.

In recent months, top House lawmakers including Energy and Commerce Chair Cathy McMorris Rodgers, R-Wash., and ranking member Frank Pallone Jr., D-N.J., have held hearings on sunsetting such protections by the end of 2025.

As written, Section 230 was originally intended to protect internet service providers from being held liable for content posted by a third-party user or for removing truly horrific or illegal content, Rodgers said at a committee hearing in May. But giant social media platforms have been exploiting this to profit off us and use the information we share to develop addictive algorithms that push content onto our feeds.

Some fear the emergence of powerful artificial intelligence systems that could potentially make decisions on their own without human direction will complicate the question of First Amendment protections for content moderation.

According to Jain, Justice Amy Coney Barrett, in her concurring opinion, raised the question of a future where tech companies could develop an artificial intelligence tool whose job is to figure out what is hateful, what isn't and whether a human really is making an expressive choice protected by the First Amendment.

That's a question the [justices] don't answer, Jain said.


Can the First Amendment Protect Americans From Government Censorship? – The New York Sun

Last week, in Murthy v. Missouri, the Supreme Court hammered home the distressing conclusion that, under the court's doctrines, the First Amendment is, for all practical purposes, unenforceable against large-scale government censorship. The decision is a strong contender to be the worst speech decision in the court's history.

(I must confess a personal interest in all of this: My civil rights organization, the New Civil Liberties Alliance, represented individual plaintiffs in Murthy.)

All along, there were some risks. As I pointed out in an article called Courting Censorship, Supreme Court doctrine has permitted and thereby invited the federal government to orchestrate massive censorship through the social media platforms. The Murthy case, unfortunately, confirms the perils of the courts doctrines.

One danger was that the court would try to weasel out of reaching a substantive decision. Months before Murthy was argued, there was reason to fear that the court would try to duck the speech issue by disposing of the case on standing.

Indeed, in its opinion, the court denied that the plaintiffs had standing by inventing what Justice Samuel Alito calls a new and heightened standard of traceability, a standard so onerous that, if the court adheres to it in other cases, almost no one will be able to sue. It is sufficiently unrealistic that the court won't stick to it in future cases.

The evidence was more than sufficient to establish at least one plaintiff's standing to sue, and consequently, as Justice Alito's dissent pointed out, "we are obligated to tackle the free speech issue."

Regrettably, the court, again in Justice Alito's words, "shirks that duty and thus permits this case to stand as an attractive model for future officials who want to control what the people say, hear, and think." The case gives a green light for the government to engage in further censorship.

A second problem was doctrinal. The Supreme Court has developed doctrine that encourages government to think it can censor Americans through private entities as long as it is not too coercive. Accordingly, with painful predictability, the oral argument in Murthy focused on whether there had been government coercion.

The implications were not lost on the government. Although it had slowed down its censorship machine during litigation, it revved it up after the court's hearing emphasized coercion. As Matt Taibbi put it, the FBI and the Department of Homeland Security reportedly resumed contact with Internet platforms after oral arguments in this case in March led them to expect a favorable ruling.

The First Amendment, however, says nothing about coercion. On the contrary, it distinguishes between "abridging" the freedom of speech and "prohibiting" the free exercise of religion. As I have explained in great detail, the amendment thereby makes clear that the Constitution's standard for a speech violation is "abridging," that is, reducing, the freedom of speech, not coercion. A mere reduction of the freedom violates the First Amendment.

The court in Murthy, however, didn't recognize the significance of the word "abridging." This matters in part for the standing question. It's much more difficult to show that the plaintiffs' injuries are traceable to government coercion than to show that they are traceable to government abridging of the freedom of speech. More substantively, if the court had recognized the First Amendment's word "abridging," it would have clarified to the government that it can't use evasions to get away with censorship.

Other doctrinal disasters included the court's casual indifference to listeners' or readers' rights: the right of speakers to hear the speech of others. The court treated such rights as if they were independent of the rights of speakers and therefore concluded that they would broadly invite everyone to sue the government.

Listeners' rights, though, are most clearly based in the First Amendment when they are understood as the right of speakers to hear the speech of others, as this is essential for speakers to formulate and refine their own speech. The right of speakers to hear what others say is, therefore, the core of listeners' rights. From this modest understanding of listeners' rights, the plaintiffs' rights as listeners should have been understood as part of their rights as speakers, an analysis that would have avoided hyperbolical judicial fears of permitting everyone to sue.

The court's concern that a recognition of listeners' rights would open up the courts to too many claimants is especially disturbing when the government has censored millions upon millions of posts with the primary goal of suppressing what the American people can hear or read.

When the most massive censorship in American history prevents Americans from learning often-true opinion on matters of crucial public interest, it should be no surprise that there are many claimants. The court's disgraceful reasoning suggests that when the government censors a vast number of Americans, we lose our right of redress.

The greatest danger comes from the court's tolerance of the sub-administrative power that the government uses to corral private parties into becoming instruments of control. Administrative regulation ideally runs through notice-and-comment rulemaking.

In contrast, sub-administrative regulation works through informal persuasion, including subtle threats, regulatory hassle, and illicit inducements. By such means, the government can get the private platforms to carry out government-orchestrated censorship of their users.

The federal government once had no such sub-administrative power, and it therefore had little control over speech. It could punish speakers only through criminal prosecutions, that is, by going to court and showing that the defendant's speech violated the criminal law.

Now, however, federal officials can subtly get the platforms to suppress speech, often covertly, so an individual won't even know he is being suppressed. Thus, whereas the government traditionally could only punish the individual, it now can make his speech disappear.

Even worse, the court's tolerance of this sub-administrative privatization of censorship reverses the burden of proof. Government once had to prove to a judge and jury that a speaker's words were illegal. Now, instead, the speaker must prove that the government censored him.

What's more, there's no effective remedy. The court's qualified immunity doctrine makes it nearly impossible for censored individuals to get damages for past censorship. And the obstacles to getting an injunction mean that it's nearly impossible to stop future censorship.

For example, the government can claim, as it did in Murthy, that it's no longer censoring the affected individual. Then, poof! The possibility of an injunction disappears. Moreover, because of the court's indifference to listeners' rights, even to the right of speakers to hear the speech of others, an injunction can protect only a handful of individuals; it can't stop the government's massive censorship of vast numbers of Americans.

The court thus puts Americans affected by censorship in an unenviable position. It reverses the burden of proof and denies Americans any effective remedy.

So, for multiple reasons, Murthy is probably the worst speech decision in American history. In the face of the most sweeping censorship in American history, the decision fails to recognize either the realities of the censorship or the constitutional barriers to it.

In practical terms, the decision invites continuing federal censorship on social media platforms. It thereby nearly guarantees that yet another election cycle will be compromised by government censorship and condemns a hitherto free society to the specter of mental servitude.

This article was originally published by RealClearPolitics and made available via RealClearWire.

Originally posted here:
Can the First Amendment Protect Americans From Government Censorship? - The New York Sun

The aftermath of the Supreme Court's NetChoice ruling – The Verge

Last week's Supreme Court decision in the NetChoice cases was overshadowed by a ruling on presidential immunity in Trump v. US that came down only minutes later. But whether or not America even noticed NetChoice happen, the decision is poised to affect a host of tech legislation still brewing on Capitol Hill and in state legislatures, as well as lawsuits that are percolating through the system. This includes the pending First Amendment challenge to the TikTok ban bill, as well as a First Amendment case about a Texas age verification law that the Supreme Court took up only a day after its NetChoice decision.

The NetChoice decision states that tech platforms can exercise their First Amendment rights through their content moderation decisions and how they choose to display content on their services, a strong statement that has clear ramifications for any laws that attempt to regulate platforms' algorithms in the name of kids' online safety, and even for a pending lawsuit seeking to block a law that could ban TikTok from the US.

"When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices," Justice Elena Kagan wrote in the majority opinion, referring to Facebook's News Feed and YouTube's homepage. "And because that is true, they receive First Amendment protection."

NetChoice isn't a radical upheaval of existing First Amendment law, but until last week, there was no Supreme Court opinion that applied that existing framework to social media platforms. The justices didn't rule on the merits of the cases, concluding, instead, that the lower courts hadn't completed the necessary analysis for the kind of First Amendment challenge that had been brought. But the decision still provides significant guidance to the lower courts on how to apply First Amendment precedent to social media and content moderation. "The Fifth Circuit was wrong in concluding that Texas's restrictions on the platforms' selection, ordering, and labeling of third-party posts do not interfere with expression," Kagan wrote of the appeals court that upheld a Texas law seeking to prevent platforms from discriminating against content on the basis of viewpoint.

The decision is a revealing look at how the majority of justices view the First Amendment rights of social media companies, something that's at issue in everything from kids' online safety bills to the TikTok ban.

The court is already set to hear Free Speech Coalition v. Paxton next term, a case challenging Texas's HB 1181, which requires internet users to verify their ages (sometimes with government-issued IDs) to access porn sites. Free Speech Coalition, an adult entertainment industry group that counts Pornhub among its members, sued to block the law but lost on appeal. The justices' decision in that case next year has the potential to impact many different state and federal efforts to age-gate the internet.

One recently signed law that may need to contend with the ruling is New York's Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which requires parental consent for social media companies to use "addictive feeds" on minors. The NetChoice ruling calls into question how far legislatures can go in regulating algorithms, that is, software programmed to surface or deprioritize different pieces of information for different users.

A footnote in the majority opinion says the Court "does not deal here with feeds whose algorithms respond solely to how users act online, giving them the content they appear to want, without any regard to independent content standards." The note is almost academic in nature: platforms usually take into account many different variables beyond user behavior, and separating those variables from one another is not a straightforward matter.

"Because it's so hard to disentangle all of the users' preferences, and the guidance from the services, and the editorial decisions of those services, what you're left with, technologically speaking, is algorithms that promote content curation. And it should be inevitably assumed then that those algorithms are protected by the First Amendment," said Jess Miers, who spoke to The Verge before departing her role as senior counsel at the center-left tech industry coalition Chamber of Progress, which receives funding from companies like Google and Meta.

"That's going to squarely hit the New York SAFE Act, which is trying to argue that, look, it's just algorithms, or it's just the design of the service," said Miers. The drafters of the SAFE Act may have presented the law as not having anything to do with content or speech, but NetChoice poses a problem, according to Miers. "The Supreme Court made it pretty clear, curation is absolutely protected."

Miers said the same analysis would apply to other state efforts, like California's Age Appropriate Design Code, which a district court blocked with a preliminary injunction; the state has appealed. That law required platforms likely to be used by kids to consider their best interests and default to strong privacy and safety settings. Industry group NetChoice, which also brought the cases at issue in the Supreme Court, argued in its 2022 complaint against California's law that it would interfere with platforms' own editorial judgments.

"To the extent that any of these state laws touch the expressive capabilities of these services, those state laws have an immense uphill battle, and a likely insurmountable First Amendment hurdle as well," Miers said.

Michael Huston, a former clerk to Chief Justice Roberts who co-chairs law firm Perkins Coie's Appeals, Issues & Strategy Practice, said that after this ruling, any sort of ban on content curation would be subject to a level of judicial scrutiny that is difficult to overcome. A law that, for instance, requires platforms to only show content in reverse-chronological order would likely be unconstitutional. (California's Protecting Our Kids from Social Media Addiction Act, which would prohibit the default feeds shown to kids from being based on any information about the user or their devices, or from recommending or prioritizing posts, is one such real-life example.) "The court is clear that there are a lot of questions that are unanswered, that it's not attempting to answer in this area," Huston said. "But broadly speaking ... there's a recognition here that when the platforms make choices about how to organize content, that is itself a part of their own expression."

The new Supreme Court decision also raises questions about the future of the Kids Online Safety Act (KOSA), a similar piece of legislation at the federal level that's gained significant steam. KOSA seeks to create a duty of care for tech platforms serving young users and allows them to opt out of algorithmic recommendations. "Now with the NetChoice cases, you have this question as to whether KOSA touches any of the expressive aspects of these services," Miers said. In evaluating KOSA, a court would need to assess "does this regulate a non-expressive part of the service, or does it regulate the way in which the service communicates third-party content to its users?"

Supporters of these kinds of bills may point to language in some of the concurring opinions (namely ones written by Justices Amy Coney Barrett and Samuel Alito) positing scenarios where certain AI-driven decisions do not reflect the preferences of the people who made the services. But Miers said she believes that kind of situation likely doesn't exist.

David Greene, civil liberties director at the Electronic Frontier Foundation, said that the NetChoice decision shows that platforms' curation decisions are First Amendment-protected speech, and "it's very, very difficult if not impossible for a state to regulate that process."

Similarly important is what the opinion does not say. Gautam Hans, associate clinical professor and associate director of the First Amendment Clinic at Cornell Law School, predicts there will be at least some state appetite to keep passing laws pertaining to content curation or algorithms, by paying close attention to what the justices left out.

"What the Court has not done today is say, states cannot regulate when it comes to content moderation," Hans said. "It has set out some principles as to what might be constitutional versus not. But those principles are not binding."

There are a couple of different kinds of approaches the court seems open to, according to experts. Vera Eidelman, staff attorney at the American Civil Liberties Union (ACLU)'s Speech, Privacy, and Technology Project, noted that the justices pointed to competition regulation, also known as antitrust law, as a possible way to protect access to information. "These other regulatory approaches could, the Supreme Court seems to be hinting, either satisfy the First Amendment or don't raise First Amendment concerns at all," Eidelman said.

Transparency requirements also appear to be on the table, according to Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. He said the decision implies that a standard for requiring businesses to disclose certain information, created under Zauderer v. Office of Disciplinary Counsel, is good law, which could open the door to future transparency legislation. "When it comes to transparency requirements, it's not that the Texas and Florida legislatures necessarily got it right," Barrett said. "Their individualized explanation requirements may have gone too far, even under Zauderer. But disclosure requirements are going to be judged, according to Justice Kagan, under this more deferential standard. So the government will have more leeway to require disclosure. That's really important, because that's a form of oversight that is far less intrusive than telling social media companies how they should moderate content."

The justices' conclusion that a higher bar was required to prove a facial challenge to the laws, that is, a claim that they are unconstitutional in any scenario, could be reason enough for some legislatures to push ahead. Greene said states could potentially choose to pass laws that would be difficult to challenge unless they are enforced, since bringing a narrower as-applied challenge before enforcement means platforms would have to show they're likely to be targets of the law. But having a law on the books might be enough to get some companies to act as desired, Greene said.

Still, the areas the justices left open to potential regulation might be tricky to get right. For example, the justices seem to maintain the possibility that regulation targeting algorithms that only take into account users' preferences could survive First Amendment challenges. But Miers says that "when you read the court opinion and they start detailing what is considered expression, it becomes increasingly difficult to think of a single internet service that doesn't fall into one of the expressive capabilities or categories the court discusses throughout." What initially seems like a loophole might actually be a null set.

Justice Barrett included what seemed to be a lightly veiled comment about TikTok's challenge to a law seeking to ban it unless it divests from its Chinese parent company. In her concurring opinion, Barrett wrote, without naming names, that "a social-media platform's foreign ownership and control over its content moderation decisions might affect whether laws overriding those decisions trigger First Amendment scrutiny." That's because "foreign persons and corporations located abroad" do not have First Amendment rights like US corporations do, she said.

Experts predicted the US government would cite Justice Barrett's opinion in its litigation against TikTok, though they cautioned that the statement of one justice does not necessarily reflect a broader sentiment on the Court. And Barrett's comment still calls for a closer analysis of specific circumstances like TikTok's to determine who really controls the company.

Barrett's concurrence notwithstanding, TikTok has also gained potentially useful ammunition in NetChoice.

"I'd be feeling pretty good if I were them today," Greene said of TikTok. "The overwhelming message from the NetChoice opinions is that content moderation is speech protected by the First Amendment, and that's the most important holding to TikTok and to all the social media companies."

Still, NetChoice does not resolve the TikTok case, said NYU's Barrett. TikTok's own legal challenge implicates national security, a matter in which courts tend to defer to the government.

"The idea that there are First Amendment rights for the platforms is helpful for TikTok," Hans said. "If I'm TikTok, I'm mostly satisfied, maybe a little concerned, but you rarely get slam dunks."

See the article here:
The aftermath of the Supreme Courts NetChoice ruling - The Verge

Oklahoma Supreme Court ruling ‘inconsistent’ with First Amendment – ADF Media

Tuesday, Jun 25, 2024

The following quote may be attributed to Alliance Defending Freedom Senior Counsel Phil Sechler regarding the Oklahoma Supreme Court's ruling Tuesday in Drummond v. Oklahoma Statewide Virtual Charter School Board directing the Statewide Virtual Charter School Board to rescind its contract with St. Isidore of Seville Catholic Virtual School on grounds that the contract violates state and federal law:

"Oklahoma parents and children are better off with more choices, not fewer. The U.S. Constitution protects St. Isidore's freedom to operate according to its faith and supports the board's decision to approve such learning options for Oklahoma families. The board knew that the First Amendment's Free Exercise Clause prohibits state officials from denying public funding to religious schools simply because they are religious. We are disappointed with the court's ruling that upholds discrimination against religion; we'll be considering all legal options, including appeal."

Justice Dana Kuehn wrote in her dissent, "I find nothing in the State or Federal Constitutions barring sectarian organizations, such as St. Isidore, from applying to operate charter schools. To the extent the Charter Schools Act bars such organizations from even applying to operate a charter school, I would find it inconsistent with the Free Exercise Clause of the First Amendment."

ADF attorneys representing the Oklahoma Statewide Virtual Charter School Board filed a brief last year with the state's high court opposing the petition filed by Oklahoma Attorney General Gentner Drummond to cancel the contract the board entered with St. Isidore.

The ADF Center for Academic Freedom is dedicated to protecting First Amendment and related freedoms for students and faculty so that everyone can freely participate in the marketplace of ideas without fear of government censorship.

# # #

Go here to read the rest:
Oklahoma Supreme Court ruling 'inconsistent' with First Amendment - ADF Media