Archive for the ‘First Amendment’ Category

Ruling boosts social media free speech protections, some say – Roll Call

The Supreme Court's decision on two cases challenging social media content moderation policies could expand protections for tech platforms under the First Amendment umbrella even if Congress were to dilute other protections, according to legal experts closely watching the issue.

Companies posting user content on the internet, like Meta Platforms Inc., enjoy a broad shield under Section 230 of a 1996 law that protects tech against liability for such content. Lawmakers who want such platforms to rein in harmful content have threatened to revoke that section and force stricter moderation of what gets uploaded.

But the court's decision last week, which remanded the Florida and Texas cases to lower courts, opens the door to broader, more fundamental cover from the First Amendment, even as the ruling expressly avoids declaring social media posts to be free speech.

"At the end of the day, a lot of the content that's protected by Section 230 would also be protected by the First Amendment, and that includes choices made by social media services to take down or not take down particular content," said Samir Jain, vice president of policy at the Center for Democracy & Technology.

"What the court is saying here is that those are protected by the First Amendment," Jain said in an interview, referring to how companies moderate content on their platforms. "And that would be true even if Section 230 didn't exist."

The Texas and Florida laws were part of the pushback to perceived censorship of conservative views by tech companies, including Meta, Google parent Alphabet Inc. and others. The laws required the platforms to offer a detailed explanation and appeals process when users or their content were blocked. Tech companies sued to end the laws.

The U.S. Court of Appeals for the 11th Circuit enjoined parts of the Florida law on First Amendment grounds, while the 5th Circuit upheld the Texas law but stayed its enforcement pending appeal. Both states appealed to the U.S. Supreme Court.

Justice Elena Kagan criticized the 5th Circuit decision that upheld the Texas law, writing for a six-justice majority that social media content moderation is free speech protected by the First Amendment to the Constitution.

"Deciding on the third-party speech that will be included in or excluded from a compilation and then organizing and presenting the included items is expressive activity of its own," Kagan wrote.

Although privacy advocates opposed the Texas and Florida laws, some were alarmed by the majority opinion's likening of the actions of social media companies to decisions made in newsrooms.

"Kagan's opinion is disappointing because it analogizes social media platforms to the editorial work of newspapers," said Fordham Law professor Zephyr Teachout, a senior adviser at the American Economic Liberties Project.

"As we argued in our amicus brief, and as noted in today's concurring opinions, social media platforms are more like town squares," Teachout said in a statement. "The First Amendment is not a shield for censorship and discrimination in the town square, and it shouldn't protect against discrimination and targeting by opaque algorithms."

Expanding First Amendment protections to include content moderation and curation by tech companies could potentially result in protections even in cases where no human judgment is involved, according to Tim Wu, a law professor at Columbia University who previously served as a senior White House official on tech policy.

"The next phase in this struggle will presumably concern the regulation of artificial intelligence," Wu wrote in a July 2 op-ed in The New York Times. "I fear that the First Amendment will be extended to protect machine speech at considerable human cost."

Algorithms that are currently used by tech companies to determine which posts and content are allowed and which are taken down are merely automated versions of human choices, Jain argued.

Jain offered the example of computer code screening and flagging users' posts for certain terms and phrases considered by the tech platforms to be hateful speech. "Even though it's an algorithm, in some ways it's implementing human decisions," he said.

In the context of protecting Americans' data privacy, some members of Congress have been mulling ways to curb the broad liability protections that tech companies enjoy because of Section 230.

In recent months, top House lawmakers including Energy and Commerce Chair Cathy McMorris Rodgers, R-Wash., and ranking member Frank Pallone Jr., D-N.J., have held hearings on sunsetting such protections by the end of 2025.

"As written, Section 230 was originally intended to protect internet service providers from being held liable for content posted by a third-party user or for removing truly horrific or illegal content," Rodgers said at a committee hearing in May. "But giant social media platforms have been exploiting this to profit off us and use the information we share to develop addictive algorithms that push content onto our feeds."

Some fear the emergence of powerful artificial intelligence systems that could potentially make decisions on their own without human direction will complicate the question of First Amendment protections for content moderation.

According to Jain, Justice Amy Coney Barrett, in her concurring opinion, raised the question of a future in which tech companies could develop an artificial intelligence tool whose job is to figure out what is hateful, what isn't, and whether a human really is making an expressive choice protected by the First Amendment.

"That's a question the [justices] don't answer," Jain said.


Can the First Amendment Protect Americans From Government Censorship? – The New York Sun

Last week, in Murthy v. Missouri, the Supreme Court hammered home the distressing conclusion that, under the court's doctrines, the First Amendment is, for all practical purposes, unenforceable against large-scale government censorship. The decision is a strong contender to be the worst speech decision in the court's history.

(I must confess a personal interest in all of this: My civil rights organization, the New Civil Liberties Alliance, represented individual plaintiffs in Murthy.)

All along, there were some risks. As I pointed out in an article called "Courting Censorship," Supreme Court doctrine has permitted, and thereby invited, the federal government to orchestrate massive censorship through the social media platforms. The Murthy case, unfortunately, confirms the perils of the court's doctrines.

One danger was that the court would try to weasel out of reaching a substantive decision. Months before Murthy was argued, there was reason to fear that the court would try to duck the speech issue by disposing of the case on standing.

Indeed, in its opinion, the court denied that the plaintiffs had standing by inventing what Justice Samuel Alito calls "a new and heightened standard" of traceability, a standard so onerous that, if the court adheres to it in other cases, almost no one will be able to sue. It is sufficiently unrealistic that the court won't stick to it in future cases.

The evidence was more than sufficient to establish at least one plaintiff's standing to sue, and consequently, as Justice Alito's dissent pointed out, "we are obligated to tackle the free speech issue."

Regrettably, the court, again in Justice Alito's words, "shirks that duty and thus permits this case to stand as an attractive model for future officials who want to control what the people say, hear, and think." The case gives a green light for the government to engage in further censorship.

A second problem was doctrinal. The Supreme Court has developed doctrine that encourages government to think it can censor Americans through private entities as long as it is not too coercive. Accordingly, with painful predictability, the oral argument in Murthy focused on whether or not there had been government coercion.

The implications were not lost on the government. Although it had slowed down its censorship machine during litigation, it revved it up after the court's hearing emphasized coercion. As Matt Taibbi put it, the FBI and the Department of Homeland Security reportedly resumed contact with internet platforms after oral arguments in this case in March led them to expect a favorable ruling.

The First Amendment, however, says nothing about coercion. On the contrary, it distinguishes between "abridging" the freedom of speech and "prohibiting" the free exercise of religion. As I have explained in great detail, the amendment thereby makes clear that the Constitution's standard for a speech violation is "abridging," that is, reducing, the freedom of speech, not coercion. A mere reduction of the freedom violates the First Amendment.

The court in Murthy, however, didn't recognize the significance of the word "abridging." This matters in part for the standing question. It's much more difficult to show that the plaintiffs' injuries are traceable to government coercion than to show that they are traceable to government abridging of the freedom of speech. More substantively, if the court had recognized the First Amendment's word "abridging," it would have clarified to the government that it can't use evasions to get away with censorship.

Other doctrinal disasters included the court's casual indifference to listeners' or readers' rights: the right of speakers to hear the speech of others. The court treated such rights as if they were independent of the rights of speakers and therefore concluded that they would broadly invite everyone to sue the government.

Listeners' rights, though, are most clearly based in the First Amendment when they are understood as the right of speakers to hear the speech of others, as this is essential for speakers to formulate and refine their own speech. The right of speakers to hear what others say is, therefore, the core of listeners' rights. From this modest understanding of listeners' rights, the plaintiffs' rights as listeners should have been understood as part of their rights as speakers, an analysis that would have avoided hyperbolical judicial fears of permitting everyone to sue.

The court's concern that a recognition of listeners' rights would open up the courts to too many claimants is especially disturbing when the government has censored millions upon millions of posts with the primary goal of suppressing what the American people can hear or read.

When the most massive censorship in American history prevents Americans from learning often true opinion on matters of crucial public interest, it should be no surprise that there are many claimants. The court's disgraceful reasoning suggests that when the government censors a vast number of Americans, we lose our right of redress.

The greatest danger comes from the court's tolerance of the sub-administrative power that the government uses to corral private parties into becoming instruments of control. Administrative regulation ideally runs through notice-and-comment rulemaking.

In contrast, sub-administrative regulation works through informal persuasion, including subtle threats, regulatory hassle, and illicit inducements. By such means, the government can get the private platforms to carry out government orchestrated censorship of their users.

The federal government once had no such sub-administrative power, and it therefore had little control over speech. It could punish speakers only through criminal prosecutions, that is, by going to court and showing that the defendant's speech violated the criminal law.

Now, however, federal officials can subtly get the platforms to suppress speech, often covertly, so an individual won't even know he is being suppressed. Thus, whereas the government traditionally could only punish the individual, it now can make his speech disappear.

Even worse, the court's tolerance of this sub-administrative privatization of censorship reverses the burden of proof. Government once had to prove to a judge and jury that a speaker's words were illegal. Now, instead, the speaker must prove that the government censored him.

What's more, there's no effective remedy. The court's qualified immunity doctrine makes it nearly impossible for censored individuals to get damages for past censorship. And the obstacles to getting an injunction mean that it's nearly impossible to stop future censorship.

For example, the government can claim, as it did in Murthy, that it's no longer censoring the affected individual. Then, poof! The possibility of an injunction disappears. Moreover, because of the court's indifference to listeners' rights, even to the right of speakers to hear the speech of others, an injunction can protect only a handful of individuals; it can't stop the government's massive censorship of vast numbers of Americans.

The court thus puts Americans affected by censorship in an unenviable position. It reverses the burden of proof and denies Americans any effective remedy.

So, for multiple reasons, Murthy is probably the worst speech decision in American history. In the face of the most sweeping censorship in American history, the decision fails to recognize either the realities of the censorship or the constitutional barriers to it.

In practical terms, the decision invites continuing federal censorship on social media platforms. It thereby nearly guarantees that yet another election cycle will be compromised by government censorship and condemns a hitherto free society to the specter of mental servitude.

This article was originally published by RealClearPolitics and made available via RealClearWire.


The aftermath of the Supreme Courts NetChoice ruling – The Verge

Last week's Supreme Court decision in the NetChoice cases was overshadowed by a ruling on presidential immunity in Trump v. US that came down only minutes later. But whether or not America even noticed NetChoice happen, the decision is poised to affect a host of tech legislation still brewing on Capitol Hill and in state legislatures, as well as lawsuits that are percolating through the system. This includes the pending First Amendment challenge to the TikTok ban bill, as well as a First Amendment case about a Texas age verification law that the Supreme Court took up only a day after its NetChoice decision.

The NetChoice decision states that tech platforms can exercise their First Amendment rights through their content moderation decisions and how they choose to display content on their services, a strong statement that has clear ramifications for any laws that attempt to regulate platforms' algorithms in the name of kids' online safety, and even on a pending lawsuit seeking to block a law that could ban TikTok from the US.

"When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices," Justice Elena Kagan wrote in the majority opinion, referring to Facebook's News Feed and YouTube's homepage. "And because that is true, they receive First Amendment protection."

NetChoice isn't a radical upheaval of existing First Amendment law, but until last week, there was no Supreme Court opinion that applied that existing framework to social media platforms. The justices didn't rule on the merits of the cases, concluding, instead, that the lower courts hadn't completed the necessary analysis for the kind of First Amendment challenge that had been brought. But the decision still provides significant guidance to the lower courts on how to apply First Amendment precedent to social media and content moderation. "The Fifth Circuit was wrong in concluding that Texas's restrictions on the platforms' selection, ordering, and labeling of third-party posts do not interfere with expression," Kagan wrote of the appeals court that upheld the Texas law seeking to prevent platforms from discriminating against content on the basis of viewpoint.

The decision is a revealing look at how the majority of justices view the First Amendment rights of social media companies, something that's at issue in everything from kids' online safety bills to the TikTok ban.

The court is already set to hear Free Speech Coalition v. Paxton next term, a case challenging Texas HB 1181, which requires internet users to verify their ages (sometimes with government-issued IDs) to access porn sites. Free Speech Coalition, an adult entertainment industry group that counts Pornhub among its members, sued to block the law but lost on appeal. The justices' decision in that case next year has the potential to impact many different state and federal efforts to age-gate the internet.

One recently signed law that may need to contend with the ruling is New York's Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which requires parental consent for social media companies to use "addictive feeds" on minors. The NetChoice ruling calls into question how far legislatures can go in regulating algorithms, that is, software programmed to surface or deprioritize different pieces of information to different users.

A footnote in the majority opinion says the Court "does not deal here with feeds whose algorithms respond solely to how users act online, giving them the content they appear to want, without any regard to independent content standards." The note is almost academic in nature: platforms usually take into account many different variables beyond user behavior, and separating those variables from each other is not a straightforward matter.

"Because it's so hard to disentangle all of the users' preferences, and the guidance from the services, and the editorial decisions of those services, what you're left with technologically speaking is algorithms that promote content curation. And it should be inevitably assumed then that those algorithms are protected by the First Amendment," said Jess Miers, who spoke to The Verge before departing her role as senior counsel at the center-left tech industry coalition Chamber of Progress, which receives funding from companies like Google and Meta.


"That's going to squarely hit the New York SAFE Act, which is trying to argue that, look, it's just algorithms, or it's just the design of the service," said Miers. The drafters of the SAFE Act may have presented the law as not having anything to do with content or speech, but NetChoice poses a problem, according to Miers. "The Supreme Court made it pretty clear: curation is absolutely protected."

Miers said the same analysis would apply to other state efforts, like California's Age Appropriate Design Code, which a district court agreed to block with a preliminary injunction; the state has appealed. That law required platforms likely to be used by kids to consider their best interests and default to strong privacy and safety settings. Industry group NetChoice, which also brought the cases at issue in the Supreme Court, argued in its 2022 complaint against California's law that it would interfere with platforms' own editorial judgments.

"To the extent that any of these state laws touch the expressive capabilities of these services, those state laws have an immense uphill battle, and a likely insurmountable First Amendment hurdle as well," Miers said.

Michael Huston, a former clerk to Chief Justice Roberts who co-chairs law firm Perkins Coie's Appeals, Issues & Strategy Practice, said that after this ruling, any sort of ban on content curation would be subject to a level of judicial scrutiny that is difficult to overcome. A law that, for instance, requires platforms to show content only in reverse-chronological order would likely be unconstitutional. (California's Protecting Our Kids from Social Media Addiction Act, which would prohibit the default feeds shown to kids from being based on any information about the user or their devices, or from recommending or prioritizing posts, is one such real-life example.) "The court is clear that there are a lot of questions that are unanswered, that it's not attempting to answer in this area," Huston said. "But broadly speaking ... there's a recognition here that when the platforms make choices about how to organize content, that is itself a part of their own expression."

The new Supreme Court decision also raises questions about the future of the Kids Online Safety Act (KOSA), a similar piece of legislation at the federal level that's gained significant steam. KOSA seeks to create a duty of care for tech platforms serving young users and allows those users to opt out of algorithmic recommendations. "Now with the NetChoice cases, you have this question as to whether KOSA touches any of the expressive aspects of these services," Miers said. In evaluating KOSA, a court would need to assess: does this regulate a non-expressive part of the service, or does it regulate the way in which the service communicates third-party content to its users?

Supporters of these kinds of bills may point to language in some of the concurring opinions (namely ones written by Justices Amy Coney Barrett and Samuel Alito) positing scenarios where certain AI-driven decisions do not reflect the preferences of the people who made the services. But Miers said she believes that kind of situation likely doesn't exist.

David Greene, civil liberties director at the Electronic Frontier Foundation, said that the NetChoice decision shows that platforms' curation decisions are First Amendment-protected speech, and it's very, very difficult, if not impossible, for a state to regulate that process.

Similarly important is what the opinion does not say. Gautam Hans, associate clinical professor and associate director of the First Amendment Clinic at Cornell Law School, predicts there will be at least some state appetite to keep passing laws pertaining to content curation or algorithms, by paying close attention to what the justices left out.

"What the Court has not done today is say states cannot regulate when it comes to content moderation," Hans said. "It has set out some principles as to what might be constitutional versus not. But those principles are not binding."

There are a couple different kinds of approaches the court seems open to, according to experts. Vera Eidelman, staff attorney at the American Civil Liberties Union (ACLU)'s Speech, Privacy, and Technology Project, noted that the justices pointed to competition regulation, also known as antitrust law, as a possible way to protect access to information. "These other regulatory approaches could, the Supreme Court seems to be hinting, either satisfy the First Amendment or don't raise First Amendment concerns at all," Eidelman said.

Transparency requirements also appear to be on the table, according to Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. He said the decision implies that a standard for requiring businesses to disclose certain information, created under Zauderer v. Office of Disciplinary Counsel, is good law, which could open the door to future transparency legislation. "When it comes to transparency requirements, it's not that the Texas and Florida legislatures necessarily got it right," Barrett said. "Their individualized explanation requirements may have gone too far, even under Zauderer. But disclosure requirements are going to be judged, according to Justice Kagan, under this more deferential standard. So the government will have more leeway to require disclosure. That's really important, because that's a form of oversight that is far less intrusive than telling social media companies how they should moderate content."

The justices' opinion that a higher bar was required to prove a facial challenge to the laws, meaning that they were unconstitutional in any scenario, could be reason enough for some legislatures to push ahead. Greene said states could potentially choose to pass laws that would be difficult to challenge unless they are enforced, since bringing a narrower as-applied challenge before enforcement means platforms would have to show they're likely to be targets of the law. But having a law on the books might be enough to get some companies to act as desired, Greene said.

Still, the areas the justices left open to potential regulation might be tricky to get right. For example, the justices seem to maintain the possibility that regulation targeting algorithms that take into account only users' preferences could survive First Amendment challenges. But Miers says that when you read the court opinion and the justices start detailing what is considered expression, it becomes increasingly difficult to think of a single internet service that doesn't fall into one of the expressive capabilities or categories the court discusses throughout. What initially seems like a loophole might actually be a null set.

Justice Barrett included what seemed to be a lightly veiled comment about TikTok's challenge to a law seeking to ban it unless it divests from its Chinese parent company. In her concurring opinion, Barrett wrote, without naming names, that a social-media platform's foreign ownership and control over its content moderation decisions might affect whether laws overriding those decisions trigger First Amendment scrutiny. That's because foreign persons and corporations located abroad do not have First Amendment rights like US corporations do, she said.

Experts predicted the US government would cite Justice Barrett's opinion in its litigation against TikTok, though they cautioned that the statement of one justice does not necessarily reflect a broader sentiment on the Court. And Barrett's comment still calls for a closer analysis of specific circumstances, like TikTok's, to determine who really controls the company.

Barrett's concurrence notwithstanding, TikTok has also picked up potentially useful ammunition in NetChoice.

"I'd be feeling pretty good if I were them today," Greene said of TikTok. "The overwhelming message from the NetChoice opinions is that content moderation is speech protected by the First Amendment, and that's the most important holding to TikTok and to all the social media companies."

Still, NetChoice does not resolve the TikTok case, said NYU's Barrett. TikTok's own legal challenge implicates national security, a matter in which courts tend to defer to the government.

"The idea that there are First Amendment rights for the platforms is helpful for TikTok," Hans said. "If I'm TikTok, I'm mostly satisfied, maybe a little concerned, but you rarely get slam dunks."


Oklahoma Supreme Court ruling ‘inconsistent’ with First Amendment – ADF Media

Tuesday, Jun 25, 2024

The following quote may be attributed to Alliance Defending Freedom Senior Counsel Phil Sechler regarding the Oklahoma Supreme Court's ruling Tuesday in Drummond v. Oklahoma Statewide Virtual Charter School Board, which directs the Statewide Virtual Charter School Board to rescind its contract with St. Isidore of Seville Catholic Virtual School on grounds that the contract violates state and federal law:

"Oklahoma parents and children are better off with more choices, not fewer. The U.S. Constitution protects St. Isidore's freedom to operate according to its faith and supports the board's decision to approve such learning options for Oklahoma families. The board knew that the First Amendment's Free Exercise Clause prohibits state officials from denying public funding to religious schools simply because they are religious. We are disappointed with the court's ruling that upholds discrimination against religion; we'll be considering all legal options, including appeal."

Justice Dana Kuehn wrote in her dissent: "I find nothing in the State or Federal Constitutions barring sectarian organizations, such as St. Isidore, from applying to operate charter schools. To the extent the Charter Schools Act bars such organizations from even applying to operate a charter school, I would find it inconsistent with the Free Exercise Clause of the First Amendment."

ADF attorneys representing the Oklahoma Statewide Virtual Charter School Board filed a brief last year with the state's high court opposing the petition filed by Oklahoma Attorney General Gentner Drummond to cancel the contract the board entered with St. Isidore.

The ADF Center for Academic Freedom is dedicated to protecting First Amendment and related freedoms for students and faculty so that everyone can freely participate in the marketplace of ideas without fear of government censorship.

# # #


Gag orders and First Amendment rights – Foundation for Individual Rights and Expression

Perhaps the most talked-about gag orders in 2024 were those against former president (and current presidential candidate) Donald J. Trump. New York State Supreme Court Judge Juan M. Merchan, who presided over the New York v. Trump hush money criminal trial, issued an order limiting Trump from making statements, or directing others to make statements, about potential witnesses, the district attorney, employees of the district attorney's office, family members of the district attorney, jurors, or prospective jurors. This came after the former president made numerous statements the judge considered inflammatory and was given several warnings to stop commenting on the case.

Trump's legal team argued this broad gag order violates his right to engage in political speech on matters of public concern. Judge Merchan countered that the restrictions were necessary to preserve the administration of justice, and supporters of the order contend it was narrowly tailored and justified under the circumstances.

Judges commonly use gag orders to limit the speech of other trial participants, not just the former president and presumptive party nominee. Judges sometimes issue gag orders that prevent trial participants from making statements outside the court about the underlying legal proceedings or other matters before the court, in order to minimize harm from pervasive pre-trial publicity or to ensure litigants receive fair judicial proceedings. However, sometimes judges issue gag orders even against the media or other parties not before the court. In any of these instances, gag orders raise important First Amendment questions.

The most suspect gag orders are those levied against the press. The U.S. Supreme Court explained in Nebraska Press Association v. Stuart (1976) that gag orders against the press are prior restraints on speech, what Chief Justice Warren Burger called "the most serious and least tolerable infringements on First Amendment rights."

The case involved the murder trial of a man who allegedly killed six members of a family in the small town of Sutherland, Nebraska. Trial judge Hugh Stuart issued a gag order limiting the press from reporting on several aspects of the case, including:

Whether the defendant had confessed to the police.

Statements that the defendant had made to others.

The contents of a note that the defendant had written the night of the crime.

Certain aspects of medical testimony at the preliminary hearing.

The identity of the victims of an alleged sexual assault committed before the killings. (It also prohibited reporting on the exact nature of the order.)

The press challenged the gag order as an impermissible prior restraint on speech in violation of the First Amendment. Ultimately, the Supreme Court agreed the gag order was too broad. It held that before issuing a gag order, a judge should consider less speech-restrictive alternatives, such as changing the venue or location of the trial, postponing the trial, questioning potential jurors during voir dire (the jury selection process), or making emphatic and clear jury instructions.

As the Court explained, these alternatives could produce judicial proceedings sensitive to a criminal defendant's fair-trial rights without restricting speech the way Judge Stuart's order did.

"We cannot say on this record that alternatives to a prior restraint on petitioners would not have sufficiently mitigated the adverse effects of pretrial publicity so as to make prior restraint unnecessary," the Court wrote. "Reasonable minds can have few doubts about the gravity of the evil pretrial publicity can work, but the probability that it would do so here was not demonstrated with the degree of certainty our cases on prior restraint require."

Nebraska Press Association thus erects a high barrier to gag orders against reporters, particularly in criminal cases. Subsequent courts generally have required the government to show that any requested gag order is narrowly tailored and necessary to avoid a clear and present danger to the fair administration of justice. While not always using the term "gag order," the rule from Nebraska Press Association in effect means that such an order against the media is constitutional only if it meets strict scrutiny, the highest form of judicial review.

As constitutional law scholar Erwin Chemerinsky has observed, the decision "has virtually precluded gag orders on the press as a way of preventing prejudicial pretrial publicity."

While strict scrutiny is the high standard used to evaluate gag orders against the press, American jurisprudence is far less consistent on how to evaluate gag orders against attorneys and other trial participants. Some courts still apply exacting scrutiny to these orders; many others use a much less demanding standard.

That inconsistency is perhaps understandable: unlike with gag orders against the media, the Supreme Court has never decided a First Amendment case directly involving a gag order on an attorney or trial participant. Attorneys are considered officers of the court and are therefore subject to greater judicial control, and trial participants likewise are more under the control of the court than the reporting press.

The Court did rule in Gentile v. State Bar of Nevada (1991) on whether a criminal defense attorney could be subject to professional discipline for statements made at a press conference months before trial. Attorney Dominic Gentile, seeking to counter negative pretrial press coverage of his client, contended his client was innocent and that the real culprit in the case was likely a police officer. The Nevada Bar sought to discipline Gentile for violating a rule of professional conduct that prohibited lawyers from making public statements about active litigation that have a "substantial likelihood of materially prejudicing" the underlying court proceedings.

This "substantial likelihood" standard is often known as the Gentile standard. In Gentile, a sharply divided Court upheld the constitutionality of the Nevada Bar's professional conduct rule even as it ultimately ruled in Gentile's favor, finding he reasonably could have believed his comments were justified under the rule's safe harbor exception, which allowed lawyers to make statements to counter negative pretrial publicity against their clients. The Court held the safe harbor provision was too vague, and that the bar therefore could not discipline Gentile.

As mentioned, some courts apply a very high standard to all gag orders. For example, the U.S. Court of Appeals for the Sixth Circuit invalidated a broad gag order issued by a federal district court in the 1987 criminal trial of sitting Rep. Harold Ford of Memphis, Tennessee. Ford faced mail and bank fraud charges, and the judge issued a sweeping order prohibiting Ford from discussing the merits of the case. It even prohibited him from making "any statements about the trial, including an opinion of or discussion of the evidence and facts in the investigation or case."

The Sixth Circuit wrote in United States v. Ford (1987) that such "broadly based restrictions on speech in connection with litigation are seldom, if ever, justified." It also explained: "It is true that permitting an indicted defendant like Ford to defend himself publicly may result in overall publicity that is somewhat more favorable to the defendant than would occur when all participants are silenced. This does not result in an unfair trial for the government, however."

Ultimately, the Sixth Circuit held that such gag orders are justifiable only if the government can show that public comments about the trial pose a "clear and present danger to the fair administration of justice."

Other courts use a much less demanding standard, often one similar to that discussed in Gentile: whether there is a "substantial likelihood" that public statements about the trial would prejudice court proceedings. Some courts have used an even lower standard: whether there is a "reasonable likelihood" that public statements will prejudice an underlying proceeding.

Gag orders involving high-profile defendants like the former president receive significant media attention. In the age of social media, everyone, including court participants, can reach a wide audience, which makes judges more sensitive to interference with court proceedings and more prone to issue gag orders. But as noted at the outset, such orders are a form of prior restraint and raise important constitutional considerations in every case.

The case law draws a distinction between gag orders against the media on the one hand and gag orders against trial participants, including attorneys, on the other hand. However, all gag orders are not only prior restraints but content-based restrictions on speech. As such, they should be subject to rigorous review and must be narrowly drawn.

By David L. Hudson, Jr. (Last updated: June 20, 2024)

Gag orders and First Amendment rights - Foundation for Individual Rights and Expression