California Sued Over New “Deepfake” Law

by Jonathan Turley | Sep 22, 2024

California has triggered the first lawsuit over its controversial new laws requiring social media companies to censor fake images created by artificial intelligence, known as "deepfakes," and barring the posting of deceptive election-related content. A video creator is suing the State of California after his parody of Vice President Kamala Harris was banned. The laws raise serious and novel constitutional questions under the First Amendment.

Gov. Gavin Newsom signed A.B. 2839, expanding the time period during which the knowing posting of deceptive AI-generated or manipulated content about an election is barred. He also signed A.B. 2655, requiring social media companies to remove or label deceptive or digitally altered AI-generated content within 72 hours of a complaint. A third bill, A.B. 2355, requires election advertisements to disclose whether they use AI-generated or manipulated content.

The American Civil Liberties Union of California, the Foundation for Individual Rights and Expression (FIRE), the California News Publishers Association, and the California Broadcasters Association opposed the legislation on First Amendment grounds.

Elon Musk recently reposted the video by Christopher Kohls, whom he defended as fighting for the "absolute Constitutional right to lampoon politicians he believes should not be elected."

Kohls objected that the new law requires labeling in a font size so large that it would fill the entire screen of his video.

In the complaint below, Kohls noted that "[w]hile the obviously far-fetched and over-the-top content of the video make its satirical nature clear, Plaintiff entitled the video 'Kamala Harris Campaign Ad PARODY.'"

A.B. 2839 covers "deepfakes," applying where there is "[a] candidate for any federal, state, or local elected office in California portrayed as doing or saying something that the candidate did not do or say if the content is reasonably likely to harm the reputation or electoral prospects of a candidate."

The exceptions for satire, parody, and news reporting apply only when the content is accompanied by a disclaimer. The law is vague and could sweep in a wide array of political speech.

It is not clear what counts as satire or parody under the exception. Likewise, "materially deceptive content" is defined as "audio or visual media that is digitally created or modified, and that includes, but is not limited to, deepfakes and the output of chatbots, such that it would falsely appear to a reasonable person to be an authentic record of the content depicted in the media."

The Kohls complaint argues that the law flips the burden to creators to establish a defense.

One of the more interesting legal issues is how the law defines "malice." The legislators lifted the definition from the defamation standard of New York Times v. Sullivan. The term does not require any particular ill intent; instead, the statute defines it as "knowing the materially deceptive content was false or with a reckless disregard for the truth."

That is the long-standing standard for defamation claims brought by public officials and public figures, who are subject to the higher burden of proof. However, it is not clear that it will suffice for a law with potential criminal liability and sweeping limits on political speech.

Opinion and satire are generally exempted from defamation actions. Satire can sometimes be litigated as a matter of "false light," but the standard can become blurred. In making fun of a figure like Harris, the intent is, by design, to create a false impression of the speaker. Drawing lines between honest and malicious satire is often difficult.

Under a false light claim, a person can sue when a publication or image implies something that is both highly offensive and untrue. Where defamation deals with false statements, false light deals with false implications.

For example, in Gill v. Curtis Publ’g Co., 239 P.2d 630 (Cal. 1952), the court considered a “Ladies Home Journal” article that was highly critical of couples who claimed to be cases of “love at first sight.” The article suggested that such impulses were more sexual than serious. The magazine included a photo of a couple, with the caption, “[p]ublicized as glamorous, desirable, ‘love at first sight’ is a bad risk.” The couple was unaware that the photo was used and never consented to its inclusion in the magazine. They prevailed in an action for false light given the suggestion that they were one of these sexualized, “wrong” attractions.

In 1967, the Supreme Court handed down Time, Inc. v. Hill, which held that a family suing Life Magazine for false light must shoulder the burden of the actual malice standard under New York Times v. Sullivan. Writing for the majority, Justice William Brennan held that states cannot find for plaintiffs "to redress false reports of matters of public interest in the absence of proof that the defendant published the report with knowledge of its falsity or reckless disregard of the truth."

This line is equally difficult under the tort standard for the commercial appropriation of name or likeness.

Parody and satire can constitute appropriation of names or likenesses (called the right of publicity). The courts, including the Ninth Circuit, have made a distinctly unfunny mess of such cases. Past tort cases have generally favored celebrities and resulted in rulings like White v. Samsung, a perfectly ludicrous decision in which Vanna White successfully sued over the use of a robot with a blonde wig turning cards as the appropriation of her name or likeness. It appears no blonde being, robotic or human, may turn cards on a fake game show.

There is also the interesting question of when disclaimers (which are often upheld) ruin the creative message. The complaint argues:

"Disclaimers tend to spoil the joke and infantilize the audience. This is why Kohls chooses to announce his parody videos from the title, allowing the entire real estate of the video itself to resemble the sorts of political ads he lampoons. The humor comes from the juxtaposition of over-the-top statements by the AI generated 'narrator,' contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad."

The complaint below includes eight counts, ranging from facial and as-applied challenges under the First Amendment to due process claims under the Fourteenth Amendment.

Here is the complaint: Kohls v. Bonta


© 2024 FM Media Enterprises, Ltd.