A Real Account of Deep Fakes
Laws regulating pornographic deepfakes are written to prohibit “digital forgeries,” “false” images, or media “indistinguishable” from “authentic” recordings. Yet the typical anti-deepfake law covers materials that aren’t forgeries, aren’t false, and that reasonable observers can easily distinguish from authentic recordings. Though drafted as if they regulate statements of fact, anti-deepfake laws actually target certain outrageous depictions per se—and rightly so, because pornographic deepfakes cause harm irrespective of their truth or falsity. However, the inapposite language of facts results in statutes with crucial ambiguities. Moreover, because anti-deepfake laws ban outrageous depictions irrespective of the factual assertions they make, they differ fundamentally from the information-privacy and defamation regimes that they superficially resemble. Instead of regulating true or false disclosures of fact, anti-deepfake laws fall into a distinct and disfavored category: laws that forbid expression because it is outrageous.
This Article begins by excavating the internal tension of anti-deepfake statutes and explaining the laws’ theoretical underpinnings. It shows that the laws mean to redress highly offensive appropriations of likeness, but employ the incommensurable vocabulary of a different dignitary harm: the circulation of facts about persons. The Article then uses semiotic theory to explain how deepfakes differ from the media they mimic and why those differences matter legally. Photographs and video recordings record events. Deepfakes merely depict them. Justifications for regulating records do not necessarily justify regulating depictions. Many laws—covering everything from trademark dilution to flag burning to “morphed” child sexual abuse material (CSAM)—have banned outrageous depictions as such. Several remain in effect today. Yet when such bans are challenged, courts mischaracterize imagery to sidestep constitutional scrutiny: Courts pretend fictional depictions are factual records. Anti-deepfake laws resist this dodge. Courts considering these laws will be forced to confront head-on the extent to which a statute may ban outrageous expression as such.
Introduction
“[R]epresentation is reality . . . .”1 Catharine A. MacKinnon, Only Words 29 (1993) (discussing, with approval, an argument attributed to Susanne Kappeler).
Breakthrough technology has made it cheap and easy to synthesize photorealistic images and videos of recognizable individuals. People are using it to generate massive amounts of porn.2 Henry Ajder, Giorgio Patrini, Francesco Cavalli & Laurence Cullen, The State of Deepfakes: Landscape, Threats, and Impact 1 (2019), https://regmedia.co.uk/2019/10/08/deepfake_report.pdf [perma.cc/84G4-LMSA]. See generally Laura Wagner & Eva Cetinic, Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm, arXiv (May 20, 2025), https://doi.org/10.48550/arXiv.2505.04600. In keeping with colloquial usage, I use “porn,” “pornography,” and “pornographic” to denote depictions of nudity or sexual conduct, but it bears noting that jurists have observed that this vocabulary arguably mischaracterizes nonconsensual, sexualized depictions. Mary Anne Franks, “Revenge Porn” Reform: A View from the Front Lines, 69 Fla. L. Rev. 1251, 1257–58 (2017), cited in People v. Austin, 155 N.E.3d 439, 451 (Ill. 2019).
An AI user needs only a few photographs of his target to generate a pornographic “deepfake” of anyone from an ex-girlfriend to a celebrity.3Hany Farid, Mitigating the Harms of Manipulated Media: Confronting Deepfakes and Digital Deception, PNAS Nexus, July 2025, at 1, 4. To match what appears to be the typical scenario, I use “he/him” pronouns to refer in general to a creator or distributor of nonconsensual, deepfake pornography, and “she/her” pronouns to describe the person depicted. Ajder et al., supra note 2, at 2; Jess Weatherbed, Trolls Have Flooded X with Graphic Taylor Swift AI Fakes, The Verge (Jan. 25, 2024), https://theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending [perma.cc/N9KT-QAXX].
In early 2024, sexually explicit deepfakes of Taylor Swift gathered tens of millions of views on the social media site X.4Weatherbed, supra note 3.
Recent reporting has documented the wide availability and popularity of AI models calibrated to create pornographic images of specific people.5Emanuel Maiberg, Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People, 404 Media (July 15, 2025), https://www.404media.co/hugging-face-is-hosting-5-000-nonconsensual-ai-models-of-real-people/ [perma.cc/RP5Z-VU5L]; Emanuel Maiberg, A16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise, 404 Media (July 15, 2025), https://www.404media.co/a16z-backed-ai-site-civitai-is-mostly-porn-despite-claiming-otherwise/ [perma.cc/6Y98-E6J2]. See generally Wagner & Cetinic, supra note 2.
Women in politics and journalism are being threatened with deepfake pornography that uses their likenesses.6Mark Scott, Deepfake Porn Is Political Violence, Politico (Feb. 8, 2024), https://politico.eu/newsletter/digital-bridge/deepfake-porn-is-political-violence [perma.cc/963K-BELR]; Danielle Keats Citron, Sexual Privacy, 128 Yale L.J. 1870, 1922–23 (2019).
Schoolchildren across the nation are using AI to synthesize naked images of their classmates; two boys in their early teens have been charged with felonies for allegedly doing so.7Caroline Haskins, Florida Middle Schoolers Arrested for Allegedly Creating Deepfake Nudes of Classmates, Wired (Mar. 8, 2024), https://www.wired.com/story/florida-teens-arrested-deepfake-nudes-classmates/ [perma.cc/WJ3F-JC5X]; see also Jason Koebler & Emanuel Maiberg, A High School Deepfake Nightmare, 404 Media (Feb. 15, 2024), https://www.404media.co/email/547fa08a-a486-4590-8bf5-1a038bc1c5a1/ [perma.cc/3QQU-ED9H].
Alarm over pornographic deepfakes has united every state attorney general,8Meg Kinnard, Prosecutors in All 50 States Urge Congress to Strengthen Tools to Fight AI Child Sexual Abuse Images, AP News (Sept. 5, 2023), https://apnews.com/article/ai-child-pornography-attorneys-general-bc7f9384d469b061d603d6ba9748f38a [perma.cc/6XLY-DBBG].
the Biden White House,9Justin Sink, White House Urges Action After ‘Alarming’ Taylor Swift Deepfakes, Bloomberg (Jan. 26, 2024), https://bloomberg.com/news/articles/2024-01-26/white-house-urges-action-after-alarming-taylor-swift-deepfakes [perma.cc/RTU3-F8UP].
and the Trump White House.10Rebecca Shabad, Trump Signs Bill Cracking Down on Explicit Deepfakes, NBC News (May 19, 2025), https://nbcnews.com/politics/white-house/trump-sign-bill-cracking-deepfake-pornography-rcna207693 [perma.cc/GLD4-QFG7].
In May 2025, President Trump signed into law the TAKE IT DOWN Act, a criminal statute that specifically targets pornographic deepfakes of adults.11Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act), Pub. L. No. 119-12, 139 Stat. 55 (2025) (to be codified at 47 U.S.C. § 223(h)).
Forty-one states have passed similar civil or criminal legislation.12These numbers are accurate as of July 24, 2025, according to Public Citizen’s tracker. See Tracker: State Legislation on Intimate Deepfakes, Public Citizen (July 22, 2025), https://web.archive.org/web/20250725153026/https://citizen.org/article/tracker-intimate-deepfakes-state-legislation [perma.cc/YV8Q-9NF7]. For a systematic analysis of proposed and enacted deepfake-related legislation, see Thomas E. Kadri & Sonja R. West, Deepfake Torts: Emerging Tort Frameworks in U.S. Deepfake Regulation, 18 J. Tort L. 515 (2025).
Anti-deepfake laws are sweeping the nation. This Article focuses on two glaring details that these statutes seldom, if ever, duly acknowledge: Deepfakes are not photographs or video recordings; and often, they don’t even pretend that they are. Anti-deepfake statutes differ from information-privacy and defamation laws in a crucial respect: Information-privacy and defamation laws regulate facts, or assertions of fact, about persons. Anti-deepfake laws ban outrageous depictions of persons, irrespective of any factual assertions they make. This difference matters for two reasons. First, legislators who approach deepfakes as a defamation, fraud, or forgery problem or as an information-privacy problem—that is, as a problem of false representations of fact or true disclosures of private facts—end up enacting statutes that may fail to redress the very harms they target.
Second, because anti-deepfake statutes ban certain outrageous imagery per se, irrespective of any factual assertions the imagery makes, the rationales that justify defamation and information-privacy regimes do not justify these laws. Courts applying the First Amendment regard bans on offensive expressions of opinion with a distinct skepticism not directed at regulations of factual statements.13See, e.g., Matal v. Tam, 582 U.S. 218, 223 (2017) (stating “a bedrock First Amendment principle: Speech may not be banned on the ground that it expresses ideas that offend.”); Snyder v. Phelps, 562 U.S. 443, 458 (2011) (“[I]n public debate we must tolerate insulting, and even outrageous, speech in order to provide adequate breathing space to the freedoms protected by the First Amendment.” (cleaned up) (quoting Boos v. Barry, 485 U.S. 312, 322 (1988))).
This isn’t to say that courts never uphold the constitutionality of per se bans on outrageous expression. They do. Along with obscenity law, the most notable example is the federal ban on morphed child sexual abuse material (CSAM), a lower-tech predecessor to deepfakes in which nonsexual images of identifiable children are edited to depict sexual conduct.14See infra Section III.A.2.c.
But in upholding such bans, courts do not always admit that they forbid outrageous expression per se. Rather, courts sometimes pretend that they forbid expression because it signifies particular historical facts. In other words, when evaluating outrageous imagery, courts pretend to be regulating records of historical fact when they are actually regulating mere depictions of fictional events. In the language of semiotics, courts pretend to be analyzing indexical images when they are actually analyzing iconic images.15See infra Section II.A.
Properly drafted and properly understood, anti-deepfake statutes are content-based restrictions on noncommercial speech that is not necessarily obscene; is not necessarily defamatory, nor fraudulent, nor even false; and discloses no private facts. They are the most recent manifestation of a longstanding impulse to ban outrageous visual representations.16See infra Section III.B.2.
This impulse motivates not only historical regulations of flag and effigy burning, but also bans on morphed CSAM and trademark dilution by tarnishment that remain in force today. Anti-deepfake laws are also the most recent manifestation of the law’s impulse to mischaracterize per se bans on outrageous expression as something they are not. Courts relied on the conflation of records and depictions to uphold bans on morphed CSAM, and scholars and legislators are reenacting this maneuver today for anti-deepfake laws.
But anti-deepfake laws offer none of the analytical offramps that other bans on outrageous expression offer. Courts considering morphed CSAM could expediently classify it as “child pornography,” which is categorically unprotected by the First Amendment, even though the Supreme Court’s rationale for that categorical exclusion justifies regulating only records of abuse and not mere depictions of abuse.17See infra Section III.A.2.
This approach is unlikely to work as well for deepfakes, because there is no categorical First Amendment exclusion for pornographic depictions of adults. Nor can deepfakes be forbidden on the ground that they disclose true, private facts, as revenge pornography does, or on the ground that they necessarily make false statements of fact.
Although they use the language of facts, anti-deepfake laws actually regulate fictional expression. These laws will force courts to consider the degree to which American law can ban outrageous iconography as such—something it has reliably done and continues to do today—even when jurists have to admit that this is what the law is doing. The internal tensions of anti-deepfake laws invite us to reconsider how hostile our constitutional order ought to be to the regulation of outrageous expression per se—and, indeed, how hostile it ever really has been.
Part I examines anti-deepfake laws to distill their essential characteristics. Although commentators frequently analogize anti-deepfake laws to defamation and revenge-porn bans, Part I identifies a critical distinction: Anti-deepfake laws regulate not statements of fact, but outrageous expression per se. In privacy-theory terms, the laws redress “appropriation,” a privacy harm that does not require the disclosure of true facts or the assertion of falsehoods. However, the statutes attempt to do so using infelicitous concepts of truth and falsity, which leads to unclear coverage.
Part II uses semiotic theory to explain how deepfakes differ from photographs and video recordings. Photographs are indexical: They record a visual phenomenon as it appeared through a particular lens at a particular moment in time. Deepfakes are iconic: They represent by resemblance. We interpret indexical media as assertions of fact; as a result, accurate photographs can reveal private matters and deceptive photographs can defame. But deepfakes are merely icons. They do not necessarily assert or record facts in the way that photographs do. As a result, the legal rationales historically invoked to regulate indexical imagery cannot support the full breadth of today’s anti-deepfake laws.
Part III tours trademark dilution and CSAM law, past prohibitions on flag desecration and effigy burning, and the criminalization (vel non) of sexual fantasy to show that anti-deepfake laws address a well-understood harm and have close cousins in past and present American legal doctrines. Part III also, however, shows that courts mischaracterize the semiotic status of this harm in order to sidestep First Amendment scrutiny: To uphold the regulation of icons as constitutional, courts invoke rationales that instead justify the regulation of indices.
Finally, Part IV explains that properly understanding the semiotics of deepfakes is essential to appropriate and effective regulation. Congress and the states have rushed to enact laws that address a deluge of photorealistic, AI-generated pornography. Our impulse to outlaw outrageous depictions per se will collide with our tendency to deny that we are doing so. We may try to equate deepfakes with photographs, but commandeering the law of photographs and video recordings to regulate AI-generated imagery will produce bizarre outcomes—like classifying any sexually explicit image generated by popular AI models as “child pornography” under federal law, even if the generated imagery depicts only adults.18See infra Section IV.A.
Properly regulating deepfakes requires acknowledging that they are icons, not indices, and employing the legal theories that regulate them as such: obscenity law and an extended version of the tort of appropriation.
I. The Internal Tension of Anti-Deepfake Laws
Legal discussion of deepfakes frequently links their harms to two familiar bodies of law. The first is the regulation of false and deceptive statements, like defamation and fraud.19The TAKE IT DOWN Act refers to deepfakes as “digital forger[ies],” as does the Cyber Civil Rights Initiative. Pub. L. No. 119-12, 139 Stat. 55, 55 (2025) (to be codified at 47 U.S.C. § 223(h)(1)(B)); Laws, Cyber Civil Rights Initiative, https://cybercivilrights.org/existing-laws [perma.cc/H56Q-AV9C]. See generally Abigail George, Note, Defamation in the Time of Deepfakes, 45 Colum. J. Gender & L. 122 (2024); Michael P. Goodyear, Dignity and Deepfakes, 57 Ariz. St. L.J. 931 (2025).
The second is the law of information privacy, particularly prohibitions on disseminating so-called “revenge porn”—nonconsensual, intimate photographs and video recordings.20Franks and Waldman, for example, make both comparisons: they assert both that deepfakes are “closely related to what is often colloquially referred to as ‘revenge porn’ ” and that deepfakes are “a form of deliberately deceptive speech.” Mary Anne Franks & Ari Ezra Waldman, Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions, 78 Md. L. Rev. 892, 893–94 (2019); see also Rebecca A. Delfino, Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act, 88 Fordham L. Rev. 887, 897–98 (2019).
But studying anti-deepfake statutes reveals that they differ essentially from these supposedly close relatives. Actionable defamation and revenge porn necessarily communicate actual or purported facts about a victim. Legal liability for deepfakes, by contrast, seldom requires that the deepfake assert any facts whatsoever about a victim.
This Part examines the civil and criminal statutes that, as of July 24, 2025, forty-one states and the federal government have enacted specifically to regulate nonconsensual, pornographic deepfakes depicting adults.21See supra note 12.
It finds that, although anti-deepfake laws commonly limit their coverage to “false” media, “forger[ies],” or apparently “authentic” media, the laws, in practice, cover media irrespective of truth or falsity. In other words, the statutes aren’t drafted properly. Instead of conditioning liability on a statement’s truth or falsity, as their language sometimes implies, anti-deepfake laws actually target a statement’s offensiveness.22I use the words “offensiveness” and “outrageousness” to encompass the harm that a statement may cause irrespective of any precise, constative content we might assign to it. Cf. Judith Butler, Excitable Speech: A Politics of the Performative 57, 76–77 (1997).
What makes anti-deepfake laws infelicitous is that they mean to regulate media because of how it looks: They target highly offensive aesthetics. However, instead of defining the laws’ scope expressly in terms of offensiveness, legislatures define covered subject matter in terms of the facts it appears to communicate. They seek to remedy a harm rooted in offense using the language of harms rooted in true or false assertions. This is an unbridgeable gap.
This Part begins by identifying the aspects of anti-deepfake laws that demonstrate that their focus is not factual propositions, but outrageous aesthetics. It then highlights a number of statutory-interpretation problems in enacted laws that result from legislatures using the language of factual propositions to regulate the incommensurable harm of offense. Finally, it identifies anti-deepfake laws’ proper theoretical grounding—the privacy harm known as appropriation of likeness—and distinguishes it from the defamation and information-privacy regulations that might seem to be anti-deepfake statutes’ closest cousins.
A. Anti-Deepfake Laws Regulate Fiction Using the Vocabulary of Facts
The core conduct that anti-deepfake statutes proscribe is the dissemination23Some statutes may also cover mere creation or possession of a deepfake, but the laws’ focus is on dissemination. See Kadri & West, supra note 12, at 12.
of a “realistic,”24See infra Section I.A.2.
sexually explicit25See, e.g., Idaho Code § 18-6606 (2024); Minn. Stat. § 604.32.2(2) (2024); N.Y. Civ. Rights Law § 52-c(2)(a) (McKinney 2024); Tex. Penal Code Ann. § 21.165(b) (West 2025).
depiction of an “identifiable”26See, e.g., Ala. Code § 13A-6-240(b) (2024); Haw. Rev. Stat. § 711-1110.9(1)(c) (2021); 740 Ill. Comp. Stat. 190/10(a) (2024); Minn. Stat. § 604.32.2(2)–(3) (2024); N.Y. Penal Law § 245.15(1)(a) (McKinney 2023).
person without that person’s consent.27See, e.g., Minn. Stat. § 604.32.2(1) (2024); N.Y. Penal Law § 245.15.1(b) (McKinney 2023); Ala. Code § 13A-6-240(a)(2) (2024). Although other laws have been enacted to regulate deepfakes—such as by establishing a generalized “property” right in digital likenesses, see, e.g., Tenn. Code Ann. § 47-25-1101, -1103(a) (2024), or banning certain types of AI-generated electioneering media, see, e.g., Minn. Stat. § 609.771 (2024)—I use the term “anti-deepfake laws” to refer specifically to laws that single out sexual deepfakes for special treatment. Similarly, when I use the word “deepfake” without further context, I am referring to a pornographic deepfake.
Anti-deepfake laws come in both civil and criminal variants. Available civil remedies include disgorgement, actual damages, statutory damages of up to $150,000 per deepfake, punitive damages, and attorney’s fees.28See, e.g., Cal. Civ. Code § 1708.86(e) (West 2025).
Criminal penalties include multiyear prison terms.29See, e.g., TAKE IT DOWN Act, Pub. L. No. 119-12, § 2(a)(2), 139 Stat. 55, 55–58 (2025) (to be codified at 47 U.S.C. § 223(h)(4)(A)).
Civil anti-deepfake laws often do not require proof that a violator intended to harm the depicted individual; instead, they may condition liability on actual or constructive knowledge that the depicted person did not consent to the creation or disclosure of a deepfake.30See, e.g., Cal. Civ. Code § 1708.86(b) (West 2025); 740 Ill. Comp. Stat. 190/10(a) (2024); Minn. Stat. § 604.32.2(a)(1) (2024); N.Y. Civ. Rights Law § 52-c(2)(a) (McKinney 2024).
Many criminal laws, meanwhile, require proof of some additional intent or knowledge. Several laws require intent to cause harm to the victim,31Ga. Code Ann. § 16-11-90(a)(1), (b) (2022); Haw. Rev. Stat. § 711-1110.9 (2024); N.Y. Penal Law § 245.15 (McKinney 2023); Va. Code Ann. § 18.2-386.2(A) (2024).
while others impose liability when “intent to self-gratify” is present.32S.D. Codified Laws § 22-21-4(3)(d) (2024); see also, e.g., Wyo. Stat. Ann. § 6-4-306(b)(iii)(B) (2024); N.D. Cent. Code § 12.1-27.1-03.3(6) (2025).
Another group of criminal laws imposes lower requirements, such as proof that the offender knew or should have known that the media was a deepfake.33Fla. Stat. § 836.13(3) (2025) (“know[] or reasonably should have known that [the] visual depiction was an altered sexual depiction”); La. Stat. Ann. § 14:73.13(B)(1) (2025) (“knowledge that the material is a deepfake that depicts another person”). For slightly different requirements, see Utah Code Ann. § 76-5b-205(2)(a) (West 2025) (“know[] or should reasonably know would cause a reasonable person to suffer emotional or physical distress or harm”) and Ala. Code § 13A-6-240(a) (2024) (“knowing[]” distribution).
Most important for present purposes, however, are two provisions that delineate the types of communications that anti-deepfake laws regulate: First, the laws usually apply irrespective of whether a deepfake would deceive a reasonable observer into believing it to be a record of historical fact. Second, the laws often limit their coverage to depictions rendered in “realistic” styles, or depictions “indistinguishable from” “authentic” media.
These two provisions exist in some tension. The first establishes that the factual assertions in a deepfake are beside the point: The typical law regulates media irrespective of whether any reasonable observer would understand it as factual. Yet even as the laws treat deceptiveness as irrelevant, they reassert the importance of verisimilitude through provisions that limit coverage to depictions that resemble authentic photographic and videographic records of historical fact. In other words, a covered deepfake must resemble an authentic record of historical fact, yet it is irrelevant whether it could actually pass for such a record.
Why would a typical anti-deepfake law simultaneously provide that (1) a deepfake’s tendency to deceive is immaterial and (2) only “realistic” deepfakes are covered? And in practical terms, what media does this language actually cover? On its own, each of these provisions makes sense. Recent outrage over deepfakes shows that certain realistic, sexualized depictions can cause especially grave harm to their subjects, and that this harm can occur whether or not the deepfakes are understood as documentary fact.34See infra notes 103–104 and accompanying text.
In tandem, however, the two provisions illustrate anti-deepfake laws’ equivocal conception of the harm they seek to redress: Are deepfakes harmful because they are false, or for another reason? This equivocation, in turn, illuminates a deeper fault line in the United States’ regulation of privacy and dignitary harms.
1. “Falsity” Without a Deceptiveness Requirement
Some anti-deepfake laws purport to limit their coverage to “false” media.35 Ark. Code Ann. § 5-14-139(a)(1)(B) (2025); Del. Code Ann. tit. 10, § 7802 (2024); Ga. Code Ann. § 16-11-90(b)(1) (2022); N.H. Rev. Stat. Ann. § 644:9-a (2025); S.C. Code Ann. § 16-15-330 (2025). Even the legislative history of the revised TAKE IT DOWN Act recites an emphasis on “fals[ity].” See H.R. Rep. No. 119-82, at 2 (2025).
But a closer reading shows that the language of falsity is generally a red herring: Hardly any anti-deepfake laws seem to require that a deepfake could instill a false belief in a reasonable observer. In fact, many laws specify that disclaimers of falsity offer no defense. Laws in California, Florida, Illinois, New York, South Carolina, Tennessee, and Washington expressly provide for liability even where the deepfake contains a disclaimer stating that it is unauthorized or that it does not depict the person’s actual behavior.36Cal. Civ. Code § 1708.86(d) (West 2025); Fla. Stat. § 836.13(6) (2025); 740 Ill. Comp. Stat. 190/10(c) (2024); N.Y. Civ. Rights Law § 52-c(2)(b) (McKinney 2024); S.C. Code Ann. § 16-15-330(1) (2025); Tenn. Code Ann. § 39-17-1906(c) (2024); Wash. Rev. Code § 7.110.025(3) (2025).
Under nearly all anti-deepfake laws, then, a pornographic deepfake’s actual or perceived correspondence with historical fact is irrelevant.37Almost always—but not always. Not every regulation of pornographic deepfakes treats disclaimers as irrelevant. Louisiana’s law, for example, specifically excludes from its scope “any material . . . that includes content, context, or a clear disclosure visible throughout the duration of the recording that would cause a reasonable person to understand that the audio or visual media is not a record of a real event.” La. Stat. Ann. § 14:73.13(C)(1) (2025); see also N.J. Stat. Ann. § 2C:21-17.8(g)(1) (West 2025) (exempting “any content that a reasonable viewer or listener would not believe to authentically depict speech or conduct”). By contrast, laws regulating deepfakes in elections typically extinguish liability when a disclaimer is present. See, e.g., Cal. Elec. Code § 20010(b) (West 2025); N.Y. Elec. Law § 14.106(5)(b) (McKinney 2025) (containing an amendment by a 2024 session law to require in “any political communication that was produced by or includes materially deceptive media” a disclosure stating, “This (image, video, or audio) has been manipulated”); Wash. Rev. Code § 42.62.020(4) (2025). At least one election-related anti-deepfake law has already been preliminarily enjoined as unconstitutional. Kohls v. Bonta, 752 F. Supp. 3d 1187 (E.D. Cal. 2024).
A deepfake can be unlawful even if no reasonable observer would understand it as a record of something the victim actually did: For example, imagine a deepfake that depicts fantastical events and bears a conspicuous, effective, and unquestionably true disclaimer, “FICTION: NOT AN AUTHENTIC RECORDING.” A deepfake can also be unlawful even if it faithfully depicts true events: Imagine a jilted partner who, by memory, uses AI to generate painstakingly accurate reenactments of sexual encounters with his ex. Neither of these hypothetical deepfakes is “false.”38Goodyear argues, “Where deepfakes purport to depict actual events, they are false.” Goodyear, supra note 19, at 998. However, like any other representational medium, deepfakes can truthfully portray actual events. Cf., e.g., Commonwealth v. Serge, 837 A.2d 1255, 1257 (Pa. Super. Ct. 2003) (reciting, as true, facts depicted in computer-generated animation of the prosecution’s theory of the case leading to conviction, which had been admitted as demonstrative evidence), aff’d, 896 A.2d 1170 (Pa. 2006); Serge, 896 A.2d at 1179–80 (describing process of authenticating “accurate” computer-generated demonstrative evidence). Deepfakes are necessarily false insofar as they purport not to be deepfakes—but many deepfakes do not represent themselves as anything other than deepfakes.
There is an argument that both are true, though it may be most accurate simply to say that they’re neither true nor false.39See John C.P. Goldberg & Benjamin C. Zipursky, A Tort for the Digital Age: False Light Invasion of Privacy Reconsidered, 73 DePaul L. Rev. 461, 482 (2024) (discussing “non-newsworthy statements that would be found highly offensive by a reasonable person and would be nonactionable as opinion in a defamation context because they are not provably true or provably false”).
Yet typical anti-deepfake laws treat them no differently from deceptively false deepfakes. This refusal to differentiate seems entirely appropriate; these nonfalse deepfakes can still cause grave harm. But it belies the language of forgery and falsity that appears in these laws.
2. Subject Matter and “Realism” Requirements
At the same time as they discount whether a reasonable observer would regard a deepfake as a documentary record, many anti-deepfake statutes define covered deepfakes in terms of their resemblance to documentary records. For instance, anti-deepfake statutes generally focus on visual media, and many limit their coverage to videos and still images.40Cal. Civ. Code § 1708.86(a)(3)(A) (West 2025); N.Y. Civ. Rights Law § 52-c(3)(a) (McKinney 2024); N.Y. Penal Law § 245.15(1)(a) (McKinney 2023); Tex. Penal Code Ann. § 21.165 (West 2025); Va. Code Ann. § 18.2-386.2 (2024).
The broadest state laws cover both pictorial and aural representations.41See, e.g., La. Stat. Ann. § 14:73.13 (2025); Minn. Stat. § 604.32(1)(b) (2024).
No anti-deepfake law expressly covers written or spoken words, even though words alone can falsely portray an identifiable person engaging in sexual activity.42Cf., e.g., James v. Gannett Co., 353 N.E.2d 834, 837 (N.Y. 1976) (“It is old law that written charges imputing unchaste conduct to a woman are libelous per se . . . .”). Amy Adler notes that disfavor for images is a durable feature of obscenity and pornography law. See generally Amy Adler, The First Amendment and the Second Commandment, in Law, Culture and Visual Studies 161 (Anne Wagner & Richard K. Sherwin eds., 2014).
The statutes’ core coverage is photorealistic images and videos—although the language they use to articulate this coverage is often clumsy and confusing.
Given that the typical statute treats truthfulness or falsity as irrelevant, the most bewildering aspects of anti-deepfake laws are their requirements concerning how covered media must look. Anti-deepfake laws often expressly limit coverage to material that is “realistic,”43See, e.g., Cal. Civ. Code § 1708.86(a)(6) (West 2025); Fla. Stat. § 836.13(1)(a) (2025); Minn. Stat. § 604.32.1(b)(1) (2024); N.Y. Civ. Rights Law § 52-c(1)(b) (McKinney 2024); N.Y. Penal Law § 245.15(2)(d) (McKinney 2023); S.D. Codified Laws § 22-21-4(3)(a) (2024).
but they seldom define “realism.”44One exception is Nevada’s law, which refers specifically to “photorealis[m].” Nev. Rev. Stat. § 200.770 (2025).
Moreover, the scope of the typical law frustrates one obvious interpretation: It forecloses equating realism with observers’ tendency to perceive media as factual. “Realistic” must mean something different from “deceptive,” because anti-deepfake laws typically require no proof of deception and often specify that a violation can occur even when the media contains a disclaimer that it is false.45Cal. Civ. Code § 1708.86(d) (West 2025); Fla. Stat. § 836.13(6) (2025); N.Y. Civ. Rights Law § 52-c(2)(b) (McKinney 2024).
What anti-deepfake laws are targeting is not the assertion of particular truths or falsehoods, but the use of particular aesthetics. “Realism” is a style, and what constitutes realism varies depending on cultural and historical context.46Rebecca Tushnet, Worth a Thousand Words: The Images of Copyright, 125 Harv. L. Rev. 683, 724 (2012); Benjamin L.W. Sobel, Elements of Style: Copyright, Similarity, and Generative AI, 38 Harv. J.L. & Tech. 49, 84–86 (2024).
Some statutes, like those in Louisiana, Massachusetts, Pennsylvania, and South Dakota, require a sort of realism that would make the media appear “authentic” to a reasonable observer.47La. Stat. Ann. § 14:73.13(C)(1) (2025); Mass. Gen. Laws ch. 265, § 43A(b)(1) (2024); 18 Pa. Cons. Stat. § 3131 (2025); S.D. Codified Laws § 22-21-4(3)(a) (2024). So did the proposed DEFIANCE Act. Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, S. 3696, 118th Cong. § 3(a)(3)(A) (as amended by S. Amend. 3049 and passed by Senate, July 23, 2024) [hereinafter DEFIANCE Act of 2024].
Louisiana’s law refers specifically to resemblance to “authentic” media that “record . . . the actual speech or conduct of the individual” depicted.48La. Stat. Ann. § 14:73.13(C)(1) (2025).
Here, “authentic” denotes documentary media, like photographs and video recordings. Unlike other forms of pictorial representation, which merely depict events, documentary media actually record real-life events. 49Compare Depict, Oxford Eng. Dictionary, https://doi.org/10.1093/OED/2187623999 (“to portray, delineate, figure anyhow”), with Record, Oxford Eng. Dictionary, https://doi.org/10.1093/OED/1987418679 (“[t]o convert (sounds, images, a broadcast, etc.) into permanent form . . . chiefly using magnetic tape or digital electronic techniques”).
As I use these words, to depict an event is to represent it pictorially, whereas to record an event is to capture contemporaneous evidence of its occurrence.50Part II, infra, explains this distinction more precisely using semiotic terminology.
A record may be, but is not necessarily, a depiction, and vice versa.
In the sense that anti-deepfake statutes use the word, an “authentic” photograph is an image that is actually a photograph.51Although one might also ask whether an oil painting is “authentic,” this question connotes whether it was painted by a particular artist, not whether it is actually an oil painting.
Statutes that use the word “authentic” almost certainly require photorealism, “the quality in art . . . of depicting or seeming to depict real people . . . with the exactness of a photograph.”52Photorealism, Merriam-Webster, https://merriam-webster.com/dictionary/photorealism [perma.cc/4ZAZ-55PZ]. Nevada’s law makes this explicit. Nev. Rev. Stat. § 200.770 (2025).
A photorealistic image is one rendered in a style of realism that resembles an “authentic” photograph or video recording. Statutes that limit their coverage to “recording[s] of an individual”53Ala. Code § 13A-6-240(b) (2024).
or media “substantially derivative” of video recordings and photographs54Minn. Stat. § 604.32.1(b) (2024). Minnesota’s statute also includes “electronic image[s],” but because this term is enumerated in a list of otherwise documentary media, it probably does not encompass nondocumentary images. See Yates v. United States, 574 U.S. 528, 543 (2015) (describing “the principle of noscitur a sociis”).
are probably also meant to capture photorealistic media.
Other anti-deepfake laws, however, use far broader language. Florida’s law, for example, covers “any visual depiction that . . . depicts a realistic version of an identifiable person” nude or engaged in sexual conduct.55Fla. Stat. § 836.13(1)(a) (2025).
Indiana’s criminal statute covers a “digital image . . . or video . . . that is of a quality, characteristic, or condition such that it appears to depict the alleged victim,” and several other states employ similar language.56Ind. Code § 35-45-4-8(c)(3) (2025); see also, e.g., Utah Code Ann. § 76-5b-205(1)(a)(ii) (West 2025); Me. Rev. Stat. Ann. tit. 17-A, § 511-A (2025) (“appears to show”); Mont. Code Ann. § 45-5-640(5)(b) (West 2025).
These laws are probably intended to regulate photorealistic, AI-generated media rather than pictorial depictions in general.57For instance, the “sponsor’s statement of intent” accompanying a Texas bill specifically mentions “artificial intelligence.” Tex. Senate, Bill Analysis, S. 88, 1st Sess., at 1 (2023), https://capitol.texas.gov/tlodocs/88R/analysis/html/SB01361F.htm [perma.cc/T5VW-XFRT]. Similarly, a legislative report on Florida’s statute frames the bill as a response to “technology advancing at a rapid rate.” Fla. Senate, Bill Analysis and Fiscal Impact Statement, 2022 Reg. Sess., at 2 (2022).
Yet the text might encompass depictions of all sorts.58For example, read literally, a Texas anti-deepfake statute in force from 2023 to 2025 encompassed videos of fictionalized theatre or puppetry performances, as these media can “depict a real person . . . performing an action that did not occur . . . .” Tex. Penal Code Ann. § 21.165(a)(1) (West 2025). Both the musical Hamilton and the marionette movie Team America: World Police depict real people—Alexander Hamilton and Kim Jong-il, respectively—performing musical numbers that, to the best of my knowledge, they never performed.
3. “Falsity” and “Realism” Puzzles in the TAKE IT DOWN Act
Prodding at the language and the logic of anti-deepfake statutes reveals that they have been drafted in a manner that clashes with their goals. The statutes’ language suggests that their focus is on whether a deepfake can be distinguished from authentic media. Yet their practical scope includes all deepfakes of a particular aesthetic, irrespective of whether a reasonable observer could distinguish them from authentic media. The statutes require apparent “authenticity” while simultaneously discounting whether any reasonable viewer would take a deepfake to be authentic.
As an illustration, consider the federal TAKE IT DOWN Act, introduced by Senator Ted Cruz in June 2024 and enacted in significantly modified form in May 2025.59Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, S. 4569, 118th Cong. (2024) [hereinafter TAKE IT DOWN Act First Draft]; Pub. L. No. 119-12, 139 Stat. 55 (2025).
The first draft’s vague language exemplifies problems that appear in enacted state legislation. And although the enacted TAKE IT DOWN Act improves upon the first draft, the new federal law still contains a crucial ambiguity that illustrates its underdeveloped conception of harm: Are deepfakes regulated because of the facts they assert, or because of the way they look?
a. The First Bill’s Illustrative Shortcomings
The first draft of the TAKE IT DOWN Act provided that to be covered, a deepfake must “falsely depict an individual’s appearance or conduct.”60TAKE IT DOWN Act First Draft § 2(a)(2).
It further specified, “an individual appears in an intimate visual depiction if . . . the individual is actually the individual identified in the intimate visual depiction; or . . . a deepfake of the individual is used to realistically depict the individual such that a reasonable person would believe the individual is actually depicted in the intimate visual depiction.”61Id. (numbering omitted).
Inspecting these provisions reveals that they make no sense. Start with “falsely.” Is a “false[] depict[ion]” simply any ahistorical depiction? Or is it only a depiction that can reasonably be interpreted as asserting historical facts about a person? Many expressive works defy a true/false binary. Are Monet’s haystack paintings true or false? The question is nonsensical: As Rebecca Tushnet observes, “[visual] styles are neither true nor false.”62Tushnet, supra note 46, at 724.
Star Wars is not a documentary, but it is fiction rather than falsehood. Falsity denotes being “[c]ontrary to what is true, erroneous,” or outright “mendacious.”63False, Oxford Eng. Dictionary, https://doi.org/10.1093/OED/8506717465 (emphasis added).
If we don’t interpret media to be reporting historical facts, then it isn’t false—even if it isn’t true, either.64See Marc A. Franklin, Fiction, Libel, and the First Amendment, 51 Brook. L. Rev. 269, 273 (1985) (“[V]irtually the entire range of poetic and prose fiction would occupy this middle ground between truth and falsity. Language that does not purport to be reportorial is not automatically to be deprecated as ‘false.’ ”); cf. Richard A. Posner, Law and Literature 515 (3d ed. 2009) (“[O]ne of the adjustments we make in reading a work as literature rather than as history or sociology is generally to ignore issues of factuality . . . .”).
Limiting coverage to false depictions would just establish a takedown regime for certain types of defamation: It could cover images of libelous oil paintings—which are indeed “false[] depict[ions]”—but not photorealistic deepfakes whose content or context clearly communicates that they are fictional.
What about the proposed provision that an “individual appears in an intimate visual depiction . . . if the individual is actually the individual identified in” it?65TAKE IT DOWN Act First Draft § 2(a)(2) (emphasis added).
Read literally, this text is almost meaningless. Being “actually the individual identified in [an] intimate visual depiction” was probably meant to cover situations like revenge porn, in which the victim “is actually the individual” recorded in the media, and which the initial bill largely equated with deepfakes. But being recorded is distinct from being identified. As a result, it’s unclear what the “actually” in “actually . . . identified” means.66What’s more, one is “actually . . . identified” even when one is misidentified; the bill did not specify that the identification had to be correct.
A person identified in surveillance footage of a bank robbery and a person identified in an oil painting of a bank robbery are each “actually” the person identified in the material in question. But only the surveillance footage directly evidences the individual’s participation in the robbery, because it is a record rather than a depiction.
Equally confusing was the bill’s provision that an individual “appears in an intimate visual depiction” if she “is . . . realistically depict[ed] . . . such that a reasonable person would believe [she] is actually depicted.”67TAKE IT DOWN Act First Draft § 2(a)(2).
On the narrowest plausible reading, the definition requires that a reasonable person actually would believe that the deepfake is an authentic photograph or video recording of the person it depicts. Reading the bill in this way makes the deepfake’s content and context especially significant: If the deepfake contains a disclaimer that it is a deepfake, or if the deepfake depicts fantastical conduct that could never take place in real life, then it may not be reasonable for an observer to believe that the deepfake depicts an individual’s speech or conduct as that speech or conduct truly occurred.
A broader reading of the first draft would be that it required a deepfake to be photorealistic but not necessarily misleading. So construed, the law would have covered even deepfakes that are so obviously fake that no reasonable observer could mistake them for authentic photographs or videos, as long as they are rendered in a photorealistic style. This is the best reading of the numerous statutes that require realism while simultaneously providing that disclaimers of falsity are no defense to liability. Unhelpfully, however, neither the first draft nor the enacted TAKE IT DOWN Act expressly addresses whether a disclaimer of falsity affects liability.
Finally, the broadest plausible reading of this language is that it required only that the depicted individual be identifiable from the deepfake, regardless of its realism. The first draft required only “that a reasonable person would believe the individual is actually depicted,” not that a reasonable observer would believe that the individual is actually recorded. Both the surveillance footage of the bank robbery and the oil painting of the bank robbery depict the robber. But only the footage records the robber.68Other statutes acknowledge this difference: Louisiana’s, for example, defines deepfakes as “falsely appear[ing] to a reasonable observer to be an authentic record of the actual speech or conduct of the individual . . . .” La. Stat. Ann. § 14:73.13(c)(1) (2025) (emphasis added).
This means that the broadest reading of the first draft would require only that a reasonable observer would recognize that a specific individual is depicted in the deepfake, thus covering cartoons, flipbooks, or any other nonphotorealistic pictorial medium capable of presentation in a still or video image.
The TAKE IT DOWN Act was significantly revised before its passage, but its original shortcomings remain illustrative, as similar errors appear in enacted state legislation. Numerous state statutes, for example, contain the problematic “appears to depict” language.69Del. Code Ann. tit. 10, § 7802 (2024); Ind. Code § 35-45-4-8(c)(3) (2025); Mont. Code Ann. § 45-5-640(5)(b) (West 2025); Utah Code Ann. § 76-5b-205(1)(a)(ii) (West 2025); see also Ark. Code Ann. § 5-14-139(b)(2) (2025) (“an ordinary person . . . would conclude that the depiction is of the identifiable person”); N.C. Gen. Stat. § 14-190.5A(a)(2) (2025) (“a reasonable person would believe the image depicts an identifiable individual”); Wyo. Stat. Ann. § 6-4-306 (2024) (“purports to represent an identifiable person”).
Others define covered deepfakes as “false” despite omitting any requirement that they be understandable as assertions of fact.70See supra note 35.
Finally, many state laws attempt to treat deepfakes and revenge porn identically, despite fundamental ontological differences.71See, e.g., Cal. Civ. Code § 1708.86 (West 2025); Colo. Rev. Stat. § 18-7-107 (2025); Ga. Code Ann. § 16-11-90 (2022); Haw. Rev. Stat. § 711-1110.9 (2024); N.C. Gen. Stat. § 14-190.5A (2025); Neb. Rev. Stat. § 25-3502(2) (2025); Va. Code Ann. § 18.2-386.2 (2024); Vt. Stat. Ann. tit. 13, § 2606 (2025). As Kadri and West note, treating deepfakes and revenge porn alike leads to infelicities. See Kadri & West, supra note 12, at 17 (“[T]raditional revenge porn statutes often draw lines between private and nonprivate situations by including exceptions for images taken ‘in public’ or without ‘reasonable expectation of privacy.’ This concept obviously translates poorly to deepfakes, where the depicted conduct never occurred and no ‘private’ moments exist.”).
b. The Unresolved Problem of “Indistinguishability”
The enacted TAKE IT DOWN Act improves upon the first draft. The modified law removes the incoherent language about “actual[] depict[ion]” and all mentions of the ambiguous word “false.” Instead, the enacted statute defines a covered “digital forgery” as “any intimate visual depiction of an identifiable individual . . . that, when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction of the individual.”72Pub. L. No. 119-12, § 2(a)(2), 139 Stat. 55, 55 (2025) (to be codified at 47 U.S.C. § 223(h)(1)(B)).
The statute also addresses offenses involving “digital forgeries” in a separate section from offenses involving documentary revenge porn, while the first draft grouped them together.73Compare id., with TAKE IT DOWN Act First Draft § 2(a)(2).
However, the enacted law still contains a crucial ambiguity: Does its definition of “digital forgery” refer to a deepfake’s aesthetic appearance or its propositional content? There are two different ways that a deepfake might be “indistinguishable from an authentic visual depiction”: It might be aesthetically indistinguishable or propositionally indistinguishable. If a deepfake need only be aesthetically indistinguishable, then it need only be photorealistic, even if its content or context make it obvious that it is not authentic.
Alternatively, the statute could require a deepfake to be propositionally indistinguishable from an authentic photograph or video recording, covering only deepfakes capable of convincing a reasonable observer that they are authentic records, or that they otherwise state documentary facts.74The TAKE IT DOWN Act is not the first regulation of sexual imagery to employ unclear language of “indistinguishability.” The Child Pornography Prevention Act of 1996 (CPPA) defined “child pornography” to include images that “appear[] to be[] of a minor engaging in sexually explicit conduct,” a phrase that the legislative history described as covering media “ ‘virtually indistinguishable to unsuspecting viewers from unretouched photographs of actual children engaging in identical sexual conduct.’ ” Pub. L. No. 104-208, § 121(2)(4), 110 Stat. 3009, 3009–28 (1996); United States v. Hilton, 167 F.3d 61, 72 (1st Cir. 1999) (quoting S. Rep. No. 104-358, pt. I, at 7). This left unclear whether the standard was based on aesthetic realism or on what a viewer believed the image recorded. The Supreme Court struck down the CPPA’s “appears to be” language, and Congress amended the law to cover images “indistinguishable from” an actual minor, defining such images as those that “an ordinary person viewing the depiction would conclude” depict an actual minor, but excluding cartoons, drawings, and the like. See infra note 205; 18 U.S.C. § 2256(8)(B), (11); Pub. L. No. 108-21, § 502(c), 117 Stat. 650, 679 (2003). This, too, does not definitively resolve whether the test is aesthetic or propositional. In connection with a different statutory provision, the Supreme Court has stated that “simulated” sexual intercourse requires “caus[ing] a reasonable viewer to believe” that actual minors engaged in sexual conduct, as opposed to fictional scenes. United States v. Williams, 553 U.S. 285, 296–97 (2008); see also 18 U.S.C. § 2252A(a)(3)(B)(ii); 18 U.S.C. § 2256(2).
The statutory text seems to point to the latter, as it specifies that a “digital forgery” is a depiction that “when viewed as a whole by a reasonable person, is indistinguishable from an authentic visual depiction.”75Pub. L. No. 119-12, § 2(a)(2), 139 Stat. 55, 55 (2025) (to be codified at 47 U.S.C. § 223(h)(1)(B)) (emphasis added).
When a reasonable person views a communication “as a whole,” that person presumably pays attention not only to its aesthetic appearance, but also to the content it depicts and the context in which it is presented.76At least, this is how courts in defamation cases analyze whether a communication “as a whole” is capable of defamatory meaning. Farah v. Esquire Mag., 736 F.3d 528, 535 (D.C. Cir. 2013) (quoting Afro-Am. Publ’g Co. v. Jaffe, 366 F.2d 649, 655 (D.C. Cir. 1966) (en banc)). To analyze statements as a whole, courts look to the context of the publication, which “includes not only the immediate context of the disputed statements, but also the type of publication, the genre of writing, and the publication’s history of similar works.” Id.
In many instances, attention to content and context will put a reasonable person on notice that a photorealistic deepfake is an AI-generated depiction rather than an authentic documentary record.
Yet a propositional-indistinguishability requirement clashes with the TAKE IT DOWN Act’s evident purpose. In addition to its criminal prohibitions on publishing deepfakes, the law requires covered online platforms to establish a notice-and-takedown regime for intimate visual depictions.77Pub. L. No. 119-12, § 3, 139 Stat. 55 (2025) (to be codified at 47 U.S.C. § 223a).
The law expressly states that covered platforms include online services that regularly “publish, curate, host, or make available content of nonconsensual intimate visual depictions.”78Id. § 4 (to be codified at 47 U.S.C. § 223a). The legislation already appears to have discouraged such sites from operating: Shortly after the bill passed both chambers of Congress, a “[m]ajor deepfake porn site” called MrDeepFakes shut down. Alana Wise, Major Deepfake Porn Site Shuts Down, NPR (May 6, 2025), https://npr.org/2025/05/06/nx-s1-5388422/mr-deepfakes-porn-site-ai-shut-down [perma.cc/Y5FC-AGZ2]. Some scholars have noted uncertainty about whether the statute’s takedown regime covers deepfakes. See James Grimmelmann, Deconstructing the Take It Down Act, 68 Commc’ns of the ACM 28, 29–30 (2025); Renée DiResta, Mary Anne Franks, Becca Branum, Adam Conner & Jen Patja, Lawfare Daily: Digital Forgeries, Real Felonies: Inside the TAKE IT DOWN Act, Lawfare at 22:58–23:23 (May 6, 2025), https://lawfaremedia.org/article/lawfare-daily-digital-forgeries-real-felonies-inside-the-take-it-down-act [perma.cc/ZWA9-CD63]. While the statute could be clearer, there remains a strong argument that its takedown regime encompasses deepfakes: The law defines a “digital forgery” as a specific type of “intimate visual depiction,” Pub. L. No. 119-12, § 2(a)(2), 139 Stat. 55, 55 (2025) (to be codified at 47 U.S.C. § 223(h)(1)(B)), and it defines “intimate visual depiction” by cross-reference to the term’s definition in a 2022 statute, codified at 15 U.S.C. § 6851, that provides a private right of action for nonconsensual disclosures of intimate visual depictions, id. (to be codified at 47 U.S.C. § 223(h)(1)(E)). The National Association of Attorneys General recently noted that § 6851’s “application to digital forgeries is currently unsettled.” David Leibert, Congress’s Attempt to Criminalize Nonconsensual Intimate Imagery: The Benefits and Potential Shortcomings of the TAKE IT DOWN Act, Nat’l Ass’n of Att’ys Gen. (Aug. 26, 2025), https://www.naag.org/attorney-general-journal/congresss-attempt-to-criminalize-nonconsensual-intimate-imagery-the-benefits-and-potential-shortcomings-of-the-take-it-down-act/ [perma.cc/JJS9-DKBT]. For what it is worth, § 6851 defines “visual depiction” with a cross-reference to a third statute, which defines the term without reference to authenticity, see 18 U.S.C. § 2256(5), and that same statute expressly contemplates that a “visual depiction” may not be an authentic photograph and may instead be a computer-generated image that does not record historical fact, see 18 U.S.C. § 2256(8)(C); infra Section III.A.2.c (discussing § 2256(8)(C)).
It is intuitively obvious why the TAKE IT DOWN Act—a statute designed to limit the proliferation of nonconsensual deepfakes—would specifically target websites whose raison d’être is promulgating nonconsensual deepfakes. But this intuition depends to some degree on reading the law to target aesthetic indistinguishability rather than propositional indistinguishability. Of the places a deepfake might appear, a website that primarily or exclusively hosts deepfakes is among the least likely to sow confusion about factual propositions. A reasonable person viewing a deepfake “as a whole” on a site overtly dedicated to deepfakes will very probably distinguish it “from an authentic visual depiction of the individual.”79TAKE IT DOWN Act, Pub. L. No. 119-12, § 2(a)(2), 139 Stat. 55, 55 (2025) (to be codified at 47 U.S.C. § 223(h)(1)(B)). Again, the significance of a publication’s context is something defamation doctrine recognizes. See, e.g., Farah, 736 F.3d at 537 (noting that the “primary intended audience” of an allegedly defamatory communication on a blog “would have been familiar with [defendant]’s history of publishing satirical stories”); see also Quentin J. Ullrich, Note, Is This Video Real? The Principal Mischief of Deepfakes and How the Lanham Act Can Address It, 55 Colum. J.L. & Soc. Probs. 1, 8 (2021) (acknowledging that “[v]iewers of videos on sites like MrDeepFakes almost certainly know they are fake”); supra note 76.
Thus, the law’s specific enumeration of nonconsensual-intimate-image sites suggests an intent to target aesthetics rather than factual assertions.
Other anti-deepfake laws more clearly communicate an aesthetic indistinguishability requirement.80Still other laws, however, evince even greater ontological confusion. Maryland’s law refers to “an image . . . that is indistinguishable from the person.” Md. Code Ann., Crim. Law § 3-809(a)(6)(i)(2) (West 2025). Representations of people are not people, and almost always, they are exceedingly easy to distinguish from actual, embodied people. Moreover, a three-dimensional dummy truly indistinguishable from a person would be excluded by that same law’s carveout for “sculpture.” Id. § 3-809(a)(6)(iii)(3). For the same drafting gaffe, see Tex. Civ. Prac. & Rem. Code Ann. § 98B.001 (West 2025).
Take Texas’s 2025 revisions to its criminal statute, which replace a definition of a deepfake as “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality,” with the definition, “ ‘Deep fake media’ means a visual depiction . . . that appears to a reasonable person to depict a real person, indistinguishable from an authentic visual depiction of the real person.”81Act of June 20, 2025, ch. 1133, sec. 2, § 21.165(a)(1), 2025 Tex. Sess. Law Serv. Ch. 1133 (West) (codified as amended at Tex. Penal Code Ann. § 21.165 (2025)).
The revisions also state that it is not a defense if the deepfake is labeled or otherwise indicated to be inauthentic.82Id. sec. 3, § 21.165(c-2)(2).
Similarly, South Carolina’s law covers imagery “that appears to a reasonable person to be indistinguishable from an authentic visual depiction . . . regardless of whether the visual depiction indicates . . . that [it] is not authentic.”83 S.C. Code Ann. § 16-15-330(1) (2025).
The Texas and South Carolina statutes are arguably internally inconsistent—they cover depictions “indistinguishable from . . . authentic” records even when observers can easily distinguish them—and they are certainly inconsistent with a propositional-indistinguishability standard. What harmonizes their provisions is reading “indistinguishable” as an aesthetic requirement: If a deepfake looks like an authentic photograph or video recording, it can constitute a violation even if it is in practice entirely distinguishable.
Anti-deepfake laws target outrageous aesthetics, not false propositions. They remedy a harm that stems from an outrageous style of fictional depiction, whether or not a reasonable observer would take the depiction to assert historical facts about the person depicted. Yet the laws employ a vocabulary better suited to regulating the communication of facts. Understanding why the laws do so, and what language better captures their theory of harm, requires excavating the theory of privacy that justifies anti-deepfake statutes.
B. Anti-Deepfake Laws’ Theoretical Scaffolding
Anti-deepfake laws are hard to interpret because, perhaps inadvertently, their drafters legislated right on top of a legal fault line. One can divide the harms redressed by American privacy law and dignitary torts into two discrete categories: harms that derive from disclosures of fact and harms that do not. Defamation, for example, falls into the former category, as it concerns false statements of fact. Contemporary privacy-and-technology discourse, too, tends to focus on factual disclosures. Specifically, it focuses on what I’ll call “information privacy”—the collection, transmission, and use of actual or purported facts about persons.84See, e.g., Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793, 810 (2022) (noting that the privacy torts do not address “modern privacy problems involving the collection, use, and disclosure of personal data” and therefore “have little application to contemporary privacy issues”); Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. Rev. 1701, 1705, 1728, 1760 (2010) (characterizing privacy harm as the result of “connecting individuals to harmful facts” about them).
Information-privacy harms result from improper disclosures of personally identifiable facts like someone’s financial information or HIV status.85See, e.g., Shah v. Cap. One Fin. Corp., 768 F. Supp. 3d 1033, 1044–45 (N.D. Cal. 2025) (permitting claim under California Consumer Privacy Act to proceed based on alleged disclosures of “personal and financial information, including employment information, bank account information, citizenship status, and credit card preapproval or eligibility”); N.Y. Pub. Health Law §§ 2780, 2782 (McKinney 2025) (protecting confidentiality of HIV status).
Like defamation, information privacy regulates statements of purported facts about persons.
In parallel, however, runs a current of privacy doctrine unconcerned with factual representations. Among other things, this body of law redresses injuries caused by appropriation—the dignitary harm wrought by having one’s likeness used for the purposes of another.86See infra Section I.B.3.a.
Appropriative harm can occur even when no private facts are revealed and no falsehoods are asserted. It is appropriative harm that anti-deepfake laws address. The violation that the statutes remedy is a highly offensive87“Highly offensive” is a term of art discussed infra Section I.B.3.b.
use of an individual’s likeness, whether or not that use asserts any falsehoods or discloses any true, private facts about the victim.
The problem is that, as Section I.A showed, legislation and scholarship on deepfakes frequently approach an appropriative harm as if it were an information-privacy harm. The mismatch produces an internal tension. In practice, anti-deepfake statutes almost always treat a deepfake’s propositional content as irrelevant. Only a small minority of statutes require a deepfake to be capable of deceiving a reasonable viewer; many provide that disclaimers of inauthenticity are irrelevant. At the same time, however, the statutes describe regulated subject matter in terms of its apparent authenticity. Legislatures, perhaps wary of explicitly regulating offensive aesthetics as such, have tried to couch an aesthetic prohibition in the seemingly safer vocabulary of information privacy. Yet the principle that best explains the statutes’ coverage is neither whether the depiction is indistinguishable from an authentic record nor whether the depiction is false—indeed, actionable deepfakes may defy either criterion88See supra Section I.A.1.
—but rather whether the depiction is “highly offensive.”
Identifying anti-deepfake laws’ theoretical pedigree is important because it informs both how these statutes should be written and how they should be read. Recognizing that harmful deepfakes are highly offensive appropriations of likeness reveals respects in which some statutes are too narrow. Laws that extinguish liability in the presence of a disclaimer misdiagnose an appropriative harm as a defamatory harm.89See Ariz. Rev. Stat. Ann. § 16-1023(A)(2) (2025); La. Stat. Ann. § 14:73.13(C)(1) (2025).
By contrast, laws that ban “depiction[s]” without limitation to highly offensive representational styles are unduly broad restrictions on fictional expression.90See, e.g., Fla. Stat. § 836.13 (2025).
An imprecise identification of the harm to which they respond leaves some anti-deepfake statutes conceptually incoherent. The following subsections unpack the potential justifications for anti-deepfake laws. Because the harms of deepfakes are so frequently mischaracterized as injuries that stem from deceptive assertions of fact, legislators mistakenly attempt to target the problem by regulating “false” imagery or by equating deepfakes with information-privacy violations like revenge porn. But anti-deepfake laws are neither defamation laws nor information-privacy laws because they eschew any requirement that the media in question disclose facts about the person depicted. Statutes that regulate pornographic deepfakes by targeting assertions of fact will never address what’s most objectionable about them, which is neither falsity nor truth, but their highly offensive appropriation of a nonconsenting person’s likeness.
1. Anti-Deepfake Laws Are Not (Just) Defamation Laws
Some legislative history justifies anti-deepfake laws by referring to viewers’ inability to differentiate deepfakes from factual records, which suggests concerns about misleading statements of fact.91See, e.g., Michelle Hinchey, Sponsor Memorandum in Support of N.Y. S.B. S1042A, 2023–2024 Sess., https://nysenate.gov/legislation/bills/2023/S1042/amendment/A [perma.cc/Z6AN-54FG] (“As this technology improves, . . . it becomes nearly impossible to depict what is a real image and what is doctored.”); Hearing on SHB 1999 Before the Senate S. L. & Just. Comm., 2024 Leg. Sess. at 42:26 (Wash. 2024) (remarks of Rep. Tina Orwall), https://tvw.org/video/senate-law-justice-2024021275/?eventID=2024021275 [perma.cc/9MU4-3PS8] (“The bill in front of you is really about the fabricated images . . . . [T]hey’re just as harmful, right? Someone cannot distinguish.”). See generally Marc Jonathan Blitz, Deepfakes and Other Non-Testimonial Falsehoods: When Is Belief Manipulation (Not) First Amendment Speech?, 23 Yale J.L. & Tech. 160 (2020) (characterizing the harms of deepfakes as related to falsity).
Similarly, legislation may refer to deepfakes as “digital forger[ies]” or imply that they entail “fraud”—words that presuppose an intent to deceive viewers about a factual proposition.92See, e.g., 47 U.S.C. § 223(h)(1)(B); see also Fraud, Black’s Law Dictionary (12th ed. 2024) (“[a] knowing misrepresentation or knowing concealment of a material fact made to induce another to act to his or her detriment”); Forgery, Black’s Law Dictionary (12th ed. 2024) (“[t]he act of fraudulently making a false document or altering a real one to be used as if genuine”).
But a common characteristic of many anti-deepfake laws reveals that these statutes seek to remedy a wrong distinct from harmful falsehood. Several laws and federal bills explicitly state that a violator cannot avoid liability by including a disclaimer that states that a deepfake does not record factual conduct on the part of the person depicted.93See supra Section I.A.1.
At least one state that initially required that a deepfake be “created with the intent to deceive”94 Tex. Penal Code Ann. § 21.165(a)(1) (West 2025).
later amended the law, removing the intent requirement and specifying that a disclaimer of inauthenticity could not serve as a defense.95Act of June 20, 2025, ch. 1133, sec. 2, § 21.165(c-2), 2025 Tex. Sess. Law Serv. ch. 1133 (West) (codified as amended at Tex. Penal Code Ann. § 21.165 (West 2025)).
Provisions like these are inconsistent with the theory that the harmfulness of deepfakes comes solely from their ability to defame, deceive, and defraud.
The Supreme Court’s defamation jurisprudence teaches that “statements that cannot ‘reasonably be interpreted as stating actual facts’ about an individual” are protected by the First Amendment, at least when they pertain to matters of public concern.96Milkovich v. Lorain J. Co., 497 U.S. 1, 20 (1990) (quoting Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 50 (1988)).
This requirement reflects that a “false representation of fact” is an essential element of defamation.97Pring v. Penthouse Int’l, Ltd., 695 F.2d 438, 440 (10th Cir. 1982).
As the Court has explained, “[u]nder the First Amendment there is no such thing as a false idea.”98Gertz v. Robert Welch, Inc., 418 U.S. 323, 339–40 (1974).
On this understanding, an expressive statement that does not assert or imply a factual proposition cannot be actionably99For a discussion of the distinction between defamation and actionable defamation, see Jeffrey S. Helmreich, True Defamation, 4 J. Free Speech L. 835, 840–46 (2024) (noting that defamation may include truthful speech).
defamatory.100The Restatement’s definition of defamation corroborates this view, although as a constitutional matter it remains an “open issue” whether “statements not provably false about matters of purely private significance” can be actionably defamatory. Restatement (Second) of Torts § 566 (A.L.I. 1977) (opinion “is actionable only if it implies the allegation of undisclosed defamatory facts” (emphasis added)); Robert D. Sack, Sack on Defamation § 4:2.4, at 4-25 to -26; see also Robert D. Sack, Protection of Opinion Under the First Amendment: Reflections on Alfred Hill, “Defamation and Privacy Under the First Amendment”, 100 Colum. L. Rev. 294, 329–30 (2000).
If anti-deepfake laws addressed only false statements of fact, it would make no sense for them to provide for liability even when the deepfake is accompanied by a disclaimer that effectively communicates that it is unauthorized and ahistorical. Of course, the presence of a disclaimer does not render a statement nondefamatory per se.101New Times, Inc. v. Isaacks, 146 S.W.3d 144, 160–61 (Tex. 2004).
But some disclaimers surely make it impossible for any reasonable viewer to interpret a deepfake “as stating actual facts about an individual.”102Milkovich v. Lorain J. Co., 497 U.S. 1, 20 (1990) (quotation marks omitted) (quoting Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 50 (1988)); see Stanton v. Metro Corp., 438 F.3d 119, 128 (1st Cir. 2006).
To declare disclaimers legally irrelevant is to declare that legally cognizable harm occurs even when no reasonable viewer could understand a deepfake as stating actual facts about the individual depicted. In other words, it is to acknowledge that legally cognizable harm occurs even when a deepfake is nondefamatory as a matter of law.
The Senate made this precise acknowledgment in late July 2024, when it passed a version of the DEFIANCE Act that had been amended with a finding that “individuals depicted in [sexually explicit] digital forgeries are profoundly harmed when the content is produced, disclosed, or obtained without the consent of those individuals. These harms are not mitigated through labels or other information that indicates that the depiction is fake.”103DEFIANCE Act, S. 3696, 118th Cong. § 2(3) (2024) (as amended by S. Amend. 3049 and passed by Senate, July 23, 2024) (emphasis added).
In a decision published two days later, Meta’s Oversight Board made the same observation.104Explicit AI Images of Female Public Figures, Oversight Bd. (July 25, 2024), https://oversightboard.com/decision/bun-7e941o1n [perma.cc/8X7Y-C84G].
Popular attitudes, too, corroborate that the perceived harms of deepfakes are about something more than defamation. One study asked participants to evaluate a hypothetical nonconsensual, pornographic deepfake that was labeled as false and found that “[t]here was no significant effect of labeling on the perceived harmfulness or blameworthiness of the video.”105Matthew B. Kugler & Carly Pace, Deepfake Privacy: Attitudes and Regulation, 116 Nw. U. L. Rev. 611, 636, 640 (2021).
Deepfakes fuel deep outrage that is distinct from their capacity to misinform.106Daniel Immerwahr gets it exactly right when he observes,
In worrying about deepfakes’ potential to supercharge political lies and to unleash the infocalypse, moreover, we appear to be miscategorizing them. . . . Their role better resembles that of cartoons, especially smutty ones. Manipulated media is far from harmless, but its harms have not been epistemic. Rather, they’ve been demagogic, giving voice to what the historian Sam Lebovic calls “the politics of outrageous expression.”
Daniel Immerwahr, What the Doomsayers Get Wrong About Deepfakes, New Yorker (Nov. 13, 2023), https://newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review [perma.cc/FV9Q-4K3T] (quoting Sam Lebovic, Fake News, Lies, and Other Familiar Problems, 4 J. Free Speech L. 513, 516 (2024)).
2. Anti-Deepfake Laws Are Not Information-Privacy Laws
Just as deepfakes’ harms are often compared to defamatory harms, they are also compared to information-privacy harms. Prohibitions on disseminating “revenge porn”—authentic photographs and video recordings of victims naked or engaged in sexual conduct—are a frequent parallel. In many respects, this comparison is an apt one: Both revenge porn and deepfakes “turn[] individuals into objects of sexual entertainment against their will, causing intense distress, humiliation, and reputational injury.”107Franks & Waldman, supra note 20, at 893.
But revenge porn authentically records private events and thus falls in the heartland of information-privacy regulation, whereas deepfakes do not authentically record private facts—and often do not purport to.
Although some definitions of “revenge porn” refer to “images” without differentiation, jurists consistently employ the term to refer to photographs and video recordings—which record true, private facts about a person’s intimate appearance and activities—rather than to refer to, say, oil paintings and charcoal sketches that depict nudity.108For example, Citron and Franks’s article on revenge porn defines the material as “sexually graphic images of individuals [distributed] without their consent,” which is broad enough to include sketches and paintings in addition to photographs and video recordings. Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345, 346 (2014). But its exclusive focus is on photographs and video recordings: Every example of nonconsensual pornography that the article discusses involves photographs or video recordings, and it uses phrasing that equates “images” with photographs. See, e.g., id. at 359–60 (“took the image herself” and “took the photo herself” used interchangeably). For a notable counterexample that describes a written account of a sexual encounter as “revenge porn,” see Caitlin Flanagan, The Humiliation of Aziz Ansari, Atlantic (Jan. 14, 2018), https://theatlantic.com/entertainment/archive/2018/01/the-humiliation-of-aziz-ansari/550541 [perma.cc/U64F-HLLT].
Revenge-porn statutes often specify that their coverage is limited to recordings rather than hypothetical depictions of what a victim might have done, or how her body might look.109See, e.g., N.J. Stat. Ann. § 2C:14-9(c) (West 2025) (“photograph, film, videotape, recording or any other reproduction of the image”); Cal. Civ. Code § 1708.85 (West 2025) (“a photograph, film, videotape, recording, or any other reproduction of another”).
Thus, revenge-porn bans fit comfortably within information-privacy regulation because their core function is to restrict the circulation of records of private facts. The harm of revenge porn derives not just from how it looks, but from its ability to document private facts. Mere resemblance is insufficient: A dead ringer for a victim of revenge porn doesn’t gain a legal claim just by virtue of being a doppelganger; applicable statutes specify that it is “the person depicted” who possesses the cause of action.110See, e.g., 740 Ill. Comp. Stat. 190/10 (2024).
Analogously, jurists have justified revenge-porn bans in part through comparisons to criminal voyeurism.111State v. Katz, 179 N.E.3d 431, 458 (Ind. 2022) (“The invasion of privacy [from voyeurism] is similar to the invasion from nonconsensual pornography—that is, an individual should be able to control and consent to the situations in which their private areas are viewed and captured by another person.”).
Both revenge porn and voyeurism involve not merely a portrayal of a victim’s private affairs, but actual, nonconsensual access to those private affairs. The victim of a voyeur is the person whom the voyeur actually observes, just as the immediate victim of revenge porn is the person actually recorded in the images.
By contrast, anti-deepfake laws regulate resemblance rather than documentation. The statutes regulate deepfakes without regard to the facts they disclose about the victim. Anti-deepfake laws are thus not information-privacy laws—but this does not mean that anti-deepfake laws aren’t privacy laws. Legislators and scholars frequently, and justifiably, characterize deepfakes’ harms as privacy harms.112DEFIANCE Act, S. 3696, 118th Cong. § 2(4) (as amended by S. Amend. 3049 and passed by Senate, July 23, 2024) (“the privacy of . . . victims is violated”); Natalie Lussier, Nonconsensual Deepfakes: Detecting and Regulating This Rising Threat to Privacy, 58 Idaho L. Rev. 353 (2022); Citron, supra note 6, at 1924–25 (discussing the harm of nonconsensual, pornographic deepfakes as a “sexual-privacy invasion”); Congressman Joe Morelle Authors Legislation to Make AI-Generated Deepfakes Illegal, U.S. Congressman Joseph Morelle (May 5, 2023), http://morelle.house.gov/media/press-releases/congressman-joe-morelle-authors-legislation-make-ai-generated-deepfakes [perma.cc/U34Q-HLA5] (announcing “legislation to protect the right to privacy online amid a rise of artificial intelligence (AI) and digitally-manipulated content”); S.B. 309, 31st Leg., Reg. Sess. (Haw. 2021) (discussing “privacy issues . . . including . . . deep fake technology”); cf. Danielle Citron & Mary Anne Franks, Evaluating New York’s “Revenge Porn” Law: A Missed Opportunity to Protect Sexual Privacy, Harv. L. Rev. Blog (Mar. 19, 2019), https://harvardlawreview.org/blog/2019/03/evaluating-new-yorks-revenge-porn-law-a-missed-opportunity-to-protect-sexual-privacy [perma.cc/F9V7-9APL] (discussing revenge porn, not deepfakes).
Danielle Citron describes the harm of deepfakes as a hijacking of identity: They “mak[e] [a subject] be a sexual object in ways that [she] didn’t choose. . . . [I]t takes your sexual identity and exposes it in ways you didn’t choose.”113Brian Feldman, MacArthur Genius Danielle Citron on Deepfakes and the Representative Katie Hill Scandal, Intelligencer (Oct. 31, 2019), https://nymag.com/intelligencer/2019/10/danielle-citron-on-the-danger-of-deepfakes-and-revenge-porn.html [perma.cc/G5JD-8389].
Having this control wrenched away, Citron argues, interferes with liberty, autonomy, and self-development.114Citron, supra note 6, at 1884–86; cf. Citron & Franks, supra note 108 (discussing revenge porn, not deepfakes).
Citron is justified in characterizing the harms of deepfakes as privacy harms, but she is justified by a theory of privacy that the fact-based rubric of information privacy elides and that anti-deepfake statutes conceal by using the vocabulary of facts. What makes deepfakes harmful is that they appropriate a person’s likeness in a highly offensive way.
3. Anti-Deepfake Laws Ban Highly Offensive Appropriations of Likeness
Information privacy looms large in privacy scholarship. Several prominent theories of privacy exclusively concern information privacy.115See, e.g., Helen Nissenbaum, Privacy as Contextual Integrity, 79 Wash. L. Rev. 119, 123–24 (2004) (“The goals of this Article are more limited, not aiming for a full theory of privacy but only a theoretical account of a right to privacy as it applies to information about people.”); Cynthia Dwork & Aaron Roth, The Algorithmic Foundations of Differential Privacy, 9 Founds. & Trends in Theoretical Comput. Sci. 211 (2014).
Daniel Solove’s taxonomy of privacy contains sixteen subcategories of “socially recognized privacy violations,” thirteen of which concern information privacy.116Daniel J. Solove, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477, 478–83 (2006). Solove’s category of “information collection” focuses entirely on the adverse consequences that result from gathering factual information about persons, either through surveillance or interrogation. Id. at 491. The category of “information processing” is also based on true, or purportedly true, facts. “Aggregation,” “identification,” “insecurity,” “exclusion,” and “secondary use” all involve the acquisition, derivation, or analysis of true facts about a person; or access to and correction of purportedly true but erroneous information. See id. at 483, 508, 512, 517, 522–23. The same is true of most of the harms that Solove places under the umbrella of “information dissemination”: “[D]isclosure,” “exposure,” “increased accessibility,” “blackmail,” and “breach of confidentiality” all concern true information about a person. See id. at 526, 530, 536, 539–40, 542. Solove notes that the harm of breach of confidentiality is a “violat[ion] of trust,” and, like disclosure, “involve[s] the revelation of secrets about a person.” Id. at 526–27.
But information privacy is not the entirety of privacy law. Indeed, the remaining three of Solove’s privacy violations—“intrusion” upon a person’s seclusion, “decisional interference,” and “appropriation”—are not necessarily information-privacy violations. The first two do not capture the harms of deepfakes; deepfakes can be created and consumed without any intrusion upon a person’s seclusion, and if anything, a policy against decisional interference supports an unrestricted entitlement to create and consume deepfakes in private.117Id. at 558–59, 561. As an example of judicial protection from decisional interference, Solove cites Stanley v. Georgia, in which the Supreme Court struck down a ban on the private possession of obscene materials, reasoning that “the Constitution protects the right to receive information and ideas . . . regardless of their social worth” and that this “right takes on an added dimension” in a “prosecution for mere possession of printed or filmed matter in the privacy of a person’s own home.” Id. at 560 (quoting Stanley v. Georgia, 394 U.S. 557, 564 (1969)). Stanley would suggest that government regulation of the private creation and consumption of deepfakes abridges the decisional privacy of the deepfake consumer—although that interest might of course be subordinate to other interests. See, e.g., Osborne v. Ohio, 495 U.S. 103, 108 (1990) (“[T]he interests underlying child pornography prohibitions far exceed the interests justifying the Georgia law at issue in Stanley.”); see also infra note 321 (describing a 2025 district court decision dismissing, under Stanley, a charge for possession of obscene, AI-generated CSAM, but declining to dismiss a charge for production).
However, appropriation—“the use of [a person’s] identity to serve the aims and interests of another”—does capture deepfakes’ harms.118Solove, supra note 116, at 491.
a. “Appropriation” Is the Harm
Appropriation is the privacy violation that explains the harms of deepfakes. It is a confusing body of law, however, because it has bifurcated into two doctrines that redress distinct injuries.119Some commentators have suggested that appropriation and the right of publicity are better thought of as a single legal doctrine. See Eric E. Johnson, Disentangling the Right of Publicity, 111 Nw. U. L. Rev. 891, 903 (2016).
The first, the “right of publicity,” is generally understood as a property right in commercial uses of name, image, and likeness, which permits individuals to internalize the commercial value of their identity.120Cf., e.g., Waits v. Frito-Lay, Inc., 978 F.2d 1093, 1098 (9th Cir. 1992). It bears noting that Jennifer Rothman, one of the foremost right-of-publicity scholars, resists a dichotomous “public-private” or “privacy-property” categorization and emphasizes that the right is rooted in privacy interests. See generally Jennifer E. Rothman, The Right of Publicity: Privacy Reimagined for a Public World (2018).
Many proposals tout the right of publicity as a tool to combat harmful deepfakes, and the right may, in many cases, be an excellent fit.121Alice Preminger & Matthew B. Kugler, The Right of Publicity Can Save Actors from Deepfake Armageddon, 39 Berkeley Tech. L.J. 783 (2024); Mark Roesler & Garrett Hutchinson, What’s in a Name, Likeness, and Image? The Case for a Federal Right of Publicity Law, 13 Landslide, Sept./Oct. 2020, at 20 (“The right of publicity could be an unexpected vehicle by which to combat th[e] issue [of pornographic deepfakes].”); Ullrich, supra note 79, at 26; Jesse Lempel, Combatting Deepfakes Through the Right of Publicity, Lawfare (Mar. 30, 2018), https://lawfaremedia.org/article/combatting-deepfakes-through-right-publicity [perma.cc/G94D-MVED].
But because the right of publicity paradigmatically requires commercial uses of likeness and emphasizes the use’s economic value rather than the dignitary harm it causes, even its proponents acknowledge that it may not address pornographic deepfakes circulated in a noncommercial context.122Preminger & Kugler, supra note 121, at 807.
The second branch of appropriation jurisprudence better approximates the harm of nonconsensual pornographic deepfakes, and it is this body of law that I refer to as the appropriation tort. Like the right of publicity, the appropriation tort focuses on nonconsensual uses of likeness, but unlike the right of publicity, the appropriation tort aims to recompense plaintiffs for dignitary injuries rather than missed licensing revenue. Appropriation is a privacy violation that wreaks a dignitary harm; it “turns a man into a commodity and makes him serve the economic needs and interest of others.”123Edward J. Bloustein, Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser, 39 N.Y.U. L. Rev. 962, 988 (1964); see also id. at 989–91 (suggesting that dignitary interests of this sort are privacy interests).
Appropriation was first recognized, both at common law and in statutes, in the early twentieth century.124For an early statute, see N.Y. Civ. Rights Law § 50 (McKinney 2024) (originally codified in 1909); see also Lohan v. Take-Two Interactive Software, Inc., 97 N.E.3d 389, 393 (N.Y. 2018) (discussing the history of § 50).
In the 1905 case Pavesich v. New England Life Insurance Company, the Supreme Court of Georgia permitted a common-law privacy claim against an insurance company that used the plaintiff’s portrait in an advertisement without his consent.125Pavesich v. New England Life Ins. Co., 50 S.E. 68 (Ga. 1905).
Pavesich identified a harm strikingly similar to the harm scholars associate with deepfakes. The court observed that a nonconsensual use of likeness in advertising can instill in the wronged person “a realization that his liberty has been taken away from him; and, as long as the advertiser uses him for these purposes, he cannot be otherwise than conscious of the fact that he is for the time being under the control of another . . . .”126Id. at 80; see also Solove, supra note 116, at 548 (quoting Pavesich).
Nonconsensual deepfakes can interfere with liberty in a similar way. As Citron explains, “[b]eing able to reveal one’s naked body, gender identity, or sexual orientation at the pace and in the way of one’s choosing is crucial to identity formation. When the revelation of people’s sexuality or gender is out of their hands at pivotal moments, it can shatter their sense of self.”127Citron, supra note 6, at 1884.
Indeed, although Pavesich concerned an advertisement, its reasoning would seem to apply to commercial and noncommercial uses of likeness in equal measure. “If one’s picture may be used by another for advertising purposes,” the court wrote, “it may be reproduced and exhibited anywhere” from “private dwellings” to “the walls of a brothel. By becoming a member of society, neither man nor woman can be presumed to have consented to such uses of the impression of their faces and features . . . .”128Pavesich, 50 S.E. at 80.
Similarly, the Restatement of Torts recognizes that the appropriation tort may apply even when “the use is not a commercial one, and even though the benefit sought to be obtained is not a pecuniary one.”129 Restatement (Second) of Torts § 652C cmt. b (A.L.I. 1977).
At its most expansive, then, appropriation would seem to forbid almost any use of a plaintiff’s likeness.
Appropriation’s broad black-letter definition can’t, however, be taken at face value.130One scholar calls it “nonsensically overbroad.” Johnson, supra note 119, at 906.
Proscribing all uses of likeness that redound to a defendant’s “advantage” or “benefit” would ensnare commonplace activities in the heartland of First Amendment protection, like news reporting, artistic photography and portraiture, and fiction and nonfiction writing.131Id. at 905–07; cf. Robert C. Post, Rereading Warren and Brandeis: Privacy, Property, and Appropriation, 41 Case W. Rsrv. L. Rev. 647, 671–72 (1990).
Pavesich’s scandalized discussion of portraiture clashes with present-day understandings of third parties’ rights to express themselves and to document the world around them.132In fact, Pavesich was decided before the Supreme Court held that the Fourteenth Amendment incorporated the First Amendment against state governments. Gitlow v. New York, 268 U.S. 652 (1925).
Contrary to Pavesich’s assertion that “[b]y becoming a member of society, neither man nor woman can be presumed to have consented to” public displays of their likenesses,13350 S.E. at 80.
contemporary society presumes that by appearing in public, we have assented both to our likenesses being recorded and to those images being displayed publicly in a variety of contexts.134For instance, in affirming the dismissal of a statutory appropriation claim by a plaintiff who was photographed on the street and used to illustrate a news story, the New York Court of Appeals observed in 1982 that, “other than in the purely commercial setting covered by [the statute], an inability to vindicate a personal predelection [sic] for greater privacy may be part of the price every person must be prepared to pay for a society in which information and opinion flow freely.” Arrington v. N.Y. Times Co., 434 N.E.2d 1319, 1323 (N.Y. 1982); see also, e.g., Nussenzweig v. DiCorcia, No. 108446/05, 2006 WL 304832, at *8 (N.Y. Sup. Ct. Feb. 8, 2006) (“[P]laintiff finds the use of the photograph bearing his likeness deeply and spiritually offensive. . . . [P]laintiff’s distress . . . is not redressable in the courts of civil law. In this regard, the courts have uniformly upheld Constitutional 1st Amendment protections, even in the face of a deeply offensive use of someone’s likeness.”); William Prosser, Privacy, 48 Cal. L. Rev. 383, 391–92 (1960) (“Neither is it such an invasion [of privacy] to take [a person’s] photograph in [a public] place, since this amounts to nothing more than making a record, not differing essentially from a full written description, of a public sight which any one present would be free to see.”).
In other words, contemporary appropriation jurisprudence shields a vast array of legitimate interests in nonconsensually referring to, depicting, or describing a specific person. Despite defining a prima facie appropriation claim in broad terms, courts invoke the First Amendment to protect these expressive interests from the tort’s literal scope.135Johnson, supra note 119, at 904.
Indeed, even as they recite the Restatement’s broad definition, courts have rejected appropriation claims based on uses of identity in expressive works, including expressive works sold for profit.136See, e.g., De Havilland v. FX Networks, LLC, 230 Cal. Rptr. 3d 625, 638–40 (Cal. Ct. App. 2018); Benavidez v. Anheuser Busch, Inc., 873 F.2d 102, 104 (5th Cir. 1989); Neff v. Time, Inc., 406 F. Supp. 858, 861 (W.D. Pa. 1976).
Some jurists might go further and argue for limiting appropriation to commercial injuries,137Cf. Rebecca Tushnet, A Mask That Eats into the Face: Images and the Right of Publicity, 38 Colum. J.L. & Arts 157, 159 (2015) (“[T]o the extent a right of publicity is appropriate at all, it should not extend to noncommercial speech . . . .”). It bears noting, however, that Tushnet’s article focuses on “celebrities.” See generally id. Separately, some treatises describe appropriation as being undertaken “for a commercial use” or hedge and state that it is “usually for commercial gain.” 1 William L. Prosser, Handbook of the Law of Torts § 107, at 1056 (1941); 2 Rodney A. Smolla, Law of Defamation § 10:2 (2d ed. 2025), cited in Moore v. Sun Publ’g Corp., 881 P.2d 735, 743 (N.M. Ct. App. 1994). Statutory regimes often expressly limit appropriation to uses in advertising or trade. See, e.g., N.Y. Civ. Rights Law § 50 (McKinney 2024); Cal. Civ. Code § 3344 (West 2025); Wash. Rev. Code Ann. § 63.60.050 (2025). Some scholarly commentary on appropriation presents the cause of action as commercial appropriation of identity. See, e.g., Johnson, supra note 119, at 893; Samantha Barbas, From Privacy to Publicity: The Tort of Appropriation in the Age of Mass Consumption, 61 Buff. L. Rev. 1119, 1120 (2013); Mark P. McKenna, The Right of Publicity and Autonomous Self-Definition, 67 U. Pitt. L. Rev. 225, 225 n.2 (2005); Bloustein, supra note 123, at 985–86.
but courts have also permitted appropriation claims to proceed against expressive and/or political uses of a plaintiff’s identity.138See, e.g., Doe v. TCI Cablevision, 110 S.W.3d 363, 371 (Mo. 2003); Browne v. McCain, 611 F. Supp. 2d 1062, 1065, 1071 (C.D. Cal. 2009) (denying motion to strike right-of-publicity claim arising out of use of a song in a political commercial); cf. Cardtoons, L.C. v. Major League Baseball Players Ass’n, 95 F.3d 959, 968, 970 (10th Cir. 1996) (holding that use of baseball players’ likenesses on parody trading cards violated Oklahoma right-of-publicity statute prima facie, but First Amendment precluded liability).
Of course, many deepfakes—such as simulated product endorsements—are cut-and-dried appropriation. But harmful deepfakes may lack the indicia of “advantage” that delimit the appropriation tort. Deepfakes may be disseminated without any prospect of direct or indirect pecuniary gain. While a number of cases permit appropriation or right-of-publicity claims against noncommercial speech, these cases tend, at the least, to involve expressive media sold for profit or situations in which the plaintiff’s reputation or standing generates publicity for the defendant.139See supra note 136 (collecting cases). For an example closer to the paradigmatic deepfake scenario of private, noncommercial use, see Jarrett v. Butts, 379 S.E.2d 583, 585 (Ga. Ct. App. 1989) (“no wrongful appropriation occurred” where defendant photographed plaintiff and the “photographs were never sold, published, or publicly displayed” (emphasis added)).
Meanwhile, noncommercial deepfakes do not trade on the plaintiff’s identity in quite the manner that the appropriation doctrine contemplates. Disseminating a deepfake in an online chat between schoolmates, for example, would violate anti-deepfake statutes but probably wouldn’t support a claim of appropriation.140Cf. Jarrett, 379 S.E.2d at 585.
An advertisement would qualify as appropriation if it used a plaintiff’s name or her picture; whether the invocation is written or pictorial, the same identity is employed for the same benefit. By contrast, sharing a written sexual fantasy that refers to another party by name would not violate an anti-deepfake statute, but depicting that fantasy in a deepfake would. The written fantasy and the deepfake both employ the depicted person’s identity for the same end. What distinguishes the deepfake is that it employs a more outrageous mode of expression.141For further discussion of “mode of expression” as a legal concept, see infra note 296.
b. “Highly Offensive” Is the Limiting Principle
Because anti-deepfake laws target outrageous expression as such, they elevate offensiveness in a way that appropriation doctrine may not. In this respect, they resemble the tort of false light invasion of privacy, which gives recourse to a plaintiff who was “place[d] . . . before the public in a false light” that “would be highly offensive to a reasonable person.”142 Restatement (Second) of Torts § 652E (A.L.I. 1977).
Paradigmatic false light claims include a newspaper’s false insinuation that a family was living in “dirty and dilapidated conditions” and a tabloid’s use of the plaintiff’s photograph to illustrate a fictitious story of a centenarian who became pregnant after an extramarital affair.143Cantrell v. Forest City Publ’g Co., 419 U.S. 245, 248 (1974); Peoples Bank & Tr. Co. of Mountain Home v. Globe Int’l Publ’g, Inc., 978 F.2d 1065, 1067 (8th Cir. 1992).
Unlike appropriation, false light incorporates the same falsity requirement as defamation.144Cf. Solove, supra note 116, at 550.
Expression that “cannot ‘reasonably [be] interpreted as stating actual facts’ about an individual” cannot support a false light claim.145Khodorkovskaya v. Gay, 5 F.4th 80, 84 (D.C. Cir. 2021). New York Times v. Sullivan’s “actual malice” standard applies to false light claims involving “matters of public interest.” Time, Inc. v. Hill, 385 U.S. 374, 387–88 (1967). A false light claim may be predicated on true statements, but only if the way in which those truths are presented creates a specific false impression of fact. Sack, supra note 100, § 12:3.1, at 12-20 to -22.
A recent paper by John Goldberg and Benjamin Zipursky argues for extending false light to address deepfakes that disclaim their inauthenticity. Although they do not contest the authority requiring false light claims to rest on a specific false statement, Goldberg and Zipursky argue that “[f]or a whole range of non-newsworthy statements that are publicly disseminated and would be found highly offensive by a reasonable person, there should be no requirement to sort the true from the false because they should be actionable either way.”146Goldberg & Zipursky, supra note 39, at 481–82 (emphasis added).
They specifically observe that pornographic deepfakes “should not become nonactionable just by virtue of a visible disclaimer (of nonauthenticity) on the image itself.”147Id.
False light, they argue, protects a person’s interest in controlling the presentation of “ordinarily private aspects of their lives” to the public—an interest presumably implicated whether or not an offensive depiction is understood as factual.148Id. at 475 (emphasis omitted); see id. at 476–78.
Goldberg and Zipursky correctly apprehend that the harm of pornographic deepfakes—unlike defamation or information-privacy harm—is independent of factual truth or falsity, because deepfakes are “highly offensive” either way.149Id. at 482.
Whether understood as factual or not, pornographic deepfakes interfere with self-determination, as Citron, Franks and Waldman, and others argue.
The theory underlying anti-deepfake laws, then, is a hybrid of false light and appropriation. Today’s anti-deepfake statutes redress the injury that appropriation redresses, but subject to the offensiveness limitation that appears in the false light tort. By focusing on the most offensive uses of identity—those that (1) are pornographic150Although existing anti-deepfake laws single out sexual media for special treatment, it may be that legislators will deem other representations offensive enough to warrant prohibition. In mid-2025, LeBron James sent a cease-and-desist letter to the creators of a tool that could generate photorealistic imagery depicting James, dressed in his Lakers uniform, with a protruding, pregnant abdomen. Jason Koebler, LeBron James’ Lawyers Send Cease-and-Desist to AI Company Making Pregnant Videos of Him, 404 Media (July 24, 2025), https://www.404media.co/lebron-james-lawyers-send-cease-and-desist-to-ai-company-making-pregnant-videos-of-him/ [perma.cc/BEW6-ZYUG]. Goodyear observes correctly that “[r]eputational harms are not . . . limited to sexual deepfakes” and lists, as an example, depicting someone “[w]earing culturally inappropriate dress.” Goodyear, supra note 19, at 948. If legislatures choose to prohibit harms from nonpornographic depictions, it will tee up hard questions. Are irreverent deepfakes—a pregnant LeBron or an immodestly dressed “tradwife” influencer—profoundly personal trespasses that ought to be prohibited, or are they social commentary that ought to be protected?
and (2) involve the manipulation of persons’ realistic visual likenesses rather than merely the invocation of their names—anti-deepfake laws incorporate something akin to false light’s “highly offensive to a reasonable person” requirement, which is a limitation the appropriation tort lacks.151Compare Restatement (Second) of Torts § 652E (A.L.I. 1977), with id. § 652C.
At the same time, anti-deepfake statutes eschew false light’s requirement of a false statement and appropriation’s focus on uses that advantage a defendant.
The “highly offensive” nature of deepfakes relates to a specific victim, not the general public. This is an important distinction; although it is unconstitutional to ban “express[ing] ideas that offend” in general,152Matal v. Tam, 582 U.S. 218, 223 (2017).
offense inflicted upon a specific person is an element of numerous legal claims.153See, e.g., Restatement (Second) of Torts § 19 (A.L.I. 1965) (battery); id. § 652E (A.L.I. 1977) (false light); see also Joel Feinberg, Offense to Others 10–22 (1987).
Nonconsensual deepfakes are highly offensive in the relevant sense because of (a) what they depict and (b) the circumstances of their production. Both ingredients matter. A depiction of a nude body is not highly offensive in itself.154Of course, attitudes vary. See, e.g., Justice Department Ends Nude Cover-Ups, NBC News (June 26, 2005), https://nbcnews.com/id/wbna8360632 [https://perma.cc/AL5H-QM5G] (describing nude statues in the Justice Department’s atrium being restored to full view after being obscured by drapes during the tenure of Attorney General John Ashcroft).
Neither is a nonconsensual use of an identifiable person’s likeness as such. What makes nonconsensual deepfakes offensive is that they are sexual depictions, rendered in a particular representational style, without the consent of the person depicted. The conjunction of these properties distinguishes deepfakes from sexual depictions generally and nonconsensual depictions generally. Unlike the harm to the general public that obscenity law addresses, the offensiveness of deepfakes is specific and relational.155Feminist legal scholarship differentiates between the asserted harms of obscenity and the harms of pornography. See Catharine A. MacKinnon, Not a Moral Issue, 2 Yale L. & Pol’y Rev. 321, 329 (1984) (quoting Louis Henkin, Morals and the Constitution: The Sin of Obscenity, 63 Colum. L. Rev. 391, 395 (1963)); id. at 332.
Thus, Citron, Goldberg, and Zipursky are all correct, because they say essentially the same thing.156Citron does not parse the different privacy torts as finely as Goldberg and Zipursky do. Goldberg and Zipursky acknowledge that false light “shares with . . . appropriation . . . a focus on the wrong of interfering unduly with how (and whether) others perceive them, how others imagine, think, and feel about them, and especially how (and whether) others dwell upon ordinarily private aspects of their lives.” Goldberg & Zipursky, supra note 39, at 475 (emphasis omitted). Similarly, in an article that does not discuss pornographic deepfakes, Robert Post and Jennifer Rothman “postulate . . . an ideal tort” that they call the “right of dignity.” Robert C. Post & Jennifer E. Rothman, The First Amendment and the Right(s) of Publicity, 130 Yale L.J. 86, 122 (2020). They explain, “[A] right of dignity would fill a gap in the existing dignitary torts” by restricting “highly offensive” appropriations of likeness irrespective of whether they defame, disclose private facts, or are intended to cause emotional distress. Id. at 124–25.
Goldberg and Zipursky propose that “false light” drop the requirement of a false statement. Meanwhile, Citron proposes extending appropriation to cover outrageous, noncommercial speech.157 Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age 137 (2022).
Their arguments share a premise: Certain offensive uses of likeness interfere with one’s interest in controlling how intimate aspects of one’s identity are presented to the public—even if those uses of likeness realize no tangible benefit for the defendant and assert no facts about the person depicted.
4. Why Getting the Theory Right Matters
There is a more fundamental reason for the confusion that surrounds anti-deepfake laws. Anti-deepfake laws do not just regulate a different harm from defamation or information privacy; they also regulate an entirely different subject matter. Defamation and information-privacy laws regulate materials that assert facts about persons. In other words, they regulate materials that purport to be records about persons. But anti-deepfake laws regulate outrageous depictions of persons, pure and simple. Regulating images because of how they depict someone is dramatically different from regulating images because of what facts they assert. Anti-deepfake laws do the former, while defamation and revenge-porn laws do the latter. This difference places anti-deepfake laws in a distinct category of regulation from fact-based regimes like defamation and information-privacy law, with a distinct constitutional jurisprudence. The discipline of semiotics explains this difference with a precision that has eluded scholars and legislators to date.
II. Semiotics Helps Us Understand Anti-Deepfake Laws
Semiotics is the study of “signs.”158Philipp Strazny, Semiotics, in Encyclopedia of Linguistics 949 (2005); Barton Beebe, The Semiotic Analysis of Trademark Law, 51 UCLA L. Rev. 621, 626–29 (2004).
A sign, in turn, “is an object which stands for another to some mind”; it is something that communicates meaning.159 Peirce on Signs: Writings on Semiotic by Charles Sanders Peirce 141 (James Hoopes ed., 2014).
The words in this Article are signs; an air raid siren is a sign; a photograph is a sign; a deepfake is a sign. These examples, like all signs, are representations that communicate some meaning to a beholder. The semiotic theory of the nineteenth-century scholar C.S. Peirce provides an analytical framework for probing the differences between deepfakes and other media, and this framework illuminates the subject matter and the harm that the typical anti-deepfake law regulates.
A. Peircian Icons, Indices, and Symbols
For Peirce, each sign involves a signifying element (e.g., the written word “book”); an object (e.g., the book that the signifying element represents); and an interpretant (“the understanding that we have of the sign/object relation”).160Albert Atkin, Peirce’s Theory of Signs, in The Stanford Encyclopedia of Philosophy (Edward N. Zalta & Uri Nodelman eds., Summer 2013 ed.), https://plato.stanford.edu/archives/sum2013/entries/peirce-semiotics [perma.cc/7AY8-K2JF].
Most relevant for present purposes are the trichotomous categories that Peirce propounded to taxonomize signs in terms of their relationship to their objects: icons, indices, and symbols.
- An icon is a sign that relates to its object through resemblance.161Richard J. Parmentier, Signs in Society: Studies in Semiotic Anthropology 4, 17 (1994); see also Atkin, supra note 160. This thumbs-up wingding, 👍, is an iconic representation of a fist with a thumb raised.
- An index is a sign that relates to its object through physical or temporal contiguity.162Parmentier, supra note 161, at 4, 17; see also Atkin, supra note 160. A footprint in sand is an indexical sign.163See Parmentier, supra note 161, at 8. Some force—very probably a foot—made contact with the sand and left the foot-shaped depression. We interpret the footprint as a sign that someone walked on the sand.
- A symbol is a sign that relates to its object by convention.164Id. at 6. Both the phrase “thumbs-up!” and the image 👍 are symbols that signify approval. This meaning is arbitrary. In a different linguistic or cultural context, the written words “thumbs-up!”, or the sounds of those words spoken, or the thumbs-up gesture, may not be meaningful. Or their arbitrarily assigned meaning may differ; in some cultures, for example, the thumbs-up gesture is obscene.165David Anderson, Matthew Stuart & Shayanne Gal, 5 Everyday Hand Gestures That Can Get You in Serious Trouble Outside the US, Business Insider (Jan. 5, 2019), https://businessinsider.com/hand-gestures-offensive-different-countries-2018-6 [perma.cc/WP8Z-28CB].
A sign can function in multiple ways at once. Photographs, for example, signify both iconically and indexically. Photographs mechanically record how their subject “really appeared” at a particular moment (and thus are indexical); they also generally “look like” their subjects (and thus are iconic).166See, e.g., Göran Sonesson, Visual Signs in the Age of Digital Computation, in Ensayos Semióticos: Dominios, modelos y miradas desde el cruce de la naturaleza y la cultura 1076–78 (Adrián Gimate-Welsh ed., 2000); see also infra Section II.B.
Although Peirce contributed much more to semiotics than just his icon-index-symbol trichotomy, the trichotomy suffices to clarify our understanding of deepfakes and the laws that regulate them. Up to this point, this Article has used words like “documentary photograph” or “video recording” in an effort to differentiate media that record perceptual information from media that merely depict things pictorially. But I used these words as approximations for concepts that Peircian semiotics allows us to discuss precisely. When I differentiated between “documentary” “records” like photographs and “non-documentary” “depictions” like drawings and paintings, what I was really doing was differentiating between indices and icons.
Peirce’s theory lets us describe exactly how deepfakes differ from the photographs and videos they resemble, and why these differences may be legally significant. The next two Sections, respectively, explain the semiotic relationship between a photograph or video recording and what it represents, and the distinct semiotic relationship between a deepfake and what it represents.
B. The Semiotics of Photographs and Video Recordings
Photographs bear an indexical relation to their objects. When film is exposed to light, the light’s interaction with the film determines the developed image.167Kris Paulsen, The Index and the Interface, 122 Representations 83, 86–87 (2013).
A direct, physical relationship exists between what the camera lens “saw” and what viewers of the photograph “see.” A similar relationship holds for digital photographs and videos.168Although the indexicality of digital photographs is a more contested proposition, this Article treats both digital and analog photographs as indices. For an overview of the debate, see id. at 87–89.
Our understanding that photographs are indexical helps explain why we often ascribe “truth claims” to photographs.169Tom Gunning, What’s the Point of an Index? Or, Faking Photographs, 25 Nordicom Rev. 39, 42 (2004).
We understand photographs to depict reality mechanically, and thus to fulfill a documentary function that drawings and paintings do not.170Id. at 41–42.
We refer to video evidence as a “smoking gun.”171See, e.g., David Wickert, ‘Smoking Gun’ Video of Georgia Vote Count Is Now Evidence Against Trump, Atlanta J.-Const. (Aug. 5, 2023), https://www.ajc.com/politics/smoking-gun-video-now-evidence-against-trump/J6ORVROLMRBPZHK2DYALIZJ624/ [perma.cc/9LX6-FWDT]; Kyle Schnitzer & Georgett Roberts, Video from Ed Sheeran Concert Is “Smoking Gun” in Marvin Gaye Copyright Case, N.Y. Post (Apr. 25, 2023), https://nypost.com/2023/04/25/video-from-ed-sheeran-concert-is-smoking-gun-in-marvin-gaye-copyright-case [perma.cc/8KZ4-5WP4].
A smoking gun is an indexical, and thus incriminating, sign that a gun was fired recently.172Smoking Gun, Wikipedia (2023), https://en.wikipedia.org/w/index.php?title=Smoking_gun&oldid=1161207239 [perma.cc/8JMN-H9AH]; Smoking Gun, Merriam-Webster, https://merriam-webster.com/dictionary/smoking+gun [perma.cc/687N-YK7P].
We understand video evidence, like a smoking gun, to be an indexical—indeed, essentially irrefutable—showing that something happened.
However, photographs don’t just signify indexically. What makes photographs especially powerful is that they also signify iconically. Photographs don’t merely record a phenomenon; they also resemble a contemporaneous visual experience of that phenomenon. In contrast to, say, a seismogram—which is an indexical record of an earthquake, but which doesn’t “look” or “feel” like an earthquake in any phenomenologically relevant sense—a photograph actually looks like the pattern of light that it records indexically.
Photographs’ indexical properties don’t make them immutably truthful. Photographs can be doctored or deceptively composed to misrepresent reality.173See generally Gunning, supra note 169; Paulsen, supra note 167; Katrina Geddes, Ocularcentrism and Deepfakes: Should Seeing Be Believing?, 31 Fordham Intell. Prop. Media & Ent. L.J. 1042 (2021).
Mischief results when photographs invite us to misinterpret them (or, in Peircian terms, when photographs produce interpretants that diverge from the photographs’ true relationships with their objects). That a photograph communicates a misrepresentation does not mean that the photograph is not an index. Rather, it means that people are misinterpreting the photograph. For example, Civil War photographers would move corpses from the locations in which they had fallen and pose them for photographs. The resulting photographs are still indexical signs. They just don’t indexically signify the visual appearance of the circumstances of a soldier’s death, which is what naïve viewers might expect. Rather, these photographs indexically signify the visual appearance of a scene that reflects the photographer’s compositional alterations.
Historical methods can help us assign the correct interpretant to a photograph that might otherwise mislead us. A historian studying a series of posed Civil War photographs that purported to depict a dead “sharpshooter” concluded, “[t]he type of weapon seen in these photographs was not used by sharpshooters. This particular firearm is seen in a number of [the photographer’s] scenes at Gettysburg and probably was the photographer’s prop.”174The Case of the Moved Body, Libr. of Cong., https://loc.gov/collections/civil-war-glass-negatives/articles-and-essays/does-the-camera-ever-lie/the-case-of-the-moved-body [perma.cc/YK4N-A7ZD].
The takeaway is that we interpret photographs in light of our knowledge and assumptions about what photographs communicate. Photographs, though indexical, can mislead; the veracity of our interpretations depends on the information we have available to assist us.
What makes photographs special, then, is not so much that they are indices, but that they are indexical icons. After all, every sign is an index of whatever caused it—we just may not find much of interest in what the sign signifies indexically. A hardcover book indexically signifies that paper underwent the bookbinding process, but readers are generally much more interested in the symbolic signification of the book’s text. A doctored photograph isn’t an indexical sign of how photons passed through a lens, but it is an indexical sign of post-hoc manipulation.175Indeed, analysts interpret photographs of Kim Jong Un not to learn what the dictator “truly” looks like, but rather to ascertain how North Korean propaganda entities choose to photoshop him. Sarah Emerson, Why Does North Korea Keep Photoshopping Kim Jong-un’s Ears?, Vice (June 21, 2017), https://vice.com/en/article/ywzjj5/why-does-north-korea-keep-photoshopping-kim-jong-uns-ears [perma.cc/75DW-3BZ2].
This tees up a crucial insight: as important as a sign’s actual relationship with its object is the relationship we think it has.
Photographs’ and video recordings’ status as indexical icons—and our corresponding convention of interpreting them as such—explains why the law often affords them singular treatment. Sometimes, indexical status alone makes media special. While police sketches are inadmissible hearsay, authenticated surveillance video is admissible evidence.176Compare, e.g., People v. Coffey, 182 N.E.2d 92, 94 (N.Y. 1962) (stating that composite sketch by police artist is “[o]rdinarily” inadmissible hearsay), with People v. Patterson, 710 N.E.2d 665, 667 (N.Y. 1999) (“[R]elevant videotapes and technologically generated documentation are ordinarily admissible under standard evidentiary rubrics.”).
Indexical evidence of some event, like surveillance footage or a murder weapon, can be admitted in court as substantive “real” evidence, while merely iconic evidence, like a wholly computer-generated recreation of an event, is “demonstrative” evidence.177See, e.g., United States v. Rembert, 863 F.2d 1023, 1028 (D.C. Cir. 1988); People v. Jennings, 324 N.W.2d 625, 627 (Mich. Ct. App. 1982). I thank Rebecca Wexler for suggesting this point to me.
In other contexts, what gives photographs a special legal position is that they are both indexical and iconic. Consider child pornography laws. There are many indexical signs of the sexual abuse of minors that these laws do not encompass, such as the results of a pregnancy test. Instead, child pornography laws focus on audiovisual child sexual abuse material because it both records real-life abuse and resembles that activity in the way we deem most morally salient.178See infra Section III.A.2.a. Some strictly iconic depictions of minors—such as an oil painting or a sculpture—are expressly exempted by the federal child pornography statute. See 18 U.S.C. § 2256(8)(B), (11).
C. The Semiotics of Deepfakes
Deepfakes do not indexically record visual phenomena as photographs do.179See Rebecca Uliasz, On the Truth Claims of Deepfakes: Indexing Images and Semantic Forensics, J. Media Art Study & Theory, Apr. 2022, at 63, 69.
Sure, deepfakes indexically signify something, just like a painting indexically signifies that paint made contact with canvas. What distinguishes deepfakes from photographs is not that deepfakes “aren’t indexical”; it’s that deepfakes don’t indexically signify visual phenomena in the way photographs do. Instead, what deepfakes signify indexically is the outcome of complicated statistical analyses in an AI model.180See id.
If image-generating AI produces a photorealistic picture of a White man in a lab coat when prompted to depict a “doctor,” that generated image may indexically signify certain patterns in the data that trained the AI. What that image doesn’t signify indexically, however, is that particular photons passed through a particular lens at a particular moment in time to create that particular image. Unlike photographs, which are indexical icons, deepfakes are icons of indexical icons.181Cf. id. at 68.
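To make the causal chain concrete, here is a minimal sketch using the open-source Hugging Face diffusers library; the model identifier, prompt, and seed are illustrative assumptions rather than a description of any particular deepfake tool. The point of the sketch is that the generated image is a function of the model’s trained weights, a text prompt, and a pseudorandom seed, and of nothing else:

```python
# A minimal sketch, assuming the Hugging Face "diffusers" library;
# the model identifier, prompt, and seed are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical model choice
    torch_dtype=torch.float16,
).to("cuda")

# The output is fully determined by the trained weights, the prompt,
# and the seed. No lens and no photons appear in this causal chain.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("a doctor", generator=generator).images[0]
image.save("doctor.png")

# Rerunning with the same weights, prompt, and seed (on the same
# hardware and library version) reproduces the same picture: the image
# indexically signifies the model's statistics, not a photographed scene.
```

Tracing such an image backward leads to parameters distilled from training data rather than to a moment in time and space, which is why the image can be a convincing icon of a photograph without being an index of any scene.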
Anti-deepfake laws’ photorealism requirement is simply a requirement that a covered deepfake must be a convincing icon of a photograph.182See supra Section I.A.2.
Because deepfakes are icons, not indexical records of a discrete, observable event—which is what we interpret photographs and video recordings to be—many of the rationales that justify regulating sexually explicit photographs do not apply to deepfakes. Photo- or videographic revenge porn presents a conventional information-privacy issue because it indexically documents a victim’s private actions and the actual appearance of private body parts. Similarly, photo- or videographic CSAM can be regulated on the ground that its creation is “intrinsically related to the sexual abuse of children”: indexical records of abuse require that abuse take place.183See New York v. Ferber, 458 U.S. 747, 759 (1982).
Correspondingly, deceptive deepfakes present a conventional defamation issue because we interpret them to be indexical records of perceptual facts, even though they are merely icons.184The term “dicentization” describes the interpretation of an icon as an index. Christopher Ball, On Dicentization, 24 J. Linguistic Anthropology 151, 152 (2014).
Both revenge porn and deceptive deepfakes assert factual propositions, because we understand both to be indices and we understand photographic indices to make truth claims.
By contrast, many deepfakes are, in context, obviously fake. They can’t reasonably be understood to make a truth claim about how photons struck a lens at a particular moment in time and space. The many grounds that justify regulating indexical images, or deceptive simulacra, are unavailable to regulate obvious deepfakes, which are mere icons. To regulate non-deceptive deepfakes, we must acknowledge that we are not regulating the dissemination of true or false factual propositions, as information-privacy and defamation law respectively do. Nor are we regulating the real-life conduct that the imagery appears to depict, as we are when we regulate indexical CSAM. Instead, we are regulating something closer to flag burning or blasphemous drawings: We are regulating outrageous uses of icons per se.
One counterargument might be that no matter how conspicuously disclaimed, viewers simply cannot regard deepfakes as anything other than assertions of fact.185See George, supra note 19, at 155–56, 158 n.237. As distinct from the claim that photorealistic depictions are always understood as assertions of fact, George also argues that “hyperrealistic” styles are always intended as statements of fact because they “aim at truth and assert knowledge” even when they carry disclaimers of falsity. See id. at 155. With respect, I believe this is mistaken. Representational styles are neither true nor false, and media can be both fictional and photorealistic without being false. See supra notes 37–39, 65–67 and accompanying text. To quote Searle, the novelist’s “speech act . . . does not commit her to the possession of evidence” and involves “no commitment to the truth of the proposition.” John R. Searle, The Logical Status of Fictional Discourse, 6 New Literary Hist. 319, 323 (1975).
This argument treats deepfake viewers like the apocryphal silent-film audience that fled the theater in fear, believing a train on screen was actually barreling into their seats.186Eric Grundhauser, Did a Silent Film About a Train Really Cause Audiences to Stampede?, Slate (Jan. 5, 2017), https://slate.com/human-interest/2017/01/the-silent-film-that-caused-audiences-to-stampede-from-the-theater.html [perma.cc/U688-PGCE].
Instead of positing that we believe anything that looks like a train barreling at us to be a train barreling at us, this argument posits that we reflexively assume that anything that looks like a photograph is a photograph.
Yet the differing regulation of disclaimers in pornographic deepfakes and electioneering deepfakes shows that even the legislatures that enacted these anti-deepfake statutes do not accept this counterargument. States that treat disclaimers as irrelevant in pornographic deepfakes have also passed laws regulating election-related deepfakes, and these laws expressly exempt electioneering deepfakes that contain disclaimers.187Compare Cal. Elec. Code § 20010(b) (West 2025), and Wash. Rev. Code § 42.62.020(4) (2025), with Cal. Civ. Code § 1708.86(d) (West 2025), and Wash. Rev. Code § 7.110.025(3) (2025). See also supra note 37.
That state laws treat disclaimers as effective in electioneering media but ignore them in sexual media is strong evidence that laws prohibiting pornographic deepfakes are regulating something beyond deception.
Moreover, a reasonable observer in 2024 doesn’t interpret all photorealistic media as photographic. Critics of deepfakes acknowledge that “as the public becomes more educated about the threats posed by deep fakes,” the public will quite sensibly doubt that photorealistic media is authentically photographic.188Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1785–86 (2019).
If we read anti-deepfake laws as establishing an irrebuttable presumption that it is always reasonable to interpret a photorealistic image as a photograph, then we are reading these laws to codify media illiteracy unmoored from present-day interpretive practice.189Cf. Geddes, supra note 173, at 1073–74 (questioning “the social utility of ocularcentrism”).
What the foregoing shows is that the premise of anti-deepfake laws isn’t, and shouldn’t be, that deepfakes inevitably assert facts in the same manner that indexical images do. Rather, the premise is that deepfakes are harmful even when they cannot reasonably be understood as documentary fact. Anti-deepfake laws self-consciously regulate icons qua icons, and they target pornographic uses precisely because of what those uses communicate. This means that anti-deepfake laws are in the business of regulating outrageous expressions of opinion, rather than true or false assertions of fact. This, in turn, might suggest that these laws are unconstitutional. American law, however, has a long and diverse history of content-based regulation of outrageous iconography, which Part III recounts.
III. Iconic Signs in American Law
This Part examines disparate American legal doctrines, each of which illuminates a distinct aspect of the typical anti-deepfake law. Trademark dilution law tells us that it has not been held unconstitutional to regulate nondeceptive uses of images simply because of their putative tendency to distort emotional attitudes towards the referents of those images.190See Jack Daniel’s Props., Inc. v. VIP Prods. LLC, 143 S. Ct. 1578, 1587, 1593 (2023); see also 2 Anne Gilson LaLonde, Gilson on Trademarks § 5A.01(6)(b) (2025) (explaining that the Supreme Court’s decision to not mention any First Amendment challenges to the Lanham Act’s tarnishment provision in Jack Daniel’s “may well put the speculation to rest about the provision’s constitutionality”).
CSAM case law tells us both that courts often conflate indices and icons,191See infra notes 256–262 and accompanying text.
and that, at least for morphed images, criminal prohibitions on nondeceptive icons have been upheld as constitutional.192See infra notes 227–228 and accompanying text.
Attitudes towards flag and effigy burning confirm both that desecrating a symbol is an act distinct from, and possibly more outrageous than, merely disparaging its object in words, and that this distinction has been legally dispositive.193See infra Sections III.B.1 & III.B.2.
The legal status of written sexual fantasies shows us that fantastical images can sometimes be treated as more “real” than fantastical words and thus illuminates anti-deepfake laws’ realism requirements.194Compare, e.g., United States v. Valle, 807 F.3d 508, 517 (2d Cir. 2015) (“there is no actual distinction to be drawn between the [defendant’s] ‘real’ and ‘fantasy’ chats [about kidnapping and torturing women]”), with DEFIANCE Act of 2024, S. 3696, 118th Cong. § 2(3) (as amended by S. Amend. 3049 and passed by Senate, July 23, 2024) (“The[] harms [of nonconsensual, pornographic deepfakes] are not mitigated through labels or other information that indicates that the depiction is fake.”).
These doctrines demonstrate both that anti-deepfake laws address time-honored concerns, and that bans on nondeceptive, disparaging uses of icons are precedented. But they also reveal the constitutional challenges that anti-deepfake laws present. Courts will have to consider head-on whether certain uses of icons can be proscribed simply because they are outrageous, without being able to deflect by mischaracterizing icons as indices or by relying on categorical First Amendment exclusions. In other words, courts will have to choose between our longstanding and legally enshrined impulses to regulate outrageous iconography, and a line of First Amendment case law that would seem to disfavor precisely such regulation.
A. Contemporary Bans on Outrageous Iconography
1. Trademark Dilution
The doctrine of trademark dilution by “tarnishment” illuminates why anti-deepfake laws single out pornographic media. Semiotically speaking, trademarks function as symbols rather than icons: They are arbitrary signs that designate a provider of goods or services.195See Beebe, supra note 158, at 637 & n.85.
But just as an iconic depiction of a person represents that person, a trademark symbol represents the brand to which it corresponds. Just as anti-deepfake laws prohibit degrading uses of images irrespective of their deceptiveness, dilution law allows brands to prevent uses of their marks that associate the marks with negative connotations—even if those uses do not convey false information or confuse consumers.196See Rebecca Tushnet, More than a Feeling: Emotion and the First Amendment, 127 Harv. L. Rev. 2392, 2398 (2014).
Trademark tarnishment occurs when a mark is used in an unsavory context, especially a sexual one.197See, e.g., V Secret Catalogue, Inc. v. Moseley, 605 F.3d 382, 385 (6th Cir. 2010) (dilution statute “creates a kind of rebuttable presumption, or at least a very strong inference, that a new mark used to sell sex related products is likely to tarnish a famous mark if there is a clear semantic association between the two”); Dallas Cowboys Cheerleaders, Inc. v. Pussycat Cinema, Ltd., 467 F. Supp. 366, 371, 377 (S.D.N.Y. 1979), aff’d, 604 F.2d 200 (2d Cir. 1979); 3 J. Thomas McCarthy, McCarthy on Trademarks and Unfair Competition § 24:67 (5th ed.).
Although run-of-the-mill trademark infringement requires a plaintiff to show a likelihood that consumers will be confused as to the source or origin of goods,19815 U.S.C. § 1114(1).
dilution is actionable even with no likelihood of confusion.199E.g., 15 U.S.C. § 1125(c)(1); N.Y. Gen. Bus. Law § 360-L (McKinney 2025).
Dilution law lets brands prohibit uses of symbols that ostensibly might imbue those symbols with undesirable emotional connotations.
As Tushnet notes, dilution law goes beyond defamation because it gives brands control over the social and emotional atmospherics of their marks, not just assertions of fact.200Tushnet, supra note 196, at 2392; see also Jessica Litman, Breakfast with Batman: The Public Interest in the Advertising Age, 108 Yale L.J. 1717, 1728 (1999).
Similarly, anti-deepfake laws may find cognizable harm even when the person who altered the images is the only person who has seen them, or when the imagery is so obviously fake that no reasonable observer could regard it as documenting fact.201See, e.g., Ala. Code § 13A-6-240(a)(2) (2024); Fla. Stat. § 836.13 (2025); Tex. Penal Code Ann. § 21.165 (West 2025); see also supra Section I.A.1. Indeed, seemingly by coincidence, a recent paper uses the exact language of “tarnish[ment]” to describe how deepfakes harm the persons they depict. George, supra note 19, at 152.
However, while federal dilution law covers only commercial uses, anti-deepfake laws cover noncommercial uses, too.20215 U.S.C. § 1125(c)(3).
Scholars question trademark dilution’s constitutionality, and although dilution law has not yet been struck down as unconstitutional, the Supreme Court has in recent years invalidated bans on registering “disparag[ing]” and “immoral[] or scandalous” trademarks as impermissible viewpoint-based discrimination.203Iancu v. Brunetti, 139 S. Ct. 2294, 2297 (2019); Matal v. Tam, 582 U.S. 218, 223 (2017). Recently, in a high-profile case on remand from the Supreme Court, a trial court refused to adjudicate a First Amendment challenge to the dilution statute, holding the argument waived. VIP Prods. LLC v. Jack Daniel’s Props., Inc., No. CV-14-02057-PHX-SMM, 2025 WL 275909, at *15 (D. Ariz. Jan. 23, 2025). For skeptical views of trademark dilution’s constitutionality, see, for example, Mark A. Lemley & Rebecca Tushnet, First Amendment Neglect in Supreme Court Intellectual Property Cases, 2023 S. Ct. Rev. 85, 97–109 (2024), and Lisa P. Ramsey, Free Speech Challenges to Trademark Law After Matal v. Tam, 56 Hous. L. Rev. 401, 444, 455 (2018). For a different perspective, see Jake Linford, Retrenching Speech Protective Thresholds in Trademark Law, 73 Am. U. L. Rev. F. 191, 222–24 (2024).
2. Child Sexual Abuse Material (CSAM)
The law of CSAM is a tour of all the same semiotic issues and imprecise reasoning that deepfakes have elicited. CSAM jurisprudence reveals that courts have upheld the constitutionality of regimes that regulate nondeceptive icons qua icons. Peirce’s semiotics maps seamlessly onto the law of CSAM. The federal definition of “child pornography” encompasses (1) images whose production “involve[d] the use of a minor engaging in sexually explicit conduct,” (2) “digital . . . or computer-generated image[s] that . . . [are] indistinguishable from[] that of a minor engaging in sexually explicit conduct,” and (3) images that were “created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.”20418 U.S.C. § 2256(8).
The corresponding case law identifies three distinct categories of CSAM. The first is indexical imagery whose manufacture necessitates abusing children or photographing intimate parts of their bodies. The second is purely iconic “virtual” CSAM, which is constitutionally protected speech precisely because it is not indexical.205The Supreme Court has not yet considered “virtual” child pornography falling under the “indistinguishable from” prong of the statute. In a 2002 case, the Court held that earlier statutory language—which encompassed media that “appears to be[] of a minor engaging in sexually explicit conduct”—was unconstitutionally overbroad. Ashcroft v. Free Speech Coal., 535 U.S. 234, 241, 258 (2002). See also infra Section III.A.2.b.
The third category comprises nonsexual photographs of children that have been morphed to depict sexual conduct.
a. Indexical Images of Child Sexual Abuse
The rationale for restricting the first category of images—those whose production necessarily involves children’s participation—is for obvious reasons the least controversial. In New York v. Ferber, the Supreme Court held that the First Amendment did not bar a bookstore owner’s conviction under a New York statute that prohibited “promoting a sexual performance by a child.”206New York v. Ferber, 458 U.S. 747, 751–52 (1982) (quoting N.Y. Penal Law § 263.15 (McKinney 1977)).
The defendant, Ferber, sold films “depicting young boys masturbating” to an undercover police officer, and the jury had found that the images in question were not obscene.207See id. at 752, 759, 764–65. The New York statute, read literally, might have been broad enough to encompass altered images that depict sexual conduct in which no child actually participated, because its definition of “sexual conduct” included “simulated” depictions that “create[] the appearance of such conduct.” N.Y. Penal Law § 263.00(1) (McKinney 1977); id. §§ (3), (5). But the filings leave little doubt that the films in Ferber were actual records of children’s real-life conduct, and subsequent interpretations of the statute imply that a conviction could not be based solely on images altered to depict sexual conduct in which children did not actually participate. See People v. Foley, 692 N.Y.S.2d 248, 256 (N.Y. App. Div. 1999), aff’d, 731 N.E.2d 123 (N.Y. 2000).
There was apparently no dispute that the materials Ferber sold were indexical, videographic depictions.208Cf. Ferber, 458 U.S. at 764–65 (“[D]istribution of . . . depictions of sexual conduct, not otherwise obscene, which do not involve live performance or photographic or other visual reproduction of live performances, retains First Amendment protection.”).
The Court held that material of this sort is unprotected by the First Amendment, even if not obscene.209Id. at 752, 765.
It noted the “surpassing importance” of protecting children from sexual abuse, and it reasoned that the distribution of media recording minors engaged in sexual activity “is intrinsically related to the sexual abuse of children in at least two ways”: First, the images “are a permanent record of the children’s participation and the harm to the child is exacerbated by their circulation”; and second, controlling the production of child pornography requires controlling its distribution network.210Id. at 757–60.
The harms that Ferber identified depend on CSAM being an indexical sign. Most importantly, Ferber identified a harm “intrinsic[]” to the creation of child pornography—it requires that children engage in real-life sexual conduct.211Id. at 759–60.
Additionally, because the images record abuse that children suffered before the camera, their circulation “violates ‘the individual interest in avoiding disclosure of personal matters.’ ”212Id. at 759 n.10 (quoting Whalen v. Roe, 429 U.S. 589, 599 (1977)).
This framing depends on the images’ status as indexical records of real-life events. In a subsequent case, Osborne v. Ohio, the Court relied on these rationales to uphold bans on private possession of child pornography.213Osborne v. Ohio, 495 U.S. 103, 110 (1990).
Ferber and Osborne both teach that indexical CSAM’s documentary properties warrant treatment as a distinct category of unprotected speech under the First Amendment.
b. Iconic Images of Child Sexual Abuse
Just as Ferber establishes that indexical images of child abuse fall outside the First Amendment, it is equally clear that images with no indexical relationship to an actual child cannot be criminalized categorically as child pornography. In 1996, Congress expanded the definition of child pornography to include “any visual depiction . . . [that] appears to be[] of a minor engaging in sexually explicit conduct.”21418 U.S.C. § 2256(8)(B) (1996), invalidated by Ashcroft v. Free Speech Coal., 535 U.S. 234, 258 (2002).
In Ashcroft v. Free Speech Coalition, however, the Supreme Court held that this prohibition of so-called “virtual” child pornography was unconstitutionally overbroad because it covered expression protected by the First Amendment.215Free Speech Coal., 535 U.S. at 258.
The lack of an indexical relationship between virtual child pornography and the content it depicts was central to the Court’s reasoning: Such images “create[] no victims by [their] production.”216Id. at 250.
Another way of expressing the Court’s conclusion is to say that virtual child pornography is merely an iconic depiction of child abuse, and not an indexical record; precisely because the material is iconic rather than indexical, the Court could say that its production victimizes no one.217See id.
After Free Speech Coalition, Congress amended the “appears to be” language to instead ban visual depictions “indistinguishable from[] that of a minor engaging in sexually explicit conduct,”218Pub. L. No. 108-21, 117 Stat. 650, 678 (2003); 18 U.S.C. § 2256(8)(B).
though commentators believe that this language is unconstitutional for the same reasons.219Riana Pfefferkorn, Addressing Computer-Generated Child Sex Abuse Imagery: Legal Framework and Policy Implications 6–7 (2024), https://s3.documentcloud.org/documents/24403088/adressing-cg-csam-pfefferkorn-1.pdf [perma.cc/FMX8-56DC].
c. Manipulated Indices: Morphed Images
Free Speech Coalition left an important lacuna. The lawsuit did not challenge § 2256(8)(C) of the federal statute, which defined child pornography to include “visual depiction[s] . . . created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.”22018 U.S.C. § 2256(8)(C); Free Speech Coal., 535 U.S. at 252.
About these so-called “morphed” images, the Court noted only that “they implicate the interests of real children and are in that sense closer [than virtual child pornography is] to the images in Ferber. Respondents do not challenge this provision, and we do not consider it.”221Free Speech Coal., 535 U.S. at 242.
The Court’s brief discussion of § 2256(8)(C) seemed to acknowledge that the ontology of morphed images is more complicated than that of indexical records of abuse or strictly virtual icons. On one hand, unlike the videos in Ferber, morphed images are not records of physical abuse. On the other hand, they are images “of” “real children” in a sense that purely virtual images are not, because the starting point for a morphed image is an indexical image of a real-life child—albeit typically a nonsexual one.222See id.
Morphed images are criminalized in a way that reflects this ambivalent status. Even the Department of Justice has stated that “the production of a morphed image of child pornography is not as serious a crime as the production of genuine child pornography.”223United States Sentencing Commission, The History of the Child Pornography Guidelines 50–51 (2009), https://ussc.gov/sites/default/files/pdf/research-and-publications/research-projects-and-surveys/sex-offenses/20091030_History_Child_Pornography_Guidelines.pdf [perma.cc/YVR9-N2VJ].
Unlike its treatment of indexical CSAM, federal law prohibits neither simple possession of morphed images nor their creation without intent to distribute.22418 U.S.C. § 2252A(a)(7).
Rather, the statute prohibits receipt, distribution, and production with intent to distribute.22518 U.S.C. §§ 2252A(a)(2), (a)(7).
However, several state anti-deepfake laws prohibit simple possession of pornographic deepfakes depicting a minor.226See, e.g., La. Stat. Ann. § 14:73.13(A) (2025); Md. Code Ann., Crim. Law § 11-208(b)(1) (West 2025).
Although the Supreme Court has not considered the constitutionality of the federal criminalization of morphed images, the federal courts of appeals to consider the issue have universally upheld the statute, though they have split as to their rationale.227The New Hampshire Supreme Court, however, held a state statute unconstitutional as applied to the private possession of images that superimposed minors’ faces onto adult bodies engaged in sex acts. State v. Zidel, 940 A.2d 255, 256, 265 (N.H. 2008).
Three courts of appeals have held morphed images to be categorically unprotected speech, but the Eighth Circuit has assumed that morphed images, unlike indexical CSAM, are protected speech.228Contrast United States v. Mecham, 950 F.3d 257, 267 (5th Cir. 2020), and Doe v. Boland, 698 F.3d 877, 884 (6th Cir. 2012), and United States v. Hotaling, 634 F.3d 725, 730 (2d Cir. 2011), with United States v. Anderson, 759 F.3d 891, 895 (8th Cir. 2014).
The earliest and narrowest decision came from the Eighth Circuit in 2005 in United States v. Bach. Bach affirmed a conviction based on § 2256(8)(C) where the defendant had received an image of a minor exhibiting his genitals, onto which the face of a different minor had been superimposed.229United States v. Bach, 400 F.3d 622, 625, 631–32 (8th Cir. 2005).
The Eighth Circuit reasoned that, because the nude figure in the photo was a minor, “[u]nlike the virtual pornography protected by the Supreme Court in Free Speech Coalition, the picture . . . implicates the interests of a real child and does record a crime.”230Id. at 632.
The court was careful to note, however, that “[t]his is not the typical morphing case,” because the photo Bach received did not involve merely an “innocent picture of a child” but instead incorporated a photograph of the “lasciviously posed body . . . of” a second child.231Id.
For this reason, Bach held that “[a]lthough there may well be instances in which the application of § 2256(8)(C) violates the First Amendment, this is not such a case.”232Id.
The court stated that the image “involves the type of harm which can constitutionally be prosecuted under Free Speech Coalition and Ferber,” thereby suggesting that it was holding the image categorically excluded from First Amendment protection.233Id.
The Eighth Circuit revisited Bach nine years later in United States v. Anderson, in which a defendant had been convicted for superimposing a minor’s face onto a photograph of adults engaged in sexual conduct.234United States v. Anderson, 759 F.3d 891, 893, 895 (8th Cir. 2014).
Distinguishing Bach, the court in Anderson observed, “[n]o minor was sexually abused in the production of Anderson’s image. . . . [T]his difference is significant enough to distinguish Anderson’s image from the unprotected speech in Bach.”235Id. at 895.
Instead of holding the morphed image to be categorically unprotected by the First Amendment, Anderson treated the image as protected speech and held that the prohibition on morphed images satisfied strict scrutiny.
For all the care Anderson took in distinguishing Bach—and all the care Bach took in distinguishing Free Speech Coalition—Anderson’s strict scrutiny analysis was slapdash. The court described the government’s compelling interest as follows: “[M]orphed images are like traditional child pornography in that they are records of the harmful sexual exploitation of children. The children, who are identifiable in the images, are violated by being falsely portrayed as engaging in sexual activity.”236Id. at 896 (quoting Shoemaker v. Taylor, 730 F.3d 778, 786 (9th Cir. 2013)).
As for narrow tailoring, the court wrote, “the harm a child suffers from appearing as the purported subject of pornography in a digital image that is distributed via the Internet can implicate a compelling government interest regardless of the image’s verisimilitude or the initial size of its audience.”237Id.
Anderson’s compelling-interest analysis is incoherent. In stating that “morphed images are like traditional child pornography in that they are records of the harmful sexual exploitation of children,” Anderson got the ontology of morphed images exactly wrong—morphed images are unlike traditional child pornography precisely because they are not records of the harmful sexual exploitation of children.238Of course, this is only true of what Bach called “typical morphing case[s],” which involve nonsexual photographs of children. 400 F.3d at 632.
To call an image a “record” is to suggest that it indexically documents an event. This is how Bach used the term when it observed that the picture at issue in that case, an indexical image of a child’s genitalia, “record[s] a crime.”239Id. (emphasis added).
Moreover, in the very next sentence, Anderson acknowledged that morphed images are not records at all; rather than document an event that occurred, they “falsely portray[]” identifiable minors “as engaging in sexual activity.”240Anderson, 759 F.3d at 896 (emphasis added).
The harm that Anderson is describing is essentially a grievous form of libel.241See Amy Adler, Inverting the First Amendment, 149 U. Pa. L. Rev. 921, 990 & n.312 (2001).
If spreading such libel about a child is “sexual exploitation,” then morphed images perhaps constitute the “sexual exploitation of children,” but in no event do they record the sexual exploitation of children.
Even if Anderson’s compelling-interest analysis were coherent, it would still be irreconcilable with the narrow-tailoring analysis that follows it. The harm that the court identified to support the government’s compelling interest is the “false[] portray[al] [of victims] as engaging in sexual activity.”242Anderson, 759 F.3d at 896 (quoting Shoemaker v. Taylor, 730 F.3d 778, 786 (9th Cir. 2013)).
But just sentences later, in its narrow-tailoring analysis, the court stated that “the harm a child suffers from appearing as the purported subject of pornography in a digital image . . . can implicate a compelling government interest regardless of the image’s verisimilitude.”243Id. (emphasis added).
This assertion belies the court’s earlier characterization of the harm of morphed images. If the harm is truly that the images “falsely portray” minors as having engaged in sexual activity, then verisimilitude matters. If expression doesn’t purport to assert facts, then it might be fictional, but it isn’t false.244See supra Section I.B.4.
In the defamation context, if speech cannot “reasonably be understood as describing actual facts,” then it cannot be actionably “false.”245Pring v. Penthouse Int’l, Ltd., 695 F.2d 438, 440 (10th Cir. 1982).
But the charge on which Anderson was convicted did not require a finding that a reasonable person would have perceived the morphed images as documentation of fact, and the statute does not allow a defendant to avoid liability by simply adding a disclaimer stating that an image has been morphed. The harm Anderson identifies is not a false portrayal of an identifiable child engaged in sexual activity, but a portrayal of any sort. The problem is not indices, or even icons that deceptively resemble indices, but icons pure and simple.
Courts hold morphed images categorically unprotected by ignoring the differences between indices and icons. In United States v. Hotaling, the Second Circuit affirmed a defendant’s conviction under §§ 2252A(a)(5)(B) and 2256(8)(C) for superimposing nonpornographic photographs of minors over the heads of adults photographed in sexually explicit circumstances. Citing Ferber and quoting Osborne—cases that both involved indexical records of minors in sexually explicit positions246See Osborne v. Ohio, 495 U.S. 103, 107 n.1 (1990); New York v. Ferber, 458 U.S. 747, 752 (1982). See also supra notes 211–212.
—the Second Circuit asserted that “emotional and reputational harms” sufficed to justify criminalizing possession of morphed images of a child’s likeness.247United States v. Hotaling, 634 F.3d 725, 726, 728–29 (2d Cir. 2011) (citing Osborne, 495 U.S. at 109–11).
Then, citing Free Speech Coalition’s discussion of indexical child pornography, Hotaling stated that “the Supreme Court has made it clear that the harm begins when the images are created.”248Id. at 730 (citing Ashcroft v. Free Speech Coal., 535 U.S. 234, 254 (2002)).
Hotaling thus held that morphed images “[are] not protected expressive speech under the First Amendment.”249Id. at 726–27.
Hotaling made three significant analytical moves. First, it extended Ferber and Osborne by reasoning that the “emotional and reputational harms” of the circulation of child pornography justify criminal prohibitions, even when the media are not indexical depictions of the real-life sexual abuse of children or of children’s actual bodies. Neither Ferber nor Osborne suggested that the “haunting” rationale applied to media that did not indexically record minors nude or engaged in sexual conduct. Even if the “haunting” rationale does apply to iconic media, neither Ferber nor Osborne held that “haunting” alone was a sufficient basis for holding child pornography laws constitutional.250Ferber, 458 U.S. at 759; Osborne, 495 U.S. at 109–10.
Second, Hotaling formulated the constitutional test to match Free Speech Coalition’s dicta about morphed images. Hotaling cited Free Speech Coalition’s holding that the virtual child pornography ban was unconstitutional because “the child-protection rationale for speech restriction does not apply to materials produced without children.”251Hotaling, 634 F.3d at 729 (quoting United States v. Williams, 553 U.S. 285, 289 (2008); Free Speech Coal., 535 U.S. at 258).
With this citation as support, Hotaling stated that to evaluate whether imagery is protected speech, “[t]he underlying inquiry is whether an image of child pornography implicates the interests of an actual minor.”252Hotaling, 634 F.3d at 729.
It then quoted Free Speech Coalition’s dictum that “morphed images . . . implicate the interests of real children and are in that sense closer to the images in Ferber” than purely virtual images are.253Id. (quoting Free Speech Coal., 535 U.S. at 242).
Hotaling concluded that “the interests of actual minors are implicated when their faces are used in creating morphed images that make it appear that they are performing sexually explicit acts” and thus that such images “are not protected expressive speech under the First Amendment.”254Hotaling, 634 F.3d at 729–30.
Hotaling reframed the constitutional inquiry to fit its purposes. In stating that the constitutional test is “whether an image . . . implicates the interests of an actual minor,” Hotaling did not rely on a Supreme Court case that actually so held. Rather, it relied on Free Speech Coalition’s holding that material that did not implicate the interests of an actual child was protected speech.255See Free Speech Coal., 535 U.S. at 254–56.
Hotaling’s formulation of the test is the inverse of Free Speech Coalition’s: If material does implicate the interests of a child, the First Amendment doesn’t protect it. This holding extends Free Speech Coalition because, as a matter of logic, a conditional does not entail its inverse.
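To make the logical gap explicit (the formalization is mine, not either court’s), let $P$ stand for “the image implicates the interests of an actual minor” and $Q$ for “the First Amendment protects the image.” Free Speech Coalition held, in effect, that $\neg P \rightarrow Q$; Hotaling asserted the inverse:

\[
\underbrace{\neg P \rightarrow Q}_{\textit{Free Speech Coalition}}
\quad\not\models\quad
\underbrace{P \rightarrow \neg Q}_{\textit{Hotaling}}
\]

The assignment on which $P$ and $Q$ are both true satisfies the left-hand conditional vacuously yet falsifies the right-hand one, so the entailment fails: material may implicate a real child’s interests and nonetheless remain protected speech.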
More importantly, Hotaling exploited the ambiguity that it had baked into its own formulation of the First Amendment test. Free Speech Coalition did not hold that “implicating” any interest of a minor necessarily relegates an image to the constitutionally unprotected category of child pornography. Rather, its focus was on distinguishing iconic imagery from indexical imagery—or, in its words, distinguishing virtual pornography from material that “caused [harm] to its child participants.”256See id. at 249 (emphasis added).
The images in Ferber implicated the interests of actual minor “participants”: They were documentary records of sexual conduct by minors, and their circulation would republish private matters and trigger haunting memories. The images in Hotaling, on the other hand, were simulacra of abuse; their production did not involve minor “participants” in the sense that Ferber used the word. The circulation of the images in Hotaling would expose children to the “risk of reputational harm and . . . the psychological harm of knowing that their images were exploited,” but their creation did not necessitate the abuse of a minor “participant.”257Hotaling, 634 F.3d at 730.
Hotaling’s third move was to equate the harms of morphed images with the harms of indexical records of child abuse. Recall that Hotaling relied on “haunting,” which is premised upon the harms of images’ circulation, rather than of their creation.258Carissa Hessick, The Limits of Child Pornography, 89 Ind. L.J. 1437, 1477–78 (2014).
Hotaling also asserted, “[T]he Supreme Court has made it clear that the harm begins when the images are created,” but it cited a page of Free Speech Coalition that discussed indexical records of child abuse, not morphed images. 259Hotaling, 634 F.3d at 730 (citing Free Speech Coal., 535 U.S. at 254).
Free Speech Coalition expressly “d[id] not consider” the ban on morphed images.260Free Speech Coal., 535 U.S. at 242.
Even federal law does not outlaw morphed images from the moment of creation.261See supra notes 224–225.
The “haunting” rationale “implies a return of a previous experience” and thus, while it may be grounds to regulate indices of child abuse, it is a flimsy basis for regulating icons.262Adler, supra note 241, at 990; see also Osborne v. Ohio, 495 U.S. 103, 143 n.18 (1990) (Brennan, J., dissenting).
Nonetheless, morphed-image case law shows that it has been held constitutional to prohibit some nondeceptive, noncommercial, outrageous uses of icons per se.
3. Morphed CSAM, Deepfakes, and the First Amendment
Judicial reasoning about morphed images is so tortured because Supreme Court precedents put appellate courts in an awkward position. Free Speech Coalition avoided adjudicating whether morphed images were protected speech. Eight years later, in United States v. Stevens, the Court held that depictions of animal cruelty were not categorically unprotected speech. Stevens acknowledged that “the First Amendment has permitted restrictions upon the content of speech in a few limited areas . . . . including obscenity, defamation, fraud, incitement, and speech integral to criminal conduct,” but rejected “a freewheeling authority to declare new categories of speech outside the scope of the First Amendment.”263United States v. Stevens, 559 U.S. 460, 468–69, 472 (2010) (cleaned up).
Soon after, the Court relied on Stevens to refuse categorical First Amendment exemptions for violent video games and false statements.264Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 792 (2011); United States v. Alvarez, 567 U.S. 709, 722 (2012).
Although Stevens hedged that there may remain “some categories of speech that have been historically unprotected, but have not yet been specifically identified or discussed as such in our case law,” lower courts read it to suggest an end to novel categorical exclusions from the First Amendment.265559 U.S. at 472; see State v. Katz, 179 N.E.3d 431, 453 (Ind. 2022).
For example, every state supreme court to consider the constitutionality of a revenge-porn statute has, citing Stevens, declined to hold revenge porn categorically unprotected speech.266Katz, 179 N.E.3d at 453; State v. Casillas, 952 N.W.2d 629, 637–38 (Minn. 2020); People v. Austin, 155 N.E.3d 439, 455 (Ill. 2019); State v. VanBuren, 214 A.3d 791, 807 (Vt. 2019); see also Ex parte Fairchild-Porche, 638 S.W.3d 770, 792–93 (Tex. App. 2021).
Free Speech Coalition and Stevens presented a dilemma for lower courts adjudicating constitutional challenges to morphed-image laws. The expedient option was to classify morphed images as unprotected “child pornography,” ignoring that they lack the indexical link to real-life sexual abuse that Ferber, Free Speech Coalition, and Stevens all presented as the defining justification for child pornography’s categorical exclusion from First Amendment protection.267Ashcroft v. Free Speech Coal., 535 U.S. 234, 250 (2002); Stevens, 559 U.S. at 471 (“Ferber presented a special case: The market for child pornography was ‘intrinsically related’ to the underlying abuse . . . .” (quoting New York v. Ferber, 458 U.S. 747, 759, 761 (1982))).
The alternative was to subject the bans to strict scrutiny.
Ultimately, three circuits chose the expediency of categorical exclusion, and the Eighth Circuit contorted itself to hold that morphed-image bans satisfied strict scrutiny.268United States v. Mecham, 950 F.3d 257, 267 (5th Cir. 2020); Doe v. Boland, 698 F.3d 877, 884 (6th Cir. 2012); United States v. Hotaling, 634 F.3d 725, 730 (2d Cir. 2011); United States v. Anderson, 759 F.3d 891, 895 (8th Cir. 2014).
No matter their approach, these courts relied on justifications for banning indexical images without acknowledging that morphed images are icons. The courts thereby managed to sustain the practice of banning outrageous iconography per se while purporting to adhere to Free Speech Coalition and Stevens.
A morphed-image case from the Fifth Circuit, United States v. Mecham, contains an instructive First Amendment analysis. Mecham justified holding morphed images categorically unprotected by observing that the federal definition of “child pornography” includes not just imagery that records criminal abuse, but also imagery that shows a “lascivious exhibition” of a minor’s genitals, like images taken surreptitiously with a hidden camera.269Mecham, 950 F.3d at 266. For a critique of classifying images not produced through sexual abuse as “child pornography,” see Adler, supra note 241, at 946–57.
True enough. But Mecham overlooked a more persuasive distinction between morphed images and indexical child pornography: Only the latter authentically records private facts about a person. Even if it does not document physical abuse, indexical child pornography is objectionable for the same reason that revenge porn is: Its distribution, as Ferber put it, “violates ‘the individual interest in avoiding disclosure of personal matters.’ ”270New York v. Ferber, 458 U.S. 747, 759 n.10 (1982) (quoting Whalen v. Roe, 429 U.S. 589, 599 (1977)).
Conspicuously fake morphed images, like deepfakes, do not.
Deepfakes represent a more troublesome extension of the same First Amendment problem that the morphed-image cases presented. Whereas morphed images proved relatively easy for courts to slot into an unprotected category of speech, deepfakes depicting adults do not fall into any recognized category of unprotected speech—at least, not if courts’ refusals to find revenge porn categorically unprotected are any indication. Assuming Stevens is correctly read as hostile towards new categories of unprotected speech that might encompass deepfakes, anti-deepfake laws will have to run the gauntlet of strict scrutiny.
B. Impermissible Bans on Outrageous Expression
Although dilution and morphed CSAM show that per se bans on outrageous iconography have avoided First Amendment invalidation, such bans are not uniformly constitutional. Attempts to outlaw flag burning, effigy desecration, and written sexual fantasies reaffirm that anti-deepfake laws respond to longstanding interests. Judicial curtailments of these attempted bans help us understand the circumstances that have failed to justify per se bans on outrageous expression.
1. Flag Burning
Historical regulation of flag desecration helps explain anti-deepfake laws’ focus on altered images. Like trademarks, flags operate as Peircian symbols when they signify a particular group or nation. For some, the American flag is a symbol not just of the country, but also “of freedom, of equal opportunity, of religious tolerance, and of good will for other peoples who share our aspirations.”271Texas v. Johnson, 491 U.S. 397, 437 (1989) (Stevens, J., dissenting).
Desecrating the flag can be a grave affront, so much so that at one time forty-eight states criminalized it.272Id. at 434 (Rehnquist, C.J., dissenting). On August 25, 2025, President Trump issued an executive order vowing “to restore respect and sanctity to the American Flag and prosecute those who incite violence or otherwise violate our laws while desecrating this symbol of our country, to the fullest extent permissible under any available authority.” Exec. Order No. 14341, 90 Fed. Reg. 42127 (Aug. 25, 2025).
In 1989, a 5–4 Supreme Court struck down a Texas ban on flag burning as unconstitutional.273Johnson, 491 U.S. at 399.
The Court held that Texas’s asserted “interest in preserving the flag as a symbol of nationhood and national unity” could not justify criminalizing political expression that took the form of flag burning.274Id. at 420.
Dissenting, Justice Stevens argued that the Texas statute did not proscribe the communication of “disagreeable ideas” so much as a particular “mode of expression.”275Id. at 437–38 (Stevens, J., dissenting).
“The concept of ‘desecration,’ ” he wrote, “does not turn on the substance of the message the actor intends to convey, but rather on whether those who view the act will take serious offense.”276Id. at 438.
“[E]ven if the actor knows that all possible witnesses will understand that he intends to send a message of respect, he might still be guilty of desecration if he also knows that this understanding does not lessen the offense taken by some of those witnesses.”277Id.
Justice Stevens’s argument about flag burning tracks intuitive attitudes about deepfakes. On his telling, what’s wrong with flag burning is not so much the proposition it expresses as the manner of expression.278Id.
The same is true of deepfakes, which elicit special outrage not because they express any particular proposition, but because they employ a particular manner of expression. For example, one pornographic deepfake site contained the disclaimer, “We respect each and every celebrity featured. The OBVIOUS fake face swap porn is in no way meant to be demeaning. It’s art that celebrates the human body and sexuality.”279Megan Farokhmanesh, Is It Legal to Swap Someone’s Face into Porn Without Consent?, The Verge (Jan. 30, 2018), https://theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal [https://perma.cc/NZ42-GYDX].
I can’t imagine this disclaimer placated anyone. Like Justice Stevens’s example of flag burning “intend[ed] to send a message of respect,” the “respect[ful]” creation of pornographic deepfakes to “celebrate[] the human body and sexuality” is still objectionable, because it is still desecration.
Nonconsensual deepfakes and flag burning provoke outrage not strictly because of the propositions they communicate, but because of their communicative method. But they do not cause identical sorts of outrage. Flag burning concerns only the use of a single, “unique” symbol.280See generally Johnson, 491 U.S. 397 (describing flag as “unique” throughout).
By contrast, to proscribe nonconsensual deepfakes is to proscribe infinitely many possible photorealistic depictions of every individual’s unique face. Flag burning disrespects a collective, but a deepfake desecrates an individual likeness (although commentators accurately observe that the institution of pornographic deepfakes evinces disrespect for women collectively).281See Jesselyn Cook, Here’s What It’s Like to See Yourself in a Deepfake Porn Video, HuffPost (June 23, 2019), https://huffpost.com/entry/deepfake-porn-heres-what-its-like-to-see-yourself_n_5d0d0faee4b0a3941861fced [https://perma.cc/E8DP-LFMC] (quoting Mary Anne Franks). See generally Wagner & Cetinic, supra note 2.
But anti-deepfake laws share the same fundamental stance as Justice Stevens in Texas v. Johnson: What must be regulated is not the expression of any particular proposition or idea, but rather a mode of expression that is simply too outrageous to tolerate.282See Johnson, 491 U.S. at 437 (Stevens, J., dissenting); see also infra note 296.
2. Libel, Emotional Distress, and Effigies
The legal and cultural significance of effigies helps us understand why anti-deepfake laws paradigmatically regulate uses of the likeness of another. In 1992, the late singer Sinéad O’Connor concluded her performance on Saturday Night Live by holding a photograph of Pope John Paul II up to the camera and tearing it.283Jon Caramanica, The Night Sinead O’Connor Took on the Pope on ‘S.N.L.’, N.Y. Times, Aug. 1, 2023, https://nytimes.com/2023/07/26/arts/music/sinead-oconnor-snl-pope.html [perma.cc/2JCJ-YN9Q].
Her act is now praised as a protest against sexual abuse within the Roman Catholic Church.284Id.
At the time, however, O’Connor’s demonstration led to death threats, a literal steamrolling of her records in Times Square, and significant damage to her musical career.285Ethan Alter, Why Sinéad O’Connor’s 1992 ‘Saturday Night Live’ Appearance Was ‘Like a Canceling’, Yahoo! Ent. (July 26, 2023), https://yahoo.com/entertainment/why-sin-ad-oconnors-1992-185050791.html [perma.cc/Y8E9-CAC4]; Simon Hattenstone, Sinéad O’Connor: ‘I’ll Always Be a Bit Crazy, but That’s OK’, Guardian (May 29, 2021), https://theguardian.com/music/2021/may/29/sinead-oconnor-ill-always-be-a-bit-crazy-but-thats-ok-rememberings [perma.cc/6C3B-CLY3].
Her protest probably would not have elicited the same outcry if, instead of destroying a photograph, O’Connor had simply stared into the camera and stated, “I protest sexual abuse in the Roman Catholic Church.” There’s something especially outrageous about defacing a realistic likeness of a person.286See Tushnet, supra note 46, at 705, 709.
Abrahamic religions acknowledge the power of images by forbidding visual depictions of the godhead.287Adler, supra note 42, at 163–64.
Islam forbids visual likenesses of prophets, and the past two decades have seen multiple outbreaks of deadly violence following Western political cartoonists’ publication of disparaging caricatures of the Prophet Muhammad.288Why Does Depicting the Prophet Muhammad Cause Offence?, BBC News (Oct. 4, 2021), https://bbc.com/news/world-europe-30813742 [perma.cc/65TJ-L7R8]; Dan Bilefsky, Denmark Is Unlikely Front in Islam-West Culture War, N.Y. Times, Jan. 8, 2006, https://www.nytimes.com/2006/01/08/world/europe/denmark-is-unlikely-front-in-islamwest-culture-war.html [https://perma.cc/C425-JSGN]; Nicki Peter Petrikowski, Charlie Hebdo Shooting, Britannica (Sep. 19, 2024), https://britannica.com/event/Charlie-Hebdo-shooting [perma.cc/E8FF-NZYX].
Perhaps in recognition of the special power of effigies, historical definitions of libel have acknowledged that the defacement of images can provoke legally actionable outrage. Blackstone explained with respect to criminal libel that “it is immaterial with respect to the essence of a libel, whether the matter of it be true or false; since the provocation, and not the falsity, is the thing to be punished criminally.”2894 William Blackstone, Commentaries *150–53.
Some historical sources suggest that civil libel could be shown from the defacement of images, such as burning a plaintiff in effigy.290Eugene Volokh, Symbolic Expression and the Original Meaning of the First Amendment, 97 Geo. L.J. 1057, 1065–66 (2009); see also Eyre v. Garlick (1878) 42 JPR 68 (QBD) at 68 (Eng.) (considering appeal of civil libel action based on the defendants “burning [the plaintiff] in effigy” and observing, “whether this was libellous or not was a question for the jury”); Brown v. Paramount Publix Corp., 270 N.Y.S. 544, 548 (N.Y. App. Div. 1934) (referring to “the ancient libel committed by the burning of the plaintiff in effigy”).
And to this day, courts permit claims for intentional infliction of emotional distress when a plaintiff can show that a defendant intentionally or recklessly caused severe emotional distress by defacing an effigy.291See Restatement (Second) of Torts § 46 (A.L.I. 1965); Bowman v. Heller, 651 N.E.2d 369, 376 (Mass. 1995) (imposing liability for intentional infliction of emotional distress on an employee who distributed images of his supervisor’s face superimposed on explicit photographs of nude women); Muratore v. M/S Scotia Prince, 656 F. Supp. 471, 482 (D. Me. 1987), aff’d in relevant part, 845 F.2d 347 (1st Cir. 1988). A number of anti-deepfake statutes omit an “intent to harm” requirement, Kadri & West, supra note 12, at 17, which suggests a purpose of protecting a broader dignitary interest rather than simply preventing harassment, Citron & Franks, supra note 108 (criticizing “intent to harm” requirements in revenge-porn laws as “convert[ing] what should be a sexual privacy law into a harassment law”).
In the late twentieth century, the Supreme Court interpreted the First Amendment to require defamation plaintiffs to prove the falsity of the defendant’s statement, at least where the subject of the statement was a matter of public concern.292Phila. Newspapers, Inc. v. Hepps, 475 U.S. 767, 776 (1986); Milkovich v. Lorain J. Co., 497 U.S. 1, 19–20, 20 n.6 (1990).
The First Amendment similarly limits public figures’ claims for intentional infliction of emotional distress.293Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 56 (1988).
The Court also explained that “[u]nder the First Amendment there is no such thing as a false idea.”294Gertz v. Robert Welch, Inc., 418 U.S. 323, 339 (1974).
Accordingly, insofar as a defacement of an effigy merely expresses outrageous disrespect and does not insinuate any factual proposition, it cannot be defamatory today.295Criminal libel—to the extent it remains a viable doctrine at all—also incorporates constitutional limitations on liability, such as a truth defense. See, e.g., State v. Turner, 864 N.W.2d 204, 209 (Minn. Ct. App. 2015); Tollett v. United States, 485 F.2d 1087, 1094 (8th Cir. 1973).
But the venerable history of legal and theological regulation of effigies suggests an understanding that images can provoke outrage that text may not.
As with pornographic deepfakes and flag burning, the chief harm of disparaging effigies derives from the mode of expression rather than the proposition expressed.296Of course, one might reasonably question whether the mode of expression can be separated from the proposition expressed. Some sitting Justices, however, have expressed openness to differentiating between a “mode of expression” and a “particular message or idea.” See Iancu v. Brunetti, 588 U.S. 388, 401 (2019) (Roberts, C.J., concurring in part); id. at 412 (Sotomayor, J., concurring in part).
A provocateur who wears a sandwich board displaying a written message of contempt for Muslims might, in some abstract sense, be said to communicate the same message as a provocateur who burns an effigy of the Prophet Muhammad. But the provocateurs’ communications differ meaningfully.297“The medium is the message.” See generally Marshall McLuhan, Understanding Media: The Extensions of Man (MIT Press 1994) (1964). Cf. R v. Coskun [2025] Westminster MC, [29] (Eng.), https://judiciary.uk/wp-content/uploads/2025/06/Rex-v-Hamit-Coskun.pdf [perma.cc/YCV9-MZGA] (implicitly differentiating between the defendant’s “provocative act” of burning the Quran and the “bad language” that “accompan[ied]” it).
Defacing an effigy is not a true or false statement; it is an outrageous act of disrespect. Anti-deepfake statutes don’t regulate propositional statements; they regulate uses of effigies.
3. Written Sexual Fantasies
Legal analysis of fantasies disseminated in writing illuminates anti-deepfake laws’ “realism” requirements.298See supra Section I.A.2.
In 2013, a jury convicted former New York City Police Department officer Gilberto Valle of conspiracy to commit kidnapping, based on messages from a sexual fetish website “in which Valle and three alleged co-conspirators discuss in graphic detail kidnapping, torturing, raping, murdering, and cannibalizing women.”299United States v. Valle, 301 F.R.D. 53, 59 (S.D.N.Y. 2014), aff’d in part, rev’d in part, 807 F.3d 508 (2d Cir. 2015).
Valle—who came to be known as the “Cannibal Cop”—had shared “Facebook photographs of women he knew” and exchanged detailed messages with other forum users about kidnapping, torturing, raping, and cannibalizing those women.300Id. at 59, 66–77.
Valle disclosed the women’s true first names, but never their surnames, and he consistently misrepresented details relevant to a potential kidnapping, such as where he lived, whether he had surveilled his putative targets, and where those targets lived.301Id. at 61, 85.
Valle maintained that these supposedly incriminating communications were in fact consistent with fantasy, and that the government failed to establish that he had the necessary criminal intent or that he entered into an actual agreement to commit kidnapping.302Id. at 83.
More than a year after Valle’s conviction, the district court entered a judgment of acquittal, which the Second Circuit affirmed.303Id. at 115; United States v. Valle, 807 F.3d 508, 528 (2d Cir. 2015).
The government had argued that, while some of Valle’s chats were “fantasy,” others demonstrated “real” criminal intent.304Valle, 807 F.3d at 516.
An FBI agent who reviewed Valle’s chats explained, “[i]n the [chats] that I believe[d] were fantasy, the individuals said they were fantasy,” while “the ones that I thought were real . . . . described dates, names and activities that you would use to conduct a real crime.”305Valle, 301 F.R.D. at 65.
Both the district court and the Second Circuit held that the “real” and “fantasy” chats were “indistinguishable”—both contained fabricated and fantastical elements, and even Valle’s “real” chats were unaccompanied by any effort to meet putative coconspirators in person—and thus the prosecution could not prove the necessary criminal intent.306Valle, 807 F.3d at 516–17, 523; Valle, 301 F.R.D. at 86.
In a similar case, United States v. Alkhabaz, the defendant, Jake Baker, posted to a public Internet newsgroup a story that “graphically described the torture, rape, and murder of a woman who was given the name of” one of his university classmates.307United States v. Baker, 890 F. Supp. 1375, 1379 (E.D. Mich. 1995), aff’d sub nom., United States v. Alkhabaz, 104 F.3d 1492 (6th Cir. 1997).
However, the defendant also transmitted written sexual fantasies in private emails to a single correspondent, and on the basis of these emails the government charged him with violating the interstate-threat statute, 18 U.S.C. § 875(c).308Id. at 1378–79, 1386.
The emails, among other things, expressed a desire “to do it to a really young girl” and described how one might abduct an unspecified woman in Baker’s dormitory.309Id. at 1377, 1387–88.
The district court quashed the indictment on the ground that the emails did not express an intent to act and were thus protected by the First Amendment. The court wrote, “Discussion of desires, alone, is not tantamount to threatening to act on those desires. Absent such a threat to act, a statement is protected by the First Amendment.”310Id. at 1388; see id. at 1389–90.
The Sixth Circuit affirmed on statutory rather than constitutional grounds. It interpreted the statute to require “that a reasonable person . . . would perceive” the ostensible threat “as being communicated to effect some change or achieve some goal through intimidation.”311United States v. Alkhabaz, 104 F.3d 1492, 1495 (6th Cir. 1997). The Sixth Circuit later concluded that Alkhabaz wrongly read an “intimidation requirement” into the statute. See United States v. Doggart, 906 F.3d 506, 512 (6th Cir. 2018).
The court concluded, “[N]o reasonable person would perceive such communications as being conveyed to effect some change or achieve some goal through intimidation. Quite the opposite, Baker and [Baker’s correspondent] apparently sent e-mail messages to each other in an attempt to foster a friendship based on shared sexual fantasies.”312Alkhabaz, 104 F.3d at 1496. For a contrasting case, see Dayton v. Davis, 735 N.E.2d 939, 941–42 (Ohio Ct. App. 1999).
If Valle or Baker had communicated their sexual fantasies by making pornographic deepfake imagery of identifiable women and sharing it in online chats, such conduct would undoubtedly be a violation of many criminal and civil anti-deepfake statutes. Yet several scholars have sympathized with Valle and criticized his prosecution as motivated by “discomfort, disgust, and confusion toward his online fantasy life.”313Andrew Gilden, Punishing Sexual Fantasy, 58 Wm. & Mary L. Rev. 419, 448–49 (2016); see, e.g., Thea Johnson & Andrew Gilden, Common Sense and the Cannibal Cop, 11 Stan. J. C.R. & C.L. 313 (2015); Michal Buchhandler-Raphael, Overcriminalizing Speech, 36 Cardozo L. Rev. 1667, 1691 (2014); Kaitlin Ek, Note, Conspiracy and the Fantasy Defense: The Strange Case of the Cannibal Cop, 64 Duke L.J. 901, 941 (2014); cf. Susan W. Brenner, Complicit Publication: When Should the Dissemination of Ideas and Data Be Criminalized?, 13 Alb. L.J. Sci. & Tech. 273, 379–83 (2002) (approving of the judgment in Alkhabaz). But see Nicholas Barnes, Note & Comment, The Cannibal Cop: Criminal Conspiracy in the Digital Age, 25 Temp. Pol. & C.R. L. Rev. 1, 14 (2016) (“There can be no doubt that the Cannibal Cop agreed with others to commit a crime.”).
Valle’s fantasies about torturing, raping, murdering, and cannibalizing female acquaintances surely strike many as deeply antisocial. In comparison to such ghoulishness, the fantasy that many pornographic deepfakes make manifest—“I’d like to see this person naked”—seems relatively innocuous.314The prosecution in Valle made this very point in its summation:
There is a reason why the word ‘fantasy’ gets sprinkled over and over again through every cross-examination. . . . It is because [when] we think of fantasies, we normally have a positive idea. You think of Mariah Carey . . . . Gil Valle’s fantasy is about seeing women executed. . . . That’s not a fantasy that is OK.
United States v. Valle, 301 F.R.D. 53, 107 (S.D.N.Y. 2014), aff’d in part, rev’d in part, 807 F.3d 508 (2d Cir. 2015).
For whatever reason, however, relatively few scholars have opposed anti-deepfake laws as encroachments on sexual fantasy.315But see Lara Karaian, Addressing Deepfake Porn Doesn’t Require New Criminal Laws, Which Can Restrict Sexual Fantasy and Promote the Prison System, The Conversation (Mar. 24, 2024), https://theconversation.com/addressing-deepfake-porn-doesnt-require-new-criminal-laws-which-can-restrict-sexual-fantasy-and-promote-the-prison-system-223815 [perma.cc/W82L-5XVG]. Cf. Carl Öhman, Introducing the Pervert’s Dilemma: A Contribution to the Critique of Deepfake Pornography, 22 Ethics & Info. Tech. 133, 139 (2020) (positing that “[d]eepfakes are impermissible when considered as a phenomenon and permissible when considered as isolated cases, whereas sexual fantasies are normally equally permissible on both levels”).
Some scholars even explicitly frame deepfakes as a kind of fantasizing that deserves to be punished.316See, e.g., Jacquelyn Burkell & Chandell Gosse, Nothing New Here: Emphasizing the Social and Cultural Context of Deepfakes, First Monday (Dec. 2, 2019), https://doi.org/10.5210/fm.v24i12.10287 (“[T]here is something powerfully disturbing and deeply wrong with being an involuntary participant in someone’s sexual fantasies (made manifest) and having your likeness co-opted for the sexual purposes of an (unknown) other.”); Regina Rini & Leah Cohen, Deepfakes, Deep Harms, 22 J. Ethics & Soc. Phil. 143, 147 (2022).
The widespread intuition seems to be that a fantasy remains a fantasy when it’s represented in writing, but it becomes “real” when represented in a photorealistic image.
But why would it be that disseminating a deepfake is a “real” harm, while disseminating gruesome text accompanied by nonpornographic photographs is constitutionally protected “fantasy”?317Cf. Gilden, supra note 313, at 471–72; Buchhandler-Raphael, supra note 313, at 1691.
Valle even co-opted his targets’ likenesses: Though his portrayals of sadistic sexual conduct were strictly symbolic, he disseminated targets’ photographs along with them.318Valle also “possessed images and videos involving acts of sexual violence against women,” although the district court opinion gives no indication that he modified images of his targets to impart sexual content. Valle, 301 F.R.D. at 96.
What Valle and Baker didn’t do, however, was manipulate an icon itself; Baker used text alone, and Valle paired innocuous photographs with appalling text. This is the only material difference between Valle’s and Baker’s chats and a deepfake offense. The failure of the prosecutions of Baker and Valle—and the sympathy afforded to these defendants by some of the academic commentariat—imply that this difference is legally and morally dispositive. Unlike a verbal disquisition on cannibalism, a deepfake is an iconic desecration of a person’s image. It is semiotics and the irrational power of images that explain why deepfakes are real criminal wrongs while Valle’s and Baker’s words were unactionable fantasies.
Table 1: Characteristics of Regulations of Outrageous Expression
C. In Summation
The disparate legal doctrines surveyed in this Part illuminate distinct aspects of the typical anti-deepfake law. Trademark dilution tells us that we are still bound today by laws that regulate nondeceptive uses of images because of their putative tendency to negatively affect emotional attitudes towards the referents of those images. CSAM jurisprudence reveals that federal courts of appeals have uniformly upheld the constitutionality of criminal prohibitions on nondeceptive, noncommercial uses of icons that do not disclose private facts. Flag-burning bans show that desecrating an icon or symbol can be distinctly more outrageous than using disparaging words, but their constitutional invalidation also shows that nonindividualized harm may not be sufficient grounds for regulating outrageous speech. The arc of defamation law shows that actionable reputation-harming statements must be false and not merely outrageous. And the legal status of written sexual fantasies suggests that anti-deepfake laws’ focus on photorealistic subject matter may be an essential limitation.
The constellation of features that characterize constitutional and unconstitutional regulations of outrageous iconography also underscores the constitutional precarity of anti-deepfake laws. On one hand, because anti-deepfake laws remedy harmful uses of images that target specific individuals, they are meaningfully distinct from unconstitutional attempts to regulate written sexual fantasies, nonindividualized harms like flag burning or virtual CSAM, or false statements generally. On the other hand, because covered deepfakes neither disclose private facts nor defame, and likely do not fall into an established category of unprotected speech, they differ meaningfully from the iconic manipulations that courts have held may constitutionally be banned. Given this legal context, Part IV explains what defensible regulation of deepfake pornography might look like.
IV. The Law of Deepfakes Is the Law of Icons, Not Indices
AI’s ability to synthesize photorealistic pornography has deeply concerned policymakers—particularly when that pornography depicts children.319See, e.g., Kinnard, supra note 8.
This technology also threatens to push the judiciary’s tortured semiotic reasoning to a breaking point. One issue in particular has added urgency to concerns about AI-generated pornography: In late 2023, researchers disclosed that a major dataset used for training image-generating AI contained hundreds of CSAM images.320 David Thiel, Identifying and Eliminating CSAM in Generative ML Training Data and Models 2, 8 (2023), https://doi.org/10.25740/kh752sm9123.
It is not yet clear how Free Speech Coalition’s holding applies to photorealistic, AI-generated CSAM321In early 2025, a district court dismissed a count alleging possession of obscene, AI-generated imagery of a minor under 18 U.S.C. § 1466A(b)(1), on the ground that the First Amendment protects the private possession of obscene materials, even if those materials depict children. United States v. Anderegg, No. 24-cr-50-jdp, slip op. at 3, 19–20 (W.D. Wis. Feb. 13, 2025), appeal docketed, No. 25-1354 (7th Cir. Mar. 3, 2025). The court declined to dismiss a separate count under § 1466A(a)(1) for production of child obscenity, id. at 20, and the defendant faces an additional distribution count for which he made no First Amendment challenge, id. at 13–14.
, or whether the presence or absence of indexical CSAM in training data will be legally significant. But interested parties are already calling for legislation to address the issue, and these calls will only intensify as image- and video-generating technology continues to develop and proliferate.322See, e.g., Kinnard, supra note 8.
Attempting to regulate all photorealistic imagery as if it were indexical results in doctrinal incoherence. Effective regulation of deepfakes requires employing the legal theories that regulate iconic imagery for its iconic properties.
A. The Law of Indices Cannot Address Deepfakes Coherently
One response to the AI-generated pornography crisis is to simply ignore the semiotic differences between iconic and indexical media and equate the two legally. This approach simplifies the regulatory game plan: Just add deepfake-related clauses to existing revenge-pornography statutes—as several states have already done323See supra note 71.
—and treat photorealistic AI-generated CSAM as equivalent to indexical CSAM. At least where AI-generated CSAM depicts identifiable children, prosecutions under the morphed-image prong of the child pornography statute could kick into high gear, since courts seem unbothered by the semiotic infirmities of this approach.32418 U.S.C. § 2256(8)(C). For a discussion of the semiotic confusion of morphed-image prosecutions, see supra Section III.A.2.c.
The law of indexical images is a powerful machine, but the complicated ontology of AI-generated images is already starting to grind its gears. Consider the problem of image-generating models trained on CSAM. Riana Pfefferkorn concludes that the federal child pornography statute prohibits media “generated using training data that included photographic CSAM” because such “[a]buse-trained” CSAM “ ‘involves the use of’ actual abuse” and thus meets the statutory definition of child pornography.325 Pfefferkorn, supra note 219, at 10 (quoting 18 U.S.C. § 2256(8)(A)).
This may be the best reading of the statute, but it also entails that the same language prohibits any abuse-trained AI-generated image that depicts sexually explicit conduct at all, irrespective of whether it appears to depict minors or adults.326Pfefferkorn writes that “it is likely constitutional to criminalize ML-generated images of child sex abuse where the ML model was trained on actual abuse imagery,” but that there are “significant constitutional issues with prosecuting non-abuse-depicting ML-generated images that may have photographic CSAM somewhere in their metaphorical DNA.” Id. at 25.
Recall that the definition of “child pornography” includes “any visual depiction . . . of sexually explicit conduct” when its production “involves the use of a minor engaging in sexually explicit conduct.”32718 U.S.C. § 2256(8).
“[S]exually explicit conduct” is defined broadly to include various forms of intercourse and nudity, without reference to the participants’ ages.328Id. § 2256(2)(A).
Thus, if training on photographic CSAM is all that is required for an image-generating AI’s output to “involve[] the use of a minor engaging in sexually explicit conduct,” then that output will fall within the federal statute’s coverage so long as it depicts anything that meets the broad definition of “sexually explicit conduct.”329Id. § 2256(2)(A), (8)(A).
The puzzles of applying the federal child pornography statute to AI-generated media illustrate the limits of indexicality as a rationale for regulating images.330Cf. Pfefferkorn, supra note 219, at 25 (“How far does the First Amendment allow Ferber’s ‘harm to real children’ rationale to extend when it comes to [AI]-generated imagery that, to look at it, has no connection to children or sex?”). New York v. Ferber, 458 U.S. 747 (1982).
The presence of indexical CSAM in AI training data is dreadful, but liability that hinges on this fact is both over- and underinclusive. If “involv[ing] the use of a minor engaging in sexually explicit conduct”33118 U.S.C. § 2256(8)(A).
is the evil, simpliciter, then every image produced by an abuse-trained AI model—no matter how innocuous its content—is tainted, because its production as a matter of fact involved the real-life sexual abuse of a minor. On the flip side, prohibitions on morphed CSAM and nonconsensual deepfakes show us that photorealistic iconicity alone can suffice for banning an image, even if the image has no indexical relation to real-life abuse and deceives no one. Nonconsensual, photorealistic pornography is regulated because it is an outrageous use of icons, not because it indexically documents abuse.
Judges and lawmakers are already being invited to confront photorealistic, AI-generated pornography with the same haphazard semiotic analyses that morphed-image cases like Anderson and Hotaling employed.332See supra Section III.A.2.c.
In September 2023, the attorneys general of fifty-four states and territories sent a letter urging Congress to “act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM.”333Letter from National Association of Attorneys General to Patty Murray, President Pro Tempore; Kevin McCarthy, Speaker of the House; Chuck Schumer, Senate Majority Leader; Steve Scalise, House Majority Leader; Mitch McConnell, Senate Minority Leader; and Hakeem Jeffries, House Minority Leader, Artificial Intelligence and the Exploitation of Children (Sep. 5, 2023), https://naag.org/wp-content/uploads/2023/09/54-State-AGs-Urge-Study-of-AI-and-Harmful-Impacts-on-Children.pdf [perma.cc/6FKQ-7P52] (emphasis omitted).
The letter warns that AI-generated CSAM is “still problematic” because it is “often based on source images of abused children”; it “often still resembles actual children”; it “support[s] the growth of the child exploitation market by . . . stoking the appetites of those who seek to sexualize children”; and it is “quick and easy to generate.”334Id.
In support of the letter, South Carolina’s Attorney General Alan Wilson asserted that generating realistic-looking child pornography “creat[es] demand for the industry that exploits children.”335Kinnard, supra note 8.
The production of photorealistic, pornographic depictions of children is a deeply troubling phenomenon. But it is troubling because it is an outrageous and antisocial use of icons. The prosecutors’ letter, meanwhile, appeals to indexicality. Its first point is a strict indexicality argument; taking it seriously suggests that the prosecutors’ concern with pornographic images is far too narrow, since all outputs of abuse-trained AI are “based on source images of abused children.” The second point seems to rely on the same conflation of indices and icons that has been used to uphold morphed-image laws. The third point is an argument that Free Speech Coalition expressly rejected.336Ashcroft v. Free Speech Coal., 535 U.S. 234, 253 (2002).
The fourth point holds no independent weight; that these images are “easy to generate” is only bad if the images are themselves bad for other reasons.
Finally, Attorney General Wilson’s remarks contravene Free Speech Coalition’s admonition that “[p]rotected speech does not become unprotected merely because it resembles the latter.”337Id. at 255.
The argument that photorealistic AI-generated CSAM “exploits children” even when it does not depict an identifiable child depends on mistaking icons for indices. A vegan Impossible Burger resembles a hamburger, but this does not entail that Impossible Burgers “creat[e] demand for the industry that exploits” cows. It has been suggested that AI-generated CSAM could stoke demand for indexical CSAM produced specifically for training AI.338 Pfefferkorn, supra note 219, at 11.
But as Free Speech Coalition notes, “[t]he mere tendency of speech to encourage unlawful acts is not a sufficient reason for banning it,” and besides, this possible misconduct is unmoored from what makes photorealistic CSAM outrageous.339535 U.S. at 253.
I doubt many people’s alarm about AI-generated CSAM would be assuaged if they were assured that advances in image synthesis would not encourage the production of indexical CSAM for use as training data.340But some people would indeed be reassured! See Danielle Bernstein, Could AI-Generated Porn Help Protect Children?, Wired (Aug. 22, 2023), https://wired.com/story/artificial-intelligence-csam-pedophilia [perma.cc/L9MF-F2Z8] (positing “that AI-generated child sexual material could actually benefit society in the long run by providing a less harmful alternative to the already-massive market for images of child sexual abuse”).
The problem, once again, is not that AI-generated nonconsensual pornography indexically records harms, but that it iconically signifies something abhorrent.
B. The Law of Icons Coherently Addresses Deepfakes
Regulating AI-generated pornography doesn’t require willful ignorance of semiotic realities. As applied to images, defamation law, information-privacy law, and CSAM doctrine paradigmatically regulate actual or perceived indices. Deepfakes are icons, and they defy the rationales used to regulate indices. But we already have two relevant doctrines that regulate icons qua icons: obscenity and appropriation of likeness. These doctrines, unlike those that focus on images qua indices, regulate the aspects of deepfakes that actually trouble us. Indeed, if a nondeceptive deepfake causes harm of the sort anti-deepfake laws mean to redress, it does so either because it is obscene or because it wrongfully appropriates someone’s likeness.
1. Obscene Deepfakes Are Bad Because They Are Obscene
Obscenity is a constitutionally unprotected category of speech.341United States v. Williams, 553 U.S. 285, 288 (2008).
Unlike child pornography, however, obscenity is unprotected not because it indexically records harmful conduct, but because “obscene . . . . utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”342Roth v. United States, 354 U.S. 476, 485 (1957) (quoting Chaplinsky v. New Hampshire, 315 U.S. 568, 571–72 (1942)); see also Pfefferkorn, supra note 219, at 3.
Whether material is obscene depends on what is known as the Miller test, which considers (1) whether the work “appeals to the prurient interest”; (2) whether it depicts sexual conduct “in a patently offensive way”; and (3) whether it “lacks serious literary, artistic, political, or scientific value.”343United States v. Schales, 546 F.3d 965, 970 (9th Cir. 2008) (quoting Miller v. California, 413 U.S. 15, 24 (1973)).
The Miller test is not a backwards-looking examination of the circumstances of the media’s production, but a forward-looking inquiry into what the media communicates. The state regulates obscenity not because producing obscenity entails inflicting real-life abuse, nor because obscenity defames anyone or invades privacy, but because regulating obscenity fulfills a “governmental responsibility for communal and individual ‘decency’ and ‘morality.’ ”344Louis Henkin, Morals and the Constitution: The Sin of Obscenity, 63 Colum. L. Rev. 391, 391 (1963); State v. VanBuren, 214 A.3d 791, 800 (Vt. 2019) (“The purposes underlying government regulation of obscenity and of nonconsensual pornography are distinct . . . .”).
An obscenity-based approach to regulating pornographic AI output is already on the books, and federal prosecutors are beginning to employ it against AI-generated CSAM.345See generally United States v. Anderegg, No. 24-cr-50-jdp (W.D. Wis. Feb. 13, 2025).
While “child pornography” can be prohibited even if it is not obscene, a distinct “child obscenity” statute, 18 U.S.C. § 1466A, proscribes sexual images of children that are obscene.346See United States v. Williams, 553 U.S. 285, 297 (2008).
The different ontologies of obscenity and CSAM correspond to meaningful semiotic differences between paradigmatic obscenity and paradigmatic CSAM. Unlike the law of child pornography, which chiefly regulates indices, the law of child obscenity (and obscenity in general) focuses on icons. For example, § 2256 specifically excludes “drawings, cartoons, sculptures, or paintings” from its coverage of “image[s] . . . indistinguishable from” indexical child pornography, while § 1466A explicitly includes “visual depiction[s] of any kind, including a drawing, cartoon, sculpture, or painting.”347Compare 18 U.S.C. § 2256(11), with id. § 1466A(a)–(b).
These semiotic differences track the different harms that obscenity and CSAM laws address. The foundational harm targeted by CSAM laws is the harm inherent in indexical records of abuse. Photographing a child being sexually abused requires a child to be harmed; drawing a cartoon of such abuse does not. By contrast, the core harm of obscenity isn’t in how it’s produced, but in what it represents. An obscene photograph is just as bad as an obscene painting because both images are iconic signs for a worthless, antisocial message.
The child obscenity statute, and the catchall obscenity statutes, §§ 1460–62, would seem like the perfect vehicles for addressing the harms of AI-generated pornography. Yet obscenity law is in desuetude; Pfefferkorn reports that in the past twenty years “there have been over fifty-fold more federal court decisions citing 2252A than 1466A.”348 Pfefferkorn, supra note 219, at 8.
Why might this be? Pfefferkorn suggests that child-obscenity offenses are harder to prosecute than child-pornography offenses: “The knowing possession, receipt, or distribution of (photographic) CSAM is tantamount to a strict liability offense, whereas an obscenity case entails the more probing inquiry of the three-pronged Miller test.”349Id. at 8–9.
But Pfefferkorn also anticipates that § 1466A may become a more appealing vehicle for prosecutors as AI tools for producing photorealistic images proliferate and make it harder for the government to prove that the defendants’ images depict a real-life child.350Id. at 9, 16–20.
Pivoting to obscenity law to address photorealistic pornography has risks. One risk is that obscenity might not cover the breadth of cases that current child pornography or anti-deepfake laws would. For example, a recent Fifth Circuit decision held that graphic iconic and written depictions of violent sexual abuse of children were obscene, but a drawing of “an adolescent girl alone, reclining and apparently masturbating” with “no indication . . . [of] being forced to perform a sexual act” was not.351United States v. Arthur, 51 F.4th 560, 570 (5th Cir. 2022).
When it comes to non-child pornography, commentators are divided over obscenity’s present-day viability; some maintain that “[t]oday, pornography is ubiquitous and essentially legal,” while others call such arguments “short-sighted and, in many respects, incorrect.”352Compare Brian L. Frye, The Dialectic of Obscenity, 35 Hamline L. Rev. 229, 236 (2012), with Jennifer M. Kinsley, The Myth of Obsolete Obscenity, 33 Cardozo Arts & Ent. L.J. 607, 610 (2016). Cf. Kendra Albert, Imagine a Community: Obscenity’s History and Moderating Speech Online, 25 Yale J.L. & Tech. (Special Issue) 59, 71 (2023) (“Courts and commentators generally agree that the First Amendment protects most pornography.”).
The Miller test is surely harder to satisfy for pornography that only depicts adults, and obscenity prosecutions for pornography depicting adults are indeed rare—but they aren’t nonexistent.353See, e.g., United States v. Stagliano, 693 F. Supp. 2d 25, 27–28 (D.D.C. 2010); United States v. Ragsdale, 426 F.3d 765, 768, 769 & n.2 (5th Cir. 2005) (affirming obscenity convictions for two videos, each depicting rape of a “young woman,” one of whom is described as “about 20 [years] old”). The district court acquitted Stagliano and reportedly “called the government’s case ‘woefully lacking’ or ‘woefully inadequate.’ ” See Judgment of Acquittal, United States v. Stagliano, No. 08-cr-00093 (D.D.C. Sep. 2, 2010); Josh Gerstein, DOJ Stumbles Prompt Porn Purveyor’s Acquittal, Politico (July 16, 2010), https://politico.com/blogs/under-the-radar/2010/07/doj-stumbles-prompt-porn-purveyors-acquittal-028102 [perma.cc/EG7Q-6SFY].
But at least when AI-generated pornography depicts no identifiable person, Miller’s demands are a feature, not a bug. Unlike indexical CSAM, which is abhorrent because of what it records, iconic CSAM is abhorrent because of what it depicts. If AI-generated pornography depicts no recognizable person and the prosecution fails to prove that it is obscene, then the material is no more harmful than the nonobscene drawing in Arthur.35451 F.4th at 570.
Unlike doctrines that address indexical images, obscenity doctrine actually measures the relevant variable for nondeceptive deepfakes with no identifiable subject: a popular consensus that expression has gone beyond the pale.
A distinct risk is that obscenity might be too inclusive of expression that the public unjustly disfavors. Pfefferkorn warns that “increased reliance on obscenity law risks enshrining regressive social norms about sex, sexuality, and sexual orientation.”355 Pfefferkorn, supra note 219, at 21.
This is indeed a hazard, but it is a hazard inherent to the project of regulating the outrageous use of icons. Insofar as they proscribe nondeceptive media, anti-deepfake laws are self-consciously responding to the power that such “regressive social norms” hold over our lives.356Id.
Regressive social norms are what make nudity and sexuality a uniquely sensitive and shameful topic; they are what trigger harmful repercussions for sexual presentations that deviate from socially prescribed standards.357Citron is quite careful to acknowledge this very point. See Citron, supra note 6, at 1898 (“The recognition that intimate activity and nudity can be viewed as discrediting and shameful—and result in discrimination—is not to suggest that intimate behaviors and nudity are discrediting and shameful.”); see also Brenda Dvoskin, Speaking Back to Sexual Privacy Invasions, 99 Wash. L. Rev. 59, 62 (2024) (“[L]aw itself is a discourse that impacts on how society reads unwanted exposures: when the law punishes sexual privacy invasions, it can reify the expectation that unwanted exposures cause corrosive harm.”).
Just as they are premised on the power of regressive social norms, anti-deepfake laws are also premised on the notion that some uses of icons trigger irrational beliefs in reasonable people. A perfectly rational viewer who encounters an obvious deepfake won’t impute a photorealistic avatar’s actions to the person the avatar resembles. If this were the expected reaction to outrageous images, there would be no need for anti-deepfake laws to cover nondeceptive media, just as there would be no need for dilution law to cover tarnishing but nonconfusing uses of trademarks. That anti-deepfake laws do cover nondeceptive media shows that these laws assume that an ordinary person who encounters a deepfake will have an affective response rooted in sexual mores and irrational beliefs about iconography. If anti-deepfake laws’ very theory of harm derives from these widespread social biases and irrational beliefs, how can we expect to administer such laws without reference to those same biases and irrationalities?
Predicting exactly how obscenity law will address nonconsensual, pornographic deepfakes is outside this Article’s scope. My guess is that obscenity law will effectively regulate trafficking in AI-generated pornography depicting children, whether or not those children are identifiable.358See discussion supra note 321.
Obscenity law will probably be less effective at regulating deepfake pornography of identifiable adults.359See State v. VanBuren, 214 A.3d 791, 800–02 (Vt. 2019) (holding that nonconsensual pornography is not necessarily obscene); accord State v. Casillas, 952 N.W.2d 629, 639 (Minn. 2020); People v. Austin, 155 N.E.3d 439, 455 (Ill. 2019); Ex parte Fairchild-Porche, 638 S.W.3d 770, 782–83 (Tex. App. 2021).
In all events, an honest semiotic analysis shows that it is the law of icons, not the law of indices, that we must bring to bear on outrageous, nondeceptive deepfakes.
2. Appropriative and Offensive Deepfakes Are Bad Because They Are Appropriative and Offensive
Even before anti-deepfake laws, we had some legal scaffolding to address non-obscene, pornographic deepfakes: the cause of action for appropriation, discussed in Part I.360For discussion of criminal penalties for speech and strict-scrutiny analysis, see VanBuren, 214 A.3d at 812 and Citron & Franks, supra note 108, at 376–77.
To effectively combat deepfakes, appropriation must encompass noncommercial uses of likeness—as it already may in some instances, and as Citron has proposed361 Citron, supra note 157, at 137.
—and it may need to be buttressed with criminal penalties. This is what anti-deepfake laws, in essence, do: They extend appropriation to all dissemination of certain depictions of identifiable persons, irrespective of whether the dissemination realizes an “advantage” for the defendant. This is a place to which at least some courts, citing the First Amendment, have previously feared to venture.362Indeed, courts have interpreted the First Amendment to protect authors of expressive works irrespective of whether the work is ultimately sold for profit. See, e.g., De Havilland v. FX Networks, LLC, 230 Cal. Rptr. 3d 625, 638 (Cal. Ct. App. 2018); Guglielmi v. Spelling-Goldberg Prods., 603 P.2d 454, 460 (Cal. 1979) (Bird, C.J., concurring); see also supra note 136.
Appropriation regulates images qua icons, not qua indices: For example, Muhammad Ali used New York’s appropriation statute to enjoin Playgirl magazine from publishing “a full frontal nude drawing” purporting to depict him.363Ali v. Playgirl, Inc., 447 F. Supp. 723, 726, 729 (S.D.N.Y. 1978).
And appropriation recognizes that the relevant harm is a hijacking of identity without consent, not the creation of expression that would be objectionable even if consensual. Functionally equivalent is Goldberg and Zipursky’s proposal to extend false light to cover highly offensive speech that is not false.364Goldberg & Zipursky, supra note 39, at 482.
This reform requires not legislation that equates deepfakes with revenge porn, nor legislation that bans “false” depictions of persons, but legislation that bans highly offensive appropriations of likeness. This is exactly what an Australian criminal anti-deepfake statute does overtly; unlike its American counterparts, the law simply states that its prohibition “does not apply if . . . a reasonable person would consider transmitting the material to be acceptable, having regard to the” totality of the circumstances.365Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 (Cth) sch 1 item 5 (Austl.), https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/r7205_aspassed/toc_pdf/24071b01.pdf;fileType=application%2Fpdf [perma.cc/42VM-9DNZ]. Of course, one should think carefully about who the “reasonable person” contemplated by this law is. See supra note 155 and accompanying text.
State laws may be inching towards a similar assessment of overall aesthetic reasonableness—Oregon’s, for example, extends to “reasonably realistic” depictions.366 Or. Rev. Stat. § 163.472(3)(b)(B) (2026).
Moreover, existing American law already redresses similar harms: The appropriation tort and trademark dilution redress nondeceptive, offensive uses of icons in commerce; and the law of morphed images criminalizes nondeceptive sexual depictions of children even in the absence of physical abuse. If morphed-CSAM and dilution-by-tarnishment laws have thus far avoided constitutional invalidation, then this expanded version of appropriation should be able to coexist with them. And if we can’t abide banning outrageous iconography in the manner required to combat deepfakes, then we should reconsider the laws of morphed CSAM and tarnishment, too.
What, then, about the First Amendment? Almost certainly, at least some nonconsensual, pornographic deepfakes depicting adults are protected speech.367Cf. supra note 266 (discussing revenge porn).
By prohibiting only sexual deepfakes, the typical anti-deepfake law is a content-based restriction of protected speech.368The weight of authority holds that revenge-porn statutes are content-based restrictions on speech. State v. VanBuren, 214 A.3d 791, 811 (Vt. 2019); State v. Katz, 179 N.E.3d 431, 455 (Ind. 2022); Ex parte Fairchild-Porche, 638 S.W.3d 770, 782 (Tex. App. 2021); see also State v. Culver, 918 N.W.2d 103, 108 n.7 (Wis. Ct. App. 2018) (prosecution and defense stipulated that revenge-porn “statute is content-based”). The Supreme Court of Illinois held, over a dissent, that Illinois’s revenge-porn statute was content neutral because it “distinguishes . . . based on whether the disseminator obtained the image under circumstances in which a reasonable person would know that the image was to remain private and knows or should have known that the person in the image has not consented to the dissemination.” People v. Austin, 155 N.E.3d 439, 456–58 (Ill. 2019). This analysis is unpersuasive. “[T]he content of the image is precisely the focus of [the challenged statute].” Id. at 475 (Garman, J., dissenting). Similarly, the Ninth Circuit has held that California’s right-of-publicity law is a content-based speech restriction. Sarver v. Chartier, 813 F.3d 891, 905–06 (9th Cir. 2016). Moreover, a conclusion that anti-deepfake laws are content-based aligns with Supreme Court precedent. Laws that regulate only depictions of sex—as the paradigmatic anti-deepfake law does—are content-based restrictions on speech. United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 806, 811–12 (2000). So are laws that regulate speech based on its “emotive impact . . . on its audience.” Boos v. Barry, 485 U.S. 312, 321 (1988); Texas v. Johnson, 491 U.S. 397, 412 (1989).
Anti-deepfake laws will force jurists to assess whether cases like Free Speech Coalition, Stevens, Johnson, Tam, and Brunetti truly forbid banning expression simply because it is outrageous.369Cf. Matal v. Tam, 582 U.S. 218, 223 (2017) (stating “a bedrock First Amendment principle: Speech may not be banned on the ground that it expresses ideas that offend”).
Courts thus far have managed to dodge this question when considering deepfake-adjacent media like revenge porn and morphed CSAM. In revenge-porn cases, courts could emphasize that the regulated media indexically documents true, private facts.370See supra notes 108–111 and accompanying text.
In morphed-image cases, courts could make expedient use of the categorical First Amendment exception for “child pornography,” even though the Supreme Court’s rationale for that exception applies only to indexical images, and not really to morphed images.371See United States v. Stevens, 559 U.S. 460, 471 (2010) (discussing New York v. Ferber, 458 U.S. 747 (1982)).
When trademark dilution faces constitutional scrutiny, courts may invoke property rights.372Cf. Lemley & Tushnet, supra note 203, at 107 n.93.
Deepfakes offer none of these offramps.373Perhaps an explicit likeness-as-property regime, which bans on pornographic deepfakes eschew but which some proposed and enacted legislation provides, see, e.g., Tenn. Code Ann. § 47-25-1101 (2024); Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2025, S. 1367, 119th Cong. (introduced in Senate, Apr. 9, 2025), would provide a suitable offramp.
In other words, anti-deepfake laws will force courts to consider whether American law can regulate outrageous, nondeceptive, nonobscene iconography per se—something it has reliably done and continues to do today—even when jurists must admit that this is what the law is doing. Pornographic deepfakes may or may not be a category of speech that has “historic[ally] and traditional[ly]” been unprotected.374Stevens, 559 U.S. at 468 (quoting Simon & Schuster, Inc. v. Members of N.Y. State Crime Victims Bd., 502 U.S. 105, 127 (1991) (Kennedy, J., concurring in judgment)). For a perspicacious analysis of the problems that the “history and tradition” question raises, see generally Rebecca Tushnet, History and Tradition in First Amendment Intellectual Property Cases: A Critique, Marq. Intell. Prop. L. Rev. (forthcoming), http://dx.doi.org/10.2139/ssrn.5386352.
But in all events, deepfakes bear critical resemblances to forms of nondeceptive, iconic signification that have either been held categorically unprotected—such as morphed CSAM and obscenity—or whose regulation has thus far avoided constitutional invalidation—such as dilution by tarnishment and the tort of appropriation of likeness. This resemblance may or may not save the full sweep of a typical anti-deepfake statute from a constitutional challenge, but it does show the government’s established interest in the regulation of outrageous iconography per se. And if anti-deepfake laws’ continuity with historical prohibitions on the outrageous treatment of icons cannot save them, it is doubtful anything else could. At its least useful, then, my analysis gives proponents of anti-deepfake laws an honest way to lose a constitutional challenge they were doomed to lose; at its most useful, it gives them an honest way to win.
Conclusion
The federal government and dozens of states have enacted a welter of brand-new anti-deepfake laws. Existing theories of defamation and information privacy can explain much of these laws’ scope. But information privacy and defamation can’t account for the full breadth of a typical anti-deepfake law. This doctrinal mismatch shows not that anti-deepfake laws are overinclusive, but rather that they redress an injury distinct from the injuries redressed by defamation and information-privacy law. Anti-deepfake laws prohibit nondeceptive, outrageous uses of iconic signs, which cause harm independent of any factual proposition that they might communicate.
Although images are usually regulated for their indexical qualities—such as their ability to record harmful events or reveal private facts—or for deceptively resembling indexical records, several areas of American law regulate icons qua icons. Trademark dilution doctrine proscribes tarnishing uses of marks not because they cause confusion, but because even nondeceptive uses threaten to change observers’ attitudes. The law of morphed CSAM images posits that “a child suffers [harm] from appearing as the purported subject of pornography in a digital image . . . regardless of the image’s verisimilitude.”375United States v. Anderson, 759 F.3d 891, 896 (8th Cir. 2014).
The harm to a brand that is tarnished, or to a child who is depicted in an obviously fake morphed image, is a harm rooted not in detached rationality but instead in the emotional power of images. That doesn’t make the harm any less real.376“If [people] define situations as real, they are real in their consequences.” William I. Thomas & Dorothy Swaine Thomas, The Child in America: Behavior Problems and Programs 572 (1928) (I thank James Grimmelmann for bringing the “Thomas Theorem” to my attention).
Thus, anti-deepfake laws do not challenge us to examine whether our law can have any solicitude at all for irrational beliefs about images. The doctrines of trademark dilution and morphed images, as well as historical regulation of flag desecration and effigy burning, show that our law already extends solicitude to such beliefs. Rather, anti-deepfake laws challenge us to decide whether the law will extend that solicitude to the particular irrational beliefs about images that harm people—women, overwhelmingly377 Ajder et al., supra note 2, at 2.
—who appear in pornographic deepfakes without their consent. If the law can empower Coca-Cola to enjoin a (hypothetical) nonconfusing “Coca-Cola Strip Club,” might the law also empower an individual to enjoin nondeceptive, photorealistic, pornographic deepfakes that depict her? And if the law can’t abide one of these possibilities, should it really abide either of them?
We cannot justify the regulation of photorealistic, AI-generated icons using the same rationales we have used to regulate indexical images. Regulating nondeceptive deepfakes is not about deciding what private facts may be disclosed, or what lies may be told, or what abuse may be recorded—all of which are questions that a law of indices can answer. It is about deciding how our society will tolerate its members to be depicted. This is a question only a law of icons can answer.
* Assistant Professor of Law, University of Wisconsin Law School. The author thanks the editors of the Michigan Law Review, Amy Adler, Kendra Albert, Barton Beebe, Michael Beauvais, Robert Brauneis, Dinis Cheian, Bryan Choi, Brenda Cossman, Daniel Francis, Mary Anne Franks, Kat Geddes, Ruobin Gong, James Grimmelmann, Gautam Hans, Thomas Kadri, Matthew Kugler, Mark Lemley, Ela Leshem, Amanda Levendowski Tepski, Jonathan Masur, Joseph Miller, Helen Nissenbaum, Jacob Noti-Victor, Frank Pasquale, Riana Pfefferkorn, Lisa Ramsey, Blake Reid, Jennifer Rothman, Ted Sichelman, Joel Sobel, Katherine Strandburg, Rebecca Tushnet, Rebecca Wexler, Kathryn Woolard, and Mark Wu. Thanks also to Kris Turner at the University of Wisconsin Law School’s Law Library and to the participants in the Cornell Law School Academic Professionals Workshop, the 2024 Legal Scholars Roundtable on Artificial Intelligence at Emory Law School, the 2024 NYU Academic Careers Program Scholarship Clinic, the 2024 Richmond Junior Faculty Forum, the 2024 Privacy Law Scholars Conference, the 2024 Intellectual Property Scholars Conference, and to interlocutors at Cornell Law School and the Digital Life Initiative, Florida International University Law School, George Washington University Law School, the University of Illinois College of Law, NYU’s Information Law Institute, the University of Wisconsin Law School, and the University of Toronto Faculty of Law. This paper received the Ian Kerr Award for Best Paper at the 2024 Privacy Law Scholars Conference.