Destined to Deceive: The Need to Regulate Deepfakes with a Foreseeable Harm Standard

Political campaigns have always attracted significant attention, and politicians have often been the subjects of controversial—even outlandish—discourse. In the last several years, however, the risk of deception has drastically increased due to the rise of “deepfakes.” Now, practically anyone can make audiovisual media that are both highly believable and highly damaging to a candidate. The threat deepfakes pose to our elections has prompted several states and Congress to seek legislative remedies that ensure recourse for victims and hold bad actors liable. These recent attempts at deepfake laws are open to attack on two fronts. First, there is a question as to whether these laws unconstitutionally infringe on deepfake creators’ First Amendment rights. Second, some worry that these laws do not adequately protect against the most harmful deepfakes. This Note proposes a new approach to regulating deepfakes. By delineating a “foreseeable harm” standard, with a totality-of-the-circumstances test rather than a patchwork system of discrete elements, this Note addresses both major concerns. Not only is a foreseeable harm standard effective, workable, and constitutionally sound; it is also grounded in existing tort law. Moreover, a recent Supreme Court decision pertaining to false statements and the First Amendment, United States v. Alvarez, lends support to such a standard. Adopting this standard will combat the looming threat of politically oriented deepfakes while preserving the constitutional right to free speech.


In the last several years, deepfakes—altered audiovisual media that can make anyone look like they are saying or doing things they have not said or done—have infiltrated the internet.1 Dep’t of Homeland Sec., Increasing Threat of Deepfake Identities, []. Deepfakes are uniquely dangerous in two specific settings that have produced some of the most highly publicized and controversial examples. First, individuals can use “deep learning” technology to create fake pornographic content2 Alisha Anand & Belén Bianco, United Nations Inst. for Disarmament Rsch., The 2021 Innovations Dialogue Conference Report: Deepfakes, Trust & International Security 23–24 (2021).—a form of “revenge porn.”3Chance Carter, An Update on the Legal Landscape of Revenge Porn, Nat’l Ass’n of Att’ys Gen. (Nov. 16, 2021), []. Second, individuals can employ deepfakes in efforts to compromise political candidates and interfere with elections.4Lisa Kaplan, How Campaigns Can Protect Themselves from Deepfakes, Disinformation, and Social Media Manipulation, Brookings Inst. (Jan. 10, 2019), [].

As of October 1, 2023, nine states have adopted laws targeting one or the other of these two exigent threats.5See Va. Code Ann. § 18.2-386.2 (West 2023); Tex. Elec. Code Ann. § 255.004 (West 2023); Cal. Elec. Code § 20010 (West 2023); Ga. Code Ann. § 16-11-90 (West 2023); N.Y. Penal Law § 245.15 (McKinney 2023); Fla. Stat. § 775.0847 (2023); Haw. Rev. Stat. § 711-1110.9 (2021); Minn. Stat. § 609.771 (2023); Act of July 23, 2023, ch. 360, 2023 Wash. Sess. Laws 1892 (to be codified at Wash. Rev. Code § 42). None of the enacted laws have faced challenges, but there is speculation that these laws may violate the First Amendment because they are overbroad to the point of chilling protected speech.6Bradley Waldstreicher, Note, Deeply Fake, Deeply Disturbing, Deeply Constitutional: Why the First Amendment Likely Protects the Creation of Pornographic Deepfakes, 42 Cardozo L. Rev. 729 (2021); Alex Baiocco, Political “Deepfake” Laws Threaten Freedom of Expression, Inst. for Free Speech (Jan. 5, 2022), []. Other states and Congress have proposed, but not yet adopted, similar laws. Though each jurisdiction’s approach to regulating deepfakes is different, there are common features that make these existing and proposed laws both constitutionally troubling and ineffective.

Scholars have been quick to offer policy and constitutional arguments in support of regulating revenge porn.7See, e.g., Rebecca A. Delfino, Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act, 88 Fordham L. Rev. 887 (2019). However, laws targeting politically oriented deepfakes stand on more precarious ground, given the near-sacrosanct position that political discourse holds in our society.8See Lindsey Wilkerson, Note, Still Waters Run Deep(fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment, 86 Mo. L. Rev. 407, 429 (2021). Thus, this Note turns its attention to political deepfakes: as Congress and state legislatures tread into these murky waters, it is critical to craft a constitutionally sound legislative response to political deepfakes.

This Note highlights the dangers of unrestricted political deepfakes and the shortcomings of current deepfake laws in an effort to chart a path forward for deterring them: a new framework for attaching liability to creators and publishers. Part I traces the recent proliferation of political deepfakes and the laws trying to keep pace. Part II demonstrates why deepfake laws serve compelling government interests, are necessary to curb harm, and do not unconstitutionally chill speech. It also examines the constitutional obstacles to and policy problems with existing deepfake laws. Part III proposes a simplified approach, arguing for a “foreseeable harm” standard for liability, considered in light of the totality of the circumstances. By avoiding overinclusive and underinclusive liability tests that draw unnatural distinctions among deepfakes, as well as compelled speech and invalid time, place, and manner restrictions, this standard will ensure that deepfake laws are narrowly tailored to regulate only the most harmful deepfakes and the most culpable perpetrators. Furthermore, deploying a foreseeable harm standard is not a far cry from current practice: it simply shifts the “reasonable person” standard from the deepfake viewer’s perspective to that of the creator or sharer, and Congress has already considered a similar approach, though not in full.

I. The Problem with Political Deepfakes and Current Regulation Attempts

The threat of misinformation has taken center stage in recent elections,9E.g., Gabriel R. Sanchez & Keesha Middlemass, Misinformation Is Eroding the Public’s Confidence in Democracy, Brookings Inst. (July 26, 2022), []; David Klepper, Misinformation and the Midterm Elections: What to Expect, A.P. News (Nov. 3, 2022, 12:50 PM), []. and perhaps the most effective form of misinformation is falsified audiovisual media.10Matt Swayne, Video Fake News Believed More, Shared More than Text and Audio Versions, Pa. State Univ. (Sept. 21, 2021), []. Technology that creates misleading audiovisuals continues to advance, posing an ever-growing threat to the integrity of our elections.11E.g., Shannon Bond, As Tech Evolves, Deepfakes Will Become Even Harder to Spot, NPR (July 3, 2022, 7:54 AM), []. This Part examines how deepfakes affect the political realm, the complications that stand in the way of solutions, and the attempted legislative solutions thus far.

A. Recent Examples of Politically Oriented Deepfakes

Over the last several years, the world has been exposed to a wide variety of deepfakes, with differing levels of sophistication and differing objectives. A few key examples illuminate both the capabilities of deep learning technology and the stakes at play.

In 2019, the Massachusetts Institute of Technology’s “In Event of Moon Disaster” project released a video of President Richard Nixon announcing to the country that the Apollo 11 mission had failed, leaving the astronauts onboard stranded on the moon.12Asher Stockler, MIT Deepfake Video ‘Nixon Announcing Apollo 11 Disaster’ Shows the Power of Disinformation, Newsweek (Dec. 3, 2019, 2:39 PM), []. This startlingly believable video, created as a learning tool on deepfakes, depicted an event that never occurred: MIT used “deep learning” technology coupled with AI software to simulate Nixon’s movements and match the synthetic video to the dialogue.13Id.

In 2019, a video of then-Speaker of the House Nancy Pelosi circulated on the internet. In the video, she appeared to be intoxicated.14Drew Harwell, Faked Pelosi Videos, Slowed to Make Her Appear Drunk, Spread Across Social Media, Wash. Post (May 24, 2019, 4:41 PM), []. The deepfake did not alter the words Pelosi spoke, but it slowed down and distorted the audio itself to give the impression that she was inebriated.15Id. This time, the video was not a learning tool—instead, it was quickly circulated by political activists and viewed over 2.5 million times within the first few days of publication.16Id.; Doctored Nancy Pelosi Video Highlights Threat of “Deepfake” Tech, CBS News (May 26, 2019, 9:26 AM), [].

As tensions grew in 2022 in the ongoing armed conflict between Russia and Ukraine, Ukrainian President Volodymyr Zelenskyy became the target of a deepfake campaign.17Bobby Allyn, Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn, NPR (Mar. 16, 2022, 8:26 PM), []. A hacked Ukrainian broadcasting service’s website briefly contained a video that seemed to depict Zelenskyy telling Ukrainian soldiers to surrender to Russian forces.18Id. In fact, Zelenskyy’s face was technologically inserted into the video, with his mouth movements manufactured to match the overlaid dialogue.19See id. Though the deepfake had flaws, experts still worry about the impact it could have—and in fact already has had—on viewers.20These include both convincing viewers of this video’s veracity as well as raising doubts about any future video of Zelenskyy or other similar officials. Id. The video quickly reached several social media outlets, prompting a response from Zelenskyy himself,21The Ukrainian government had previously released a statement warning its citizens about this sort of deepfake. Id. and it is possible that “lower-quality versions of the video could take on a life of their own in other parts of the world.”22Id.

The above examples are all instances of political deepfakes. Though definitions of deepfakes differ slightly, they can generally be defined as synthetically modified photographs or videos that appear to depict events that did not, in reality, occur.23See Deepfake, []. The possibilities of what a deepfake can depict are endless. Within the political realm, deepfake creators can insert a candidate somewhere they were not; edit real, preexisting video of a candidate to depict people or objects that were not actually present in the original; force a candidate to say words that they never said; or construct and place a candidate in an entirely fictitious scenario.24For further examples, see Shannon Bond, It Takes a Few Dollars and 8 Minutes to Create a Deepfake. And That’s Only the Start, NPR (Mar. 23, 2023, 5:00 AM), [].

Of course, lies about politicians are not new: false depictions of politicians saying or doing damaging things date back to the nineteenth century.25Elaine Kamarck, A Short History of Campaign Dirty Tricks Before Twitter and Facebook, Brookings Inst. (July 11, 2019), []; see also Political Cartoons Developed Significantly During the Early Nineteenth Century, First Amend. Museum, []; Kareem Gibson, Note, Deepfakes and Involuntary Pornography: Can Our Current Legal Framework Address This Technology?, 66 Wayne L. Rev. 259, 281 (2020). Until recently, these false depictions were more easily sniffed out.26Kamarck, supra note 25. For a discussion of the importance of preserving political cartoons and similar critical speech in relation to a defamation claim brought by a public figure, see Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 53–55 (1988). Notably, though these caricaturized depictions are meant to express specific sentiments about public figures, they do not masquerade as real depictions. Even now, poorly made A.I.-generated media may still fail to deceive viewers.27Jack Nicas & Lucia Cholakian Herrera, Is Argentina the First A.I. Election?, N.Y. Times (Nov. 16, 2023), []. The significant role that deepfakes played in the Argentinian election should serve as a wake-up call to other nations around the world. In recent years, though, deepfake technology has reached new levels of sophistication, making it increasingly difficult to detect which images are real and which are not.28 Anand & Bianco, supra note 2. The examples above were highly publicized, making their falseness more readily discoverable; however, many deepfakes are not widely reported on, leaving individual viewers to decide for themselves what is real.29Thomas Nygren, Mona Guath, Carl-Anton Werner Axelsson & Divina Frau-Meigs, Combatting Visual Fake News with a Professional Fact-Checking Tool in Education in France, Romania, Spain and Sweden, Information, May 2021, at 1. 
Even highly publicized deepfakes are effective, as supporters of the message conveyed will still hold the image to be indicative of the truth, even after it has been discredited. See Drew Harwell, Doctored Images Have Become a Fact of Life for Political Campaigns. When They’re Disproved, Believers ‘Just Don’t Care.’, Wash. Post (Jan. 14, 2020, 7:00 AM), [] (describing how polarization allows people to insulate themselves from other narratives); see also Gerald G. Ashdown, Distorting Democracy: Campaign Lies in the 21st Century, 20 Wm. & Mary Bill Rts. J. 1085, 1092–94 (2012) (describing the harms of unregulated, false campaign speech). Not only will some viewers believe these false videos, but their existence also undermines the credibility of real videos due to the lingering possibility that a nefarious user manipulated them.30Jack Langa, Note, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761, 767 (2021). Langa, writing after the passage of Texas’s and California’s deepfake laws, similarly explored the danger of political deepfakes and how best to legislate against them. In addition to incorporating recent legislative developments, this Note also proposes a different solution. Though Langa highlights the importance of “likelihood to bring about [harm]” in assigning liability, id. at 787, he advocates for many of the features that this Note explicitly condemns, such as mandatory disclaimers, a reasonable viewer standard, and temporal cutoffs for liability. Id. at 789; see infra Part II. Thus, deepfakes’ increasing presence may lead to the delegitimization of news outlets,31Jackson Cote, Deepfakes and Fake News Pose a Growing Threat to Democracy, Experts Warn, Ne. Glob. News (Apr. 1, 2022), []; Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Calif. L. Rev. 1753, 1784–85 (2019). 
and viewers may selectively believe misinformation tailored to their preconceived notions, leading to further entrenchment of their views.32Cote, supra note 31; Chesney & Citron, supra note 31, at 1768. This is further expounded infra Section II.B.3.

Hand in hand with these concerns is the threat political deepfakes pose to our elections.33Tim Mak & Dina Temple-Raston, Where Are the Deepfakes in This Presidential Election?, NPR (Oct. 1, 2020, 5:05 AM), []. A deepfake creator can seek to “damage the reputation of [a candidate], incite a political base, or undermine trust in the election process.”34 Dep’t of Homeland Sec., supra note 1. Creators can also use deepfakes to shape policy discussions,35See Mak & Temple-Raston, supra note 33. implant fake evidence into the popular discourse on a variety of topics, and stir fierce rhetoric. Creators can even use deepfakes to directly extort politicians or fraudulently gather information from confidential sources.36 Kelley M. Sayler, Cong. Rsch. Serv., IF11333, Deep Fakes and National Security (2022). Though these fears have not yet been fully realized, such persuasive misinformation poses a danger that will likely come to bear on future elections, especially because interfering with the democratic process may be the point.37Mak & Temple-Raston, supra note 33.

For these reasons, mitigating the possible impacts of deepfakes on our elections should be a priority. Yet, techniques for detecting deepfakes have struggled to keep pace with the countervailing techniques to avoid detection, and any long-term technological solution remains elusive.38See Cristian Vaccari & Andrew Chadwick, Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News, Soc. Media & Soc’y, Jan.–Mar. 2020, at 1, 3; see also John A. Barrett, Jr., Free Speech Has Gotten Very Expensive: Rethinking Political Speech Regulation in a Post-Truth World, 94 St. John’s L. Rev. 615, 635–36 (2020). Even when deepfakes are proven inaccurate, trying to discredit a post cannot always “outpace” the original post, and reporting on a deepfake might actually fuel the image’s dissemination. Harwell, supra note 29. Moreover, the time it takes to deploy systems to detect a deepfake often precludes them from being effective on a large scale.39See, e.g., Kara Manke, New Technology Helps Media Detect ‘Deepfakes,’ UC Berkeley (June 20, 2019), [] (explaining the task of creating an authenticity backstop for five specific, prominent politicians). Therefore, we should expect deepfakes to remain a part of our lives going forward; the best we can hope for is new legislative solutions that deter bad actors, ensure recourse for victims, and protect election integrity.

Effective, workable legislation that assigns liability to deepfake creators and distributors is also essential because internet platforms are not in a position to regulate deepfakes. Not only does Section 230 of the Communications Decency Act dictate that platforms cannot be held liable for users’ posts,4047 U.S.C. § 230(c)(1). but there is also now a possibility that internet companies may not be permitted to police their own platforms of their own volition. In the recent case of NetChoice v. Paxton, a group of large internet and technology businesses challenged a Texas statute that prohibited them from removing posts based on the viewpoint expressed or represented.41NetChoice, LLC v. Paxton, 49 F.4th 439, 444 (5th Cir. 2022). The platforms remain able to remove content in certain circumstances, such as if it directly incites criminality. Id. at 452. The internet companies argued that they were entitled under the First Amendment to engage in editorial discretion and thus ought to retain control over “whether, to what extent, and in what manner to disseminate third-party-created content to the public.”42Id. at 490 (quoting NetChoice v. Att’y Gen., 34 F.4th 1196, 1212 (11th Cir. 2022) (finding that a Florida law restricting platforms’ ability to moderate content violated the platforms’ First Amendment rights)). The Fifth Circuit disagreed and considered the companies’ actions to be censorship unprotected by the Constitution.43NetChoice, 49 F.4th at 494.

The Fifth Circuit’s holding raises questions about the future of content moderation and severely weakens the ability of internet companies to respond to hate speech.44Nithin Venkatraman, NetChoice, L.L.C. v. Paxton: 5th Circuit Sets Up Supreme Court Battle Over Content Moderation Authority of Social Media Giants, JOLT Digest (Oct. 21, 2022), []. Though deepfakes did not play an explicit part in the Fifth Circuit’s opinion, the implications of this holding will make it difficult for social media platforms to combat deepfakes, even those that could have severe repercussions for our democratic system. If internet platforms cannot prevent dangerous misinformation from spreading, then it is critical that the legal system discourage this behavior and punish bad actors.

B. Existing and Proposed Deepfake Laws

Several states—California, Georgia, New York, Texas, Virginia, Florida, Hawaii, Minnesota, and Washington—have enacted bills targeting the dissemination of deepfakes.45 Cal. Elec. Code § 20010 (West 2023); Ga. Code Ann. § 16-11-90 (West 2023); N.Y. Penal Law § 245.15 (McKinney 2023); Tex. Elec. Code Ann. § 255.004(d) (West 2023); Va. Code Ann. § 18.2-386.2 (West 2023); Fla. Stat. § 775.0847 (2023); Haw. Rev. Stat. § 711-1110.9 (2021); Minn. Stat. § 609.771 (2023); Act of July 23, 2023, ch. 360, 2023 Wash. Sess. Laws 1892 (to be codified at Wash. Rev. Code § 42). Although Georgia, New York, Virginia, Florida, and Hawaii have passed laws targeting certain deepfakes, none of them currently have statutes specifically related to politically oriented deepfakes. Legislators in other states and the U.S. House of Representatives have proposed similar statutes as well. This Section describes their key features.

In defining what constitutes a deepfake, one way in which these statutes differ is with respect to the type of media (audio, visual, or both) and the person depicted (either candidates only, or anyone). A bill proposed in Illinois identifies a type of media as a deepfake if a creator either digitally adds a candidate to an image they were not a part of, digitally alters an image of a candidate to add another person, or “intentionally manipulate[s]” audio or visual media that portrays a candidate’s “appearance, speech, or conduct.”46S.B. 3746, 101st Gen. Assemb., Reg. Sess. (Ill. 2020). This bill, which would have amended Ill. Elec. Code § 5, never made it out of committee. Bill Status of SB3746: 101st General Assembly, Illinois Gen. Assembly, []. It suffered a similar fate after being reintroduced at the start of the next General Assembly in 2021. Bill Status of SB1717: 102nd General Assembly, Illinois Gen. Assembly, []. California’s and Minnesota’s statutes and a bill proposed in New Jersey identify “deepfakes” as any audio or visual media that depict a candidate doing or saying something that they did not actually do or say.47 Cal. Elec. § 20010(a); Minn. Stat. § 609.771(1)(c) (2023); Assemb. B. 4985, 219th Leg., Reg. Sess. (N.J. 2020). New Jersey’s bill, which would have amended N.J. Rev. Stat. § 19:34, has not passed to date. New Jersey Assembly Bill 4985 (Prior Session Legislation), LegiScan, []. “Media” here means either technologically altered versions of real audiovisuals or technologically created audiovisuals “substantially derivative” of real audiovisuals.48See N.J. Assemb. B. 4985. Texas’s statute defines “deepfakes” still differently, as videos that falsely depict a person (rather than only a candidate) doing something that they did not do.49 Elec. § 255.004(e). 
Washington’s statute characterizes a “deepfake” as an image, video, or audio depicting an “individual’s appearance, speech, or conduct that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false [depiction].”502023 Wash. Sess. Laws 1892 (emphasis added). Finally, the proposed House bill’s definition of “media” encapsulates all audiovisual, visual, and audio records, created without consent, that depict a living or dead person engaged in a “material activity” that they did not engage in.51DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. § 1041(a)(2)(A)–(C), (g)(1) (2023). This bill was introduced by Representative Yvette Clarke; previous iterations were introduced by Representative Clarke in 2019 and 2021. DEEP FAKES Accountability Act, H.R. 3230, 116th Cong. (2019); DEEP FAKES Accountability Act, H.R. 2395, 117th Cong. (2021). It is beyond the scope of this Note to address this element in its entirety, but the House’s definition is the best suited to adapt to dangerous deepfakes. Based on these definitions, the House bill would cover the greatest number of deepfakes while Illinois’s would likely reach the fewest.

The liability standard also varies from statute to statute, based on criteria such as how convincing the deepfake is, the publisher’s intent, and the probability of harm. Under Washington’s, California’s, and Minnesota’s respective statutes, and the proposed statutes in New Jersey and Illinois, liability attaches if a reasonable viewer would believe the audiovisual to be authentic and if their “understanding or impression” of its “expressive content” would be different than if they had seen an unaltered audiovisual.52N.J. Assemb. B. 4985 (denoting exceptions for satirical and parodic media); Cal. Elec. Code § 20010(e)(1)–(2); 2023 Wash. Sess. Laws 1892; Minn. Stat. § 609.771(1)(c)(1) (2023); S.B. 3746, 101st Gen. Assemb., Reg. Sess. (Ill. 2020). Liability, then, depends on a reasonable viewer rather than a reasonable creator. The proposed House bill would use a reasonable viewer standard, with the added requirement that the deepfake must also be “substantially likely” to improperly interfere with an election when the deepfake depicts a deceased person.53H.R. 5586 § 1041(n)(1)(A)(ii). By contrast, in Texas, a video created with the “intent to injure a candidate or influence the result of an election” may lead to liability for its creator, the susceptibility of the viewer notwithstanding.54 Elec. § 255.004(d) (emphasis added).

The statutes also differ with respect to what actions (creating or distributing) can result in liability and whether intent is an element. They also vary as to when liability can attach for those actions (in the run-up to an election or at any time). Texas’s statute limits liability to a person who creates a deepfake and “causes the [deepfake] to be published” in the run-up to an election.55Id. § 255.004(d)(1)–(2). New Jersey’s proposed statute and California’s and Minnesota’s statutes limit liability to those who distribute a deepfake in the run-up56New Jersey’s proposed statute and California’s statute use sixty days as a temporal cutoff for assigning liability. N.J. Assemb. B. 4985; Cal. Elec. Code § 20010(a) (West 2023). Texas’s law, on the other hand, is limited to thirty days before an election. Elec. § 255.004(d)(2). Finally, Minnesota’s statute applies to deepfakes disseminated ninety days before an election. Minn. Stat. § 609.771(2)(1) (2023). to an election with the intent of influencing that election or harming a candidate’s reputation.57N.J. Assemb. B. 4985; Cal. Elec. Code § 20010(a) (West 2023); Minn. Stat. § 609.771(2)(3) (2023). Illinois’s proposed statute would assign liability to those who produce or publish a deepfake, with no timeframe specified.58Ill. S.B. 3746, 101st Gen. Assemb., Reg. Sess. (Ill. 2020). Under the proposed House bill, criminal liability attaches to any person who produces a deepfake with intent to “interfere in an official proceeding, including an election, provided the advanced technological false personation record did in fact pose a credible threat of instigating or advancing such”; civil liability is limited to any person who knowingly removes or “meaningfully obscure[s]” a disclaimer to a deepfake for the purposes of influencing an election, or anyone who produces a deepfake without the requisite intent for criminal liability—with no timeframe specified.59See DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. 
§ 1041(f)(1)–(2) (2023) (emphasis added). It is beyond the scope of this Note to consider this element in its entirety, but the House bill would most effectively and constitutionally address different actors’ conduct. The presence or absence of a temporal cutoff, as well as whether subsequent sharers of a deepfake can be held liable, greatly affects the number of deepfakes and perpetrators that a law can curtail.

Several of these statutes also incorporate disclaimer requirements to address concerns about the believability and resulting impact of deepfakes. Even these requirements, though, differ in scope. The current and proposed statutes in four states—Washington, California, Illinois, and New Jersey—require that any visual deepfake include an unambiguous textual disclaimer for the duration of the deepfake; any audial deepfake must include an unambiguous audial disclaimer at the beginning, end, and—if the audio is a certain length (two minutes or longer)—throughout the deepfake.60N.J. Assemb. B. 4985; Ill. S.B. 3746; Act of July 23, 2023, ch. 360, 2023 Wash. Sess. Laws 1892 (to be codified at Wash. Rev. Code § 42); Cal. Elec. § 20010(b)(3)(A)–(B). The 2021 version of the House bill required that any visual deepfake continuously display an unambiguous textual disclaimer; that any audial deepfake include an unambiguous audial disclaimer, as well as an additional disclaimer if the deepfake exceeds two minutes; that any deepfake containing both audio and video components display both an audial and unambiguous textual disclaimer; and that any deepfake containing a “moving visual element” display an “embedded digital watermark” disclaimer, distinct from the textual disclaimer.61H.R. 2395 § 1041(b)–(e). The bill states that “[n]ot later than 1 year after the date of enactment of this section, the Attorney General shall issue rules governing the technical specifications of the digital watermarks required.” Id. § 1041(k)(3). The 2023 version of the bill, however, removes specific language about watermarks. See H.R. 5586 § 1041. Texas’s statute does not contain disclaimer requirements.62 Tex. Elec. Code Ann. § 255.004 (West 2023). As developed infra Section II.B.1, any strict disclaimer requirement carries with it both constitutional and policy-based concerns. 
Not only do more intricate disclaimers, such as digital watermarks, reduce the likelihood that viewers will be deceived by a deepfake, but they also make it more difficult for ill-intentioned sharers to remove them.63Langa, supra note 30, at 789.

Statutes also diverge as to whether they prescribe civil or criminal liability. California’s and Washington’s statutes, as well as New Jersey’s and Illinois’s proposed statutes, assign civil liability through private causes of action.64 Cal. Elec. § 20010(c)(1)–(3); N.J. Assemb. B. 4985; 2023 Wash. Sess. Laws 1892; Ill. S.B. 3746. Candidates who are the subject of a deepfake may seek damages under all four statutes. California’s statute and Illinois’s proposed statute allow for “any registered voter” to seek injunctive relief.65N.J. Assemb. B. 4985; 2023 Wash. Sess. Laws 1892; Cal. Elec. § 20010(c)(1)–(2); Ill. S.B. 3746. California’s current statute, effective until January 1, 2027, does not contain a private right of action for voters. Elec. § 20010(c). Beginning January 1, 2027, the section of the code creating this private right of action will take effect. Id. § 20010(c)(1), (e). As any suit must still satisfy constitutional standing requirements, plaintiffs will still have to demonstrate an “injury in fact” that bears a causal connection to the deepfake and may be redressed by the requested relief. See Lujan v. Defs. of Wildlife, 504 U.S. 555, 560–61 (1992). Texas’s and Minnesota’s statutes assign criminal liability.66 Tex. Elec. Code Ann. § 255.004(c) (West 2023); Minn. Stat. § 609.771(2) (2023). As noted above, the proposed House bill contains both criminal and civil liability provisions, categorizing the production of and intent to distribute a deepfake for the purposes of influencing an election as a criminal offense, whereas deepfake producers with lesser mentes reae and culpable subsequent sharers are subject to only civil liability.67DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. § 1041(f)–(g) (2023). 
Minnesota’s statute makes injunctive relief, but not civil damages, available against “any person who is reasonably believed to be about to violate or who is in the course of violating” the law; government attorneys, individuals portrayed by a deepfake, and candidates that are or may be hurt by a deepfake may seek injunctive relief. Minn. Stat. § 609.771(4) (2023). There are benefits and drawbacks to both civil and criminal liability schemes, as described infra Section II.A.1.iii; it is beyond the scope of this Note to make a recommendation for future legislation. The private right of action available under the proposed House bill is limited to a person or entity that a deepfake actually depicts, with no recourse available to registered voters harmed by viewing the deepfake.68Id. § 1041(g). In addition to affecting the type of accountability that a defendant can face, this difference also impacts who can seek recourse for their injuries and how they can do so.

As detailed above, there are many similarities across existing legislation targeting politically oriented deepfakes. Common among them is a patchwork system of discrete, dualistic elements, including: whether the deepfake includes the requisite form of disclaimer, whether the defendant was the original publisher or a subsequent sharer, whether the deepfake portrays a candidate or another person, whether the deepfake was posted in the run-up to an election, and whether the defendant acted with the requisite mens rea. Each statute picks a different combination from this menu of on-off switch provisions. Part II highlights the problems with each of these tests—especially in how they interact with each other.

II. Constitutional and Policy Problems with Current Deepfake Laws

This Part demonstrates the need for a new approach to regulating politically oriented deepfakes. It examines common First Amendment concerns with policing deepfakes and distinguishes the speech at issue from that in United States v. Alvarez, a recent and influential Supreme Court case about an attempt to prohibit “stolen valor.” As this Part argues, Xavier Alvarez’s lie was less harmful and more easily disproved than the shadowy, far-reaching lies spread by deepfakes. This Part then assesses how the Alvarez Court’s concerns map onto deepfake laws’ shared features, pointing out shortcomings of existing laws and proposals.

A. Constitutional Footing of Deepfake Laws

It is no secret that any limitation on free speech will face opposition.69E.g., Steven Lee Myers, Is Spreading Medical Misinformation a Doctor’s Free Speech Right?, N.Y. Times (Nov. 30, 2022), []. Hesitance to “chill” protected speech is particularly heightened for political speech,70See Snyder v. Phelps, 562 U.S. 443, 452–53 (2011) (noting that “speech on public issues . . . is entitled to special protection” and describing public issues as “any matter of political, social, or other concern to the community” (internal citations omitted) (quoting Connick v. Myers, 461 U.S. 138 (1983))). even decidedly false speech.71See Rickert v. State, Pub. Disclosure Comm’n, 168 P.3d 826 (Wash. 2007) (finding that a political candidate could not be held liable even for false statements about her competitor). In United States v. Alvarez, the Supreme Court found unconstitutional a statute that criminalized falsely claiming to have received the Congressional Medal of Honor.72United States v. Alvarez, 567 U.S. 709 (2012). Xavier Alvarez was indicted under the Stolen Valor Act for holding himself out as a Medal of Honor recipient at a public meeting.73Id. at 713–14. Congress had not in fact awarded Alvarez the Medal of Honor, and the Court acknowledged that his statement was nothing but an “intended, undoubted lie.”74Id. at 714–15.

Alvarez challenged his indictment on the grounds that the Stolen Valor Act violated his First Amendment right to free speech, and the Court agreed.75Id. at 714. The purpose of the Act was to safeguard the reputation of the Medal of Honor, an award created “so the Nation c[ould] hold in its highest respect and esteem” individuals who defended the safety of this country “with extraordinary honor.”76Id. at 715. The Court stated this was “a legitimate Government objective” that Congress was “right and proper” to pursue.77Id. Nevertheless, the Court concluded that, when subjected to the “sometimes inconvenient principles of the First Amendment,” the Act impermissibly infringed upon Alvarez’s constitutional rights.78Id. at 715–16. “[C]ontent-based restrictions on speech [must] be presumed invalid,” and due to the “probable, and adverse, effect of the Act on freedom of expression,” the Court held that the government had not overcome this presumption.79Id. at 716–17, 722–23 (quoting Ashcroft v. ACLU, 542 U.S. 656, 660 (2004)). Because there was no principle limiting when the government could punish defendants for making such false statements, the government’s “censorial power” would chill the fundamental freedoms of “speech, thought, and discourse,” which are necessary to the operation of a democracy.80Id. at 723.

Although the Alvarez Court did not say that all false statements fall within the ambit of the right to free speech,81Rather, the Court recognized that “there are instances in which the falsity of speech bears upon whether it is protected” and merely rejected “the notion that false speech should be in a general category that is presumptively unprotected.” Id. at 721–22. the Court did consider several constitutional obstacles that must be overcome before limiting even false speech.82Id. at 725–29 (analyzing when speech restrictions may be constitutional and how the Stolen Valor Act mapped onto these considerations). Some of the obstacles that the justices raised arguably do not apply to deepfakes, given their unique nature and characteristics. Still, some of the constitutional limits loom large in the debate surrounding current deepfake laws.

1. Concerns from the Alvarez Court That Are Resolved by the Nature of Deepfakes

i. Counterspeech

Justice Kennedy, writing for a plurality of the Court, offered what has become a common argument for disallowing limitations on speech: “The remedy for speech that is false is speech that is true.”83Id. at 727. The “counterspeech doctrine” traces its roots back to Justice Brandeis’s concurring opinion in Whitney v. California.84David L. Hudson Jr., Counterspeech Doctrine, First Amend. Encyclopedia (Dec. 2017), []. Whitney, a member of the Communist Labor Party, was convicted of assembling to advocate for a violent overthrow of the government.85Whitney v. California, 274 U.S. 357, 363–66 (1927), overruled by Brandenburg v. Ohio, 395 U.S. 444 (1969). Despite concurring with the outcome—which Brandenburg v. Ohio later overruled86Brandenburg v. Ohio, 395 U.S. 444, 449 (1969).—Justice Brandeis argued that free speech extended to critiques of the government, even those that may be proven untrue.87Whitney, 274 U.S. at 376–77 (Brandeis, J., concurring). Justice Brandeis wrote that “[i]f there be time to expose through discussion the falsehoods and fallacies . . . the remedy to be applied is more speech, not enforced silence.”88Id. at 377.

In Alvarez, the Court built on this reasoning and stated that, if Congress’s interest in passing the Stolen Valor Act was to make clear who had been awarded military honors, the government could maintain a database that lists every recipient.89United States v. Alvarez, 567 U.S. 709, 729 (2012). This, the Court said, would counteract fictional claims like Alvarez’s.90Id. However, while the counterspeech doctrine constitutionally protected Alvarez’s false statements, such counterspeech is beyond the realm of possibility for deepfakes.91Cass R. Sunstein, Falsehoods and the First Amendment, 33 Harv. J.L. & Tech. 387, 421 (2020). While it would be hardly conceivable for the government to maintain a database of every portrayal of a candidate as a fact-checking backstop, it is unfathomable for it to maintain a database of every image or video into which a candidate could possibly be inserted. Therefore, there is simply no way to ensure the accuracy of an image or video in the same way that the Court envisioned in Alvarez.92These differences suggest that there is no less restrictive means for combatting deepfakes; therefore, civil or criminal penalties are necessary. Protecting the integrity of our elections also satisfies the “compelling interest” element of strict scrutiny. See infra Section II.A.2. It is beyond the scope of this Note to suggest specific language to ensure that these statutes are sufficiently “narrowly tailored” to satisfy strict scrutiny, but this Note argues that a foreseeable harm standard will be more likely to satisfy this requirement than the approaches adopted by current deepfake laws.

In other ways, too, counterspeech is not nearly as effective in defending against deepfakes.93Matthew B. Kugler & Carly Pace, Deepfake Privacy: Attitudes & Regulation, 116 Nw. U. L. Rev. 611, 669–70 (2021). First, the speed with which deepfakes can spread makes it difficult to adequately rebut each misrepresentation.94Shannon Reid, Comment, The Deepfake Dilemma: Reconciling Privacy and First Amendment Protections, 23 U. Pa. J. Const. L. 209, 219 (2021). Second, people are more likely to believe a deepfake depicts the truth than they are to believe other false statements.95Nils C. Köbis, Barbora Doležalová & Ivan Soraperra, Fooled Twice: People Cannot Detect Deepfakes but Think They Can, iScience, Nov. 19, 2021, at 1, 4 (citing Ilana B. Witten & Eric I. Knudsen, Why Seeing Is Believing: Merging Auditory and Visual Worlds, 48 Neuron 489 (2005); Doris A. Graber, Seeing Is Remembering: How Visuals Contribute to Learning from Television News, J. Commc’n, Sept. 1990, at 134). As the saying goes: seeing is believing. Moreover, even when confronted with a deepfake’s inconsistencies, our brains resist entirely discounting its reality.96Sunstein, supra note 91, at 422. This may be attributable to people’s tendency to be overly trusting of a video’s accuracy, even in the face of clear warning signs to the contrary. Köbis et al., supra note 95, at 10–11 (finding that people’s bias toward authenticity led them to guess that nearly 70 percent of videos were authentic even after being told that only 50 percent were authentic). There are thus serious obstacles to effective counterspeech in the deepfake context.

Additionally, the Alvarez Court raised the role of public backlash in countering false speech: Alvarez was ridiculed for lying, and other false claimants would be too.97United States v. Alvarez, 567 U.S. 709, 726–27 (2012). According to the Court, this public condemnation would serve to reduce the deceptive impact of misinformation. Alvarez spread false claims about himself; his speech was visibly connected with him.98Id. at 713, 727. Deepfake creators and subsequent sharers, however, often keep their identities secret, with the help of sophisticated technology.99Delfino, supra note 7, at 899. The creator of a deepfake—disguised by a false name or no name at all, hidden behind layers of sophisticated protection from identification, and empowered by subsequent sharers that need not attach the creator’s name to their own posts—will often not face disrepute in the way the Court imagined.

ii. “Other Legally Cognizable Harm”

In Alvarez, the government argued that there was no First Amendment protection for false statements.100Alvarez, 567 U.S. at 719. The Court disagreed: the state could not condemn dishonesty alone. Liability for false statements requires the presence of “defamation, fraud, or some other legally cognizable harm associated with a false statement.”101See id. Political deepfakes cause just that. They cannot be considered “falsity and nothing more.”102Id. At best, they are falsities accompanied by reputational damage to a candidate. At worst, they are falsities that unduly influence an election. Either way, deepfakes fall squarely within the examples that the Court distinguished from Alvarez’s protected speech.103See Jessica Ice, Note, Defamatory Political Deepfakes and the First Amendment, 70 Case W. Rsrv. L. Rev. 417, 419, 437 (2019) (highlighting both the reputational damage to a candidate and their campaign and the “societal damages caused by the video if it is allowed to persist in the public sphere”). Others have argued that deepfakes fall into the existing misappropriation of likeness or invasion of privacy torts, see Zahra Takhshid, Retrievable Images on Social Media Platforms: A Call for a New Privacy Tort, 68 Buff. L. Rev. 139, 157 (2020), or that deepfakes are not speech at all, see generally Marc Jonathan Blitz, Deepfakes and Other Non-Testimonial Falsehoods: When Is Belief Manipulation (Not) First Amendment Speech?, 23 Yale J.L. & Tech. 160 (2020). It is beyond the scope of this Note to analyze these arguments in their entirety, but their success is unlikely. Kavyasri Nagumotu, Deepfakes Are Taking over Social Media: Can the Law Keep Up?, 62 IDEA: L. Rev. Franklin Pierce Ctr. for Intell. Prop. 102, 128 (2022).

iii. Policing Truth and Chilling True Statements

Beyond Alvarez, First Amendment principles limiting government policing of speech are inapplicable to deepfakes. An oft-cited concern with any limitation on First Amendment protections is that the government should not be the “arbiter of truth.”104Alvarez, 567 U.S. at 752 (Alito, J., dissenting). This consideration was a major factor in Rickert v. State, Public Disclosure Commission, in which the Washington Supreme Court did not extend liability to a political candidate who made false statements about her opponent.105Rickert v. State, Pub. Disclosure Comm’n, 168 P.3d 826 (Wash. 2007). The Court was hesitant to “assume[] that the government is capable of correctly and consistently negotiating the thin line between fact and opinion in political speech.”106Id. at 829. If the government could censor statements it determined to be false, there could be partisan manipulation of “facts.”107For other noted concerns, see Katrina Geddes, Ocularcentrism and Deepfakes: Should Seeing Be Believing?, 31 Fordham Intell. Prop. Media & Ent. L.J. 1042, 1076 (2021). See also Sunstein, supra note 91, at 398 (noting that the government could also make genuine mistakes about what is true and what is false). Greater restrictions on deepfakes, however, do not implicate this concern. By definition, deepfakes present inaccurate information and, therefore, are not even arguably factual.108Ice, supra note 103, at 439. For the same reason, there is no concern that statutes aimed at curbing deepfakes will chill individuals from making or publishing true statements.109It could be argued that the fear of inadvertently sharing a deepfake would chill subsequent sharers from reposting online content; however, as discussed infra, a lack of proof of reckless or intentional conduct in sharing a deepfake—such as altering or removing a disclaimer—would protect against inappropriately attaching liability. See infra Section III.A.4. Chilled speech was at issue in New York Times v. Sullivan, the landmark First Amendment case that established “actual malice” as a requisite element of defamation claims brought by public figures.110N.Y. Times Co. v. Sullivan, 376 U.S. 254 (1964). For an argument that New York Times v. Sullivan’s guidelines are no longer serviceable, see Sunstein, supra note 91, at 406–12. In that case, the Court made clear that First Amendment protections extend to “erroneous statements honestly made” as a result of “negligence or carelessness.”111Sullivan, 376 U.S. at 278, 283 n.24. The Court emphasized the need for “breathing space” for free speech in general and for political speech specifically.112Id. at 271–72. Deepfakes, though, are decidedly not “honestly made.” Assigning liability to creators for harmful deepfakes does not carry the same danger of discouraging innocent speech.

2. Satisfying Scrutiny

The plurality advocated for an “exacting scrutiny” approach in Alvarez, requiring a compelling government interest to restrict speech.113See United States v. Alvarez, 567 U.S. 709, 715 (2012). Justice Breyer argued instead that intermediate scrutiny should apply. Id. at 725, 730 (Breyer, J., concurring). The restriction must also be narrowly tailored to serve that interest.114Id. at 737–38 (Breyer, J., concurring). Though the government’s interest in protecting the “value and meaning” of the Medal of Honor was compelling,115Id. at 726 (majority opinion). the Court deemed the Stolen Valor Act unnecessary to serve this interest because there was no evidence that “the public’s general perception of military awards is diluted by false claims” of attainment.116Id. The other impacts of Alvarez’s and similar false claims—such as offending actual recipients of the Medal of Honor—also could not justify the speech restriction.

The same cannot be said for the impacts of deepfakes. First, deepfakes have already changed the public perception of candidates.117See Langa, supra note 30, at 766 n.28, 767. As compared to the need to protect the “general perception of military awards,” the need to remedy defamatory impacts on a candidate is much more tangible. Second, deepfakes threaten to irreparably alter future elections. Notably, Justice Breyer’s concurrence in Alvarez, which plays a central role in supporting the standard proposed by this Note, recognized that false speech related to political campaigns is “more likely to make a behavioral difference” among viewers than are other forms of false speech.118Alvarez, 567 U.S. at 738 (Breyer, J., concurring). Thus, the necessity of legislation to protect this interest is much clearer for deepfakes than for the type of false speech in Alvarez.

As to the interest itself, one could argue that there exists a freestanding compelling interest in ensuring free and fair elections.119Ex parte Stafford, 667 S.W.3d 517, 525 (Tex. App. 2023) (recognizing a compelling interest in limiting the impact of fraudulent statements on elections and, therefore, the population in a case challenging a portion of Texas’s statute criminalizing certain campaign communications—separate from its deepfake provisions); see also Ice, supra note 103, at 439; Langa, supra note 30, at 781. For discussion of this compelling interest in the context of gerrymandering, see Daunt v. Benson, 956 F.3d 396, 426 (6th Cir. 2020) (Readler, J., concurring) (quoting Amicus Curiae Brief of Brennan Ctr. for Just. in Support of Defendant-Appellee and Affirmance at 24, Daunt, 956 F.3d 396 (No. 19-2377)). Thus, given the unprecedented threat that deepfakes pose to our elections, the government would be justified in taking steps to curb their deleterious impact. Yet, the Court has often snubbed this interest when it comes to limiting First Amendment rights. For example, in Citizens United v. FEC, the Court did not recognize this broad interest in protecting elections. Rather, the Court made clear that “laws that burden political speech” will rarely and only narrowly pass constitutional muster.120Citizens United v. FEC, 558 U.S. 310, 340 (2010). The government could not censor on the basis of speaker or viewpoint and could only restrict speech if it would interfere with the proper functioning of government entities.121Id. at 341. The documentary in question, which critiqued a presidential candidate, could not be construed to have such an effect and, therefore, deserved First Amendment protection.122Id.

Similar arguments will no doubt be made regarding deepfakes, given their ostensible lack of concrete interference with a specific government entity’s functioning. However, the potential for deepfakes to be used to extort politicians, gain access to confidential information, and infuse misinformation into policy debates presents real threats to any number of government functions, as employees may be prevented from doing their job or influenced to take negative action.123See supra notes 33–34 and accompanying text. Therefore, even if the Court does not think that ensuring free and fair elections is enough, alone, to justify First Amendment restrictions, political deepfakes could surpass even the high bar set by Citizens United. It is also worth noting the inconsistency in the Court’s approach to “protecting” our elections: in order to prevent what it views as too much government influence on the electoral process, the Court is willing to allow malignant influences to fester when private actors are behind them.

The Court’s other reasons for protecting the speech in Citizens United may actually support the case for regulating deepfakes. First, the Court cited Buckley v. Valeo for the notion that “the ability of the citizenry to make informed choices” is essential to the operation of our democratic republic.124Citizens United, 558 U.S. at 339 (quoting Buckley v. Valeo, 424 U.S. 1, 14 (1976)). Corporations, the Court said, may be uniquely well-suited to publicize important information and aid voters’ decisionmaking.125Id. at 364. Conversely, deepfakes expressly undermine informed decisionmaking. Deepfakes, by design, muddy the waters of public discourse. Second, the Court stated that “voters must be free to obtain information from diverse sources in order to determine how to cast their votes.”126Id. at 341. If this right to information is unconditional, it would appear to cut against any form of political speech limitation. But this right is not unconditional. For example, all fifty states have laws restricting electioneering activities near polling places, and the Court upheld Tennessee’s version of this law because the state had a compelling interest in “protecting voters from confusion and undue influence.”127Burson v. Freeman, 504 U.S. 191, 199, 206–07 (1992). Evidently, the Court acknowledges that certain information sources harm, rather than help, voters. The danger of deepfakes is more similar to the danger posed by the electioneering activities in Burson than to the campaign funding in Citizens United. In fact, deepfakes are an even clearer example of “confusion and undue influence”128Id. at 199. than campaigning at polling places, which is limited across the nation.

3. Looming Issues

i. Mens Rea

One constitutional obstacle that may still plague deepfake laws is the mens rea requirement. The Supreme Court has repeatedly pointed out that the cases in which it condemned false speech involved false statements made knowingly or with reckless disregard for the statement’s veracity.129Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 52 (1988) (pertaining to fictional admissions that minister Jerry Falwell engaged in incestuous behavior with his mother in a nationally circulated magazine); United States v. Alvarez, 567 U.S. 709, 719 (2012). As discussed, the creator of a deepfake undoubtedly makes a false statement “knowingly.”130Ice, supra note 103, at 434. But the mens rea of a subsequent sharer is less clear. If a statute assigns liability to everyone who shares a deepfake, it may ensnare the “careless” speaker about whom the Court was worried.131See N.Y. Times v. Sullivan, 376 U.S. 254, 283 (1964). Rather, it is necessary to consider the individualized circumstances of each post in order to determine whether the sharer was aware of the consequences that would follow, and thus the sharer’s culpability.132See infra Section III.A.3.

ii. Context-Blind Restrictions

In Alvarez, the Court was concerned that the Stolen Valor Act applied regardless of context: the Act criminalized “false statement[s] made at any time, in any place, to any person,” reaching false speech “in almost limitless times and settings.”133Alvarez, 567 U.S. at 722–23. Notably, the Alvarez Court’s concerns are distinguishable from the concerns animating the Court’s doctrine for traditional “time, place, and manner restrictions” (TPM) for speech in public fora.134Though seemingly easy to distinguish, the differences between the two threads of concern may be relevant if large social media platforms are considered public spaces, as in NetChoice, LLC v. Paxton, 49 F.4th 429 (5th Cir. 2022). See supra Section I.A. In Ward v. Rock Against Racism, Justice Kennedy—again writing for the Court—discussed requirements for the latter: the government may limit speech in a specific setting if the limitation (1) is content-neutral, (2) is narrowly tailored to serve a significant government purpose, and (3) leaves open alternative ways to communicate the speaker’s message.135Ward v. Rock Against Racism, 491 U.S. 781, 791 (1989) (citing Clark v. Cmty. for Creative Nonviolence, 468 U.S. 288, 293 (1984)). If the government wishes to prohibit speech in a public forum in a specific context, it must sufficiently justify the prohibition and cannot selectively enforce it.

Thus, in Alvarez, the Court expressed a different concern: that the statute’s “sweeping” breadth and indifference to context were in tension with First Amendment rights.136Alvarez, 567 U.S. at 722. Specifically, the Court noted that the Stolen Valor Act would treat statements made in a public meeting and “whispered conversations within a home” equally, suppressing both.137Id. Restrictions cannot be absolute and context-blind. They must be limited to the situations in which the speech would result in a concrete harm.138Id. at 723.

Some of the current drafts of deepfake legislation have attempted to comply with this constitutional prerequisite by imposing liability only during certain time periods, such as sixty days before a general election.139See Wilkerson, supra note 8, at 424. This strategy treads into the realm of TPM restrictions discussed above.140See supra notes 134–135 and accompanying text. It also resembles previously invalidated provisions of statutes limiting political speech, like the Bipartisan Campaign Reform Act (BCRA), which prohibited corporate “electioneering communications” in the run-up to elections.141Bipartisan Campaign Reform Act of 2002, Pub. L. No. 107-155, §§ 201–04, 116 Stat. 81, 88–92; Citizens United v. FEC, 558 U.S. 310 (2010). It has been argued that the fundamental differences between corporate-funded political speech and deepfakes render this similarity irrelevant. See Anna Pesetski, Note, Deepfakes: A New Content Category for a Digital Age, 29 Wm. & Mary Bill Rts. J. 503, 518 (2020). However, this does not go far to suggest that the temporal limitations serve to remedy the Court’s fears unless deepfake laws are accepted more broadly as constitutional. Rigid temporal cutoffs like this draw arbitrary distinctions between punishable and nonpunishable deepfakes: how does a deepfake posted sixty-one days before an election cause significantly less harm than a deepfake posted one day later, irrespective of their other features? A better strategy for regulating deepfakes is to avoid wading into the treacherous waters of TPM altogether. A focus on foreseeable harm rather than TPM can sidestep this constitutional bind.142See infra Section III.A.2.

iii. Discriminatory Enforcement

Finally, an ongoing fear that accompanies any attempt to limit political speech is discriminatory enforcement.143See, e.g., Alvarez, 567 U.S. at 734 (Breyer, J., concurring). For deepfake regulations that assign criminal liability, one could argue that, because the determination of whether harm is foreseeable enough to trigger liability is unavoidably subjective, prosecutors will be able to choose whom to pursue charges against on a strictly partisan basis. This could, in turn, give rise to discriminatory enforcement of deepfake laws. However, greater oversight and sanctions can confine such prosecutorial misconduct.144See Rod J. Rosenstein, Deputy U.S. Att’y Gen., Dep’t of Just., A Constitution Day Address Hosted by the Heritage Foundation (Sept. 14, 2017) (transcript available at []); Bruce A. Green & Samuel J. Levine, Disciplinary Regulation of Prosecutors as a Remedy for Abuses of Prosecutorial Discretion, 14 Ohio St. J. Crim. L. 143 (2016). Additionally, assigning civil liability could mitigate this problem by creating a private right of action, which would allow victims of deepfakes—people who suffer either reputational or electoral harm caused by a deepfake—to seek justice for themselves.145For a discussion of the implications of a private right of action on addressing deepfakes, see generally Eric Kocsis, Note, Deepfakes, Shallowfakes, and the Need for a Private Right of Action, 126 Dick. L. Rev. 621 (2022). Undoubtedly, the parties to these lawsuits will fall along partisan lines (for example, a Democratic plaintiff suing a Republican deepfake distributor or vice versa), but the ability for all injured parties to bring a claim may help eliminate the risk of selective enforcement by the party in power.146Of course, these statutes may bring with them other complications, such as frivolous lawsuits against satirists and the like in order to curb messages that the claimants disagree with. Cf. Langa, supra note 30, at 798.
If this standard is deployed correctly, however, claims should fail when they are not genuine. A private right of action does raise standing questions. Although candidates whose reputations are damaged by a deepfake have a straightforward injury in fact, voters who are harmed because of a deepfake’s larger impact on an election may face difficulty establishing an individualized injury as opposed to a generalized grievance.147See United States v. Hays, 515 U.S. 737 (1995) (holding that voters bringing racial discrimination claims, but not living in a majority-minority district, needed to show specific evidence of harm). However, that is not to say that the Supreme Court has never thought of election-based claims as individualized injuries.148Saul Zipkin, Democratic Standing, 26 J.L. & Pol. 179, 197–203 (2011) (explaining contexts in which the Court found standing in election-based claims). See FEC v. Akins, 524 U.S. 11, 24–25 (1998).

There may also be concerns of partisan bias among judges and juries hearing these cases; perhaps the political persuasion of the defendant will impact the arbiter’s determination of liability or guilt.149See, e.g., Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 55 (1988). Of course, this is not the only category of cases that could implicate partisan bias.150James P. Brady, Fair and Impartial Railroad: The Jury, the Media, and Political Trials, 11 J. Crim. Just. 241 (1983). To a certain extent, bias is inevitable.151See Shamena Anwar, Patrick Bayer & Randi Hjalmarsson, Politics in the Courtroom: Political Ideology and Jury Decision Making, 17 J. Eur. Econ. Ass’n 834 (2018) (describing the effect of partisan bias in Swedish juries, in which some of the jurors are politically affiliated officials). Some have argued that the current safeguards of jury selection and appellate review are, if not satisfactory, the closest we can come to a solution.152See Brady, supra note 150; Cassandra Burke Robertson, Judicial Impartiality in a Partisan Era, 70 Fla. L. Rev. 739 (2018). Additional procedural measures to protect against partisan bias include jury instructions on potential bias153See Anona Su, A Proposal to Properly Address Implicit Bias in the Jury, 31 Hastings Women’s L.J. 79, 98–99 (2020). and the exclusion of substantially prejudicial evidence.154See generally David Sonenshein & Charles Fitzpatrick, The Problem of Partisan Experts and the Potential for Reform Through Concurrent Evidence, 32 Rev. Litig. 1 (2013) (proposing court-appointed, neutral experts akin to civil law jurisdictions or cooperation between competing experts). Though it is likely implausible to entirely eliminate bias in politically charged trials, these existing safeguards allow legislatures to address harmful deepfakes and produce more neutral verdicts.

B. Shared Features of Existing Laws and Their Drawbacks

1. Disclaimers

One common point of discussion for deepfake liability is the effect of disclaimers. The presence of a disclaimer may drastically reduce the chance that a reasonable viewer is deceived.155Ice, supra note 103, at 434. Therefore, under a reasonable viewer standard, including a disclaimer that a video has been modified frees its creator of liability.156Id.; Douglas Harris, Note, Deepfakes: False Pornography Is Here and the Law Cannot Protect You, 17 Duke L. & Tech. Rev. 99, 117 (2019). Several states even explicitly waive liability for publishers that attach a disclaimer to a deepfake.157See supra Section I.B.

Including a disclaimer, however, precludes neither reputational harm to a candidate nor impacts on an election. Even if a deepfake’s falsehood is made apparent, evidence suggests that certain viewers will still internalize and amplify the message it conveys.158See Harwell, supra note 29. Thus, a disclaimer alone is insufficient to render a deepfake harmless when other features of its publication and distribution increase its danger. Furthermore, subsequent bad actors can easily remove many forms of disclaimers, such as a simple text caption.159See Harris, supra note 156, at 117. Therefore, though the original post may declare that a deepfake contains falsehoods, subsequent posts—hundreds or even thousands of which may reach a dramatically larger audience than that of the original poster160See infra note 166 and accompanying text.—may not. For this reason, many statutes require stricter forms of disclaimers, such as digital watermarks, which are both more successful in preventing deception and more difficult to remove.161See Langa, supra note 30, at 789. Yet, even these more sophisticated watermarks may be removed by subsequent sharers.162Devin Coldewey, DEEPFAKES Accountability Act Would Impose Unenforceable Rules—but It’s a Start, TechCrunch (June 13, 2019, 3:25 PM), []. Additionally, people who truly seek to deceive will likely not add a disclaimer at all. Ellen P. Goodman, Digital Fidelity and Friction, 21 Nev. L.J. 623, 636 (2021). Therefore, stricter disclaimer requirements do not solve all problems.163Additionally, by priming viewers to believe that deepfakes will be accompanied by a standardized disclaimer, these requirements may actually have a negative impact: viewers will believe that any content without said disclaimer is legitimate, making deepfakes that are not accompanied by a disclaimer even more influential. Langa, supra note 30, at 789.
Balancing this concern against the worry that fostering skepticism will lead people to distrust authentic media will likely prove difficult. But, hopefully, assigning liability when appropriate and deterring bad actors will mitigate some of the worst effects of this dilemma.

Finally, under some states’ laws, deepfake creators can essentially free themselves of liability, despite the likely negative effects of their content, simply by adding a disclaimer.164See supra notes 57–58 and accompanying text. Compounding the problem, subsequent sharers often have no responsibility to declare the inaccuracy of a deepfake they repost.165See, e.g., Act of July 23, 2023, ch. 360, 2023 Wash. Sess. Laws 1892 (to be codified at Wash. Rev. Code § 42). Even the expansive House bill, which attaches liability to those who “alter” a disclaimer and therefore reaches culpable subsequent sharers, has its oversights. By not amending or removing the disclaimer, the subsequent sharer can satisfy their burden to avoid liability despite knowingly reposting a harmful deepfake.166Langa, supra note 30, at 789; Kocsis, supra note 145, at 643. Therefore, a creator can post a deepfake with a disclaimer, a subsequent sharer can knowingly repost it, and neither would be subject to liability. Yet, even when a deepfake has a disclaimer, there is still real damage from its spread, since evidence shows that people will continue to believe a discredited deepfake if it affirms their preconceived notions.167See Harwell, supra note 29.

2. Subsequent Sharers

Another policy consideration is whether to limit liability to the original publisher of a deepfake or extend liability to subsequent sharers. The House’s bill covers those who publish deepfakes without the necessary disclaimers and those who alter a deepfake to “remove or meaningfully obscure” the disclaimers.168See DEEPFAKES Accountability Act, H.R. 2395, 117th Cong. § 1041(f)(1)–(2) (2021). This is in stark contrast to some state statutes that have already been enacted, which typically assign liability only to the creator.169See, e.g., Tex. Elec. Code § 255.004(d) (West 2023).

The case for extending liability beyond the creator is clear. Subsequent sharers can have a significantly larger deleterious impact than the original poster, spreading a deepfake to exponentially more people.170See Sapna Maheshwari, How Fake News Goes Viral: A Case Study, N.Y. Times (Nov. 20, 2016), []; Nagumotu, supra note 103, at 114–15; Viktoriia Formaniuk et al., Protection of Personal Non-Property Rights in the Field of Information Communications: A Comparative Approach, J. Pol. & L., no. 3, 2020, at 226, 230. Furthermore, when combined with other ill-advised elements, these statutes may unintentionally give rise to situations that attach no liability to anyone at all. Some statutes waive liability if a disclaimer accompanies a deepfake. If liability is limited to initial posters, subsequent sharers are left free of legal consequences for posting a deepfake without one. As a result, if the original creator attaches a disclaimer but subsequent sharers do not include it, the victim of the deepfake would have no recourse even though the damage caused could still be expansive.

Conversely, one benefit of limiting liability is avoiding the complication of trying to rein in culpable subsequent sharers without chilling innocent sharers’ speech. Another is avoiding the exceedingly arduous task of pursuing all sharers, an additional burden on top of the existing difficulty of tracking deepfake publishers.171Langa, supra note 30, at 793. Though this Note’s approach addresses the innocent-sharer problem, the pursuit of subsequent sharers is likely to remain an issue given the ever-improving strategies to avoid detection.172See Sara Ashley O’Brien, Deepfakes Are Coming. Is Big Tech Ready?, CNN (Aug. 8, 2018, 11:16 AM), []. Additionally, an individual bringing claims against subsequent sharers may also face an uphill battle to establish the causation and redressability sufficient to establish standing.173Cf. Zipkin, supra note 148, at 194 n.67 (discussing standing in the context of gerrymandering and Elections Clause claims).

3. Reasonable Viewer

Last, baked into many of the statutes’ definitions of deepfake liability is a reasonable viewer standard: would the modified content in question deceive a “reasonable” person?174See supra Section I.B; Lauren Renaud, Note, Will You Believe It When You See It? How and Why the Press Should Prepare for Deepfakes, 4 Geo. L. Tech. Rev. 241, 250 (2019). If not, then the publisher is not liable. But there is good reason to think that such a standard is insufficient, especially in the context of politically oriented deepfakes.175It is also unavoidable that what is considered reasonable for a viewer will evolve over time with changing technology and increased awareness of that technology. Ice, supra note 103, at 437.

First and foremost, this normative approach ignores the fact that any change in behavior at the ballot box is a harm worth curtailing.176See Langa, supra note 30, at 781. Why should it matter whether a deceived viewer qualifies as “reasonable” if they are misled into changing their vote or convincing others to do so? After all, their vote counts just the same.

Furthermore, creators of politically oriented deepfakes may intentionally aim their work at unreasonable viewers. Studies have shown that an online community based around a particular political ideology tends to insulate its members from information that contradicts that ideology and feed members more information that confirms their beliefs.177Cédric Batailler, Skylar M. Brannon, Paul E. Teas & Bertram Gawronski, A Signal Detection Approach to Understanding the Identification of Fake News, 17 Persps. on Psych. Sci. 78 (2022); Jared Schroeder, Fixing False Truths: Rethinking Truth Assumptions and Free-Expression Rationales in the Networked Era, 29 Wm. & Mary Bill Rts. J. 1097, 1099–1100 (2021) (citing Ana Lucía Schmidt et al., Polarization of the Vaccination Debate on Facebook, 36 Vaccine 3606, 3610 (2018)). In other words, “intentionally false information . . . is often accepted and circulated.”178Schroeder, supra note 177, at 1099–1100. Therefore, what might not deceive a reasonable audience may readily permeate such a partisan (and, perhaps, unreasonable) community. For example, scholars have documented the potential for extremist groups to harness deepfakes to gain support for violent movements or perpetuate anti-immigrant sentiments.179See Europol, Facing Reality? Law Enforcement and the Challenge of Deepfakes (2022); Michael Hameleers, Toni G.L.A. van der Meer & Tom Dobber, You Won’t Believe What They Just Said! The Effects of Political Deepfakes Embedded as Vox Populi on Social Media, Soc. Media + Soc’y, July–Sept. 2022, at 1, [].

Current deepfake laws are far from ideal. These laws go both too far—to the point of chilling protected speech—and not far enough, failing to address the effects of some of the most harmful deepfakes. However, this Note does not advocate for a return to a landscape without any deepfake regulations. Rather, this discussion is meant to highlight the weaknesses of current laws and encourage reforms that strengthen enforcement capabilities and secure solid constitutional footing. Part III proposes one such reform: a foreseeable harm standard.

III. The Benefits of a Foreseeable Harm Approach

In order to address many of the looming constitutional and policy concerns surrounding deepfake laws, future legislation should incorporate a foreseeable harm standard of liability, measured via a totality-of-the-circumstances test.180Several formulations of this standard exist. Cf. infra Section III.B.2. Future legislation should, as a starting point, adopt some form of the following: “A defendant will be liable for any harm that, at the time of the defendant’s acts which constitute publication, would have been a foreseeable result to a reasonable person in the defendant’s position.” This approach allows for the consideration of several factors that may contribute to a deepfake’s harmful effects, rather than relying on discrete, poorly fitting rules. Among the factors that triers of fact could consider are: (1) the quality of the deepfake; (2) where the deepfake is distributed, such as via text message versus a social media post; (3) who receives the deepfake, keeping in mind specifically susceptible populations; (4) who posts the deepfake, because a newscaster or politician may have more credibility than, for example, a Redditor; (5) when the deepfake is published; (6) the presence of a disclaimer; (7) the type of disclaimer; and (8) whom the deepfake is meant to depict—directly or indirectly—and what it depicts about that person. Ultimately, the goal of a foreseeable harm standard is to determine whether the person who posted the deepfake should have foreseen the consequences of their actions. This approach would provide more protection for potential victims of deepfakes while also avoiding infringements of creators’ and sharers’ First Amendment rights.181Cf. Langa, supra note 30, at 781.

A. Benefits of a Foreseeable Harm Standard

1. Reasonable Viewer

Perhaps most importantly, a foreseeable harm standard would not have the same blind spots as the reasonable viewer standard. As discussed, not only is there a possibility that even crude deepfakes will deceive “unreasonable” audiences,182See Harwell, supra note 29 (“[W]hen [the truth of the alteration] comes to light, people just don’t care . . . . They say ‘it could have been true’ or ‘nonetheless, it reflects who the person really is.’ ” (alteration in original) (quoting Stanford University researcher Becca Lewis)). politically oriented deepfakes are oftentimes intended to deceive these very audiences.183Supra Part II; see also Edward Lee, Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance, 70 Am. U. L. Rev. 913, 1015–17 (2021) (demonstrating how politically oriented subreddits allow discriminatory content moderation, likely leading to bias). Therefore, attaching liability only when a deepfake would deceive a reasonable viewer ignores the fact that harm still follows from the dissemination of a deepfake that may not pass this standard.184Cf. Anne Pechenik Gieseke, Note, “The New Weapon of Choice”: Law’s Current Inability to Properly Address Deepfake Pornography, 73 Vand. L. Rev. 1479, 1484–86 (2020) (showing that, by putting deepfakes in a group like the “r/deepfake” group, they will be spread elsewhere and may be more likely to deceive viewers in that new context).

Unlike the reasonable viewer standard, the foreseeable harm standard would capture a creator or publisher of a deepfake who intentionally targets susceptible viewers and thus could predict the effect that their content would have.185For example, after Representative Paul Gosar posted a deepfake that depicted President Obama shaking hands with a foreign leader, which was reposted by several thousand others, his response was that it was the fault of the “dim witted” people who believed it, and that “[n]o one said this wasn’t photoshopped.” Harwell, supra note 29. Furthermore, as scholars have noted, these publishers may not have any expectation of convincing someone who does not already believe the message of the deepfake; rather, the goal may be to simply “reinforce existing beliefs and get people more entrenched in those beliefs.”186Id. (quoting Darren Linvill, a professor at Clemson University). Thus, the publisher need not deceive a reasonable viewer to achieve their intended outcome: “the falseness barely matters . . . . [P]eople who believe the message already aren’t looking for a counternarrative: They just want confirmation that they were right all along.”187Id. Studies showing that “familiarity” increases believability further support this point.188Vaccari & Chadwick, supra note 38, at 2 (arguing that viewers are more likely to accept misinformation if it conforms with familiar statements). By seeking out politically persuadable and ideologically insular subpopulations, the deepfake publisher is more likely to find a susceptible audience. An objective of reinforcing messaging to certain populations would still make this harm, and any secondary harm resulting from it,189See, e.g., Pechenik Gieseke, supra note 184, at 1485–86 (describing how deepfakes spread from Reddit to other places). worth remedying. Thus, a foreseeable harm standard would attach liability to these publishers.

2. Context-Blind Restrictions

A foreseeable harm standard would also avoid the danger of prohibiting a certain type of speech by any person, at any time, in any place. Rather than the categorical, context-blind rules that the Court warned against in Alvarez,190United States v. Alvarez, 567 U.S. 709, 722 (2012); see also supra note 136 and accompanying text. a foreseeable harm standard is flexible enough to consider each of these factors, among others.

For example, where a deepfake is posted would affect how foreseeable the downstream harm would be to a candidate’s reputation or to a voter’s behavior. A creator texting a deepfake to a friend, arguably equivalent to “whisper[ing] within a home,”191Alvarez, 567 U.S. at 722. should foresee less harm than they would if they posted the same deepfake to an online extremist political group with thousands of members.192See Dhruva Krishna, Deepfakes, Online Platforms, and a Novel Proposal for Transparency, Collaboration and Education, Rich. J.L. & Tech., Spring 2021, at 1, 34. And the “where” can extend to specific communities within a single platform.

Likewise, who posts a deepfake would clearly affect the foreseeable harm, given the poster’s credibility and the size of their audience.193Barrett, Jr., supra note 38, at 635. We have already seen how highly visible people sharing deepfakes—even if poorly made—can increase their impact, such as the deepfakes posted by Congressman Gosar,194Harwell, supra note 29. former President Trump,195David Frum, The Very Real Threat of Trump’s Deepfake, Atlantic (Apr. 27, 2020), []. or Governor DeSantis.196Nicholas Nehamas, DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter, N.Y. Times (June 8, 2023), [].

Foreseeable harm would also take into account when someone posts a deepfake. Here, too, a totality test avoids adopting a hardline rule, like those employed in the BCRA and some existing deepfake laws.197Supra notes 55–59, 139–142 and accompanying text (describing statutes that regulate speech within a specified number of days before a general election). While it is true that the closer to an election a creator posts a deepfake, the more weight it may carry in the balancing of circumstances,198And even this proposition is likely not as black-and-white as one would think, at least in the context of changing voter behavior. A deepfake posted hours or even days before an election may in some instances cause less damage than one posted further in advance if its false message did not have time to reach a mass audience. the timing would not be dispositive, avoiding arbitrary distinctions. Instead, courts would weigh the timing of a deepfake’s distribution in the context of other factors. For example, a deepfake posted sixty-one days before an election, but otherwise particularly deceptive and harmful, may still give rise to liability.

In fact, considered together, Justice Kennedy’s concern for context-blind restrictions and Justice Breyer’s recognition of the potential for false speech related to political campaigns (a category that inherently encompasses political deepfakes) to impact voter behavior199Supra note 118 and accompanying text. illustrate why the current patchwork approaches attempt to force a square peg into a round hole. The current laws are primarily aimed at determining how convincing the deepfake is. However, the more important question to ask is whether the person posting the deepfake acted in a way which they could reasonably have foreseen was likely to harm a candidate’s reputation or impact an election. Take, for example, MIT’s deepfake of President Nixon. Using the best available technology, the MIT researchers created a deepfake that could have deceived a reasonable viewer more readily than almost any other deepfake.200Stockler, supra note 12. Yet, MIT’s efforts were part of a project to educate the community on the extent of new AI capabilities. The way it distributed this deepfake, therefore, was less likely to have a harmful impact, and the creators knew as much.201This is also relevant to satirical programs. A reasonable viewer standard is thus both overinclusive in some instances and underinclusive in others.

That does not rule out the possibility of other people sharing MIT’s video in a different context and potentially deceiving audiences. Though this could create liability for those sharers, a court would probably not retroactively hold MIT liable, too, under the standard this Note proposes. To be clear, a deepfake can still give rise to liability even if it is unlikely to deceive by itself. Each assessment is hyper-contextual, asking when the user posted the deepfake and whether its nefarious purposes and effects were foreseeable.

3. Mens Rea

Deepfake laws should not impose liability on the “careless” but innocent speaker.202See supra Section II.A.2.iii. Again, the intent of an individual who creates a deepfake is not debatable; certainly, after synthetically modifying an image or video, the creator did not genuinely believe that the content they shared was accurate.203Ice, supra note 103, at 434. If the statute extends liability to all subsequent sharers who post a deepfake that would deceive a reasonable viewer, however, there is no guarantee that every individual will have had the same culpability.204Supra Part II. A foreseeable harm standard would allow for consideration of the specific circumstances in which each user shared the deepfake. For example, a careless distributor who does not realize they are sharing a deepfake does not foresee that their post could cause harm.205Thus, functionally, the “actual malice” requirement chosen by some legislatures will still be at play. The intentional or reckless behavior of any given sharer may be considered, along with the totality of their conduct, in order to determine liability. See Matthew Bodi, The First Amendment Implications of Regulating Political Deepfakes, 47 Rutgers Comput. & Tech. L.J. 143, 162 (2021). Meanwhile, someone who purposefully removes a disclaimer before sharing a deepfake is more likely to foresee a deceptive and harmful effect.206Deepfake laws are arguably a more realistic path to recourse for politicians than defamation, given the high bar to win such a claim. See Chesney & Citron, supra note 31, at 1793–94. The extent to which this remains true under a foreseeable harm approach, though, is unclear.

4. Subsequent Sharers

Another related consideration is whether to hold subsequent sharers liable or limit liability to the original publisher of a deepfake.207Compare, e.g., Tex. Elec. Code Ann. § 255.004 (West 2023) (only reaching the creator of a deepfake), with DEEPFAKES Accountability Act, H.R. 5586, 118th Congress (2023) (attaching liability to both deepfake creators and those subsequent sharers who “remove or meaningfully obscure” disclaimers). A totality-of-the-circumstances test for foreseeable harm would not have to make this categorical call. Rather, it would take into account the knowledge of the subsequent sharer when evaluating the foreseeability of downstream harm. For example, if there is limited evidence that a subsequent sharer was aware the content was fake, the harm caused by their post would, naturally, be less foreseeable. This standard allows for flexibility in pursuing legal action against subsequent sharers who are particularly culpable—such as those who intentionally remove watermarks, publish deepfakes to susceptible populations, or take other steps to increase the likelihood of harm—and avoids situations where victims have no recourse despite significant harm. But the standard does so without chilling innocent speech by the careless sharer.

Of course, there would still be problems of proof in demonstrating what a sharer knew or what actions they took—for example, did they see a disclaimer and remove it?208Ice, supra note 103, at 433. However, rather than viewing this as a bug, it can be seen as a feature: a check on chilling speech. As explained above, if there is no proof of the sharer’s culpability, a court will not hold that sharer liable.209See Section III.A.3. But with sufficiently serious evidence of culpability in the totality of the circumstances, subsequent sharers would not be able to wash their hands of wrongdoing.

5. Disclaimers

A foreseeable harm standard would also allow for a more flexible approach to disclaimers by considering both whether a disclaimer was attached and the type of disclaimer attached.210Another possible criticism of a system of liability based on the mere presence or absence of a disclaimer is that requiring a disclaimer is a form of compelled speech. Cf. Quentin J. Ullrich, Note, Is This Video Real? The Principal Mischief of Deepfakes and How the Lanham Act Can Address It, 55 Colum. J.L. & Soc. Probs. 1, 30 (2021) (arguing that this criticism is misplaced if the requirement leaves exceptions for types of speech that are less likely to be harmful). The flexibility of a foreseeable harm standard allows for a publisher’s inclusion of a disclaimer to reduce the chance that they are found liable without mandating a disclaimer in order to avoid liability. For example, a text caption can be easily removed by subsequent sharers down the line.211Pesetski, supra note 141, at 528; Coldewey, supra note 162. By contrast, a digital watermark is harder to simply crop out of an image.212See Theo Golden, A British TV Network Is Facing Criticism for Airing a Deepfake Version of the Queen’s Christmas Speech, Where She Mocks Harry and Meghan for Moving to Canada, Bus. Insider India (Dec. 26, 2020, 12:25 AM), []. Thus, it would likely limit deception even after subsequent sharing and is more likely to cause a viewer to heed the manipulated nature of the content. The type of disclaimer a creator chooses to attach, and whether the creator chooses to attach a disclaimer at all, would change the assessment of foreseeability.213The ability to consider what type of disclaimer a deepfake publisher attached, if they attached one at all, is another benefit to pursuing a claim under a deepfake law as opposed to a traditional defamation claim. Any declaration that the content is false would render a defamation claim moot. See Kugler & Pace, supra note 93, at 630. 
Meanwhile, liability may still attach under a deepfake law if the disclaimer is not sufficient to dispel harm down the line. Rather than determining liability through a binary distinction, whether between the presence and absence of a disclaimer or between a textual disclaimer and a digital watermark, a foreseeable harm standard would consider the existence and nature of the disclaimer as only one aspect of the totality of the circumstances.

Even identical disclaimers will not always have an equivalent impact. In MIT’s Apollo project, a digital watermark or audio disclaimer was not necessary to convey the video’s inauthenticity due to the surrounding context of its creation.214See supra notes 12–13 and accompanying text. Meanwhile, a deepfake that is accompanied by a watermark disclaimer, but that is shared to a subgroup known to possess the ability to remove such watermarks215See Coldewey, supra note 162; see also supra notes 208–212 and accompanying text. and spread the manipulated content to other communities, would have greater foreseeable harm. This would be true even if the creator included what is normally a highly effective disclaimer.

6. Who and What Are Depicted?

A foreseeability test would also consider what the deepfake depicts and whom it targets. The notoriety of the candidate, the stage of the campaign, and the relative importance of the election are all factors that could enlarge the audience a deepfake reaches. Certain candidates also inspire more targeted partisan opposition, which, in conjunction with extremism and the familiarity effect, may increase the likelihood of deleterious outcomes.216For example, Representative Pelosi has long been the center of Republican hatred and misinformation campaigns. See Annie Karni, Catie Edmondson & Carl Hulse, Pelosi, Vilified by Republicans for Years, Is a Top Target of Threats, N.Y. Times (Nov. 10, 2022), []; Annie Karni, Malika Khurana & Stuart A. Thompson, How Republicans Fed a Misinformation Loop About the Pelosi Attack, N.Y. Times (Nov. 5, 2022), []; Harwell, supra note 14.

Additionally, this approach could account for whether the deepfake depicts the candidates themselves, someone associated with a campaign, or something else entirely that is nonetheless intended to impact viewers’ impression of a campaign. Presumably, if a creator makes a deepfake to directly target a specific candidate, they should foresee its harmful impact. There is certainly still a possibility, however, that a deepfake that manipulates something other than a candidate or their campaign directly could still significantly harm that candidate.217See Ice, supra note 103, at 439 (discussing how there are more damages than just reputational ones to the person depicted). It is beyond the scope of this Note to wholly address the necessity of broadening the range of deepfakes for which victims may seek recourse, but it is worth addressing how such an expansion would fit into the framework of a foreseeable harm standard. Of the current laws in place, some appear to restrict liability only to sharers of deepfakes that depict candidates themselves, while others do not require that a candidate be depicted so long as the deepfake is meant to alter a viewer’s impression of the campaign or election.218See supra Section I.B. Thus, a deepfake placing a candidate’s spouse or campaign manager in a compromising context would escape the grasp of some current laws.219California’s statute and Illinois’s and New Jersey’s proposed statutes only target deepfakes that portray a candidate. Cal. Elec. Code § 20010 (West 2023); Ill. S.B. 3746, 101st Gen. Assemb., Reg. Sess. (2020); Assemb. B. 4985, 219th Leg., Reg. Sess. (N.J. 2020). At the same time, all of the current laws require that the deepfake depict a person.220See supra Section I.B. 
A deepfake falsely portraying an object or event221Deepfakes of objects or events directly targeted at a campaign could hypothetically depict, to give a few examples, controversial documents, campaign officials destroying property, or a campaign vehicle engaging in illegal conduct. More indirect impacts could stem from deepfakes that negatively impact an incumbent administration, such as a series of recent deepfakes that ostensibly depicted explosions near the Pentagon and the White House. See Shannon Bond, Fake Viral Images of an Explosion at the Pentagon Were Probably Created by AI, NPR (May 22, 2023, 6:19 PM), []. In addition to undercutting the reputation of our national security apparatus, these images also temporarily caused stock market dips. Id. would fall outside their ambits. Yet, the harm that misinformation like this could cause—including, in the most extreme case, altering an election—should be foreseeable.

B. Support for a Foreseeable Harm Approach

Using United States v. Alvarez as a barometer, this Section explains the constitutional support for the foreseeable harm standard. In addition to sidestepping some of the pitfalls of the Stolen Valor Act, this approach also finds support in Justice Breyer’s concurrence. Finally, this Section draws analogies between different “reasonable person” standards and identifies the proper positioning of deepfakes in this framework.

1. The Alvarez Concurrence

In his concurrence, Justice Breyer discussed some types of justified speech restrictions, as well as the limitations that allowed them to satisfy the requisite level of scrutiny.222See supra note 103 for a discussion of Justice Breyer’s and Justice Kennedy’s differing views on the appropriate tier of scrutiny applied in these cases. One example is false claims of terrorist attacks. In order for liability to attach in that context, there must be “proof that substantial public harm [was] directly foreseeable, or, if not, [the claim must] involve false statements that [we]re very likely to bring about that harm.”223United States v. Alvarez, 567 U.S. 709, 735 (2012) (Breyer, J., concurring). Justice Breyer also briefly touched on claims of trademark infringement; these claims require a finding that the infringement is “likely to dilute the value of a mark” or cause confusion.224Id. at 735–36. Building upon the requirement that speech restrictions be limited to contexts in which the greatest degree of harm may arise, this use of a foreseeable harm liability standard suggests that it also matters how foreseeable the specific harm in question was to the perpetrator.

Trademark infringement claims, along with impersonations of a public official, which Justice Breyer also considered, are surprisingly analogous to deepfakes.225This is a different similarity than that which Justice Breyer considered between trademark infringement and the Stolen Valor Act. In both contexts, the tortfeasor tries to capitalize on the inherent authority and believability of the brand they are attempting to resemble, or the role they are pretending to hold, to alter the target audience’s perception of them.226See Sunstein, supra note 91, at 422; Ice, supra note 103, at 434 (describing the authority and believability garnered by deepfakes); Alvarez, 567 U.S. at 735 (Breyer, J., concurring) (explaining the role of seeking credibility when impersonating a public official). As Justice Breyer noted, this is more than “mere speech.”227Alvarez, 567 U.S. at 735 (Breyer, J., concurring). As for how Justice Breyer distinguished between conduct that can be prohibited and “mere speech,” in the context of impersonating a public official, he stated that these statutes “typically focus on acts of impersonation.” Id. The act of creating a deepfake or altering its disclaimer would fall comfortably into the same category. Impersonating a public official, like publishing a deepfake, is an act meant to gain credibility—more credibility than is attainable by simply making a false statement. And this attempt to misappropriate credibility should have consequences. If a creator invests in making a deepfake believable, they should be able to predict that a viewer will actually act on that deepfake and change their behavior. Thus, the more likely a speaker’s conduct is to change their audience’s behavior in accordance with the false statement, the more appropriate liability is for the perpetrator.

2. Other Applications of the “Reasonable Person” Standard

Beyond the First Amendment examples listed in Justice Breyer’s Alvarez concurrence, it is important to recognize that foreseeable harm is far from a novel legal standard. Foreseeability has a well-established history in tort and contract law.228Foreseeable harm is considered in a variety of tort and contract claims, such as harmful battery, see Battery, Legal Info. Inst.: Wex, []; intentional and negligent infliction of emotional distress, see Intentional Infliction of Emotional Distress, Legal Info. Inst.: Wex, [], and Negligent Infliction of Emotional Distress, Legal Info. Inst.: Wex, []; and for determining when a party that breaches a contract is liable for consequential damages. Hadley v. Baxendale, 156 Eng. Rep. 145 (Ex. 1854). Specifically, in the First Amendment context, foreseeability is considered for determining the characterization of “fighting words” and incitement. Gersh v. Anglin, 2018 WL 4901243, at *3 (D. Mont. May 3, 2018). Notably, Gersh was decided after Alvarez and referenced Alvarez in its discussion of exceptions to the First Amendment right to free speech. Id. (quoting Alvarez, 567 U.S. at 717). Notably, though the reasonable person standard advocated for here is not the same as the reasonable person standard incorporated into current deepfake laws, it is still a form of the reasonable person standard. In fact, it simply flips the current standard on its head. Rather than asking if a reasonable person—the viewer—would be deceived by the deepfake, my proposal asks whether a reasonable person—this time, the publisher or sharer of the deepfake—would have known that harm would result.229For a discussion of a similar approach to assigning liability in defamation cases, see Alex B. Long, All I Really Need to Know About Defamation Law in the Twenty-First Century I Learned from Watching Hulk Hogan, 57 Wake Forest L. Rev. 413, 459 (2022) (discussing how the Texas Supreme Court in the case of New Times, Inc. v. Isaacks, 146 S.W.3d 144, 162 (Tex. 
2004), asked if “the publisher either kn[e]w or ha[d] reckless disregard for whether the article could reasonably be interpreted as stating actual facts”). This Note argues that, in the context of politically oriented deepfakes, the Texas Supreme Court’s analysis should go a step further and dispense with the reasonable viewer requirement so long as the publisher could reasonably foresee the harm to follow. This version of the reasonable person standard is meant to ascertain whether “an ordinary person in [the] same circumstance would have reasonably acted in the same way.”230Foreseeability, Legal Info. Inst.: Wex, []. It is an evaluation of the tortfeasor’s behavior, rather than a measure of the victim’s susceptibility.

Current laws, which ask if the deepfake would deceive a reasonable viewer, mirror harassment claims, which ask if the defendant’s conduct would have made a reasonable person feel harassed.231See, e.g., Harris v. Forklift Sys., Inc., 510 U.S. 17, 21 (1993) (reaffirming the standard that, to be actionable, workplace discrimination must be both subjectively and objectively “hostile or abusive” (citing Meritor Sav. Bank v. Vinson, 477 U.S. 57, 64–65, 67 (1986))). In harassment cases, in addition to determining whether the claimant actually found the conduct offensive, courts ask whether a reasonable person would have found the conduct offensive.232See Harassment, Legal Info. Inst.: Wex, []. Harm arising from a deepfake, however, is distinguishable from the harm in harassment claims and should not be governed by the same liability standard.

On the other hand, when determining a deepfake’s harm under a reasonable foreseeability standard, we would not ask whether the viewer—the deceived—was reasonably offended. Nor would it matter whether the deceived reasonably feared a future harm to themselves, which is part of establishing an assault claim (another type of tort that utilizes the equivalent of a reasonable viewer, rather than reasonable perpetrator, standard).233Assault, Legal Info. Inst.: Wex, []. We still consider harm in the deepfake context when assessing whether the deception of the viewer harmed a candidate’s reputation or unduly influenced an election. But this is not harm to the deceived. Rather, the deceived serves as an unwitting instrument of harm to a candidate or to the election process, and it is the person whose reputation was harmed, or who was impacted by an election interference scheme, who seeks redress from the perpetrator. Thus, whether the deceived was a reasonable person should not affect liability; either way, the ultimate victim, the subject or target of the deepfake, was still harmed. So, much like in Justice Breyer’s impersonation and infringement examples, courts should ask whether the perpetrator acted reasonably—whether they should have foreseen the result of their actions, regardless of the reasonableness of the deepfake’s viewers.

Another important distinction is that, in harassment and assault, we must consider whether a reasonable person would feel harassed or threatened in order to determine whether there was an injury at all; otherwise, a claimant’s idiosyncratic reaction alone could subject defendants to punishment. In the deepfake context—much like incitement to crime234Foreseeable harm is also a recurring test in criminal law. For example, those states that determine coconspirator liability under the Pinkerton v. United States standard ask whether the harm was “reasonably foreseen as a necessary or natural consequence” of the conspiracy. Pinkerton v. United States, 328 U.S. 640, 647–48 (1946). Likewise, when considering whether a subsequent event severs the causal link from a defendant’s actions to the harm suffered, we ask whether the intervening cause was reasonably foreseeable to the defendant at the time of their misconduct. See, e.g., People v. Rideout, 727 N.W.2d 630, 633 (Mich. Ct. App. 2006) (citing People v. Schaefer, 703 N.W.2d 774 (Mich. 2005)).—the injury does not depend on the reasonableness of the viewer’s reaction. Whether the deceived viewer was reasonable or not, the victim was harmed. Therefore, just as we ask whether a defendant charged with incitement should have foreseen the consequences of their words,235Herndon v. Lowry, 301 U.S. 242, 262–63 (1937). we should ask whether a deepfake creator or distributor should have foreseen the harm, instead of asking whether the deception was reasonable.

The proposed House bill does contemplate a form of the foreseeable harm requirement. In the case of a deepfake depicting a deceased person, one of the requirements for liability is that the deepfake is “substantially likely to either further a criminal act or result in improper interference in an official proceeding, public policy debate, or election.”236DEEPFAKES Accountability Act, H.R. 5586, 118th Cong. § 1041(n)(1) (2023). However, this provision is limited to one narrow category of deepfakes, and the proposed bill also retains several of the aforementioned troublesome elements, including the reasonable viewer standard.237Id. Even within this narrow category of deepfakes subject to the foreseeable harm standard, the bill speaks to foreseeable harm in terms of election interference, but not in terms of reputational harm to a candidate or campaign.238Id. Though the House’s proposed language differs from this Note’s solution, it demonstrates that Congress is aware of and open to a foreseeable harm test for deepfake liability down the road.

Conclusion

As deepfake technology continues to develop and cases percolate in the courts, there must be an effective, workable, and constitutional approach to assigning liability for the detrimental effects of politically oriented deepfakes. In place of the current patchwork system of often ill-fitting and potentially unconstitutional rules, a foreseeable harm, totality-of-the-circumstances test will allow for flexibility without unnecessarily curtailing protected First Amendment speech. Using foreseeable harm as the standard is both more narrowly tailored than current statutes and grounded in existing Supreme Court precedent. The House of Representatives’ proposed bill to address deepfakes takes the first steps toward a foreseeable harm standard, with, hopefully, a full-scale adoption of this approach yet to come.

* J.D. Candidate, May 2024, University of Michigan Law School. I would like to thank my parents and siblings for their support throughout the writing process, Professor Don Herzog for pushing me to think critically about this problem and my proposed solution, and the many people with whom I discussed this idea or who helped edit this piece. Any remaining mistakes are my own.