By Paul Coble
Chair of the Intellectual Property Department
Rose Law Group
On May 21, 2024, Gov. Katie Hobbs signed emergency House Bill 2394 into law in an attempt to curb the growing threat of digital impersonations created by artificial intelligence. The statute, A.R.S. § 16-1023 (the “Anti-Deepfake Statute”), is aimed at protecting Arizonans from being damaged by fake images of themselves as well as restricting the use of damaging impersonations in elections. These are worthy, and indeed necessary, protections given the growing ubiquity of generative-AI tools. There’s a problem, though: the statute won’t work as intended. Several gigantic gaps in the Anti-Deepfake Statute make it ineffective for the large majority of cases it was intended to address.
Most victims won’t get any actual relief. The Anti-Deepfake Statute provides three categories of relief: (1) declaratory relief; (2) injunctive relief; and (3) damages. In most cases, including impersonations of political candidates and those showing the subject nude or engaged in crimes, mild sexual acts, or reputation-damaging activities, only declaratory relief is available. (A.R.S. § 16-1023(F).) That means no injunction, no damages, and no real relief.
The last two categories, injunctive relief and damages, are available only for deepfakes that depict people who are not public figures, that are excessively sexual, and for which the publisher did not take “reasonable corrective action” after learning the impersonation was unauthorized. (A.R.S. § 16-1023(I).)
Under the Anti-Deepfake Statute, victims who are public figures, or whose digital impersonations do not depict excessively sexual acts, are entitled only to a declaration acknowledging that the image is an impersonation or deepfake.
Even non-public victims of offensive digital impersonations are unlikely to get effective relief because of provisions that seem to thwart or delay justice. Injunctive or monetary relief may be awarded only if the publisher had “actual knowledge” at the time of publication that the image was a digital impersonation or failed to take “reasonable corrective action” within 21 days of having such actual knowledge. (A.R.S. § 16-1023(I)(3).) In other words, a publisher could bury its head in the sand about a heinous sexual image, learn that it is an unauthorized impersonation within minutes of its publication, and still wait nearly three weeks to do anything about it before incurring liability under the Anti-Deepfake Statute. There are also provisions that protect publishers who label the impersonation as false or disputed.
A main target of the Anti-Deepfake Statute is digital impersonations of political candidates spreading misinformation in elections. If a digital impersonation is used in a paid advertisement, however, a cause of action under the Anti-Deepfake Statute may be brought only against those who “originated, ordered, placed or paid for the advertisement.” (A.R.S. § 16-1023(B).) Publishers who are paid by third parties to distribute the deepfake advertisement will face few or no consequences.
Bad actors can simply prepay a publisher to run an ad containing a known deepfake, and the publisher would face no meaningful penalty under the Arizona Anti-Deepfake Statute.
The Anti-Deepfake Statute takes aim at serious harms, but as drafted it does not adequately address them. Changes need to be made to ensure that all victims of deepfake technology have real protections.