“Designating Responsibility for Visual Libel”

The article is here; the Introduction:

In the 1994 film Forrest Gump, a cleverly created scene has Tom Hanks’s character, Forrest Gump, meeting President John F. Kennedy. The newsreel voice-over begins: “President Kennedy met with the members of the all-collegiate football team today in the Oval Office.” The narration is picked up by Gump: “Now the really good thing about meeting the President of the United States is the food. . . . I must have drank me about fifteen Doctor Peppers.” By the time it is his turn to meet the President, however, the sodas have taken their toll on an increasingly anxious Gump. Kennedy is seen asking most players, “How does it feel to be an All-American?” To Gump, he simply says, “How do you feel,” to which Gump answers honestly, “I gotta pee.” Kennedy laughs, commenting to the reporters, “I believe he said he had to go pee.” This famous interaction between the fictional character and the long-dead president remains shocking in its apparent—but illusory—authenticity.

Two decades later, the technology to construct such scenes has gone from a feat of amazing cinematographic wizardry to common internet filler. Kendrick Lamar used deepfake technology to morph his image into that of “O.J. Simpson, Kanye West, Jussie Smollett, Will Smith, Kobe Bryant, and Nipsey Hussle.” In March 2023, a photograph of “Pope Francis wearing a big white Balenciaga-style puffer jacket” became an internet staple. Unsurprisingly, synthetic media has also been used for military disinformation. In the Russian war against Ukraine, a video depicting Ukrainian President Volodymyr Zelenskyy ordering Ukrainian troops to lay down their arms and surrender appeared on social media and was briefly broadcast on Ukrainian news. Some synthetic content has already found commercial adoption, such as the replacement of South Korean news anchor Kim Joo-Ha with a synthetic look-alike on South Korean television channel MBN, or one company’s introduction of internet influencer Lil Miquela, an alleged nineteen-year-old, as its spokesperson. In reality, Miquela is an entirely artificial avatar created by AI media agency Brud. She has over 3 million Instagram followers and has participated in brand campaigns since 2016. She is expected to earn Brud in excess of $1 million in the coming year for her sponsored posts.

“Over a few short years, technology like AI and deepfaking has advanced to the point where it’s becoming really quite difficult to see the flaws in these creations.” Nor does it necessarily require artificial intelligence technologies to create false narratives from realistic-looking photographs and videos. “Sharing deceptive photos or misinformation online doesn’t actually require a lot of talent. Often, just cropping a photo or video can create confusion on social media.” As the FTC has recently noted, “Thanks to AI tools that create ‘synthetic media’ or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference. And just as these AI tools are becoming more advanced, they’re also becoming easier to access and use.”

The release of OpenAI’s DALL-E 2, Stability AI’s Stable Diffusion, and Midjourney Lab’s Midjourney image generator dramatically expanded the universe for synthetic imagery generated entirely by text prompts rather than by feeding the computer system preexisting pictures and videos. In the earlier AI training models, deepfakes were created primarily by generative adversarial networks (GANs), a form of unsupervised machine learning in which a generator network competes with an “adversary, the discriminator network,” which tries to distinguish between real and artificial images. In contrast, the more recently adopted diffusion model of training involves adding noise to images so that the system learns to recover the underlying visual elements from the noisy data. Diffusion models are analogous to the large language models used for OpenAI’s ChatGPT, Google’s Bard, and other text-based AI platforms. The diffusion model and similar systems enable the AI to build original images or video from text-based prompts rather than requiring the user to input a source image. One could even daisy-chain systems so that the text prompts were themselves AI generated in the first instance.
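For readers curious about the mechanics, the “adding noise” step the article describes can be illustrated with a toy sketch. This is not code from any production image generator; it is a minimal, simplified illustration of the forward (noising) half of a diffusion process, using a random array as a stand-in for an image. A real system would then train a neural network to reverse this corruption, which is the part that enables image generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": an 8x8 grayscale array with pixel values in [0, 1].
image = rng.random((8, 8))

def add_noise(x, t, num_steps=100):
    """Forward diffusion step (simplified): blend an image toward Gaussian noise.

    At t=0 the image is returned untouched; as t approaches num_steps,
    the output is almost entirely noise. A diffusion model is trained to
    reverse this corruption, learning to recover (and ultimately generate)
    images from noise.
    """
    alpha = 1.0 - t / num_steps           # fraction of signal retained
    noise = rng.standard_normal(x.shape)  # Gaussian noise to mix in
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise

slightly_noisy = add_noise(image, t=10)   # image still largely visible
mostly_noise = add_noise(image, t=90)     # image nearly destroyed
```

Training on many such (noisy image, noise) pairs at varying `t` is what lets a diffusion system start from pure noise and, guided by a text prompt, denoise its way to a brand-new image, with no source photograph required.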

There has been significant scholarship on the threats of deepfakes and synthetic media to political discourse and journalism, as well as the potential for individuals to disseminate libelous material about others and even make terroristic threats using these images and videos. Given generative AI’s ability to create AI-authored original works, there is a newer concern that the AI system will itself create works that harm individuals and the public. As with the potential risks associated with ChatGPT, images generated by AI systems may have unintended and highly inaccurate content.

This article focuses on responsibility and liability for libelous publication of generative synthetic media. It summarizes the textbook example of a person creating intentionally false depictions of the victim with the purpose of holding that individual out for hatred, contempt, or ridicule. The article then compares that example to the situation in which the AI system itself generated the content, to identify who among the parties that published the libelous images might face civil liability for that publication. Would an owner of the AI system, the platform on which the AI system was operating, the individual who created the prompts that generated the offensive imagery, or no one be liable? By providing this framework, the article should also identify the steps that can be taken by the parties involved in the AI content production chain to protect individuals from the misuse of these systems.