On January 13, 2023, three artists filed a class-action lawsuit against a number of artificial intelligence (AI) companies, alleging copyright violations arising out of the companies’ use of the artists’ illustrations and drawings.[1] The plaintiffs in Anderson v. Stability AI, et al. are among the first to sue over an AI system’s reliance on copyrighted images to generate wholly new content for users; their suit could serve as a preview of the copyright wars that may plague AI content generation for years to come.


I.

            The Anderson plaintiffs allege copyright violations arising from the defendants’ reliance on an image generation procedure known as “Stable Diffusion.”[2] Per the plaintiffs’ understanding of the algorithm, Stable Diffusion works by training a model in two phases. In the first phase, the model takes representations of existing images—often combined with text labels describing the images’ content—and sequentially adds “noise,” or random data, to these representations until they become fully “diffused” data points. For each training image, the model produces a progression of the image from the original (e.g., a photo of a cat mid-leap) to the final representation of noise (e.g., a random series of dots). In the second phase, the model relies on these image sequences to learn how to undertake the reverse process. By comparing noisier, diffused images with versions of those images that contain slightly less noise at each stage, the model learns to predict how to remove noise from an image and thereby transform it into one more comprehensible to the human eye.[3]
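
            To make the plaintiffs’ description concrete, the first phase can be sketched in a few lines of code. The sketch below is purely illustrative and rests on assumptions not drawn from the complaint: a linear noise schedule, a toy 64x64 “image,” and function names of our own choosing. It is not Stability’s implementation.

```python
import numpy as np

def forward_diffusion(image, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Gradually add Gaussian noise to an image (the 'forward' process).

    Returns the full progression, from the original image to a final
    representation that is essentially pure noise.
    """
    betas = np.linspace(beta_start, beta_end, num_steps)  # assumed linear noise schedule
    x = image
    progression = [x]
    for beta in betas:
        noise = np.random.randn(*x.shape)
        # Each step slightly shrinks the remaining signal and mixes in fresh noise.
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
        progression.append(x)
    return progression

# A stand-in "training image": a 64x64 grid of pixel values in [-1, 1].
image = np.random.uniform(-1.0, 1.0, size=(64, 64))
steps = forward_diffusion(image)
print(f"original std: {steps[0].std():.2f}, final std: {steps[-1].std():.2f}")
```

            In the second phase, a neural network would be trained on adjacent pairs in this progression, teaching it to predict the noise that separates a noisier version of an image from a slightly cleaner one.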

            Ultimately, Stable Diffusion can learn to remove noise from a starting image to create a new one based on a user’s specifications, and it can learn to do this so well that the model can begin with a starting image that is only noise—effectively learning to create a new image out of nothing.[4] As the plaintiffs acknowledge, the outputs of Stable Diffusion image generation will almost inevitably differ from the images used to train the algorithm. “[T]he use of conditioning data to interpolate multiple latent images,” the plaintiffs explain, “means that the resulting hybrid image will not look exactly like any of the Training Images that have been copied into those latent images.”[5] But Stable Diffusion requires existing images from which to learn how to produce its original creations. In this case, the model’s training data included copyrighted images created by the plaintiffs, which the plaintiffs never licensed for the defendants’ use.
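
            The generation step can be sketched in the same spirit. The loop below is a schematic of one common denoising-sampling procedure (DDPM-style), again under assumptions of our own: the same linear noise schedule as above, with a placeholder standing in for the trained noise-predicting network. Conditioning on a user’s text prompt is omitted for simplicity, and nothing here reflects the defendants’ actual code.

```python
import numpy as np

def reverse_diffusion(predict_noise, shape, num_steps=1000,
                      beta_start=1e-4, beta_end=0.02):
    """Generate an image by starting from pure noise and repeatedly
    removing the noise that a trained model predicts."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = np.random.randn(*shape)  # the "starting image" is pure noise
    for t in reversed(range(num_steps)):
        eps = predict_noise(x, t)  # the model's estimate of the noise in x
        # Remove the predicted noise component (standard denoising update).
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # Re-inject a little randomness at every step but the last.
            x += np.sqrt(betas[t]) * np.random.randn(*shape)
    return x

# Placeholder predictor: a real system would use the trained network here.
dummy_predictor = lambda x, t: np.zeros_like(x)
sample = reverse_diffusion(dummy_predictor, shape=(64, 64))
print(sample.shape)  # (64, 64)
```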


II.

            The essence of the plaintiffs’ complaint is that images produced by the defendants’ Stable Diffusion model are derivative of the copyrighted content made by the plaintiffs and other putative members of their class. As a general rule, copyright owners are protected against the creation of unauthorized derivations of their copyrighted works.[6] Here, the plaintiffs allege that the defendants have functionally created a collage of the plaintiffs’ images in a manner that makes any image output derivative of the plaintiffs’ work. The claim is that any image created by the defendants’ models is “necessarily a derivative work, because it is generated exclusively from a combination of the conditioning data and the latent images, all of which are copies of copyrighted images. It is, in short, a 21st-century collage tool.”[7]

            It is not immediately clear how the plaintiffs think the defendants’ Stable Diffusion image generation counts as a “collage tool.” Copyright disputes centered on collages involve visual works that incorporate recognizable pieces of other works. Artworks by Jeff Koons and Andy Warhol, for example, have been challenged on the grounds that they incorporate recognizable—if perhaps transformed—portions of photographs taken by others.[8] But the plaintiffs here have not alleged that Stable Diffusion produces images depicting any individuals or objects identical to those found in the plaintiffs’ works; if anything, the plaintiffs appear to acknowledge that Stable Diffusion does not replicate elements of their work in this sense. So on the ordinary meaning of the term, the plaintiffs may have trouble substantiating the argument that the defendants’ product is a “collage” tool.

            The Anderson plaintiffs do not rest their case entirely on the claim that Stable Diffusion outputs collages of copyrighted images, however. Their central argument appears to be that images produced by the defendants’ model are derivative because they are “generated exclusively from a combination of the conditioning data and the latent images, all of which are copies of copyrighted images.”[9] This argument highlights a tricky issue in assessing the copyright status of AI image generation. While creative work undertaken by humans is influenced to a large degree by their consumption of copyrighted materials, human creators are also shaped by numerous conversations and life experiences that are not themselves legally protected. By contrast, the Anderson defendants are training a machine to “learn” how to produce content on the basis of images that are all—allegedly—copyrighted works of human authorship. The result is an AI system whose outputs can only reflect the creative work it was trained on, but which does not appear to reproduce the training images directly.

            Unfortunately, there is little or no precedent governing the copyright status of derivations created by non-human programs on the basis of copyrighted inputs. The closest analog to a case like Anderson may be the creation of a romance novel by a programmer named Scott French in 1993.[10] French wrote the novel with the help of a program that encoded thousands of writing rules meant to generate prose in the precise manner of bestselling writer Jacqueline Susann, based on two of her novels; he titled the resulting work Just This Once: A Novel Written by a Computer Programmed to Think Like the World's Bestselling Author As Told To Scott French. Although the program emulated Susann’s style as precisely as possible, it did not directly copy text from her work. French nonetheless eventually agreed to settle any copyright claims with Susann’s estate.[11]

            Because French and Susann’s estate settled, we lack a judicial opinion on whether French’s computer-assisted novel was derivative of Susann’s published work. We will likely have to wait for decisions in cases like Anderson to resolve the derivation problem with respect to computer programs far more capable than those copyright law has dealt with in the past.


[1]     A public version of the complaint is available at https://stablediffusionlitigation.com/pdf/00201/1-1-stable-diffusion-complaint.pdf. For previous discussion on this blog of the AI image generation problem, see https://journals.library.columbia.edu/index.php/stlr/blog/view/462; https://journals.library.columbia.edu/index.php/stlr/blog/view/471.

[2]     Complaint ¶ 50, Anderson v. Stability AI, et al., No. 3:23-cv-00201 (N.D. Cal. 2023).

[3]     See Anderson Complaint ¶¶ 65-100.

[4]     For a site allowing users to test Stable Diffusion image generation with their own prompts, see https://stablediffusionweb.com/#demo.

[5]     Anderson Complaint ¶ 93.

[6]     See Nimmer on Copyright § 3.06.

[7]     Anderson Complaint ¶ 90.

[8]     Blanch v. Koons, 467 F.3d 244, 246 (2d Cir. 2006); Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 992 F.3d 99, 104 (2d Cir.), opinion withdrawn and superseded on reh'g sub nom. Andy Warhol Found. for Visual Arts, Inc. v. Goldsmith, 11 F.4th 26 (2d Cir. 2021), cert. granted, 212 L. Ed. 2d 402 (Mar. 28, 2022).

[9]     Anderson Complaint ¶ 90. Note that it is almost certainly false that “all” of the images used by the defendants are copyrighted, as the plaintiffs allege here. Earlier in the complaint, the plaintiffs claim that the defendants’ training data includes “countless copyrighted images.” Id. ¶ 6. This allegation is probably closer to the truth, and it exposes another difficulty with the plaintiffs’ case: for any given output image, it is at least possible that the defendants’ model could have produced that image even if trained on a smaller data set that excluded all of the copyrighted works in the actual training set.

[10]   See Tal Vigderson, “Hamlet II: The Sequel: The Rights of Authors vs. Computer-Generated Read-Alike Works,” Loyola of L.A. L. Rev. 21 (1994), https://digitalcommons.lmu.edu/cgi/viewcontent.cgi?article=1888&context=llr.

[11]   Id. Note that French’s program was not able to produce an entire novel on its own in the style of Jacqueline Susann. According to French, the program only wrote about 10% of the novel on its own, with the rest written by French independently or in collaboration with the program.