Some artists have begun a legal fight against the alleged theft of billions of copyrighted images used to train AI art generators to reproduce unique styles without compensating artists or seeking their consent.
A group of artists represented by the Joseph Saveri Law Firm has filed a US federal class-action lawsuit in San Francisco against the artificial intelligence art companies Stability AI, Midjourney, and DeviantArt, alleging violations of the Digital Millennium Copyright Act, violations of the right of publicity, and unlawful competition.
The artists taking action – Sarah Andersen, Kelly McKernan, and Karla Ortiz – “seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work,” according to the text of the complaint filed with the court.
Using tools like Stability AI’s Stable Diffusion, Midjourney, or the DreamUp generator on DeviantArt, people can type text prompts to create artwork similar to that of living artists. Since the widespread rise of AI image synthesis over the past year, AI-generated artwork has been highly controversial among artists, sparking protests and culture wars on social media.

Notably absent from the list of companies named in the complaint is OpenAI, creator of the DALL-E image synthesis model that arguably kick-started mainstream generative AI art in April 2022. Unlike Stability AI, OpenAI has not publicly disclosed the exact contents of its training dataset, and it has commercially licensed some of its training data from companies such as Shutterstock.
Despite the controversy over Stable Diffusion, the legality of how AI image generators work has not been tested in court, although the Joseph Saveri Law Firm is no stranger to legal action against generative AI. In November 2022, the same firm filed a lawsuit against GitHub over its Copilot AI programming tool for alleged copyright violations.
Thin legal arguments, ethical violations

Alex Champandard, an AI analyst who has defended artists’ rights without dismissing AI technology entirely, criticized the new lawsuit in several Twitter threads, writing, “I don’t trust the lawyers who submitted this complaint, based on the content + how it’s written. The case could do more harm than good because of this.” Still, Champandard thinks the lawsuit could be damaging to potential defendants: “Anything the companies say to defend themselves will be used against them.”
As for Champandard’s point, we’ve noted that the complaint includes several statements that potentially misrepresent how AI image synthesis technology works. For example, the fourth paragraph of section I states: “When used to produce images based on input from its users, Stable Diffusion uses the Training Images to produce seemingly new images through a mathematical software process. These ‘new’ images are based entirely on the Training Images and are derivative works of the particular images that Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool.”
In another section that attempts to describe how latent diffusion image synthesis works, the plaintiffs incorrectly compare the trained AI model to “having a directory on your computer of billions of JPEG image files,” claiming that “a trained diffusion model can produce a copy of any one of its Training Images.”
During the training process, Stable Diffusion drew from a large library of millions of scraped images. Using this data, its neural network statistically “learned” how certain image styles appear without storing exact copies of the images it has seen. That said, in the rare case of images that are overrepresented in the dataset (such as the Mona Lisa), a kind of “overfitting” can occur that allows Stable Diffusion to output a close representation of the original image.
Ultimately, if properly trained, latent diffusion models always generate novel imagery and do not collage or duplicate existing work, a technical reality that potentially undermines the plaintiffs’ copyright infringement argument, although the status of “derivative works” created by AI image generators is an open question with no clear legal precedent that we know of.
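To make the distinction concrete: a diffusion model is trained to predict the noise that was mixed into an image, and generation works by reversing that noising step by step; the learned weights encode statistical patterns, not stored copies of training images. The toy sketch below is our own illustration of that forward/reverse relationship, not Stable Diffusion’s actual implementation; the array size and schedule value are arbitrary assumptions.

```python
import numpy as np

# Toy illustration only: a diffusion model learns to predict noise,
# it does not archive its training images like JPEGs in a directory.
rng = np.random.default_rng(0)

# Stand-in "image": a small 1-D array of pixel-like values.
x0 = rng.uniform(0.0, 1.0, size=16)

alpha = 0.5                      # assumed noise-schedule value at one timestep
eps = rng.standard_normal(16)    # Gaussian noise

# Forward (noising) process: blend the image with noise.
xt = np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * eps

# A trained network would *estimate* eps from xt; here we substitute the
# true eps to show that the reverse step algebraically recovers x0.
x0_hat = (xt - np.sqrt(1.0 - alpha) * eps) / np.sqrt(alpha)

print(np.allclose(x0_hat, x0))  # True
```

In a real model the noise estimate is imperfect, so outputs are novel blends of learned statistics rather than recoveries of any one training image, except in the overfitting edge cases noted above.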
Some of the complaint’s other claims, such as unlawful competition (duplicating an artist’s style by using a machine to replicate it) and right-of-publicity violations (allowing people to request artwork “in the style” of existing artists without permission), are less technical and might have legs in court.
Despite its problems, the lawsuit comes after a wave of anger over a lack of consent from artists who feel threatened by AI art generators. By their own admission, the tech companies behind AI image synthesis scraped intellectual property to train their models without artists’ consent. They are already on trial in the court of public opinion, even if their conduct ultimately complies with established case law regarding the scraping of public internet data.
“Companies that build large models relying on copyrighted data can get away with doing it privately,” tweeted Champandard, “but doing it openly *and* legally is very hard, if not impossible.”
Should the case go to trial, the courts will have to sort out the differences between ethical violations and alleged legal ones. The plaintiffs hope to prove that the AI companies benefit commercially and profit richly from the use of copyrighted images; they have asked for substantial damages and permanent injunctive relief to stop the allegedly infringing companies from further violations.
When contacted for comment, Stability AI CEO Emad Mostaque responded that the company had not received any information about the lawsuit as of press time.