On Feb. 13, the Carleton Philosophy Department hosted Camila Flowerman, a Postdoctoral Fellow in Philosophy in the Embedded EthiCS program at Harvard University. Flowerman’s lecture, entitled “Is Creating AI Art Plagiarism?”, tackled one of the most contentious debates in contemporary digital culture: Is AI-generated art created by text-to-image models a form of plagiarism? Flowerman began her lecture by introducing several initial formulations of the criticism of text-to-image models that have appeared either in online discourse or in the philosophical literature. Finding none of them successful, Flowerman proposed a different approach to critiquing the way in which text-to-image models are created.
According to Flowerman, AI-generated art is created through text-to-image models. These models — trained on vast datasets of existing artwork — can generate highly detailed images based on text prompts through a process called diffusion.
“What ends up happening in the training of these models is that the model learns to generate a clarified image through this [diffusion] process where they take an original image, they add noise to it,” said Flowerman. “Then the model learns how to re-get this original clarified image.”
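For readers curious about the mechanics Flowerman described, the sketch below illustrates the noising step at the heart of diffusion training. It is a minimal, illustrative Python example under simplified assumptions, not code from any actual text-to-image system; the toy 64×64 array, the blending formula and the zero “model” are placeholders where a real system would use image datasets and a large neural network.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_noise(image, t):
    """Blend an image with Gaussian noise; t in (0, 1) sets the noise level."""
    noise = rng.standard_normal(image.shape)
    noised = np.sqrt(1.0 - t) * image + np.sqrt(t) * noise
    return noised, noise

# One greatly simplified "training step": the model is shown the noised image
# and scored on how well it recovers what was added. Repeated over many images
# and noise levels, this teaches it to generate a clarified image from noise.
image = rng.standard_normal((64, 64))    # stand-in for one training artwork
t = rng.uniform(0.1, 0.9)                # randomly chosen noise level
noised, true_noise = add_noise(image, t)

predicted_noise = np.zeros_like(noised)  # placeholder for a real neural network
loss = float(np.mean((predicted_noise - true_noise) ** 2))
print(f"noise level t={t:.2f}, denoising loss={loss:.3f}")
```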
Citing one particular claim about why AI art constitutes plagiarism, Flowerman quoted a social media post: “‘[AI art] uses stolen artwork from hundreds of artists without permission.’” She identified three generally accepted conditions that must be met for something to constitute plagiarism. Acknowledging that there is no perfect definition of plagiarism, Flowerman invited the audience to contribute additional conditions during the Q&A session.
Gesturing to the presentation, Flowerman explained her three conditions. First, there is the authorship condition, in which one “claim[s] illicit ownership over a product that’s not new, claim[s] creative ownership over work that one did not produce,” and “present[s] an idea or product derived from an existing source that is not new or original.” Second, there is the reward condition, which is met if “one claims the credits or rewards for work created in large part or entirely by another.” Third, there is the no-reference condition, which occurs when one “fail[s] to attribute credit [where] necessary,” and thus “steals or passes off some content as one’s own without crediting the source.”
Flowerman explained that one formulation of why AI art constitutes plagiarism centers on the notion of stealing. “The images that result from text-to-image models are an instance of plagiarism because their product/training involves stealing the work of human artists,” she said.
According to Flowerman, this does not meet the authorship condition or the reward condition.
She acknowledged that text-to-image models use artists’ work and images as training data, but said the resulting AI art is not a direct reproduction of that data. “[Text-to-image models] are trained through the use of these images, but they don’t just reproduce images from their training data directly,” Flowerman said.
According to her, the imperfect application of training data also leads the models to produce weird or unexpected images; she showed the audience an AI-generated image of pink slabs of salmon swimming upstream as an example.

“It was a great talk,” attendee Gabe Seidman ’26 noted. “I liked the image of the pink salmon fillets swimming upstream.”
“This doesn’t meet the authorship condition or the reward condition because they’re not just reproducing images directly from the training set,” Flowerman said. “They’re not claiming authorship over something that’s already being created; they’re creating something new … It’s also not clear exactly whether they need to meet the no reference condition.”
The second formulation involves the fact that text-to-image models use images from human artists as training data without permission. “Artists have been using each other’s work as inspiration for…all of time,” Flowerman said. “So it’s not obvious, at least on the surface, and we need more arguments.”
“Why should we feel different about models using images to train?” Flowerman asked. “And it’s not clear [that] artists have a right to opt out of their work being used as inspiration.”
This formulation also does not meet the no-reference condition. “There are many instances, I think, of artists seeking inspiration in their work from other artists, and there are lots of important moments about when attribution is due, but I think that this isn’t necessarily going to be part of those cases,” Flowerman said. “Also because the models are using the training images, they’re not claiming ownership or authorship of the training images.”
The third formulation involves the notion of giving credit when text-to-image models produce images that resemble another image or another artist’s style. “Creators can prompt models to output images that look a lot like the work of specific human artists without permission and/or proper attribution, and this constitutes plagiarism,” Flowerman said.
“And there is a way where it seems like this could constitute some form of plagiarism,” she added. Citing a Twitter user’s AI-generated image that mimics the iconic Studio Ghibli art style, Flowerman found that many creators are not trying to hide the fact that a particular image was generated by AI.
“In some cases, or many cases, this is a little bit unclear whether this is plagiarism,” Flowerman explained. “In a lot of instances where creators are doing this, they’re often not trying to pass it off with their own work or benefit directly in some ways. But the idea is that, in many cases, users are interested in these models for very personal reasons. They’re not necessarily big entities.”
Furthermore, the human generating the image often gives credit by including the prompt. “So just to say, ‘Ghibli vibes,’ I mean, in a sense they are saying ‘that was my reference,’” Flowerman said. “They’re sort of acknowledging that the ownership is contested by giving a reference.”
In light of the three unsuccessful formulations of why AI art is a form of plagiarism, Flowerman advanced her own view. “Companies are plagiarizing by taking credit for a supposedly novel product and not fairly attributing credit or compensation,” she said. “Companies use scraped material as a part of their creative process in developing a product that generates profit for these same companies, and this constitutes plagiarism.”
“In my talk I outlined three conditions for having plagiarized something, one of which is the ‘no reference’ condition, which involves failing to attribute credit where necessary, or trying to pass something off as one’s own without crediting the source,” Flowerman wrote. “If my argument is successful, then on my view, certain tech companies meet the no reference condition (and thus might be guilty of plagiarism) because they fail to credit an integral source material used in the creative process by which they develop their models: the work of human artists.”
“Large tech companies have often operated with a kind of ‘finders-keepers’ attitude towards the data they collect, as though this information was developed and exists context-free, and is just there to be used however they want,” Flowerman continued. “But in reality, these data are the product of many individual instances of unique creative processes that deserve acknowledgement in some form or other.”
While this meets the reward condition, Flowerman acknowledged that her formulation does not fully meet the authorship condition, which involves claiming illicit ownership over a product that is not new. “I think this is the trickiest one, because I do think … in a real sense, these images are new,” Flowerman concluded.
With further regard to the authorship condition, Jason Decker, Professor of Philosophy, added that there might be additional fringe cases of plagiarism in which the creator does not want authorship over the plagiarized work, complicating the situation.
“One might wonder if there are analogs of this sort of (non-credit-taking) plagiarism that are relevant to AI plagiarism worries,” Decker added. “One sort of case that springs to mind is the possibility of ‘news’ websites that are populated by paraphrased versions of news stories that are found on real news sites like the New York Times.” One particular worry, then, is that the creators of such sites are plagiarizing and reaping monetary rewards without claiming authorship of the work. (The Carletonian had several articles posted on one such site without accurate attribution.)
“The AI-generated news site might misattribute authorship to ‘reporters’ who don’t actually exist,” Decker concluded. “The creator of the website isn’t looking to themselves take credit for the news pieces; in fact, they’d rather not have the website traced back to them at all. They just want the revenue such a site can produce. That strikes me as a kind of plagiarism where the plagiarizer isn’t trying to take credit for the work.”