Carleton College's student newspaper since 1877

The Carletonian

AI lawsuits pose complicated threats to creators

On Sept. 19, the Authors Guild filed a lawsuit against OpenAI in a federal court in New York on behalf of 17 authors, each with specific complaints. The authors include Jodi Picoult, George R. R. Martin, George Saunders and more — all of whom believe ChatGPT is systematically stealing their work. The New York Times covered the story, with quotes from several authors about joining the Authors Guild due to concerns about technological intrusions into the publishing world.

OpenAI has since made a statement in response, claiming it wants to support writers and address their concerns about AI (artificial intelligence). Over the last few months, a number of these lawsuits have been filed, some by multiple authors, others by a single writer or comedian. Ultimately, they all focus on the same issue — the line between copyright infringement and fair use.

The US Copyright Office defines copyright infringement as the “reproduction, distribution, public display or making of derivative works of a copyrighted work without the permission of the owner or creator.” 

Fair use is a common defense against copyright infringement; it allows people to copy work for a “transformative” purpose. Fair use is what enables commentary, reaction and criticism-based content to exist and be monetized. On the other hand, any video using copyrighted work as background music can receive a copyright strike, even if the video is not monetized. Unfortunately, the term “transformative” is pretty vague. While most long-form transformative works are parodies, and there are previous cases that help inform later decisions, there are no real precedents for what AI is doing and no prior legal discourse on whether it can be considered fair use.

Within the most recently filed case, each author has a specific legal complaint. According to the Associated Press, Martin’s issue has to do with the use of ChatGPT to generate a prequel to the “Game of Thrones” books, using his setting and characters. This does seem like it categorically falls into copyright infringement — a prequel is clearly a derivative work. 

While there may not be a legal background for these cases, there are definitely previous examples. Using AI to generate more content for a specific fictional setting is really just a few steps away from fanfiction, which is, generally speaking, not illegal. Fanfiction creators are generally safe from legal action for two reasons: their work is not monetized and their work is transformative. While nobody is selling their DIY “Game of Thrones” prequel, the main topic of the lawsuit will likely be the question of whether or not AI is capable of doing transformative work. Is a prequel based on the same characters, using preexisting backstory, and still leading up to the same starting point transformative? Most people — including myself — would likely say it’s not. But what if the AI prompt adds a new character? Or adds a relationship between preexisting ones?

The New York Times published an article on the authors’ lawsuit in September, and the suit is ongoing. A few weeks ago, on Dec. 27, the Times filed its own separate lawsuit against OpenAI and Microsoft. Its complaint is similar; it believes the companies have been training AI with its articles. Its argument also hinges on the belief that AI has been used in the creation of competing media sources, detracting from the Times’ audience. This would be a violation of its copyright, and there’s a more concrete issue if this AI is being used for profit by other news organizations.

Fair use applies to transformative works, and in the case of the authors’ lawsuit, it does seem like there are potentially transformative approaches, if the transformation is being done by the person writing the AI prompt. When it comes to plagiarizing the Times, however, there’s no question of transformation. The specific complaints are about the repetition of information from the Times, including sentences and ideas that are otherwise stuck behind a paywall. The AI is accessing and resharing subscriber-only information, creating competition for the Times and potentially damaging its profits. 

Over the course of the last year, AI and journalism have both been frequently discussed and questioned, in and outside of the context of each other. Between September and November, as the actors’ strike was being negotiated, it became public knowledge that AI was one of the key points of conflict. Hollywood studios were vying for the rights to digitally reproduce actors’ likenesses at will, without consent or fair compensation. While this was ultimately resolved, and actors will be fairly compensated for the consensual use of their likenesses, Rolling Stone reported in November that many actors fear the consent aspect is nominal only — in other words, that actors unwilling to be digitally reproduced with AI will have a significantly harder time finding jobs.

And while it’s not a result of AI technology, last summer National Geographic laid off all of its staff writers, opting to contract only with freelance journalists instead. It’s been a bad few years for journalists, so while there’s hope for the New York Times case against OpenAI, since it’s the first filed by a major media company, there is also a very real risk that this could become another nail in the coffin for journalism.
