Since popular awareness of the abilities of ChatGPT and several A.I. image generators has spread, the public has struggled with the implications of this technology – will it be a tool, make us obsolete, or a bit of both? Unless and until Congress figures out how to legislate on complex, nuanced issues again and addresses A.I., the answer for visual artists and writers lies in copyright law.
While there are many world-changing implications of artificial intelligence, one of its harshest effects is already apparent in copyright law – or rather, in subverting it. A human mind pulls from the Zeitgeist in a way that is still mysterious. We synthesize this information into a new, copyrightable work of art – even when a new work stands in contrast to a previous one, it is often understood in relation to another work or within a cultural context, and has touchstones involving shared symbolism. However, should an author use too much of any one work, they could infringe on that work's copyright and be sued, losing money and/or the right to use the infringing work.
A machine producing artwork or writing, pulling a tiny bit from millions of sources, could essentially scrub any claim of infringement from a piece of work that contains no human imagination – it is entirely the product of an algorithm taking parts of (likely mostly copyrighted) material with no contribution from a human besides a verbal prompt. This threatens the rights of the authors of the works the A.I. draws on – and of visual artists and writers in general – as A.I.'s abilities improve.
To better understand where we are likely heading, let’s review some current legal guidelines that will need to evolve to meet the challenges that regulating A.I. represents.
Copyright Law & Fair Use
Copyright automatically vests in the author of an original work (though registration is a good idea), and copyright law protects the rights of human creators to their original works. Some of the most interesting and nuanced decisions within this area of law concern how much input is required for copyright and how much of another's work an artist or writer can utilize while still calling the result their own.
How much of another's work you can utilize without infringing – "fair use" – is a highly complex topic. It is judged along four factors:
- Purpose and character of use: Is it for-profit? Has the work been altered enough to be considered transformational? Is the purpose of the taking from the original work for news, parody, critique, or another purpose that would require a degree of taking from the original?
- Nature of the copyrighted work: Is it an original, expressive, artful creation, or is it more factual, with creativity expressed merely in choosing how to present the facts?
- Amount and substantiality of the portion taken: Was the taking of small, discrete pieces, or large portions and/or important, central, and unique themes?
- Effect of the use on the potential market for the original: Will it drive the prices down for the original work?
Currently, how much an artist can appropriate under fair use is being argued at the Supreme Court in Andy Warhol Foundation for the Arts v. Goldsmith (click here for my breakdown of the case). The short version is that artists need to be able to borrow at least a small amount of inspiration from other works, but the amount allowed is limited, and court decisions don’t always provide much context.
The amount allowed to be taken can vary depending on the purpose of the secondary work. Some works, such as parody or news, require borrowing at least some amount of an original work to associate the statement being made with that original. However, what qualifies as parody or news can be subject to interpretation.
For example, Air Pirates Funnies, a sexually explicit, drug-laden parody comic book featuring Disney characters, was taken to court by Disney. The court acknowledged the importance of fair use and allowed that some copying is permissible but, without drawing a particularly clear line, held that Air Pirates took too much and was therefore infringing.
The only clear rule for fair use currently is similar to shooting at the king: don’t miss.
A History of Automated Generation
Automated processes in the arts are nothing new – books and other written materials have been printed with increasing frequency in Western society since the early printing guilds in Germany, and a best-selling fan fiction of Don Quixote had Miguel de Cervantes scrambling to put out his own sequel (in which he brutally criticizes the printing guilds for what today would be called copyright infringement).
In 1884, the US courts determined that a photograph qualifies for copyright in Burrow-Giles Lithographic Co. v. Sarony. A lithographer was selling prints of a photo without the photographer's permission, and the photographer claimed copyright infringement. The court held that even though an automatic process mixes the chemicals and produces the photo once the button is pushed, setting up the shot required enough artistic input from the photographer that he received the copyright. However, in Naruto v. Slater, when a monkey took a selfie, nobody could claim authorship because no human was involved; the work's creation was split entirely between a non-human primate (copyright rule: no monkeys!) and a device that produced the photo through a mechanical process with no human input.
The copyright office once summed it up, in essence, as a single question: did you use the device (a camera, an A.I.) to assist you in creating a work of original authorship, or was the entire thing conceived and executed by the device (occasionally in conjunction with a monkey)?
While there are obvious parallels between Burrow-Giles and the current case of emerging A.I. technology (creating a prompt resembles setting up a photo), the camera in Burrow-Giles did not build an image from the works of several previous photographers. The real issue with A.I. and copyright is the ability to pull less than an infringing amount from many works, essentially copyright-scrubbing the final product.
While the copyright office (not a legislative body) has done what it can to clarify, this leaves many questions still being debated in society – questions that are likely years away from being addressed legislatively or in the courts. For starters, are the prompts fed into a generative A.I. for art or writing similar to setting up a photograph? Does the copyright partially belong to the coder who made the A.I., or will coders be treated like a company that made a camera? Who bears liability for infringement or other legal issues? Will A.I. be detectable in finished images and then held to a higher infringement standard than human artists, to discourage increasingly modest or subtle takings by an A.I.?
A.I. Applications Thus Far
We've all interacted with A.I. at some point, like Siri or Google Assistant. Its uses run from the mundane, like a chatbot on a lawyer's blog, to the critical, like an A.I. that assists doctors with diagnoses, to the oddly intimate, like the digital girlfriends that take a stunning amount of gender-specific abuse from incels.
One example of how this gets complex in a professional setting is the case of Kris Kashtanova. Kashtanova is an artist who generated images using A.I., arranged them to illustrate a comic book story she wrote, and used the resulting art for the panels. She is the first artist to receive a copyright for work involving A.I. The work qualifies as a compilation – selecting, coordinating, and arranging non-copyrightable elements (e.g., the entries in a phonebook) can create a copyrightable whole out of elements that would not independently be copyrightable (e.g., a collage). The copyright office approved the registration without deciding whether the generated images are independently copyrightable. While this decision does not settle whether A.I.-generated art is copyrightable, the copyright office has shown that it will not reject a work merely because it incorporates A.I.-generated artwork.
Further professional uses include communications directors using ChatGPT to generate a raw product, which they then edit. In my own life, in the dregs of the legal profession doing document review (searching documents for language required to be handed over as part of a subpoena), A.I. has grown by leaps and bounds and has begun to affect employment patterns.
Now, let's be sure not to glorify this job – it's the intellectual equivalent of an assembly line where one marks whether a document, such as an email, website, or text message, contains the language sought. On the bright side, it forms a safety net for the legal profession, as document review firms are mostly looking for warm bodies belonging to certified attorneys that they can filter responsibility through. In any given room of 50 reviewers, 35 are listening to podcasts or YouTube, 7 are daydreaming, and 3 are either storming out screaming about how this job is not why they took out law school loans or editing their suicide note. Maybe 5 are actually engaged enough to be promoted to quality control review, where they clean up the mess left by the other 45.
Over a year or so, I watched the markings made by artificial intelligence become, well, more intelligent. I keep my ear to the ground for opportunities and for the status of my profession's safety net – the income has steadily dwindled, even as many reviews allow work from home and firms carry less overhead without the need to rent office space in Manhattan.
Also uncomfortable from my perspective: writers of every kind are starting to use A.I. to do their assignments and merely edit the output, an A.I. has passed the Bar exam, and an A.I. was set to start defending people in court before the bar stepped in, citing the unlicensed practice of law. While I expect the Bar to continue to protect the profession (and part of my salary), how long before an attorney simply signs off on an A.I.'s work?
This would be a boon for consumers who no longer need the time and attention of a highly paid professional (who has about $100,000 in student loans and cannot afford to drop their prices). In some specialties, algorithms are already as good as or better than doctors at diagnosing patients – should we outlaw A.I. for medical purposes to protect medical careers, or would the patient's needs trump this, resulting in at least some work being taken out of the medical field? To add to the discomfort, much of the time an A.I. cannot explain how it arrived at a conclusion in a way that a human can understand and replicate – its internal logic does not translate cleanly into any human language, and the difference in abilities between humans and A.I. will only continue to grow. At some point, we will have created the intellectual equivalent of a god – but for now, Skynet is mostly making over-sexualized images of women and John Oliver.
As most professions face dramatic changes over the next few years due to A.I. outsourcing, we will see whether workers become more akin to pilots with difficult-to-obtain certifications signing off on an A.I., or are replaced outright. Either way, there's no way to put this genie back in the bottle, and government regulation will be a major part of defining the contours of what may be the most disruptive technology since the internet… if legislators ever stop bickering in front of cameras about culture war issues, that is.
P.S.: This article was edited with the assistance of the A.I. from Grammarly.