8/12/2025

Hey everyone. So, let's talk about something that's been on a LOT of creative minds lately: AI & copyright. It feels like every day there's a new AI tool that can write a story, compose a song, or create a stunning piece of art. Pretty cool, right? But it also opens up this massive can of worms for creators. Your work, your style, your intellectual property... where does it all fit in this new world?
Honestly, it’s a messy, complicated, & fascinating puzzle. We've got artists suing major tech companies, the U.S. Copyright Office trying to draw lines in the sand, & a general sense of "what the heck is going on?" rippling through the creative community. It’s what I’m calling the Creator’s Dilemma.
I’ve been diving deep into this, reading up on the court cases, the legal arguments, & what experts are saying. I wanted to put together a comprehensive guide to help you navigate this. We'll break down what the law says (and doesn't say), what's happening in the courts, & most importantly, what you can actually do to protect your work.

The Core of the Problem: Can a Machine Be an Author?

Here's the thing: at the heart of all this is a really fundamental question. To get a copyright, a work needs to be created by a human. That’s been the rule for a long, long time. The U.S. Copyright Office has been SUPER clear on this. They've explicitly stated that works "created by a human being" are what the law protects. This goes back to cases like the famous "monkey selfie" incident, where a court ruled that an animal can't hold a copyright.
So, when a computer program, an AI, generates an image or a piece of text, who’s the author? Is it the person who wrote the prompt? The developers who built the AI? Or the AI itself?
Turns out, the courts & the Copyright Office are sticking to their guns: no human, no copyright.
In a really important case, a computer scientist named Stephen Thaler tried to copyright an image his AI system, called the "Creativity Machine," generated on its own. He argued the AI was the author. The U.S. Copyright Office said no, & in August 2023, a federal court backed them up, reaffirming that "human authorship is a bedrock requirement of copyright."
This means that a piece of art or text generated entirely by an AI, without significant human creative input, is likely in the public domain. Anyone can use it. This is a HUGE deal.

The Gray Area: Human-AI Collaboration

Okay, so if AI on its own can't be an author, what about when a human is heavily involved? This is where it gets murky. Most creators aren't just letting an AI run wild; they're using it as a tool, much like Photoshop or a synthesizer.
The Copyright Office has provided some guidance here. They've said that if a human creatively selects, arranges, or modifies AI-generated material, the human's contribution could be copyrightable. It's not about just typing a simple prompt. It’s about the level of creative control the human exerts.
A perfect example of this is the case of the graphic novel Zarya of the Dawn. The author, Kristina Kashtanova, wrote the story & arranged the layout, but used the AI image generator Midjourney to create the images. Initially, the Copyright Office granted a copyright for the whole book. But then they took a second look.
They ended up issuing a partial cancellation. Kashtanova kept the copyright for the text she wrote & the creative arrangement of the elements in the book. But the individual images themselves? No copyright. The Office decided they were "not the product of human authorship" because the AI's output from a text prompt is too unpredictable. They also said her edits to the images were too "minor and imperceptible" to qualify for protection.
This sets a tricky precedent. It means if you use AI in your work, you have to be able to point to what you did that was creative. Simply prompting an AI might not be enough. The Copyright Office now requires applicants to disclose the use of AI in their work, so they can assess the level of human authorship on a case-by-case basis.

The Elephant in the Room: AI Training Data & Fair Use

This is probably the BIGGEST, most contentious issue right now. How do these incredible AI models learn to create? They are "trained" on VAST amounts of data—text, images, music, code—scraped from the internet. And, you guessed it, a huge chunk of that data is copyrighted material.
This has led to a flurry of high-profile lawsuits.
  • Artists vs. AI Art Generators: A group of artists, including Sarah Andersen, Kelly McKernan, & Karla Ortiz, filed a class-action lawsuit against Stability AI (the makers of Stable Diffusion) & Midjourney. They claim these companies scraped billions of images from the web, including their art, without their consent or compensation to train their models. One artist, Kelly McKernan, found her name had been used as a prompt over 12,000 times on Midjourney's Discord server, leading to AI art that was eerily similar to her style.
  • The New York Times vs. OpenAI & Microsoft: In a blockbuster case, The New York Times sued OpenAI & Microsoft, alleging that millions of their articles were used to train ChatGPT. They argue this directly competes with their business, as the chatbot can now provide answers based on their reporting, reducing the need for users to visit their site.
  • Authors vs. Big Tech: The Authors Guild, along with famous authors like George R.R. Martin & John Grisham, are suing OpenAI for using their books to train its language models without permission.
The defense from the AI companies almost always hinges on one key legal concept: fair use.
The fair use doctrine is a part of U.S. copyright law that allows the limited use of copyrighted material without permission for purposes like criticism, comment, news reporting, teaching, or research. It’s a balancing act, and courts look at four main factors:
  1. The purpose & character of the use: Is it commercial? Is it "transformative" (meaning, does it add a new meaning or message to the original work)?
  2. The nature of the copyrighted work: Is it factual or highly creative?
  3. The amount & substantiality of the portion used: How much of the original work was used?
  4. The effect of the use on the potential market for the original work: Does the new work harm the original's market value?
AI companies argue that training their models is a transformative use. They say they aren't republishing the books or photos; they're just using them to learn patterns, much like a human student would. They also argue that this innovation is for the public good.
Creators & copyright holders, on the other hand, argue that it’s wholesale copying for commercial gain. They point out that the AI's output can be "substantially similar" to the original works & can directly compete with them, harming their ability to make a living.

What the Courts Are Saying (So Far)

These cases are still working their way through the legal system, but we're starting to see some early rulings that are giving us clues about how courts are thinking.
One of the first major decisions came in February 2025 in the case of Thomson Reuters v. Ross Intelligence. Ross, an AI company, used copyrighted "headnotes" (summaries of legal cases) from Westlaw, a Thomson Reuters product, to train its own legal search engine. The court ruled that this was NOT fair use.
The judge pointed out that Ross’s product was a direct commercial competitor to Westlaw & that the use wasn't transformative because it served the same purpose as the original work. This was a big win for copyright owners, but the judge was careful to note that the case was about a non-generative AI. The question of whether training a generative AI is fair use is still very much open.
In the artists' lawsuit against Stability AI, a judge has allowed the case to move forward, suggesting that the claims of copyright infringement are plausible. The court found the artists’ allegations that their work was copied to train the model were sufficient to be considered direct infringement.
It's clear that the "fair use" argument is going to be the central battlefield. A recent Supreme Court decision in Andy Warhol Foundation v. Goldsmith, which narrowed the scope of transformative use when a work is used for a similar commercial purpose, could make it harder for AI companies to win this argument.

So, What Can a Creator Actually DO?

Okay, the legal stuff is a headache, I know. It feels like a battle of giants happening way above our heads. But that doesn't mean you're powerless. Here are some practical steps you can take to protect your work in this new AI-driven landscape.

1. Technical Protections: Cloaking & Watermarking

This is where things get really clever. Researchers & artists are developing tools to fight back against unauthorized scraping.
  • Glaze: Developed by a team at the University of Chicago, Glaze works by adding a very subtle layer of "noise" to your images. It’s invisible to the human eye, but it "cloaks" your artistic style from AI models, making it much harder for them to mimic your work.
  • Nightshade: From the same team, Nightshade is even more aggressive; it "poisons" the training data. It tricks AI models into misinterpreting the image—for example, learning that an image of a dog is actually a cat. If enough artists use it, it could seriously degrade the quality & reliability of future AI models.
  • Watermarking & Metadata: While not foolproof, visible watermarks are still a deterrent. But you can also embed invisible watermarks & metadata in your files that contain your copyright information. This creates a digital fingerprint that can help you prove ownership if a dispute arises.
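To make the "digital fingerprint" idea concrete, here's a toy Python sketch of least-significant-bit (LSB) watermarking: it hides a copyright string in the lowest bit of each pixel value, changing each pixel by at most 1, so the mark is invisible to the eye. This is an illustration of the principle only (the function names are mine, not from any real tool); production watermarking schemes are far more robust.

```python
def embed(pixels, message):
    """Hide a UTF-8 string in the least-significant bits of pixel values."""
    data = message.encode("utf-8")
    payload = len(data).to_bytes(2, "big") + data  # 2-byte length prefix
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels):
    """Recover the hidden string from the low bits."""
    def read_byte(bit_offset):
        b = 0
        for i in range(8):
            b = (b << 1) | (pixels[bit_offset + i] & 1)
        return b
    length = (read_byte(0) << 8) | read_byte(8)
    data = bytes(read_byte(16 + 8 * j) for j in range(length))
    return data.decode("utf-8")

# Stand-in for one channel of a grayscale image
pixels = [200] * 1024
marked = embed(pixels, "© 2025 Jane Artist")
assert extract(marked) == "© 2025 Jane Artist"
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))  # imperceptible
```

One caveat worth knowing: LSB marks like this are fragile—re-encoding or compressing the image destroys them—which is exactly why dedicated watermarking tools exist.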

2. Legal Precautions

You don't need to be a lawyer to take some basic legal precautions.
  • "Do Not Train" Requests: Some platforms & initiatives are starting to emerge that allow creators to attach a "Do Not Train" credential to their work. Adobe, for instance, is pushing for this through its Content Authenticity Initiative. The idea is to create a universally recognized signal that tells AI companies not to use your content for training.
  • Website Disclaimers & robots.txt: You can add a clear disclaimer to your website or portfolio stating that your work is not to be used for AI training. The Authors Guild provides sample language for this. For the more tech-savvy, you can edit your website's robots.txt file to block specific AI web crawlers, like OpenAI's GPTBot, from scraping your site.
  • Copyright Registration: This is still fundamental. Registering your work with the U.S. Copyright Office is the strongest form of protection you can have. It gives you the legal standing to sue for infringement & to claim statutory damages.
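As a concrete sketch, a robots.txt that blocks a few known AI crawlers might look like this (these user-agent names are the ones the companies have published, but they do change—check each crawler's official documentation before relying on them):

```
# Block OpenAI's crawler from the whole site
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training (regular search indexing is unaffected)
User-agent: Google-Extended
Disallow: /

# Block Common Crawl, a dataset many AI models are trained on
User-agent: CCBot
Disallow: /
```

Keep in mind that robots.txt is advisory: well-behaved crawlers honor it, but it's a polite sign, not a locked door.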

3. Proactive Monitoring & Licensing

Don't just post your work & hope for the best. Be proactive.
  • Reverse Image Search: Regularly use tools like Google Images & TinEye to search for your artwork online. This can help you find unauthorized uses that you can then report.
  • Clear Licensing: Use clear licensing agreements for your work. Creative Commons licenses can specify whether your work can be used commercially or if derivative works are allowed. For high-value work, custom licenses are even better.
  • Report Infringement: If you find someone selling AI-generated art on a platform like Etsy that is clearly a rip-off of your style, file an intellectual property infringement report. Remember, the AI-generated art itself can't be copyrighted, so your claim as the original human artist holds more weight.

What About Businesses? The Automation Angle

This dilemma isn't just for individual artists. Businesses are also navigating how to use AI ethically & effectively. Many are looking at AI not for generating creative content, but for automating processes & improving customer engagement.
Here’s the thing: when it comes to business communication, you want AI that is trained on your data, not the entire internet. You need control & precision. For instance, businesses are now building custom AI chatbots for their websites. These aren't creative writers; they're customer service powerhouses.
This is where a platform like Arsturn comes in. It helps businesses create no-code AI chatbots that are trained specifically on their own secure data—like their help documents, product information, & internal knowledge bases. This means the chatbot provides instant, accurate answers to customer questions 24/7, based on information the business has provided & approved. It’s a way to leverage the power of AI for automation & engagement without wading into the messy copyright battles over creative works. It’s about building a meaningful connection with your audience through personalized, instant support, which can seriously boost conversions & customer satisfaction.

The Road Ahead

Look, this is all VERY new. The legal & technological landscape is changing at lightning speed. The lawsuits currently in the courts will set major precedents, & we'll likely see new legislation, like the proposed Generative AI Copyright Disclosure Act, which would require AI companies to be transparent about their training data.
For creators, the dilemma is real. These tools offer incredible new avenues for creativity, but they also pose an existential threat to our livelihoods. The key is not to bury our heads in the sand. We need to stay informed, use the protection tools available to us, & advocate for our rights.
The three C's that the artists' lawyers are arguing for—Consent, Compensation, & Credit—are at the heart of this. That feels like a pretty fair framework for the future, one where innovation can thrive, but not at the expense of the human creators whose work makes it all possible.
Hope this was helpful in untangling this complicated topic. It's a conversation we all need to be a part of. Let me know what you think.

Copyright © Arsturn 2025