I’ve been thinking about the clash between generative AI and copyright law, particularly in light of OpenAI’s claim that restricting access to copyrighted material could end the AI race altogether.
It’s a provocative statement. But it’s also a false binary. Because we’ve been here before – at the edge of a new medium, a new method of creation. And time and again, copyright has stretched to meet the moment.
At its heart, this isn’t just a legal dilemma. It’s a cultural one. Aesthetic innovation almost always arrives ahead of legal consensus. And it almost always provokes the same questions:
- What is authorship?
- What is originality?
- Who owns meaning when meaning is made collaboratively, implicitly, or through machines?
In 1884, the Supreme Court ruled in Burrow-Giles Lithographic Co. v. Sarony that photographs could be copyrighted because the photographer exercised creative control over composition and expression. That was a radical idea at the time. Photography was seen as mechanical, impersonal, too automated to be “authored.” Sound familiar?
The same debate surfaced with conceptual art, generative music, and collaborative software development – fields where authorship became more layered, and creative agency less linear. And yet the law, slowly, awkwardly, but steadily, has kept pace.
If we reflect on the historical arc of copyright, this is what we learn:
Mediums change. Principles endure. The tools evolve, but the core ideas – attribution, fairness, contribution, access – don’t go away. Copyright has never required total originality (an impossible standard). It protects expression, not ideas. That leaves room for shared influence and recombination, so long as what’s borrowed becomes new expression.
Authorship is expanding, not collapsing. Generative AI doesn’t eliminate human authorship; it complicates it. We become editors, prompt writers, curators, architects of influence. We’re not losing authorship; we’re gaining a new grammar for it. And copyright, if handled well, can help us translate that shift into fair outcomes.
Control is shifting from gatekeepers to systems designers. Where once publishers, labels, and studios managed rights at scale, power is now consolidating among platform owners, model trainers, and data architects. That shift demands new governance tools: not necessarily new laws, but smarter infrastructure. Consent standards. Transparent training-data protocols. Creators’ rights built into the pipelines that train large language models.
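To make “smarter infrastructure” less abstract, here is a minimal sketch of what a machine-readable consent record for training data might look like. Everything in it is hypothetical – the `TrainingConsentRecord` schema, its field names, and the `checkConsent` helper are illustrative inventions, not an existing standard – but consent standards and training-data protocols would ultimately have to reduce to something of this shape:

```typescript
// Hypothetical per-work training-consent record. This is a sketch of
// the kind of metadata a consent protocol could carry, not a real spec.

type ConsentScope = "none" | "non-commercial" | "commercial";

interface TrainingConsentRecord {
  workId: string;               // stable identifier for the creative work
  rightsHolder: string;         // who can grant or revoke consent
  scope: ConsentScope;          // what kinds of training are permitted
  attributionRequired: boolean; // must outputs credit the source?
  licenseFeeUSD: number;        // 0 means consent is granted without payment
  expires: string | null;       // ISO date when consent lapses, or null
}

// A training pipeline could gate ingestion on a check like this one.
function checkConsent(
  record: TrainingConsentRecord,
  commercialUse: boolean,
  today: Date
): boolean {
  if (record.scope === "none") return false;
  if (commercialUse && record.scope !== "commercial") return false;
  if (record.expires !== null && today.getTime() > new Date(record.expires).getTime()) {
    return false;
  }
  return true;
}

// Example: a photographer permits non-commercial training only,
// with attribution, at no fee, through the end of 2029.
const photo: TrainingConsentRecord = {
  workId: "photo-2024-000123",
  rightsHolder: "Example Photographer LLC",
  scope: "non-commercial",
  attributionRequired: true,
  licenseFeeUSD: 0,
  expires: "2030-01-01",
};

console.log(checkConsent(photo, true, new Date()));  // false: commercial use not granted
console.log(checkConsent(photo, false, new Date())); // true until the record expires
```

The point isn’t this particular schema. The point is that consent, scope, attribution, and compensation can be encoded and enforced at ingestion time – which is exactly where the governance gap sits today.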
What makes this moment feel so precarious isn’t just the technology. It’s the power asymmetry.
You have Sam Altman at the helm of a multi-billion-dollar company, claiming that training on copyrighted material should be protected as fair use – without meaningful compensation, attribution, or consent from the people whose work fuels the system. And you have artists, musicians, and writers – many of whom never benefited much from copyright to begin with – now watching their creative labor get scraped into models they can’t see or challenge.
That’s not a fair exchange. And it shouldn’t be legal.
The good news: we don’t have to throw out copyright to solve this. We can use it. We can build legal, technical, and cultural pathways that make room for innovation without erasing the people who make it possible. That might mean:
- Licensing frameworks for training data
- Attribution standards with real teeth
- Model transparency and audit rights
- Compensation mechanisms for creators whose work meaningfully shapes AI output
We’ve done this before. When we wanted to protect photographers. When we had to rethink sampling in hip-hop. When we needed new categories for code. Copyright is statutory at its core, but its doctrines – fair use chief among them – have been worked out case by case, in common law fashion. It breathes.
And so the task ahead isn’t to choose between AI and artists. It’s to design systems where both can exist, and to ensure the power to define those systems isn’t concentrated in too few hands.
Until next time,
Fatimeh