AI POLICY
The AI industry has, to a significant degree, been built on other people’s work without permission and without payment.
That’s a provocative framing, and I want to be precise about what I mean. The large language models and image generators that underpin modern AI were trained on text, images, audio, and video scraped from the internet — much of it created by humans who did not consent to that use and have not been compensated for it.
The legal questions this raises are genuinely unresolved, and the industry has largely adopted a strategy of moving fast, making the technology commercially indispensable, and hoping that by the time the courts catch up, the answer will be politically inconvenient to change.
The Fair Use Argument
The primary legal defence offered by AI companies is that training on copyrighted material constitutes fair use — analogous to a human reading books to learn, rather than copying and reselling them. It’s not an unreasonable argument. It has some genuine legal grounding in US fair use doctrine.
The Problem With the Argument
The fair use argument works better for some use cases than others. A language model that generates generic text is a harder target than an image generator that produces work in the distinctive style of a living artist who never consented to that appropriation. Here the legal and ethical dimensions diverge: style as such generally isn't protected by copyright, so the cases where the ethical harm is most concrete may be the ones where the legal claim is weakest.
What’s clear is that the current situation — AI companies profiting from creative work they did not pay for, while creators watch their markets erode — is not a stable equilibrium. Something will have to give, whether through litigation, legislation, or negotiated licensing arrangements. The creative economy deserves a clearer answer than it’s currently getting.
Tags: Artificial Intelligence • Opinion • Technology & Society