OPINION
There’s a pattern in how people talk about AI, and it’s starting to bother me.
Every week another benchmark falls. Every month a capability that was “years away” arrives early. AI coverage has become breathless incrementalism — always zooming in on what just became possible, never pausing at what remains impossible.
1. Care About the Outcome
AI optimises for a target. What it cannot do is care whether the target was the right one. A lawyer who cares notices the question behind the question. A doctor who cares notices the patient’s face. AI responds to what you give it. It doesn’t have skin in the game.
2. Navigate a Room
Real-time social intelligence — reading the room, knowing now is not the moment to push — remains stubbornly difficult. Language models are trained on text. Text is what people decided to write down. It is, almost by definition, not the full picture.
3. Be Wrong in a Useful Way
When a human expert is wrong, their errors are diagnostic — they reveal assumptions and blind spots you can interrogate. AI hallucinations are confident, plausible, and structurally unrelated to the truth. Human mistakes are data. AI hallucinations are mostly noise.
4. Have a Reputation to Protect
Trust is built through accountability. A professional who gives bad advice suffers consequences. AI cannot be embarrassed. It won’t lose sleep over a bad call. The humans who sign their name at the bottom are not redundant — they are the accountability layer.
5. Want Something New
AI can recombine and synthesise. What it cannot do is want something that isn’t implicit in its training. The truly disruptive ideas in history came from people obsessed with problems nobody else thought worth solving. That kind of motivated originality still has a distinctly human signature.
The race isn’t to outrun the machine. It’s to become more deeply, irreducibly human.
Tags: Artificial Intelligence • Opinion • Technology & Society