What Comes After the AI Hype Cycle? A Considered Optimism
Every transformative technology goes through a hype cycle. We're somewhere near the peak of AI's. What comes after — and why I'm cautiously optimistic about it.
My Artificial Intelligence thoughts and nothing else
The narrative around AI and creative jobs is too simple in both directions. The reality for working creatives in 2026 is more nuanced — and more instructive.
The early rush to third-party AI APIs is slowing. A growing cohort of serious enterprises is choosing private deployment — and the reasons are more strategic than you might expect.
AI therapy apps are proliferating rapidly. Some are genuinely helpful. Others are making claims that the evidence doesn't support — and vulnerable people are paying the price.
NLWeb promises to make the entire web natively understandable to AI. If it works, it changes how information is found, presented, and monetised online.
Behind the benchmarks and the billion-dollar valuations, there are people doing difficult, often traumatic work to make AI safe. They deserve to be part of the conversation.
Most AI regulation debate produces more heat than light. The genuinely interesting questions are about what we're actually trying to prevent — and whether law is even the right tool.
Schools are terrified of AI as a cheating tool. They're missing the more interesting possibility — that AI could be the most transformative educational technology since the textbook.
Ransomware has become a mature industry with supply chains, customer service, and affiliate programmes. AI is making it even more efficient.
Social media already hijacked our attention. AI-powered content generation is about to flood the internet with more content than any human could ever consume.
Who owns what an AI creates? The legal system hasn't caught up, the industry is hoping it won't have to answer, and creators are caught in the middle.
Self-driving cars were supposed to be here by now. Why aren't they — and what does that tell us about AI timelines more generally?
Synthetic data is being positioned as the solution to the data wall problem. It's a genuine tool — but the limitations are bigger than the hype suggests.
Open source AI models have democratised access to powerful technology. They've also created some serious problems that the community is only beginning to grapple with.
The scam economy has always been an early adopter of technology. AI is giving fraudsters capabilities that are genuinely difficult to defend against.
AI is changing what junior developers do. It isn't eliminating the need for them — and the companies acting like it is are making a mistake that will compound over time.
AI-assisted coding has made building software accessible to people who couldn't write a line before. It's also shipped a lot of vulnerable code. Both things are true.
LinkedIn has always had a signal-to-noise problem. AI-generated content is making it significantly worse — and the platform is struggling to respond.
AI is changing what managed service providers do, how they price it, and whether clients actually need them in the same way. Here's how to have the honest conversation.
Most AI startups that fail don't fail because their model was bad. They fail for the same reasons all startups fail — and AI makes those reasons easier to ignore.
The NHS has more to gain from AI than almost any institution in the UK. So why is adoption so painfully slow?
The hot takes claiming prompt engineering is obsolete are wrong. What's changed is that good prompting is now invisible — baked into systems, not typed by hand.
The UK government talks a good game on AI leadership. The reality on the ground is more complicated — and more concerning.
Most organisations wrote their AI policy in 2023 or 2024. The technology has moved so fast that those policies are now dangerously inadequate.
While everyone obsesses over the latest frontier model, smaller, faster, cheaper models are quietly winning where it actually counts.
Deepfake detection tools are losing the arms race. Here's where we actually stand — and what that means for trust online.
The energy and water consumption of AI is no longer a footnote — it's a front-page problem. And the industry has no credible answer yet.
Agentic AI doesn't just answer questions — it takes actions. That's a fundamentally different problem, and most organisations haven't caught up.
Everyone's obsessed with what AI can do. I'm more interested in what it can't: five things AI still genuinely struggles with in 2026, and why those gaps are where your value lives.
We built AI that reflects us perfectly. But identity is forged in friction — and AI doesn’t do friction. A personal reflection on the quieter, stranger cost of the intelligence…
One 8-second AI video uses as much energy as 90 minutes of microwave use: roughly 466g of CO2, the equivalent of driving 6-8 miles or 150 smartphone charges. At global scale, this threatens energy grids, water supplies, and…