IT STRATEGY
I’ve reviewed a lot of company AI policies over the last year. Almost all of them have the same problem: they were written for a technology that no longer exists.
The AI landscape of 2023, when most enterprise policies were drafted, was largely about chatbots and content generation. The policies reflect that — they address plagiarism risks, data input concerns, and output quality checking. Sensible for the time.
But 2026 is a fundamentally different environment, and most policies haven’t kept up.
What Your 2023 Policy Doesn’t Cover
Agentic AI — systems that take autonomous actions across your infrastructure — didn’t feature in most early policies because it barely existed. Now it’s being deployed in sales, finance, IT, and HR workflows. The governance questions are completely different: who authorises what actions, how do you audit what an agent did, what’s the escalation path when it does something unexpected?
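Those three questions — authorisation, audit, escalation — are concrete enough to sketch. The following is a minimal illustration, not a reference implementation: the action names, thresholds, and agent IDs are all invented for the example, and a real deployment would back this with identity, persistence, and a human-in-the-loop queue.

```python
from datetime import datetime, timezone

# Hypothetical allow-list: action -> maximum amount an agent may trigger
# without human sign-off. Names and thresholds are illustrative only.
APPROVAL_THRESHOLDS = {
    "issue_refund": 100.00,
    "provision_vm": 0.00,    # always requires human approval
    "send_invoice": 500.00,
}

audit_log = []  # append-only record of every decision, approved or not

def request_action(agent_id, action, amount):
    """Gate an agent action: auto-approve under threshold, else escalate."""
    if action not in APPROVAL_THRESHOLDS:
        decision = "denied"          # unknown actions are denied outright
    elif amount <= APPROVAL_THRESHOLDS[action]:
        decision = "auto_approved"
    else:
        decision = "escalated"       # routed to a named human approver
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "amount": amount,
        "decision": decision,
    })
    return decision

print(request_action("sales-agent-7", "issue_refund", 45.00))   # auto_approved
print(request_action("sales-agent-7", "provision_vm", 12.50))   # escalated
```

The point of the sketch is the shape, not the code: every action passes a named policy gate, every decision (including denials) lands in an append-only log, and anything over threshold escalates to a person rather than failing silently.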
Shadow AI has also exploded. Employees are using personal accounts on commercial AI platforms to process work data, bypassing every control your policy assumes is in place. A 2023 policy that says “use approved tools only” is not a governance framework — it’s a wishful-thinking document.
What a 2026 Policy Needs
At minimum, your AI policy needs to address: data classification and what can go where, agentic system approval and audit requirements, employee personal-use boundaries, output verification requirements by risk level, and incident response procedures specific to AI failures.
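The first item on that list — data classification and what can go where — is the one that most lends itself to policy-as-code. A minimal sketch, assuming an invented four-level classification and three invented destination tiers (neither is a standard taxonomy):

```python
# Illustrative mapping: which AI destinations each data classification
# may flow to. Labels and tiers are assumptions for this example only.
PERMITTED = {
    "public":       {"approved_saas", "enterprise_llm", "local_model"},
    "internal":     {"enterprise_llm", "local_model"},
    "confidential": {"local_model"},
    "restricted":   set(),   # no AI processing of any kind
}

def may_process(classification, destination):
    """Return True if data of this classification may go to this destination."""
    return destination in PERMITTED.get(classification, set())

print(may_process("internal", "enterprise_llm"))   # True
print(may_process("internal", "approved_saas"))    # False
```

Even at this toy scale, the design choice matters: an explicit table that defaults to deny for unknown classifications is auditable in a way that “use approved tools only” never is.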
If yours doesn’t cover all of those, you have a gap. And gaps in AI governance aren’t theoretical — they’re regulatory and reputational risks that are already being realised at organisations that moved fast and thought about governance later.
Tags: Artificial Intelligence • Opinion • Technology & Society