AI POLICY
The AI regulation debate has a problem: most participants haven’t agreed on what they’re trying to prevent.
You can have entirely coherent conversations about AI regulation in which everyone nods along — and then discover, when you get to specifics, that you’re actually arguing about completely different things. Some people are worried about near-term harms: fraud, discrimination, privacy violations. Others are worried about medium-term disruption: labour displacement, misinformation ecosystems, concentration of power. A vocal minority are worried about long-term existential risks: systems that pursue goals misaligned with human welfare at a scale that could be catastrophic.
The regulatory tools appropriate for each of these concerns are almost entirely different.
The EU Approach
The EU AI Act is, in my assessment, a reasonable attempt at a hard problem that will inevitably look somewhat dated by the time it’s fully implemented. The risk-tiering approach — high-risk applications get more scrutiny, low-risk ones get less — is sensible in principle. The challenge is that the “high-risk” category is defined by application domain rather than by capability level, which creates odd results as the technology evolves: a frontier general-purpose model powering a shopping assistant can land in a lighter tier than a simple keyword filter screening CVs, because employment is a listed domain and retail is not.
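To see the structural point starkly, here is a deliberately simplified sketch in Python of what domain-based tiering amounts to. The domain list, tier names, and capability parameter are my own illustrative assumptions, loosely echoing the Act’s structure rather than quoting it; the point is that capability is never an input to the classification.

```python
# Caricature of domain-based risk tiering: the tier depends only on the
# application domain, so model capability never affects the outcome.
# Domains and tiers below are illustrative assumptions, not the Act's text.

HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "law_enforcement", "education"}

def risk_tier(domain: str, model_capability: float) -> str:
    """Assign a tier by domain alone; `model_capability` is ignored,
    which is exactly the oddity described above."""
    return "high" if domain in HIGH_RISK_DOMAINS else "minimal"

# A highly capable frontier model deployed as a shopping assistant:
print(risk_tier("retail_chatbot", model_capability=0.99))  # -> minimal
# A crude keyword filter screening CVs:
print(risk_tier("hiring", model_capability=0.05))          # -> high
```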
What Good Regulation Looks Like
The most effective AI governance I’ve seen operates at the level of outcomes rather than mechanisms. Rather than specifying what AI can or can’t do, it specifies what results are unacceptable — discrimination in hiring, manipulation in advertising, opacity in consequential decisions — and requires organisations to demonstrate that their systems don’t produce those outcomes. That approach ages better than technology-specific rules, because it regulates what matters regardless of how the underlying technology changes.
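To make the outcome-first idea concrete, here is a minimal sketch of what “demonstrating that a system doesn’t produce an unacceptable outcome” might look like in practice: an audit that tests a system’s hiring decisions for disparate impact using the classic four-fifths rule, without ever inspecting how the system works internally. The function names, data shape, and threshold are hypothetical choices of mine, not drawn from any statute.

```python
# Hypothetical outcome-level audit: test decisions, not mechanisms.
# Here the unacceptable outcome is disparate impact in hiring, measured
# by the four-fifths rule. All names and thresholds are illustrative.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs -> per-group hire rate."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if every group's selection rate is at least `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Audit a batch of decisions from any system, however it works inside.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(decisions))    # roughly {'A': 0.67, 'B': 0.33}
print(passes_four_fifths(decisions)) # False: B's rate < 0.8 * A's rate
```

The design point is that the audit takes only inputs and outputs, so the same test applies unchanged whether the system behind it is a linear model, a frontier model, or a human committee.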
Tags: Artificial Intelligence • Opinion • Technology & Society