TECHNOLOGY & SOCIETY

There are people whose job it is to look at the worst content on the internet, all day, so that AI models don’t learn to reproduce it.

Content moderators and data labellers — often contracted through outsourcing firms in Kenya, the Philippines, and other lower-wage markets — are an essential but largely invisible part of the AI supply chain. The models that seem so clean and capable have been shaped, in part, by people reviewing violent extremism, child sexual abuse material, and graphic violence in order to mark it for exclusion.

The psychological cost of that work is documented, serious, and largely unaddressed by the industry that depends on it.

Why This Matters Beyond Ethics

The treatment of the people at the bottom of the AI supply chain is not just an ethical concern, though it is emphatically that. It's also a systemic risk. The quality and consistency of human feedback are crucial inputs to model training. Workers who are traumatised, under-resourced, and not given adequate mental health support will produce lower-quality annotations, and lower-quality annotations produce worse models.
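The "consistency" half of that claim is measurable. Annotation pipelines commonly track inter-annotator agreement, and a sustained drop is an early warning that something is going wrong upstream, including with the people doing the work. Below is a minimal sketch in Python of one standard agreement statistic, Cohen's kappa; the label names and data are invented for illustration, not drawn from any real moderation pipeline.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators' labels.

        Returns a value near 1.0 for consistent annotators and near 0.0
        when agreement is no better than chance.
        """
        assert labels_a and len(labels_a) == len(labels_b)
        n = len(labels_a)
        # Observed agreement: fraction of items labelled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Chance agreement, from each annotator's own label distribution.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical labels from two annotators on the same ten items.
    ann_1 = ["flag", "safe", "safe", "flag", "safe", "flag", "safe", "safe", "flag", "safe"]
    ann_2 = ["flag", "safe", "flag", "flag", "safe", "safe", "safe", "safe", "flag", "safe"]
    print(f"kappa = {cohens_kappa(ann_1, ann_2):.2f}")  # prints 0.58

Because kappa corrects raw agreement for what two annotators would reach by chance, a falling value reflects genuinely diverging judgements rather than a mere shift in how often each label appears.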

“The companies paying for this work have, for the most part, not voluntarily improved conditions. That’s a market failure with a regulatory solution.”

The Industry’s Response

The companies paying for this work have, for the most part, not voluntarily improved conditions. The competitive pressure to keep annotation costs low is real and structural, which suggests this is not a problem the market will solve on its own: it requires external pressure, whether from regulation, from consumer awareness, or from the growing organising efforts among content moderation workers themselves.


Tags: Artificial Intelligence • Opinion • Technology & Society
