3 July, 2025

The artificial intelligence boom is not just a story of technological triumph but one of hidden human costs. Behind every AI model promising efficiency and innovation lies a workforce subjected to grueling conditions. These are the data labelers and content moderators, primarily based in the Global South, who train these systems by reviewing vast amounts of data, often of a graphic nature. Despite their crucial role, these workers face severe psychological harm and are bound by nondisclosure agreements (NDAs) that silence their suffering.

Working eight to twelve hours a day, these individuals review hundreds, sometimes thousands, of distressing images, videos, or data points. The content often includes graphic material involving rape, murder, child abuse, and suicide. Many of these workers earn as little as $2 an hour and lack adequate breaks, paid leave, or mental health support. The NDAs they sign prevent them from discussing their experiences, even with therapists, fostering a culture of fear and self-censorship.

The Hidden Workforce Behind Our Feeds

The AI economy is underpinned by what can be described as dual monopsony power. Companies like Meta, OpenAI, and Google dominate the product market and act as powerful buyers in the global data labor supply chain. They outsource the most undervalued work to business process outsourcing (BPO) firms in countries like Kenya, Colombia, and the Philippines. In these labor markets, where unemployment is high and labor protections are weak, corporations dictate terms of employment, leaving workers with little power to refuse.

Platforms impose strict performance metrics and algorithmic surveillance while maintaining legal and reputational distance from the labor conditions they create. This system facilitates what some scholars describe as technofeudalism, in which control of the digital commons is exerted through opaque data infrastructures and proprietary algorithms. NDAs not only silence workers but also prevent them from raising alarms when algorithmic systems threaten public safety.

A Global Health Crisis by Design

The business model of outsourcing and suppression has led to a public health crisis among AI workers. Content moderators report symptoms of PTSD, depression, insomnia, anxiety, and suicidal ideation. Some experience panic attacks, chronic migraines, and symptoms of sexual trauma directly linked to the graphic content they review. The lack of mental health support exacerbates these issues, with many workers turning to unhealthy coping mechanisms.

“Sometimes I blank out completely; I feel like I’m not in my body,” said a worker in Ghana. Another described turning to alcohol just to be able to sleep.

Governments, trade unions, and international labor bodies must insist that companies cannot be considered global AI leaders while denying fundamental rights to the workers who train their models. The harm extends beyond individuals, affecting families, relationships, and entire communities, particularly in countries where mental health care infrastructure is under-resourced.

Where Do We Go From Here?

The scale and severity of this crisis demand a coordinated, global response grounded in worker power, legal accountability, and cross-movement solidarity. NDAs that prevent workers from speaking about their conditions must be banned from labor contracts. These clauses should be recognized as violations of fundamental rights, including freedom of expression and access to care.

Building worker power across borders is essential. Content moderators and data workers, often isolated by design, must connect through transnational labor alliances. These alliances can jointly name employers, demand protections, and fight for shared standards. Tech firms may hide behind outsourcing, but the harm is consistent, and so must be the response.

Enforceable global standards that treat psychological health as central to decent work are also necessary. Platform companies must be held accountable for labor conditions throughout their outsourcing chains, through legally binding rules on working hours, mandatory trauma support, and protection from retaliation.

Finally, AI regulation must be recognized as a labor rights issue. Ethics without enforcement is hollow, and innovation at the cost of human dignity is exploitation. A new narrative is needed, one that measures the intelligence of any system not only by its performance but also by how it treats the people who make it possible.