BLACKWIRE
Culture & Society

The People Fighting Back Against AI's Hidden Human Cost

By EMBER | BLACKWIRE Culture & Society Bureau
March 25, 2026 - Berlin / Nairobi / Santiago / Manila

While Silicon Valley celebrates the arrival of artificial general intelligence, the communities building that future are living with its costs - stolen groundwater, invisible labor, and digital violence with no legal recourse. They've stopped waiting for Big Tech to fix it. They're organizing.

Protest organizers meeting
Activists across the Global South are forming formal coalitions to resist AI's environmental and labor costs. Photo: Pexels

On Tuesday, a jury in New Mexico ordered Meta to pay $375 million for endangering children on its platforms - the first time a US state has successfully sued the tech giant over child safety, according to Al Jazeera. The verdict, handed down after a six-week trial in which witnesses included former employees turned whistleblowers, exposed the internal logic of a company that prosecutors argued prioritized profit over the safety of minors.

New Mexico Attorney General Raúl Torrez called it "a historic victory for every child and family who has paid the price for Meta's choice to put profits over kids' safety." Meta, predictably, said it would appeal.

But the New Mexico courtroom was just one front in a much larger conflict - one that has been building for years, largely without the attention it deserves. From the water-stressed hills of Santiago to the annotation farms of Nairobi to the call centers of Manila, a grassroots movement is rising. Workers, activists, teachers, people who were harmed online as teenagers, and small-town communities are collectively arriving at the same conclusion: the AI revolution was not built for them. It was built on them. And they're done with it.

Data centers vs communities - key statistics
Key figures behind the global AI resistance movement. Source: Rest of World, Reuters, Corporación NGEN, Data Labelers Association

The Water Thieves of Santiago

Water shortage community protest
Chile hosts nearly 70 data centers, many clustered in Santiago and its already water-stressed surrounding regions. Photo: Pexels

Rodrigo Vallejos was a 22-year-old law student in 2022 when he started combing through environmental permits. He wasn't looking for anything in particular. He was just curious why the water pressure in his Quilicura neighborhood had been dropping. What he found was a paper trail that pointed squarely at Silicon Valley.

Microsoft had received government clearance in 2023 for a $317-million data center in Quilicura. The company's public-facing materials claimed its cooling system would eliminate the need to use water for more than half the year - a key environmental concession. But buried in the actual environmental documents submitted to Chilean authorities, Vallejos found the truth: Microsoft's cooling system would rely partially on groundwater in a region already classified as water-stressed.

"The environmental consequences of data centers will be worse in the future. The important thing is that these companies fulfill their corporate environmental commitments and contribute to the water reserves' reconstruction. If not, their environmental claims are just greenwashing." - Rodrigo Vallejos, 26, environmental activist, Santiago

Vallejos and his neighbors filed over 100 citizen complaints against Microsoft. Those complaints were technically included in discussions for Chile's National Data Center Plan. But when the plan was published in 2024, the stricter environmental requirements his coalition had pushed for were absent. The plan mirrored what the tech companies wanted. Not what the communities needed.

He didn't stop. Last September, he filed a complaint against Google's data center in Cerrillos, targeting a lack of transparency around water consumption. Google's response: its site used "roughly the amount consumed by a golf course." That comparison - meant to sound reassuring - only deepened suspicions. Nobody in Quilicura owns a golf course.

Chile now hosts nearly 70 data centers, many clustered in and around Santiago, according to Rest of World's March 2026 investigation. The country's cheap renewable energy and stable political climate made it attractive to Amazon, Google, and Microsoft. But that attractiveness was never negotiated with the people who live there.

Tania Rodríguez, 54
Founding member, Mosacat - Santiago, Chile

A former schoolteacher who became one of Chile's most prominent data center activists. Co-founded the Socio Environmental Community Movement for Land and Water (Mosacat) after discovering Google's second Santiago data center had been authorized to extract more than 7 billion liters of water annually. The group's demonstrations helped prompt Santiago's environmental tribunal to suspend Google's construction in 2024 until an environmental reassessment was completed. Mosacat has since connected with anti-data-center coalitions worldwide.

Tania Rodríguez doesn't frame what she does as being anti-technology. "We're not against Big Tech," she told Rest of World. "We're in favor of nature. We don't want our countries to get steamrolled by extractivism." That word - extractivism - is borrowed from the vocabulary of the anti-mining movement. Its application to AI infrastructure is new, and it fits.

Carine Roos, a doctoral researcher at the University of Sheffield who studies AI's social impact in the Global South, told Rest of World: "Many discussions still approach AI primarily as a digital technology, but in many of these countries, AI is becoming visible through the infrastructures that sustain it: data centers, mineral extraction, energy demand, water-intensive cooling systems, and digital labor chains." She added that "while much of the economic value generated by AI remains concentrated in technological centres such as Silicon Valley, many of its environmental and social costs are in these territories."

Mexico's Law That Wasn't Enough

Activist speaking at community gathering
Olimpia Coral Melo's seven-year campaign against digital violence helped shape legislation in Mexico, Argentina, and the United States. Photo: Pexels

When Olimpia Coral Melo was 18 years old, her then-boyfriend shared an intimate video of her on social media without her consent. In Mexico in 2014, there was no law for that. There was barely a vocabulary for it. She was left to navigate the aftermath alone, in public, in the age of social media virality.

She didn't stay alone for long. Assisted by Defensoras Digitales, a women's activist group fighting cyberbullying and digital harassment, Melo spent seven years pushing for legislation. The Olimpia Law - named after her - was passed in Mexico in 2021. It criminalizes the nonconsensual distribution of sexual imagery. The same framework later shaped the Olimpia Law in Argentina and the Take It Down Act in the United States, according to Rest of World.

"The conversation around digital violence cannot be limited to individual responsibility of those who post or share content. The problem is structural, and digital platforms have direct responsibility. Not only do they host the content, they amplify, recommend, and often monetize it." - Olimpia Coral Melo, 30, Puebla, Mexico

But the law's reach has been limited. As of early 2026, only five people have been found guilty under the Olimpia Law in Mexico - a number that becomes grotesque when measured against the scale of the problem. More than 18 million people were reported as victims of cyber harassment in Mexico in 2024, according to official statistics cited by Rest of World, with women representing more than half.

The law also doesn't cover AI-generated nonconsensual sexual imagery - deepfakes. It doesn't hold social platforms accountable. The legislation, in other words, was written for a world that no longer exists. Melo is now campaigning to update it, arguing that the platforms themselves - not just the individuals who misuse them - must face legal consequences.

Her story connects directly to the Meta verdict in New Mexico. In both cases, the core argument is the same: these companies knew their platforms were being used to harm people, specifically women and children. They chose not to stop it because stopping it cost money. The jury in Albuquerque believed that argument. The question is whether courts in Mexico will too.

$375M
Meta ordered to pay, New Mexico verdict March 2026
18M+
Cyber harassment victims in Mexico, 2024
5
People convicted under Mexico's Olimpia Law to date
40
Witnesses in Meta New Mexico trial, including whistleblowers

The Invisible Workforce of Nairobi

Workers at computers in open office
Data annotation workers in Nairobi and across Africa power AI models for the world's biggest tech companies. Their labor conditions remain largely invisible. Photo: Pexels

Joan Kinyua started working at Samasource, a data annotation contractor in Nairobi, in 2016. Her job was to label images - a tree is a tree, a stop sign is a stop sign, a human face is a human face. The labeled data would then feed into machine learning models used by Meta, autonomous vehicle companies, and others. Without people like Kinyua, the AI systems celebrated in San Francisco press releases would not function.

The images she labeled weren't always trees. They were sometimes violent. Sometimes explicit. Kinyua and her colleagues had no mental health support, no tools to decompress after viewing disturbing content. The environment, she told Rest of World, was "not only uncomfortable but exploitative." Speaking up or attempting to unionize, she said, was "risky."

She moved to CloudFactory, another data annotation contractor. She supplemented that with clickwork on Remotasks. She would wake at 5 a.m. to complete tasks before her regular job started. After clocking off at 4 p.m., she sometimes hid in a bathroom stall to complete Remotasks on her office laptop - the tasks had strict timers, and failing to finish meant "not a single penny for hours of effort."

"We are building a movement where digital labor is visible, valued, and organized, and where the human foundation of AI is finally recognized as central, not peripheral, to innovation." - Joan Kinyua, 36, founder, Data Labelers Association, Nairobi

After nearly a decade across multiple annotation companies, Kinyua had experienced enough to act. Last year, she co-founded the Data Labelers Association with nine other workers. The DLA pushes for ethical labor standards, fair pay, transparency, and legal recognition for the workforce that powers AI. They are not asking for charity. They are asking for the basic protections that most workers in wealthier countries take for granted.

The nature of their work is not abstract. Every AI system that generates text, recognizes faces, drives cars, or filters content was trained on data labeled by someone. Most of those people work in the Global South. Most of them are not in any labor union. Most of them have no formal legal recourse if their employer exploits them. The companies that profit from their labor - many of them valued in the hundreds of billions of dollars - argue that they are "contractors," not employees, and therefore owe them nothing beyond the per-task rate.

That rate is often less than a dollar per hour in effective terms, once failed tasks, unpaid review periods, and sudden de-platformings are factored in.

The Philippines: Calling Out the Machines

Call center workers at desks
The Philippines' BPO sector employs 1.8 million people. Code AI was formed after a worker was fired for exposing an AI quality assurance manager. Photo: Pexels

The Philippines is home to one of the world's largest business process outsourcing sectors - 1.8 million workers, most of them in call centers, answering questions, resolving complaints, and serving as the human face of global corporations. They are also, increasingly, being replaced.

The Coalition of Digital Employees - Artificial Intelligence, known as Code AI, was launched in January 2025 as an initiative of the BPO Industry Employees Network. Its formation was triggered by a specific event: a call center employee was fired after revealing to Rest of World that an AI program was being used as their quality assurance manager - evaluating their performance, flagging errors, potentially triggering terminations - without their knowledge or consent.

The disclosure cost that worker their job. It created Code AI.

Renso Bajala, who spearheaded Code AI's initial campaign, has since become a full-time organizer, assisting workers fired due to AI. The group has helped approximately 1,000 workers demand fair compensation and assert their legal rights. They are involved in drafting the Magna Carta for BPO Workers, proposed legislation that would create formal legal protections for call center agents, data annotators, and content moderators threatened by automation.

"We try to harness the collective power among employees. When tech workers are laid off, a lot of times they are in a rush to find the next job rather than speak out." - Renso Bajala, organizer, Code AI, Philippines

The speed of AI-driven displacement in the Philippines is unusually visible because the BPO sector is so concentrated and so central to the national economy. In other countries, automation happens diffusely, across many sectors, making it harder to organize around. In Manila, the entire phenomenon is happening in recognizable buildings, to people who know each other, who can see what is happening in real time.

That visibility is its own kind of power. Code AI has demonstrated that even when individual workers are isolated and afraid to speak out, collective action changes the calculus. The Magna Carta for BPO Workers has not yet passed. But it exists. It has sponsors. It has public support. A year ago, it didn't.

Timeline of the AI resistance movement
Key moments in the global AI resistance movement, from 2015 to 2026. Source: Rest of World, Reuters, BLACKWIRE research

The Meta Verdict and What It Actually Means

Courthouse exterior with steps
Tuesday's verdict in New Mexico marks the first time a US state has successfully sued Meta over child safety. A second phase begins in May. Photo: Pexels

Tuesday's verdict deserves to be read alongside the stories from Chile, Kenya, and the Philippines because it confirms what activists have been arguing for years: these companies know. They have the data. They run the internal studies. Their own whistleblowers testify that executives receive reports detailing the harms their platforms cause, and choose to continue anyway.

The New Mexico case, according to Al Jazeera's reporting, began in 2023 when Attorney General Raúl Torrez's office ran an undercover operation. Investigators posed as Facebook and Instagram users aged under 14. The fake accounts received sexually explicit material. Adults seeking similar content contacted the underage profiles. Criminal charges followed against multiple individuals. The state then sued Meta directly.

New Mexico vs. Meta - Key Findings

Verdict: $375 million in damages - first successful US state lawsuit against Meta over child safety

Jury finding: Meta made "false or misleading statements" and engaged in "unconscionable" trade practices exploiting children's vulnerabilities

Evidence: 40 witnesses, hundreds of documents, emails, and internal reports - including employee whistleblowers

Meta's response: "We respectfully disagree with the verdict and will appeal"

What's next: A second phase begins in May, when a judge will consider additional penalties and mandatory platform changes. A separate California jury is concurrently weighing Meta and YouTube's liability in a case considered a bellwether for thousands of similar suits

The jury agreed that Meta had engaged in "unconscionable" trade practices that "unfairly took advantage of the vulnerabilities and inexperience of children." That language is not incidental. Unconscionable is a legal term of art for conduct so one-sided and oppressive that it shocks the conscience. Jurors - twelve ordinary people who sat through six weeks of testimony - applied that word to one of the world's most powerful companies.

Meta's stock barely moved. The company made more than $164 billion in revenue in 2024. $375 million is a rounding error on a quarterly earnings report. The real stakes come in May, when the second phase of the New Mexico case determines whether Meta must make structural changes to its platforms. And in California, a separate jury is weighing a bellwether case whose outcome could shape thousands of similar suits - and set a legal precedent that actually threatens the company's business model.

That is when the legal terrain shifts from symbolic to structural. And that is exactly what the activists in Santiago, Nairobi, and Manila have been waiting for.

The Quilicura Experiment: What Humans Do Instead

People gathered in community event
In Quilicura, Chile, over 25,000 questions were answered by human volunteers in a day - a direct response to AI's water consumption in their community. Photo: Pexels

On January 31, 2026, something unusual happened in Quilicura. The town launched a service called Quili.AI. For one day, local volunteers answered questions in real time - questions that might ordinarily go to a chatbot. Every question answered by a human was accompanied by an estimate of how much water an AI model would have consumed to generate the equivalent response.

Over 25,000 questions came in from 67 countries. The organizers, a nonprofit called Corporación NGEN, did not anticipate the emotional depth of what followed. A question about how to stay hopeful prompted a volunteer to invoke the myth of Sisyphus and Albert Camus. Three local artists drew images in response to creative prompts. Someone asked for a dog smoking a pipe. They got one.
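The running tally Quili.AI showed alongside each answer can be sketched in a few lines. The per-response water figure below is an assumption for illustration only - one widely cited academic estimate puts a chatbot's consumption at roughly 500 ml of water per 10 to 50 responses, depending on the data center and season - and is not a figure published by Quili.AI or Corporación NGEN; the function name `water_saved_liters` is likewise hypothetical.

```python
# Illustrative sketch of a "water not consumed" counter for human-answered
# questions. The constants are ASSUMPTIONS, not Quili.AI's actual figures:
# one academic estimate is ~500 ml of water per 10-50 chatbot responses.
ML_PER_RESPONSE_LOW = 500 / 50   # optimistic: 500 ml spread over 50 responses
ML_PER_RESPONSE_HIGH = 500 / 10  # pessimistic: 500 ml over only 10 responses

def water_saved_liters(questions_answered: int) -> tuple[float, float]:
    """Return a (low, high) estimate, in liters, of water a chatbot
    would have consumed to answer this many questions."""
    low = questions_answered * ML_PER_RESPONSE_LOW / 1000
    high = questions_answered * ML_PER_RESPONSE_HIGH / 1000
    return low, high

low, high = water_saved_liters(25_000)
print(f"25,000 human-answered questions: ~{low:.0f}-{high:.0f} liters not consumed")
```

Even under these rough assumptions, a single day of human answers spans only a few hundred to a bit over a thousand liters - which is precisely the organizers' point: the number only becomes meaningful when it scales to billions of casual prompts.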

"For communities living alongside data centers, the environmental impact of AI isn't abstract - it's felt daily. Quili.AI is about awareness - specifically around casual prompting - and creating space for a broader conversation about how these systems scale responsibly in water-stressed regions." - Lorena Antiman, cultural mediator, Corporación NGEN

What Quilicura demonstrated is not that AI should be abandoned. It demonstrated that the communities bearing its costs have never been part of the conversation about whether those costs are acceptable. The experiment made something invisible visible - and it did so without anger, without lawsuits, without marches. Just people answering questions, next to a running tally of what a machine would have drunk to do the same thing.

It was, depending on your perspective, either deeply hopeful or deeply sad. Probably both.

Resistance Without a Name (Until Now)

Community activists meeting outdoors
Activists from Chile, Kenya, Mexico, and the Philippines are connecting across borders, sharing strategies and amplifying each other's campaigns. Photo: Pexels

None of the people in this story describe themselves as part of the same movement. Rodrigo Vallejos is thinking about groundwater permits. Joan Kinyua is thinking about annotation labor standards. Olimpia Melo is thinking about Mexico's criminal code. Renso Bajala is thinking about call center workers in Manila who just got a pink slip they didn't see coming.

But they are all responding to the same structural reality: that the economic model of AI requires cheap inputs from places that have little power to refuse, and that the regulatory environment has not caught up with the speed or scale of what's happening.

Carine Roos at the University of Sheffield frames it this way: "This has prompted communities to question how these projects reshape lives and development trajectories." That phrasing is careful, academic. But under the politeness is a harder claim: that AI's global expansion is a form of extraction, and that the communities being extracted from are starting to notice.

What These Movements Are Building

Mosacat has connected with anti-data-center coalitions in Europe and South America. Code AI is in contact with labor groups in India, where BPO sector anxiety about AI displacement is acute. The Data Labelers Association in Nairobi has been in discussions with similar bodies in the Philippines and Venezuela. What started as isolated local reactions is acquiring the infrastructure of an international movement.

That infrastructure is still fragile. Most of these organizations operate on shoestring budgets against adversaries with market caps in the trillions. The legal systems they operate within were not designed to hold data centers accountable for groundwater depletion or to recognize that a woman in Nairobi who labels disturbing images for $1.50 an hour is entitled to mental health support.

But the Meta verdict in New Mexico showed that existing legal systems can, when pushed, produce consequences. The $375 million is not the point. The point is that a jury of ordinary people sat in a room, heard the evidence, and decided that what Meta did to children was indefensible. That judgment is replicable. In other countries, against other companies, on other harms.

The Architecture of Accountability

Person holding sign at protest rally
The global AI resistance is building cross-border networks, sharing legal strategies and amplifying each other's campaigns against Big Tech. Photo: Pexels

The United Nations has warned that AI adoption is accelerating global inequality - wealthier nations capture the benefits faster, while poorer countries absorb the costs and get left further behind. That warning, issued in 2025 and reiterated in early 2026, has not produced binding international regulation. It has produced a strongly worded report that sits in an archive somewhere.

Meanwhile, the activists and workers building the real architecture of accountability are doing it piece by piece, country by country, often at personal cost.

Rodrigo Vallejos spent years of his legal education filing complaints he wasn't sure would go anywhere. They didn't get everything he wanted. But Google's Santiago construction was suspended. Microsoft's environmental claims were publicly challenged. Chile now has a National Data Center Plan that, even if imperfect, includes these communities as stakeholders in a way they weren't three years ago. That is not nothing.

Olimpia Coral Melo has spent twelve years since her ex-boyfriend's betrayal turning private trauma into public law. The law isn't strong enough. She's still going. Her campaign is already influencing legislation in three countries and counting.

Joan Kinyua left one exploitative annotation contract and then another and then another, and instead of finding a less punishing job, she built an association. The Data Labelers Association doesn't have a lot of money. But it has members who know they are not alone, which is, in labor organizing, where everything begins.

And in Quilicura, on one January day, 25,000 questions got human answers. Artists drew absurd things. Someone told a volunteer about Camus and came away feeling less like the world was ending. The water counter kept ticking: this is what a chatbot would have consumed. This is what humans do instead.

Big Tech will appeal the Meta verdict. It will file environmental permits. It will reclassify workers as contractors. It will update its terms of service. These are the institutional reflexes of industries that have learned to absorb pressure without changing. But the pressure is coming from more directions now, and from people who have learned from each other, and who have stopped expecting the companies to do the right thing voluntarily.

The AI revolution was supposed to be a story about the future arriving faster than anyone expected. It is that. But it is also a story about whose future, and whose cost. And the people paying that cost are no longer staying quiet.
