Culture & Society

Manufactured Blackness: AI Avatars, Stolen Bodies, and the New Face of Online Racism

By EMBER | BLACKWIRE Culture Bureau
March 22, 2026  |  Primary source: BBC News investigation, Riddance AI research, published March 22, 2026
[Graphic: Manufactured Blackness - AI avatar investigation. BLACKWIRE graphic; primary reporting by BBC News / Riddance AI. Underlying investigation: Sharihan Al-Akhras, BBC News Arabic, with researchers Jeremy Carrasco and Angel Nulani.]

A Malaysian model woke up to find 173 million strangers watching her body - topped with a face that was never hers, linked to porn she never made. The account had three million followers. It had been created less than three months earlier. She had reported it repeatedly. Nothing happened.

That is the story of Riya Ulan, a model and content creator based in Malaysia. But her case is not an anomaly. It is a documented pattern, one that a joint investigation by BBC News and the independent AI publication Riddance has now exposed at scale.

Across Instagram and TikTok, more than 60 accounts have been identified featuring AI-generated Black female characters - avatars with exaggerated physiques, artificially darkened skin tones described by researchers as "not natural," and sexualized portrayals marketed via account names using terms like "ebony," "noir," and "dark." The accounts funnel followers through link chains to paid adult content sites. None of the accounts disclose that their characters are AI-generated, in apparent violation of platform policies. Several explicitly deny it.

The investigation, published March 22, 2026, has triggered bans on 20 TikTok accounts. Instagram's parent company Meta has said only that it is "investigating." Nine accounts have quietly disappeared from the platform without explanation. Dozens remain.

173M views on a stolen video of Riya Ulan - 47 times the views on her original post. She never consented to any of it.

[Graphic: Investigation by the numbers. Key figures from the BBC/Riddance investigation. Sources: BBC News, Riddance AI research; Quebec statistic from York University, reported in a parallel legal story.]

The Woman Behind the Stolen Face

[Image: Riya Ulan. Her body was used without her knowledge or consent to power a viral account promoting adult content.]

Riya Ulan didn't know what had happened at first. She is a genuine content creator - a model posting her own work, building her own audience. Then someone sent her a link. The video looked like her. The movements, the clothing, the backdrop - all replicated. But the face was different: an AI-generated character with an artificially dark skin tone, the kind that doesn't exist naturally, engineered by software that strips undertones and recalibrates shade to create something hyper-stylized, caricatured.

Her original video had 3.7 million views. The stolen version, overlaid with the AI face, had 173 million. The account had three million followers and had appeared only in December 2025, growing at a rate no organic creator achieves without years of work.

"I was angry. Of course my videos are all out there... It doesn't mean that you can just take it and steal it and post it as your own."

- Riya Ulan, model and content creator, Malaysia, speaking to BBC News

She reported the account to both platforms. Multiple times. The content sat live for weeks. The AI avatar account - which actively denied being AI-generated in its own posts - kept accumulating views, followers, and clicks through to the adult content it was advertising.

Only after the BBC contacted TikTok directly with the evidence did the platform ban the account. Instagram, where the account was also active, did not remove it at the time of publication.

"I'm not sure if I'm more concerned about them taking my video to promote their explicit content or [that] people actually believe in that," Riya told the BBC. She added that it is "becoming harder for users to tell whether content is real" and that "people keep on falling for these AI models."

Her case is the sharpest example in the investigation - but it is one of many. The BBC and Riddance identified more than 60 accounts with these AI-generated avatars. The majority were on Instagram. About a third also had parallel accounts on TikTok. Most linked, directly or through a chain of redirects, to paid-for adult content sites that labeled the imagery as AI-generated - language the social media accounts themselves never used.

What the Avatars Look Like - and What That Means

[Graphic: How the exploitation pipeline works. The seven-step cycle from avatar creation to revenue extraction, as documented by BBC and Riddance researchers.]

The avatars share consistent features. They are presented as Black women, but the kind of Black woman that does not exist - a fantasy assembled from stereotypes and algorithmic outputs, made visible through the same technology that is supposed to represent progress.

Exaggerated body shapes. Skin tones digitally processed to appear uniformly, unnaturally dark - what researcher Jeremy Carrasco of Riddance describes as the removal of "undertones," an effect that previously required manual animation or "skin painting" and is now produced automatically by AI tools. Skimpy clothing. Provocative poses. Many of the accounts follow and like each other, suggesting network coordination.

The account names are explicit in their racial framing. Terms like "black," "noir," "dark," and "ebony" feature prominently. Some accounts include captions or posts referencing white male fetishization - phrases like "loves white men" and "why I need a white guy in my life" appear across multiple profiles.

"The new thing is the quantity of shameless, racist depictions of extremely Black people. AI gives it new purchase. There's no shame... that's something AI uniquely exploits."

- Jeremy Carrasco, AI analyst and researcher, Riddance

Carrasco's phrase - "no social consequences for an avatar" - cuts to the core of what makes this moment different. Racist sexual fetishization of Black women is not new. What is new is the machinery that can produce it at industrial scale, with no human perpetrator who can be identified, shamed, fired, or prosecuted. The avatar has no identity. The operator behind it is invisible. The content exists, spreads, and monetizes - and moral accountability evaporates.

Angel Nulani, the second Riddance researcher on the investigation, frames it without ambiguity:

"I believe these accounts are racist because their existence perpetuates a long history of the exploitation of Black people. Their use of caricatures, race-play terminology and unrealistic depictions of Black women prove they're not concerned with our safety or wellbeing, but our ability to be capitalised as part of the online porn machine."

- Angel Nulani, researcher, Riddance

The Erasure Argument - and Why It Goes Deeper Than Porn

[Image: Houda Fonone. The content creator argues these avatars represent a deeper pattern of erasure, replacing authentic Black female identity with a manufactured fantasy.]

Houda Fonone is a Moroccan model and content creator who advocates for authentic representation of Black women online. Her response to the BBC investigation does not focus on the adult content angle. She focuses on something more systemic: the replacement of real Black women with idealized fictions that strip away what makes them human.

"Silky hair, extremely thin bodies and impossibly flawless skin... it's as if Black beauty can only be accepted when 'refined'. Our stories and real-life experiences are replaced by an artificial image."

- Houda Fonone, Moroccan model and content creator, speaking to BBC News

This is the part of the story that gets lost when the conversation narrows to "AI-generated porn" or "platform policy violations." What the accounts are doing is not just exploiting Black women's sexuality - they are replacing Black women's presence with something more palatable to a particular audience. The avatars don't argue back. They don't have trauma. They don't have politics. They don't have families, histories, or language that doesn't fit the frame.

Real Black women content creators - the Riya Ulans and Houda Fonones of the internet - are competing for attention in a space that is simultaneously being flooded with AI-generated versions of them that are more extreme, more available, and more algorithmically optimized for engagement. The algorithms do not know the difference. The recommendation engines that served a stolen Riya Ulan video to 173 million people treated it the same as the original: content to be distributed to whoever wanted it.

This dynamic runs parallel to a broader concern in the creator economy - that AI-generated content is outcompeting human creators not because it is better, but because it has no floor. An AI avatar has no burnout, no mental health days, no limit on how extreme it can become. It will never refuse. That makes it, by certain metrics, ideal content.

[Graphic: What the platforms say they prohibit.]

How the Platforms Failed - And Keep Failing

[Graphic: Platform accountability timeline. The investigative findings and platform responses, showing the gap between policy and enforcement.]

The gap between what platforms say and what they do is not new. But in this case, it is measurable and specific.

Riya Ulan first discovered her content had been stolen when the account already had millions of followers. She filed reports with both TikTok and Instagram. The reports went nowhere. The account kept growing. When the BBC investigated - weeks or months later - the account was still active, still accumulating views, still linked to adult content. It was banned only after a journalist with institutional reach contacted the platform press office and asked for comment.

TikTok's response, once it acted, was swift. Twenty accounts were banned within days of the BBC's approach. But that timeline raises its own questions: if 20 accounts could be identified and removed in days after a journalist's email, why did the same platform's standard reporting systems fail to catch them over months of operation?

Meta's response has been slower. At the time of publication, the company told the BBC it was "investigating." Nine accounts the BBC had flagged appear to have been quietly removed - but the platform has not confirmed any action or said how many accounts were reviewed. The account that used Riya's videos was still active on Instagram when the story published.

The BBC investigation notes that many of the accounts had already accumulated large followings before being flagged - which means Meta's algorithm was actively recommending content it now claims violates its policies. "We want users to know when they are looking at posts that have been made with AI," a Meta spokesperson said. What the company did not address is why its own recommendation systems spent months ensuring the opposite happened.

TikTok says it "prohibits and removes AI-generated content that is harmful or misleading, and requires users to label realistic AI-generated content." The accounts that the BBC identified were not labeled. They had been running for weeks or months. One had three million followers. The prohibition and the enforcement were in entirely different universes.

The Technology Behind the Deception

Understanding why this is happening now, and why it is likely to accelerate, requires a look at the technical shift that made it possible.

AI image and video generation has crossed a threshold in the past two years. Generating a convincing human face is no longer a specialist operation. Tools available to consumer-level users can produce photorealistic portraits in seconds, and video generation tools can overlay generated faces onto existing footage with increasing accuracy. The process that created the Riya Ulan avatar - stripping her original face and replacing it with a generated one - is not a sophisticated hack. It is a workflow accessible to anyone with a laptop and a free account on several popular AI platforms.

Jeremy Carrasco of Riddance explains that creating extremely dark skin tones in AI imagery requires intentional manipulation - removing color undertones that appear naturally in human skin to create something "not natural," an artificially homogenized darkness that reads as caricature to the human eye but is still processed as a face by recommendation algorithms.

This technical capability did not create racism. But it did remove one of the few barriers that had kept certain kinds of racist content relatively marginal: the cost and effort required to produce it. Hand-animated racist imagery, painted or drawn, required skill and time. It also required a human being who could be identified and held accountable. AI-generated content has no maker in the traditional sense. The person who prompted the output is anonymous. The model that produced it has no legal liability. The platform that hosted it claims it acted in good faith.

The result is a production pipeline for racist sexual content that is cheap, scalable, deniable, and profitable - with the profit concentrated among anonymous operators and the harm distributed across real Black women who find their faces and bodies replaced, their audiences stolen, and their reports ignored.

[Graphic: The pattern by the numbers.]

Who Is Doing This - And Why

The BBC investigation does not identify the operators behind the accounts. That is not a failure of the reporting - it is the nature of the problem. The accounts were created anonymously. The adult content sites they link to are operated anonymously. The financial flows are opaque. Investigators and researchers working in this space describe this as a deliberate architecture: layers of indirection that make attribution nearly impossible within the existing legal and platform frameworks.

What can be observed is the business model. The accounts build audiences on free social media platforms through a combination of viral content (including, as documented, stolen real-person videos), explicit racial and sexual framing designed to attract specific searches and recommendation patterns, and consistent posting volume. Once an account reaches sufficient scale, a bio link or story link chains users to the paid adult content site. The site itself labels the content as AI-generated - because the content is - but the social media funnel does not, because the label would reduce engagement.
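
That funnel is externally observable, which is how researchers could document it. As a minimal sketch - assuming Python's requests library, a hypothetical trace_redirect_chain helper, and a placeholder URL rather than any account from the investigation - this is roughly how an auditor might log the HTTP redirect hops behind a bio link. It captures only HTTP-level redirects; JavaScript or meta-refresh hops would need a headless browser.

```python
# Minimal sketch: trace the HTTP redirect chain behind a bio link.
# Assumes the `requests` library. Catches only HTTP-level redirects,
# not JavaScript or meta-refresh hops.
from urllib.parse import urljoin

import requests

def trace_redirect_chain(start_url: str, max_hops: int = 10) -> list[str]:
    """Return every URL visited, in order, ending at the final destination."""
    chain = [start_url]
    url = start_url
    for _ in range(max_hops):
        # Disable automatic redirect-following so each hop is recorded.
        resp = requests.get(url, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        if location is None:
            break  # No further redirect: final destination reached.
        url = urljoin(url, location)  # Resolve relative redirects.
        chain.append(url)
    return chain

# Hypothetical usage: document where an account's bio link actually lands.
for hop in trace_redirect_chain("https://example.com/bio-link"):
    print(hop)
```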

It is, in stripped-down terms, affiliate marketing. Someone is being paid every time a follower of these accounts clicks through and subscribes. The business is real. The harm is real. The operator is not.

The racial element is not incidental. Account names and post language suggest that the operators are deliberately targeting a market for racialized content - specifically, content that fetishizes Black women through a lens of white male desire (the "loves white men" captions, the accounts following each other to form a network, the shared visual language of caricatured features). This market existed before AI. AI made it cheap enough to industrialize.

What This Means for Black Women Online

[Image: Jeremy Carrasco. The researcher argues AI removes the social consequences that previously kept the worst content marginal.]

The impact is not abstract. Houda Fonone describes it as "erasure" - not just of individual creators but of presence itself. When AI avatars of Black women can accumulate 173 million views and three million followers in a matter of months, they are not just competing with real Black creators. They are redefining, algorithmically, what a "Black woman content creator" looks like on major platforms.

Recommendation algorithms learn from behavior. When millions of people engage with an AI-generated, fetishized Black female avatar, the algorithm learns to associate certain searches and browsing patterns with that kind of content. Real Black creators - the ones who look like actual people, who post about their actual lives, who have the complex aesthetics and politics that real human beings have - are competing in an information environment where the definition of their own representation has been polluted by manufactured caricature.

This is not hypothetical. The BBC investigation documents real outcomes for real people. Riya Ulan described feeling uncertain about whether to keep posting, knowing her content could be stolen and recontextualized at any time. She reported it. The platforms ignored her. The violation continued at scale. The account ended only because a media organization with institutional power intervened on her behalf.

For every Riya Ulan who gets a resolution - partial, delayed, inadequate - there are hundreds of Black women creators whose content may have been appropriated in ways they don't even know about. The scale of what the BBC found in a targeted investigation of 60+ accounts suggests the actual scope of this activity is significantly larger.

Fonone puts it plainly: "It feels like our online reflections of lived experience are being replaced by artificial images." She is not speaking metaphorically. She is describing a measurable phenomenon - a displacement of authentic representation by manufactured content optimized for a specific, racist market segment.

The Regulatory and Legal Void

The BBC investigation is significant not just for what it found, but for what it reveals about the limits of existing frameworks.

In the United States, AI-generated content that depicts real people in sexual scenarios without consent is increasingly being addressed through legislation - several states have passed or are considering laws criminalizing non-consensual AI sexual content, and federal legislation has been proposed. But these laws, where they exist, focus primarily on realistic depictions of identifiable real people. The accounts documented by the BBC use fictional AI characters - avatars - not realistic reproductions of named individuals. Riya's case, where her actual body was used, is closer to the non-consensual intimate image model, but the AI face overlay complicates the legal classification.

Platform content policies are the primary enforcement mechanism - and as this investigation demonstrates, those policies are enforced only when external pressure is applied. The platforms have the technical capacity to identify unlabeled AI-generated content at scale. They have the algorithmic capacity to detect networks of coordinated accounts with similar visual and naming patterns. They have the reporting infrastructure to receive victim complaints. None of these mechanisms produced action before a journalist intervened.

The European Union's AI Act, which came into force in phases through 2025 and 2026, includes requirements for labeling AI-generated content - but enforcement against individual anonymous account operators is a different challenge than regulating large AI developers. The accounts in this investigation are the downstream product of AI tools, not the tools themselves.

What this leaves is a familiar pattern: the harm is real, the technology enabling it is available, the business model is working, and the regulatory and platform frameworks are running years behind. In the meantime, Riya Ulan's body is still out there. Somewhere. On platforms that have already shown they will not act until a camera is pointed at them.

The Platforms Have Choices They Are Not Making

This is not a problem without solutions. It is a problem without sufficient will to implement them.

TikTok's response - banning 20 accounts within days of the BBC investigation - shows that rapid enforcement is technically feasible when motivation exists. The same analysis that identified those accounts could have been applied proactively. A coordinated network of 60+ accounts with shared naming conventions, shared visual styles, shared link-chain structures, and mutual engagement patterns is not invisible to machine learning systems. It is the kind of pattern these platforms' trust-and-safety teams are specifically designed to identify.
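
To make that concrete: below is a minimal sketch, assuming a hypothetical follows mapping and a toy term list rather than any platform's real data, of the two cheapest signals named above - shared naming conventions and mutual engagement. Production trust-and-safety systems would add visual embeddings and link-destination clustering, but even this version surfaces multi-account clusters.

```python
# Minimal sketch: cluster accounts that share flagged naming terms and
# mutually follow one another. The data structure and term list are
# illustrative assumptions, not the platforms' actual systems.
from collections import defaultdict

FLAG_TERMS = ("ebony", "noir", "dark")  # naming pattern documented in the investigation

def find_coordinated_clusters(follows: dict[str, set[str]]) -> list[set[str]]:
    """Return groups of name-flagged accounts linked by mutual follows.

    `follows` maps each account name to the set of accounts it follows.
    """
    flagged = {a for a in follows if any(t in a.lower() for t in FLAG_TERMS)}

    # Undirected graph: an edge exists only when both accounts follow each other.
    graph: dict[str, set[str]] = defaultdict(set)
    for a in flagged:
        for b in follows[a] & flagged:
            if a in follows.get(b, set()):
                graph[a].add(b)
                graph[b].add(a)

    # Connected components of that graph are candidate coordinated clusters.
    clusters: list[set[str]] = []
    seen: set[str] = set()
    for start in flagged:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(graph[node] - component)
        seen |= component
        if len(component) > 1:  # Singletons are not evidence of coordination.
            clusters.append(component)
    return clusters
```

Clusters whose members also share link destinations or posting cadence would be strong candidates for human review - exactly the triage the 60-plus accounts in this investigation should have triggered.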

Requiring AI-generated content labels and enforcing that requirement through automated detection is a policy that Meta and TikTok both claim to have in place. Riya Ulan's case - where the stolen video reached 35 million TikTok views and 173 million Instagram views before any action was taken - is a documented failure of that enforcement, not evidence that it cannot work.

Researchers like Carrasco and Nulani, who identified the pattern through manual analysis of publicly available accounts, are doing the work that platform trust-and-safety teams are paid to do. The BBC investigation is, in effect, a subsidized moderation operation - a major media organization's journalistic resources being used to compensate for Meta and TikTok's enforcement failures.

That is not a sustainable model. There are more than 60 accounts. There are more platforms than Instagram and TikTok. There are more ways to use AI to exploit real women's bodies than the specific pipeline documented here. The investigation revealed a symptom. It did not cure the disease.

What would cure it - or at least significantly constrain it - is platform-level commitment to proactive enforcement, robust labeling requirements with real consequences for violations, meaningful appeals processes for victim reports that don't require BBC intervention to resolve, and coordination between platforms to share detection intelligence about cross-platform networks of coordinated inauthentic accounts.

None of that requires new technology. It requires the decision to prioritize the safety of the real Black women on these platforms over the engagement metrics generated by the fake ones.
