
The Engagement Machine: Inside Meta's Secret AI Lab Built to Make You Addicted

One week after a jury found the company liable for engineering teen addiction, news broke that Mark Zuckerberg had hired TikTok's top algorithm architect to build an even more powerful recommendation engine. Welcome to MRS Research - Meta's most important team you've never heard of.

By PRISM Bureau - BLACKWIRE | April 2, 2026 | 18 min read

The blue glow of the feed. Billions of people see it every day. A new team inside Meta is working to make sure they never look away. (Pexels)

Somewhere inside Meta's sprawling Menlo Park campus, a team of roughly two dozen researchers is quietly building the most powerful attention-capture system ever designed. They call it MRS Research. The name is deliberately boring - an acronym buried inside Meta's Recommendation Systems division, the kind of label designed to slide past journalists and regulators without raising a single eyebrow.

But MRS Research is anything but boring. According to reporting by Business Insider, the team began operations in October 2025 and is led by Yang Song, a former TikTok executive who managed over 400 machine learning engineers focused on the exact thing that made TikTok the most addictive app on the planet: recommendation algorithms. Song joined Meta in November 2025 as vice president of recommendation research. His mandate, according to internal job listings, is to "revolutionize" recommendation systems through large language models, knowledge integration, and advanced reasoning.

The timing is stunning. On March 25, 2026 - exactly one week before this team's existence became public - a Los Angeles County Superior Court jury found Meta 70% liable for designing Instagram in a way that addicted a young user, awarding $6 million in damages in the landmark case of K.G.M. v. Meta Platforms and Google. One day before that verdict, a separate New Mexico jury hit Meta with $375 million in civil penalties for endangering children. The message from the courts was clear: your algorithms are hurting people, and you know it.

Meta's response? Hire the guy who built TikTok's algorithm and tell him to make Instagram's even better.

The Man From TikTok: Who Is Yang Song?


Yang Song brings the playbook that made TikTok the world's most addictive app directly to Meta. (Pexels)

Yang Song is not a household name, but he probably should be. Before joining Meta, he ran one of the most consequential engineering organizations in the history of social media: TikTok's recommendation and user growth team. Over 400 machine learning engineers and researchers reported to him. Their job was to figure out what you want to see next, before you know you want to see it.

TikTok's algorithm is widely regarded as the most sophisticated content recommendation system ever deployed at consumer scale. Unlike Instagram's legacy approach of showing content from accounts you follow, TikTok's For You page operates as a pure discovery engine - surfacing content from strangers based on a behavioral model that tracks everything from how long your thumb hovers over a video to whether you replay the first three seconds. The algorithm doesn't just predict your preferences. It shapes them. Former TikTok engineers have described the system as being able to identify a user's emotional state within 30 minutes of usage and begin serving content calibrated to that state.

Song's hiring at Meta was a direct response to the "TikTok problem" - the reality that Instagram Reels, despite billions of dollars in investment, has never matched TikTok's ability to keep users engaged. According to India Today, Song now oversees the deployment of AI technology designed to enhance user engagement across all of Meta's platforms - Instagram, Facebook, and Threads.

The MRS Research team Song has built isn't just a reshuffled group of existing Meta engineers. It's a recruitment operation targeting the best minds at rival companies. The team has already hired Lihong Li, formerly an AI researcher at Amazon; Xiaolong Wang, previously at OpenAI; and Fei Sha, who left Google to join. Each of these researchers specializes in the precise technical domain that makes recommendation systems work: reinforcement learning, user modeling, and ranking optimization.

In Song's own words, from an internal job listing obtained by Business Insider, MRS Research will focus on "long-term AI research goals" and the integration of large language models into Meta's recommendation engine. That means the same technology powering ChatGPT and Claude will be used to decide what appears in your Instagram feed. Not because it's good for you - but because LLMs can understand content at a semantic level that traditional collaborative filtering never could. They can read the text in a post, understand the emotion in an image, parse the context of a comment thread, and use all of that to construct a more precise model of what will keep you scrolling.

$201 Billion and the Algorithm That Prints Money


Meta's advertising revenue machine generated $201 billion in 2025. MRS Research exists to make that number grow. (Pexels)

To understand why Meta is building MRS Research, you need to understand one number: $201 billion. That's Meta's total revenue for 2025, a 22% increase year-over-year, with virtually all of it coming from advertising. Q4 2025 alone generated $59.9 billion in revenue with a 41% operating margin. Meta isn't just a social media company. It's an advertising delivery system that happens to use social media as the vehicle. And the engine that makes that vehicle run is the recommendation algorithm.

$201B - Meta's 2025 revenue, nearly all of it from advertising powered by recommendation algorithms

Here's how the economics work. Every second a user spends on Instagram or Facebook is a second during which Meta can serve them an ad. The recommendation algorithm determines both what organic content users see (which keeps them on the platform) and which ads they see (which generates revenue). A 1% improvement in session duration across Meta's 3.9 billion monthly active users translates to hundreds of millions of additional ad impressions per day. At Meta's average revenue per user of roughly $50 per year in the United States and Canada, even marginal improvements in engagement have enormous financial consequences.
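A back-of-envelope sketch of that arithmetic - the per-user session time and ad load below are illustrative assumptions, not Meta's disclosed figures; only the user count comes from the paragraph above:

```python
# Back-of-envelope: what a 1% lift in session duration is worth in impressions.
# AVG_MINUTES_PER_DAY and ADS_PER_MINUTE are invented for illustration.

MAU = 3.9e9                  # monthly active users (cited above)
AVG_MINUTES_PER_DAY = 30     # assumed average time on app per user per day
ADS_PER_MINUTE = 1.5         # assumed ad load

baseline_impressions_per_day = MAU * AVG_MINUTES_PER_DAY * ADS_PER_MINUTE
lift = 0.01                  # a 1% improvement in session duration
extra_impressions_per_day = baseline_impressions_per_day * lift

print(f"{extra_impressions_per_day:,.0f} additional ad impressions per day")
# Under these assumptions: roughly 1.75 billion extra impressions daily,
# which makes "hundreds of millions" a conservative framing.
```

Even if the assumed session time or ad load is off by half, the lift from a single percentage point stays in the hundreds of millions of impressions per day.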

This is why MRS Research works closely with Meta's ads division. According to Dataconomy, the team plays a "critical role in optimizing the algorithms that underpin nearly all of Meta's $201 billion revenue." In late 2025, Meta launched an AI model specifically designed to increase ad effectiveness by tailoring content to individual preferences. That model was complemented by an Adaptive Ranking Model for Instagram that dynamically adjusts which ads appear based on real-time user behavior signals.

Morgan Stanley projects Meta's advertising revenue will grow by 28% in 2026, an acceleration from the already-strong 22% growth in 2025. If those projections hold, Meta would generate approximately $257 billion in revenue this year - a number that would make it the second-largest advertising company by revenue on the planet, behind only Google. MRS Research is the team tasked with making that growth happen.

But growth in what? Session duration. Scroll depth. Time-on-app. Notification tap-through rates. Return frequency. These are the metrics that matter to Meta's business, and they're all proxies for the same thing: how successfully the platform captures and holds human attention. MRS Research exists to capture more of it, more efficiently, using more advanced AI than any previous recommendation system has employed.

The Addiction Verdict: What the Courts Found


The K.G.M. verdict marked the first time a jury held Big Tech accountable for the addictive design of social media. (Pexels)

The backdrop against which MRS Research is being built makes its existence deeply uncomfortable. On March 25, 2026, a nine-day trial in Los Angeles concluded with a jury finding Meta and Google liable for designing platforms that addicted a young user named Kaley, identified in court documents as K.G.M. The jury awarded $3 million in compensatory damages (70% from Meta, 30% from Google) plus an additional $3 million in punitive damages after finding evidence of "malice, oppression, or fraud" in both companies' conduct.

The legal strategy was precise and devastating. Plaintiffs' attorneys deliberately avoided Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. Instead, they argued product design liability - that features like infinite scroll, autoplay, push notifications, variable-ratio reward schedules (likes, comments, follower counts), and algorithmic content recommendations were deliberately engineered to maximize engagement at the expense of user wellbeing.

"The platforms were not designed to inform or connect - they were designed to capture and hold attention at any cost. The internal documents prove these companies knew exactly what they were doing to young minds." - Plaintiffs' closing statement, K.G.M. v. Meta Platforms and Google

The jury reviewed internal Meta documents - many originating from the Frances Haugen whistleblower disclosures of 2021 - showing that Meta's own researchers had concluded Instagram was harmful to teenage girls' mental health, particularly around body image and social comparison. Evidence presented during the trial showed the plaintiff spent up to 16 hours daily on social media during her most addicted periods. Expert witnesses testified that the dopamine-driven feedback loops embedded in these platforms exploit the same neurological pathways as gambling and substance addiction, with adolescents being particularly vulnerable due to their still-developing prefrontal cortex.

The dollar amount - $6 million - is negligible for a company that makes $201 billion a year. Meta's stock actually rose 1% after the verdict. But the precedent is seismic. The K.G.M. case was a bellwether trial, meaning its outcome directly influences how thousands of similar cases will be litigated. There are currently more than 235 pending federal lawsuits, over 250 school district claims, and more than 100,000 individual arbitration demands targeting social media companies for addiction-related harms. In its 2026 10-K filing with the SEC, Meta itself warned investors of "significant" financial impact from these proceedings.

One day before the K.G.M. verdict, a New Mexico jury ordered Meta to pay $375 million in civil penalties for endangering children and misleading the public about platform safety. Legal analysts have drawn direct comparisons to the Big Tobacco litigation of the 1990s, which ultimately cost the tobacco industry $206 billion in settlement payments. The scale of litigation facing Meta and its peers could exceed even that figure.

Timeline: Meta's Collision Course

October 2025 - MRS Research begins operations inside Meta's Recommendation Systems division
November 2025 - Yang Song joins Meta from TikTok as VP of Recommendation Research
Late 2025 - Meta launches new AI model to enhance ad effectiveness; Adaptive Ranking Model deployed on Instagram
December 2025 - Meta acquires AI startup Manus for $2 billion
March 10, 2026 - Meta acquires Moltbook, the AI-agent social network, folding its team into Meta Superintelligence Labs
March 14, 2026 - Reuters reports Meta planning up to 20% layoffs (roughly 16,000 jobs) to fund AI push
March 24, 2026 - New Mexico jury orders Meta to pay $375 million for endangering children
March 25, 2026 - Los Angeles jury finds Meta 70% liable in landmark addiction case, awards $6 million
March 26, 2026 - Meta fires 700 employees; top executives receive stock options worth up to $900 million
April 1, 2026 - Business Insider breaks the MRS Research story

The Talent War: Poaching From TikTok, OpenAI, Google, and Amazon


MRS Research has recruited top AI talent from TikTok, OpenAI, Google, and Amazon. (Pexels)

Meta is not building MRS Research with junior engineers. The recruitment strategy reads like a hostile takeover of the AI talent market. Song brought the knowledge of TikTok's recommendation architecture with him. Lihong Li came from Amazon, where he worked on the recommendation systems that power what Amazon shows you to buy. Xiaolong Wang left OpenAI, where he worked on the foundational AI research that powers GPT models. Fei Sha departed Google's AI team, where he worked on ranking algorithms similar to those that power Google Search and YouTube recommendations.

Yang Song - VP of Recommendation Research, MRS Research lead. Former TikTok executive who managed 400+ ML engineers focused on recommendation and user-growth algorithms; now leading MRS Research at Meta.
Lihong Li - AI researcher. Formerly at Amazon; specializes in reinforcement learning and recommendation systems. Recruited to MRS Research in early 2026.
Xiaolong Wang - AI researcher. Formerly on OpenAI's core research team; expertise in generative models and reasoning systems.
Fei Sha - AI researcher. Formerly at Google; background in ranking algorithms and statistical learning. Recruited from Google's search and recommendation teams.

This talent concentration is significant for reasons beyond individual expertise. Each of these researchers brings institutional knowledge from their previous employers - knowledge about how rival recommendation systems work, what approaches have been tried and abandoned, and where the technical frontier lies. When Song left TikTok, he didn't leave TikTok's algorithm behind. He left with a deep understanding of its architecture, its strengths, its weaknesses, and its next-generation research directions. The same applies to every other hire.

The financial incentives are enormous. According to IndexBox, Meta has "dangled $100-million-plus compensation packages for top-tier researchers." That figure isn't hyperbole. At Meta's current revenue trajectory, a researcher who can improve recommendation accuracy by even a fraction of a percent is generating billions in additional advertising revenue. A $100 million comp package for someone who delivers a 2% improvement in session duration across 3.9 billion users is, from a pure financial standpoint, a bargain.

Meta has drawn a clear organizational line between MRS Research and its other AI efforts. MRS Research operates independently from Meta Superintelligence Labs (MSL), which is led by Alexandr Wang (the Scale AI founder Meta brought in) and focuses on building general-purpose AI models - the kind of AI that could eventually achieve artificial general intelligence. MRS Research, by contrast, is devoted to the immediate, revenue-generating problem of making people spend more time on Meta's platforms. It focuses on four areas: content understanding, user understanding, retrieval, and ranking. Each maps directly to a stage in the recommendation pipeline.

Content understanding means using AI to analyze what a piece of content is about - not just its tags and metadata, but its emotional tone, its visual composition, its likelihood of provoking engagement. User understanding means building models of individual users that predict their preferences, moods, and behavioral patterns with increasing precision. Retrieval means selecting candidate content from the billions of posts available. And ranking means ordering those candidates to maximize the probability that you'll engage with them.
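Those four areas correspond to the standard two-stage recommendation pipeline: retrieve a small candidate pool from an enormous corpus, then rank it by predicted engagement. A heavily simplified sketch - every signal, weight, and post here is invented for illustration, not Meta's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str                   # output of "content understanding" (assumed)
    predicted_watch_time: float  # model-predicted seconds watched (assumed)
    predicted_ctr: float         # model-predicted click-through rate (assumed)

# "User understanding": a profile of inferred interests (invented data).
user_interests = {"cooking": 0.9, "fitness": 0.4}

corpus = [
    Post("p1", "cooking", 42.0, 0.08),
    Post("p2", "politics", 55.0, 0.12),
    Post("p3", "fitness", 30.0, 0.05),
    Post("p4", "cooking", 12.0, 0.02),
]

# Retrieval: cut the corpus down to candidates matching an inferred interest.
candidates = [p for p in corpus if p.topic in user_interests]

# Ranking: order candidates by a weighted engagement objective.
def engagement_score(p: Post) -> float:
    affinity = user_interests[p.topic]
    return affinity * (0.7 * p.predicted_watch_time + 0.3 * 100 * p.predicted_ctr)

feed = sorted(candidates, key=engagement_score, reverse=True)
print([p.post_id for p in feed])  # highest predicted engagement first
```

Note what the objective optimizes: predicted watch time and clicks. Nothing in the scoring function represents whether the user benefits from seeing the post.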

The LLM Revolution in Recommendation: Why This Time Is Different


Large language models bring semantic understanding to recommendation systems - they don't just track clicks, they understand meaning. (Pexels)

Traditional recommendation algorithms work through collaborative filtering and behavioral signals. They track what you click, what you share, how long you watch, and use that data to find patterns: people who liked X also liked Y. This approach is powerful but fundamentally shallow. It knows that you watched a cooking video for 45 seconds. It doesn't know why.
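The "people who liked X also liked Y" pattern can be sketched as simple item-item co-occurrence counting - a toy illustration of the behavioral-signal approach, with invented users and videos:

```python
from collections import defaultdict
from itertools import combinations

# Toy interaction log: user -> set of items they engaged with (invented data).
interactions = {
    "u1": {"cooking_vid", "craft_vid", "diy_vid"},
    "u2": {"cooking_vid", "craft_vid"},
    "u3": {"cooking_vid", "gym_vid"},
    "u4": {"craft_vid", "diy_vid"},
}

# Count how often each pair of items is engaged with by the same user.
co_counts = defaultdict(int)
for items in interactions.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1

def related_to(item):
    """Items most often co-engaged with `item`: 'people who liked X also liked Y'."""
    scores = defaultdict(int)
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

print(related_to("cooking_vid"))  # craft_vid co-occurs most often
```

Notice what the model knows: co-occurrence counts, nothing else. It has no representation of why the cooking and craft audiences overlap - which is exactly the shallowness the LLM approach described below is meant to eliminate.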

Large language models change this equation entirely. An LLM-powered recommendation system can read a post's caption, understand its context, identify the emotional register, and predict how different users will respond to it based on a deep semantic model of their interests and psychological profile. Instead of "users who watched cooking videos also watched craft videos," an LLM-based system can reason: "this user has been watching content about coping with stress through creative activities, and this cooking tutorial has a meditative, low-key tone that matches their recent consumption patterns."

The difference is the difference between a store clerk who notices you buy milk every Tuesday and a personal assistant who understands you're lactose intolerant but keep buying milk because your kid likes cereal. One predicts behavior. The other understands motivation. And when you understand motivation, you can manipulate it.

Yang Song's job description explicitly mentions "knowledge integration and advanced reasoning" as core techniques MRS Research will deploy. This language maps directly to current LLM capabilities. Knowledge integration means using the vast world knowledge encoded in large language models to understand content in context. Advanced reasoning means using chain-of-thought and multi-step inference to model complex user behaviors. Together, these capabilities represent a qualitative leap in what recommendation systems can do.

Consider what this means in practice. Instagram's current recommendation system is already responsible for more than 50% of the content users see, up from roughly 15% in 2020. Meta has been steadily increasing the share of algorithmically recommended content in users' feeds, and every increase has correlated with increases in time-on-app. An LLM-powered system would accelerate this trend by making algorithmic recommendations more accurate, more personalized, and more difficult to resist.

The technical implications extend beyond content recommendation. LLMs can also be used to generate content - and Meta is already doing this. The company acquired Moltbook, an AI-agent social network where humans are banned from posting, in March 2026. It acquired Manus AI for $2 billion and invested $14.3 billion in Scale AI. These aren't separate initiatives. They're components of a single vision: a platform where AI generates content, AI recommends that content, and AI optimizes the entire pipeline for maximum engagement. The human user becomes a passenger in a system entirely designed by machines to capture their attention.

The Contradiction: Addiction Verdicts vs. Engagement Metrics


Sixteen hours a day. That's how much time the plaintiff in the K.G.M. case spent on social media at peak addiction. MRS Research is designed to make everyone spend more. (Pexels)

This is where the story gets genuinely troubling. The courts have found - twice, in the span of two days - that Meta's recommendation algorithms are engineered to addict users and that this design causes measurable harm, particularly to young people. The legal theory that prevailed in K.G.M. v. Meta specifically identified algorithmic content recommendations as a defective product feature. The jury found evidence of "malice, oppression, or fraud" - legal terms meaning they believed Meta knew its algorithms were harmful and continued deploying them anyway.

Yet here is Meta, in the same month as those verdicts, building MRS Research - a team whose sole purpose is to make those same algorithms more powerful. More personalized. More accurate at predicting what will keep you engaged. The team is explicitly working on "content understanding" and "user understanding" - the same capabilities that the K.G.M. plaintiffs argued were used to exploit users' psychological vulnerabilities.

Meta will argue that better recommendations don't necessarily mean more addictive recommendations. They'll point to their investments in parental controls, screen time reminders, and age-verification technologies. They'll note that MRS Research is focused on "relevance" and "quality," not "addiction." And technically, they're right that a more accurate recommendation system could theoretically show you better content rather than just more content.

But that's not how these systems are optimized. Recommendation algorithms are trained on engagement metrics - clicks, shares, watch time, return visits. These metrics don't distinguish between healthy engagement and addictive engagement. A user who spends two hours on Instagram because they're genuinely enjoying a photography hobby looks identical, in the data, to a user who spends two hours because they're trapped in a doom-scroll they can't escape. The algorithm optimizes for both equally, because both generate the same number of ad impressions.
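That metric blindness is easy to make concrete. In the toy objective below (all numbers invented), the hobbyist and the trapped doom-scroller produce identical behavioral totals, so the objective values them identically - wellbeing never enters the function:

```python
# Two very different sessions that an engagement objective cannot tell apart.
# The weights and session figures are invented for illustration.

def engagement_value(minutes_on_app, posts_viewed, interactions):
    """A generic engagement objective: its only inputs are behavioral totals."""
    return minutes_on_app * 1.0 + posts_viewed * 0.1 + interactions * 0.5

hobbyist = {"minutes_on_app": 120, "posts_viewed": 300, "interactions": 40}
doomscroller = {"minutes_on_app": 120, "posts_viewed": 300, "interactions": 40}

# Same totals in, same value out.
assert engagement_value(**hobbyist) == engagement_value(**doomscroller)
print(engagement_value(**hobbyist))  # 170.0 either way
```

Any variable that would distinguish the two sessions - mood, intent, regret - is simply not in the function's signature, so no amount of optimization pressure on this objective can favor one user over the other.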

"Features like infinite scroll, autoplay, push notifications, variable-ratio reward schedules, and algorithmic content recommendations were deliberately engineered to maximize engagement at the expense of user wellbeing." - Legal finding, K.G.M. v. Meta Platforms and Google, March 25, 2026

The fundamental problem is structural. Meta's business model requires maximizing user attention. MRS Research exists to maximize user attention. The courts have found that maximizing user attention causes harm. There is no version of MRS Research that resolves this contradiction without changing the business model itself - and Meta has given no indication that it intends to do so. Morgan Stanley projects 28% revenue growth in 2026. That growth requires more engagement, not less.

Fire 16,000, Hire 24: The Human Cost of the Algorithm Pivot


Meta is planning to cut up to 20% of its workforce - roughly 16,000 jobs - while investing billions in AI teams like MRS Research. (Pexels)

While MRS Research hires elite researchers with $100 million compensation packages, Meta is simultaneously planning its largest layoffs since 2023. Reuters reported on March 14 that up to 20% of Meta's 79,000 employees could be let go - roughly 16,000 people. On March 26, the company fired 700 workers. Hours before those layoffs, top executives received stock option packages worth up to $900 million each, tied to a $9 trillion valuation target.

The juxtaposition is stark. Thousands of human workers are being replaced by AI systems, and the money saved is being redirected into teams like MRS Research that build those AI systems. Meta's combined projected AI infrastructure spending for 2026 - alongside Google, Amazon, and Microsoft - is approximately $610 billion. That's not a typo. $610 billion in a single year, spent primarily on compute infrastructure for training and deploying AI models.

Meta has been explicit about the trade. Mark Zuckerberg has reportedly instructed employees to use more AI tools at work and is believed to be using an AI CEO agent for some of his own tasks. The company's entire organizational strategy is shifting from human labor to AI systems. The workers being laid off were doing jobs - content moderation, basic engineering, operations - that Meta believes AI can do more cheaply. The researchers being hired at MRS Research are doing jobs that AI can't yet do for itself: designing the next generation of AI systems.

This creates a dystopian feedback loop. AI systems replace human workers. The savings fund better AI systems. Better AI systems replace more human workers. The end state is a company that employs a small number of elite researchers building AI systems that generate revenue by capturing the attention of billions of people - with minimal human oversight of how those systems operate.

The BBC recently reported that whistleblowers from both Meta and TikTok revealed the companies allowed more harmful content to rise in users' feeds after internal evidence showed that outrage-driven content generated more engagement. If MRS Research succeeds in making Meta's recommendation system more powerful, the question isn't just whether it will be more addictive. It's whether there will be enough human employees left at Meta to monitor and moderate what the algorithm promotes.

What MRS Research Means for the Future of Social Media


LLM-powered recommendations represent a qualitative leap in how precisely platforms can model and influence user behavior. (Pexels)

MRS Research isn't just a Meta story. It's a signal of where the entire social media industry is heading. Every major platform - TikTok, YouTube, Snapchat, X - is racing to integrate large language models into their recommendation systems. The platform that builds the best LLM-powered recommendation engine will capture the most attention, generate the most advertising revenue, and dominate the market. MRS Research is Meta's bid to win that race.

But winning that race has consequences that extend far beyond corporate balance sheets. The AI sycophancy crisis is already here. A study published in late March 2026, covered extensively in the press, found that AI chatbots routinely affirm users' bad behavior and that engaging with sycophantic AI makes users more convinced of their own positions and less likely to take personal responsibility. Now imagine that same sycophantic tendency embedded in a recommendation algorithm. An LLM that "understands" you doesn't just show you content you agree with. It shows you content that validates your worldview, reinforces your emotional state, and gradually narrows your information environment to a mirror of your existing beliefs.

Researchers at Brown University published findings this week showing that AI chatbots instructed to act as therapists routinely violate core ethical standards of professional mental health counseling. They identified 15 distinct types of ethical risks. If chatbots designed to be helpful can't meet basic ethical standards, what happens when those same AI capabilities are deployed not to help you, but to keep you engaged?

The regulatory response is lagging badly. The Kids Online Safety Act (KOSA) has been debated in Congress for years without passage. State-level efforts have been patchwork. The EU's Digital Services Act provides some framework for algorithmic transparency, but enforcement is slow and technical literacy among regulators remains low. Meanwhile, MRS Research is moving fast. The team has been operational since October 2025 and is already staffed with top-tier talent.

The legal exposure may ultimately be what forces change. With over 235 federal lawsuits, 250 school district claims, and 100,000 individual arbitration demands pending, the financial risk to Meta is substantial - potentially exceeding the Big Tobacco settlement of $206 billion. If courts continue finding that recommendation algorithms constitute defective products, every improvement MRS Research makes to those algorithms becomes additional evidence of deliberate design choices. Every hire, every patent, every internal memo about increasing engagement becomes a potential exhibit in the next trial.

Meta knows this. In its 2026 10-K filing, it warned investors of "significant" financial impact from addiction-related litigation. But the company appears to have decided that the revenue opportunity outweighs the legal risk. Morgan Stanley's projection of $257 billion in 2026 revenue suggests they might be right - at least in the short term.

The Second-Order Effects No One Is Talking About


When 3.9 billion people use platforms powered by LLM-driven recommendations, the effects extend far beyond screen time. (Pexels)

Most coverage of MRS Research has focused on the addiction and privacy angles. Those are important, but they're not the most consequential effects. Here's what should keep you up at night.

First: the death of the open web. LLM-powered recommendation systems are so good at surfacing relevant content that users have less and less reason to leave the platform. Why open a browser and search for something when Instagram's algorithm already knows what you want and shows it to you before you think to ask? Every improvement in recommendation accuracy makes Meta's platforms more self-contained and the open internet less relevant. This isn't speculation - Meta's own data shows that the percentage of content consumed through algorithmic recommendation has risen from 15% in 2020 to over 50% today. MRS Research will push that number higher.

Second: the creator economy collapses into algorithmic dependency. If an LLM-powered algorithm determines what gets seen, creators don't need audiences - they need algorithmic favor. This shifts power from creators to the platform in ways that are fundamentally different from the current system. A creator who builds an audience of one million followers still reaches only a fraction of them through organic distribution. An LLM-powered system could theoretically bypass the follower model entirely, showing content to whoever the algorithm determines will engage with it. This sounds democratic, but it makes creators entirely dependent on an opaque system they can't influence or understand.

Third: information warfare becomes trivially easy. An LLM that understands the emotional context of content and the psychological profile of users can be gamed by sophisticated actors. State-sponsored disinformation operations, which already exploit recommendation algorithms on every major platform, will find LLM-powered systems even more susceptible to manipulation. If the algorithm understands that a particular user is anxious about immigration, and a bad actor creates content designed to amplify that anxiety, the LLM-powered system will identify the match and serve the content - not because it's true, but because it's engaging.

Fourth: the attention economy becomes zero-sum. There are only so many waking hours in a day. When Meta, TikTok, YouTube, and every other platform deploys LLM-powered recommendation systems simultaneously, they're all competing for the same pool of human attention with increasingly powerful tools. The result isn't more attention. It's the same attention, captured and held more efficiently, leaving less for everything else - work, relationships, sleep, exercise, education. The global cost of social media addiction has never been calculated, but the K.G.M. trial provided a glimpse: 16 hours a day, severe depression, suicidal ideation.

Fifth: the children question has no answer. Meta was found liable for addicting a young user. Its response is to build a more powerful algorithm. Age verification, parental controls, and screen time reminders are band-aids on a system that is fundamentally designed to maximize engagement regardless of the user's age. MRS Research is not building a system that works differently for teenagers. It's building a system that works better for everyone - and "better," in this context, means more engaging, more personalized, and harder to resist.

The Bottom Line: The Machine Doesn't Care


$610 billion in combined AI infrastructure spending. The four largest tech companies are betting everything on machines that capture human attention. (Pexels)

MRS Research is not a team of villains. Yang Song, Lihong Li, Xiaolong Wang, and Fei Sha are accomplished researchers solving technically fascinating problems. The work of building LLM-powered recommendation systems is genuinely hard computer science - requiring advances in natural language understanding, behavioral modeling, real-time inference at scale, and multi-objective optimization. The researchers working on these problems are some of the smartest people in AI.

That's precisely what makes this concerning. The machine they're building is not designed to care about the outcomes it produces. It's designed to optimize for engagement. Engagement generates revenue. Revenue drives stock price. Stock price triggers executive compensation packages worth $900 million. The incentive structure is perfectly aligned in one direction - more attention, more ads, more money - and perfectly misaligned in every other direction that matters: mental health, information quality, democratic discourse, human autonomy.

Meta has the legal right to build MRS Research. The courts that found it liable for addiction haven't (yet) ordered it to stop building more powerful recommendation systems. The appeals process will take years. By the time any legal constraint is imposed, MRS Research will have deployed its next-generation algorithms, the engagement numbers will have gone up, the revenue will have grown, and the case for dismantling the system will be even harder to make.

This is the core dynamic of the attention economy in 2026: the companies that capture attention are moving faster than the institutions designed to protect people from them. MRS Research is the latest proof. Not the last.

"Today, a jury saw the truth and held Meta and Google accountable for designing products that addict and harm children. This verdict sends an unmistakable message that no company is above accountability." - Lexi Hazam, court-appointed co-lead plaintiffs' counsel, K.G.M. v. Meta Platforms and Google

Meta's stock went up after the verdict. Yang Song is hiring. The machine keeps building itself.

Sources: Business Insider, India Today, Dataconomy, Tech Insider, Reuters, CNBC, Forbes, IndexBox, CNN, BBC. Court records: K.G.M. v. Meta Platforms and Google (LA County Superior Court, March 25, 2026); State of New Mexico v. Meta Platforms (March 24, 2026). Financial data: Meta 2025 10-K, Morgan Stanley estimates.
