The Weekend AI Broke Open: Microsoft Loses OpenAI Exclusivity, China Kills Meta's Manus Deal, 4TB of Voices Stolen
Six stories came to a head between April 25 and April 27 that, taken together, amount to a structural fracture in how the AI industry is organized. Microsoft lost its exclusive lock on OpenAI. China blocked Meta's acquisition of the AI agent startup Manus. A hacking group exfiltrated four terabytes of voice biometrics from more than 40,000 AI contractors. GitHub told developers that Copilot is going metered. Air permit filings showed that gas turbines at just 11 data center campuses could out-emit Morocco. And the trial that could determine OpenAI's corporate fate began in a San Francisco courtroom with jury selection.
None of these events happened in a vacuum. They are connected by the same underlying force: the AI industry is moving from a consolidation phase, where a handful of partnerships locked up the market, into a fragmentation phase, where those partnerships dissolve under their own weight, governments intervene, and the collateral damage starts hitting real people.
Here is what happened, why it matters, and what comes next.
1. The Microsoft-OpenAI Deal Unravels
On April 27, Microsoft and OpenAI jointly announced an amended partnership agreement. The headline: OpenAI can now serve its models through any cloud provider, not just Microsoft Azure. The revenue share remains at 20 percent, but it is now capped at an undisclosed amount and guaranteed only through 2030. The infamous "AGI clause" that would have terminated the deal upon reaching artificial general intelligence has been removed entirely.
This is not a small edit. It is the end of the most consequential exclusive partnership in the AI industry.
From Microsoft's initial $1 billion investment in 2019 until this amendment, Azure held exclusive rights to host OpenAI's models. That exclusivity gave Microsoft a unique selling point: if an enterprise wanted GPT-4, it had to go through Azure. It made OpenAI the reason enterprises chose Microsoft's cloud over AWS or Google Cloud. And it gave Microsoft a de facto ownership stake in OpenAI's intellectual property through 2032 without having to acquire the company outright.
The unraveling began two months ago, when OpenAI announced a $50 billion deal with Amazon that included plans to run certain OpenAI models on Amazon Web Services. Microsoft reportedly threatened legal action over that deal. The amended agreement announced April 27 essentially ratifies what was already happening: OpenAI is now free to run on Amazon Bedrock, and potentially Google Cloud and others, without Microsoft's permission.
MICROSOFT-OPENAI: BEFORE vs AFTER
OpenAI Chief Revenue Officer Denise Dresser said the quiet part loud in an internal memo obtained by CNBC: the Microsoft partnership had "limited our ability to meet enterprises where they are. For many that's Amazon Bedrock." Interest in running OpenAI models through Amazon's cloud has been, in Dresser's word, "staggering."
Source: Ars Technica | CNBC | OpenAI announcement
Why it matters
The removal of exclusivity fundamentally changes the cloud AI market. Azure no longer has the OpenAI moat. AWS now gets direct access to GPT-class models. Google Cloud, currently the odd one out with only its own Gemini models and Anthropic's Claude, has a clear path to offer OpenAI models too. The cloud wars, which had been settling into a Microsoft-AWS duopoly with Google trailing, just reopened.
There is a second-order effect that most coverage missed. The AGI clause removal means Microsoft no longer has a financial incentive to define AGI in narrow terms. Under the old agreement, Microsoft's revenue share ended if OpenAI achieved AGI, which created a perverse incentive for Microsoft to argue that whatever OpenAI built was not quite AGI yet. With that clause gone, Microsoft's 20 percent revenue share runs regardless of what OpenAI invents, up to the cap. This is actually cleaner for both parties. But it also means Microsoft loses the safety valve that would have let it walk away from an OpenAI that achieves something genuinely transformative.
The cap itself is the real question mark. Neither company disclosed the number. If the cap is low, Microsoft's upside from the OpenAI relationship is effectively bounded, and its incentive to keep investing in the partnership diminishes. If the cap is high, it barely matters. The fact that it was not disclosed suggests Microsoft did not get a number it wants to brag about.
2. China Blocks Meta's $2 Billion Acquisition of Manus
On the same day the Microsoft-OpenAI deal was being rewritten, the Chinese government formally asked Meta to unwind its $2 billion acquisition of the AI agent startup Manus, citing national security concerns. Chinese regulators had been scrutinizing the deal since January 2026, and had already restricted Manus cofounders Xiao Hong and Ji Yichao from leaving China.
Manus, which launched in March 2025, built what it called a "general AI agent" that wraps an underlying model (Anthropic's Claude 3.7 Sonnet) in an agentic harness capable of browsing websites, creating spreadsheets, booking travel, and writing code. It uses multiple specialized agents: a planner that assigns tasks, an executor that interacts with web interfaces, and a verifier that checks results. Meta acquired Manus in December 2025 and began integrating it into its Ads Manager platform for Facebook, Instagram, and WhatsApp advertisers.
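To make that division of labor concrete, here is a minimal sketch of a planner/executor/verifier loop in Python. It illustrates the pattern described above, not Manus's actual code: every class and function name is invented, and the model and browser calls are stubbed out.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    description: str              # what the executor should do, in plain language
    result: Optional[str] = None  # filled in by the executor
    verified: bool = False        # set by the verifier

def plan(goal: str) -> list[Task]:
    """Planner: break a user goal into ordered tasks.
    Stubbed as a fixed split; in a real harness this would be a model call."""
    return [Task(f"step {i + 1} of: {goal}") for i in range(3)]

def execute(task: Task) -> Task:
    """Executor: carry out one task (browse a site, fill a spreadsheet, call an API).
    Stubbed to return a placeholder string instead of driving real tools."""
    task.result = f"output for '{task.description}'"
    return task

def verify(task: Task) -> bool:
    """Verifier: check the executor's output against the task description.
    Stubbed to accept any non-empty result."""
    return bool(task.result)

def run_agent(goal: str) -> list[Task]:
    """Run the loop: plan once, then execute and verify each step."""
    tasks = plan(goal)
    for task in tasks:
        execute(task)
        task.verified = verify(task)
        if not task.verified:  # a real harness would retry or replan here
            break
    return tasks

if __name__ == "__main__":
    for t in run_agent("compare three flight options and draft an itinerary"):
        print(t.verified, "-", t.description)
```

The design choice the sketch surfaces is the separation of roles: planning, acting, and checking are distinct components rather than one monolithic prompt, which is presumably what made it practical for Manus to wrap a third-party model like Claude in an action-taking harness.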
The cofounders had taken elaborate steps to sever Chinese ties before the acquisition, relocating their team to Meta's Singapore office and registering the parent company in the Cayman Islands. They turned down Chinese authorities' requests for meetings and investment. None of it mattered. Beijing's message was clear: if Chinese nationals built it, China gets a say in who owns it, regardless of where the company is incorporated.
THE MANUS TIMELINE
Source: Ars Technica | Wall Street Journal | The Wire China
Why it matters
The Manus blocking is the clearest signal yet that the US-China AI rivalry has moved from export controls on chips to direct intervention in corporate ownership. Until now, the battleground was hardware: the US restricted NVIDIA chip exports, China promoted domestic alternatives. The Manus decision extends the fight to software and talent. If you are a Chinese-born AI researcher who built something valuable, your government claims jurisdiction over its sale, even if you incorporated in Singapore and sold to an American company.
This has chilling implications for any startup founded by Chinese nationals that operates in or sells to Western markets. The Cayman Islands structure that Manus used is one of the most common corporate setups for Chinese tech companies seeking foreign capital. If Beijing can override that structure on national security grounds, the entire framework of VIE (Variable Interest Entity) structures used by Chinese companies listed on US exchanges comes into question.
For Meta specifically, the loss of Manus is not just about a $2 billion write-off. Manus was a key component of Zuckerberg's "personal superintelligence for everyone" strategy. Its agentic harness gave Meta a way to turn Claude (Anthropic's model) into an action-taking agent for advertisers. Without Manus, Meta either has to rebuild that capability from scratch or find another agentic AI company to acquire, one without Chinese founders.
3. The Mercor Voice Breach: 4TB of Your Voice, Paired With Your ID
While the corporate power plays dominated headlines, a breach with far more personal consequences was unfolding. On April 4, the extortion group Lapsus$ posted Mercor to its leak site. The dump: approximately four terabytes of data covering more than 40,000 contractors who had signed up to label data, record reading passages, and run through verification calls for AI training.
What makes this breach different from the typical call-center recording leak is the combination. Mercor's onboarding pipeline asked contractors for three things in sequence: a passport or driver's license scan, a webcam selfie, and a sit-down voice recording of scripted prompts in a quiet room. Each row in the database paired studio-quality voice audio with a verified government identity document. That is exactly the input a synthetic voice cloning service needs.
The Wall Street Journal reported in February 2026 that high-quality voice cloning now requires roughly 15 seconds of clean reference audio. The Mercor recordings average two to five minutes per contractor. That is not just past the threshold. It is an order of magnitude beyond it. Pair the clone with the stolen ID document, and an attacker has both the voice and the credential needed to put it to work.
MERCOR BREACH BY THE NUMBERS
Five contractor lawsuits were filed within ten days of the Lapsus$ post. The plaintiffs argue that Mercor collected voice prints under a "training data" framing without making clear they were also a permanent biometric identifier. This is a legal distinction that matters: a voice recording used to train a model is data. A voice recording that can be used to impersonate you at your bank is a weapon. The contractors were never told their voices could become the latter.
The threat models are not speculative. Banks that still use voiceprint matching as an authentication factor are vulnerable. A clone of the account holder reading a challenge phrase clears the audio gate, leaving only a knowledge question that often comes from the same leaked dataset. Vishing attacks against employers, where a synthetic voice calls HR pretending to be an employee to redirect payroll, are documented in more than two dozen confirmed cases since 2023. The Hong Kong Arup heist, where a finance worker wired $25 million after a deepfake video call, used voices built from public footage. The Mercor data is better than public footage: it is studio audio paired with a verified ID.
Source: ORAVYS Forensic Desk | Wall Street Journal | Krebs on Security
Why it matters
You can change a password. You cannot change your voice. The biometric damage from this breach is permanent. The 40,000 people whose voices and IDs were stolen face a lifetime of risk from voice-cloning attacks, and there is no rotation mechanism. This is the rare breach where the stolen asset is literally irreplaceable.
The lawsuits also test a question that will shape AI data collection for years: can a company collect your voice under one framing (training data) and use it for another (permanent biometric identifier) without telling you? If the courts say no, every AI training company that collected voice data from contractors may face similar liability. The scale is enormous. Industry estimates suggest that between 500,000 and 1 million people globally have provided voice samples to AI training brokers since 2022.
4. GitHub Copilot Goes Metered: The End of AI Flat Rate
On April 27, GitHub announced that all Copilot plans will transition to usage-based billing on June 1, 2026. The change replaces premium request units (PRUs) with "GitHub AI Credits" that are consumed based on token usage: input tokens, output tokens, and cached tokens, priced according to each model's published API rates.
Base plan pricing is not changing. Copilot Pro remains $10 per month, Pro+ remains $39 per month, Business remains $19 per user per month, and Enterprise remains $39 per user per month. Each plan includes a monthly allotment of AI Credits. But the flat-rate model is dead. A quick chat question and a multi-hour autonomous coding session no longer cost the same.
The announcement came with a notable detail: fallback experiences are being removed. Under the old model, users who exhausted their premium request allocation could fall back to a cheaper model and keep working. Under the new model, when credits run out, you stop. There is no lower-tier safety net.
Source: GitHub Blog
Why it matters
This is the first major AI coding tool to abandon the flat-rate subscription model, and it will not be the last. The economics are simple: agentic coding sessions, where an AI agent autonomously writes, tests, and refactors code across an entire repository, consume orders of magnitude more inference compute than a single autocomplete suggestion. GitHub was losing money on power users, and the premium request model was too crude to capture the real cost difference.
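To put rough numbers on that gap, here is an illustrative back-of-the-envelope comparison. The per-token rates and token counts are assumptions invented for the example, not GitHub's published figures; the point is the ratio, not the absolute dollars.

```python
# Illustrative only: rates and token counts are assumed, not GitHub's published numbers.
RATE_PER_MTOK = {"input": 3.00, "output": 15.00, "cached": 0.30}  # USD per million tokens (assumed)

def interaction_cost(input_tok: int, output_tok: int, cached_tok: int = 0) -> float:
    """Cost in USD of one interaction, priced per million tokens."""
    return (input_tok * RATE_PER_MTOK["input"]
            + output_tok * RATE_PER_MTOK["output"]
            + cached_tok * RATE_PER_MTOK["cached"]) / 1_000_000

# A quick chat question: a short prompt and a short answer.
quick_chat = interaction_cost(input_tok=800, output_tok=400)

# A multi-hour agentic session: repeated context reads, many generated edits and test runs.
agentic_run = interaction_cost(input_tok=2_500_000, output_tok=300_000, cached_tok=4_000_000)

print(f"quick chat:  ${quick_chat:.4f}")
print(f"agentic run: ${agentic_run:.2f}")
print(f"ratio:       {agentic_run / quick_chat:,.0f}x")  # roughly three orders of magnitude
```

Under these assumed numbers, a single long agentic session costs more than the $10 monthly Pro plan by itself, which is exactly the margin problem that flat-rate pricing could not absorb.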
The second-order effect: this changes how developers think about AI assistance. Under a flat rate, there is no cost to asking Copilot a trivial question. Under usage-based billing, every interaction has a price. Developers will become more selective. They will save Copilot for complex tasks and handle simple ones manually. The "AI as always-on pair programmer" model, which GitHub itself championed, implicitly assumes zero marginal cost per interaction. That assumption just died.
The third-order effect is on the startups. Cursor, Windsurf, Cody, and every other AI coding assistant that charges a flat rate will face the same margin pressure. They can either follow GitHub to usage-based billing and risk user backlash, or absorb the cost and operate at a loss. The math does not change because you are a startup. Inference is expensive, and agentic inference is very expensive.
5. Data Centers, Gas Turbines, and 129 Million Tons of Emissions
While the industry was restructuring its corporate deals, the physical infrastructure powering AI was quietly becoming one of the largest sources of new greenhouse gas emissions on the planet. Air permit documents examined by WIRED show that natural gas projects linked to just 11 data center campuses in the US have the potential to emit more than 129 million tons of greenhouse gases per year. That is more than Morocco emitted in 2024.
The mechanism is "behind-the-meter" power. Rather than waiting years for grid connections, data center developers are building their own natural gas turbine plants on-site. xAI's Colossus campus in Memphis became the most notorious example in 2024 when it installed gas turbines over community protests in a low-income Black neighborhood. The EPA approved the turbines. The NAACP filed suit last week claiming xAI is illegally operating them. Air permit applications for both the Memphis campus and a second campus in Southaven, Mississippi, show that each campus's turbines could emit more than 6.4 million tons of greenhouse gases annually.
This is not a temporary bridge to clean energy. Behind-the-meter gas plants are being built as permanent infrastructure, with 20-30 year operating lifespans, to power data centers that may themselves be obsolete in five years. The lock-in effect is real: once a gas turbine is built and permitted, it becomes economically rational to keep running it even as renewable alternatives become available.
DATA CENTER EMISSIONS SCALE
Source: Ars Technica / WIRED
Why it matters
The AI industry's climate impact is being systematically underestimated because the standard reporting framework is organized around grid electricity and the credits that offset it, not around the behind-the-meter gas infrastructure being built to power new data centers. When an AI company says it runs on "100% renewable energy," that claim typically refers to purchasing renewable energy credits to offset grid electricity. It does not cover the gas turbines burning on-site at campuses that are not even connected to the grid yet.
Cleanview founder Michael Thomas called behind-the-meter power "a crazy acceleration of emissions," adding: "It's almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we're going to rise." The phrasing is precise. The AI buildout is not an incremental addition to global emissions. It is a reversal of the decarbonization trend in the US power sector.
The NAACP lawsuit against xAI is the first legal challenge to data center gas turbines on environmental justice grounds, but it will not be the last. Every new behind-the-meter gas permit in a disadvantaged community creates a potential litigation target. The question is whether the legal system can move fast enough to matter. Data centers are being built at a pace that outstrips regulatory review.
6. The Musk-Altman Trial: The Ghost in the Courtroom
On April 27, jury selection began in San Francisco for the trial of Elon Musk versus Sam Altman over the future of OpenAI. The case is nominally about whether OpenAI breached its founding mission to build AI for the benefit of humanity when it restructured as a for-profit entity. In practice, it is about whether a nonprofit's founding charter can constrain a company that has outgrown it by several orders of magnitude.
Jury selection was eventful. Judge Yvonne Gonzalez Rogers denied Musk's lawyer's attempts to remove jurors who expressed dislike for Musk, stating: "The reality is that people don't like him. Many people don't like him. But that doesn't mean that Americans nevertheless can't have integrity for the judicial process." Five potential jurors had expressed negative views of Musk; all but one said they could be fair.
The trial is scheduled to conclude by May 21. The stakes extend far beyond the parties in the room. If Musk wins, it could force OpenAI to revert to its nonprofit governance structure, which would make it nearly impossible to raise the capital needed to compete with Google and with Amazon-backed Anthropic. If Altman wins, it ratifies the principle that a nonprofit AI company can shed its nonprofit obligations once it becomes valuable enough, which sets a precedent for every other AI nonprofit currently operating.
Source: Ars Technica | The Verge
Why it matters
The trial intersects with the Microsoft-OpenAI news in a way that most coverage has not connected. If OpenAI loses its nonprofit constraints, it becomes a fully for-profit entity with no structural obligation to benefit humanity. The amended Microsoft deal, by removing the AGI clause, already moved in that direction: the revenue share is now "independent of OpenAI's technology progress," meaning Microsoft's financial relationship with OpenAI no longer depends on whether OpenAI is building something that benefits the world. The mission language is being stripped out of the corporate architecture, piece by piece, contract by contract.
The trial outcome will determine whether any of that mission language was ever legally binding. If the jury says yes, OpenAI faces an existential governance crisis. If the jury says no, the message to every AI startup that incorporated as a nonprofit for tax and PR benefits is: don't worry, you can convert later.
The Pattern: Fracture, Not Consolidation
Step back and the pattern is clear. The AI industry spent 2019 through 2025 in a consolidation phase. Microsoft locked up OpenAI. Google locked up Anthropic. Amazon invested in both Anthropic and OpenAI. Meta acquired Manus. The biggest players secured their positions through exclusive deals, massive investments, and strategic acquisitions. The structure looked stable.
It was not. The consolidation phase relied on three assumptions: that exclusive deals would hold, that governments would not intervene in AI corporate ownership, and that the externalities of AI (data breaches, emissions, biometric theft) would remain manageable. All three assumptions broke over the same weekend.
The next phase is fragmentation. OpenAI will run on multiple clouds. Meta will need to rebuild its agentic AI capability without Manus. The voice data from 40,000 contractors is already out there, impossible to revoke. Developers will ration their Copilot usage. Gas turbines will keep burning. And the question of whether a nonprofit's founding mission is worth the paper it is printed on will be answered by twelve people in a San Francisco courtroom.
The AI industry is not collapsing. It is doing something more interesting: it is growing past the structures that were supposed to contain it. The partnerships, the governance models, the pricing frameworks, the data collection practices, the energy assumptions. They were all designed for an AI industry that was smaller, simpler, and easier to control. That industry no longer exists.
What replaces it is not yet clear. But the weekend of April 25-27, 2026, is when the old structure cracked.
PRISM covers AI, cybersecurity, big tech, and the second-order effects that others miss. This article was published April 28, 2026 by BLACKWIRE. Follow the wire at blackwire.world.
Sources: Ars Technica, The Verge, WIRED, Wall Street Journal, CNBC, ORAVYS Forensic Desk, GitHub Blog, Cleanview, Krebs on Security, The Wire China. All links verified at time of publication.