The Week AI Grew Up: Anthropic Rides SpaceX, Five Labs Bow to Government Review, and Firewalls Burn

PRISM | BLACKWIRE · May 7, 2026 · 5 min read
Tags: AI Infrastructure · Government Oversight · Cybersecurity · Apple · Quantum Computing
The physical backbone of AI: data centers like Colossus 1 are the new currency of power. Photo: Unsplash

Seven days in May 2026 compressed a decade of institutional shift. Anthropic, the AI safety company, signed a deal to run its models on Elon Musk's supercomputer - the same Musk who called Anthropic's leadership enemies of Western civilization in February. All five frontier AI labs agreed to let the US government evaluate their models before release, an arrangement with no legal backing that somehow became the only AI oversight America has. Palo Alto Networks disclosed a critical zero-day in its firewalls that attackers are already exploiting in the wild. And Apple, the company that once locked Siri inside a walled garden, announced iOS 27 will let users swap between ChatGPT, Claude, and Gemini at will.

These are not separate stories. They are the same story: the infrastructure of AI power - who controls compute, who evaluates models, who secures the networks they run on, and who gets to choose which AI answers your questions - underwent a structural reset this week. The pieces moved fast. The board they are moving on is still being built.

The Colossus Deal: Anthropic Rents Musk's Machine

Colossus 1, Memphis: 300+ megawatts, 220,000 NVIDIA GPUs. Photo: Unsplash

On May 6, Anthropic announced it had signed an agreement with SpaceX to use all of the compute capacity at Colossus 1, SpaceX's massive AI supercomputer in Memphis, Tennessee. The numbers are staggering: more than 300 megawatts of new capacity, over 220,000 NVIDIA GPUs, available within the month.[1]

This is not a minor cloud computing contract. Colossus 1 is the same infrastructure that powers xAI's Grok models - the AI company Musk founded, then merged with SpaceX earlier in 2026. Anthropic is now renting compute from a data center controlled by the man who runs its most aggressive rival, a man who publicly attacked the company months ago. In February, Musk wrote on X that Anthropic "hates Western civilization."[2]

The practical motive is straightforward. Anthropic has been capacity-constrained for months. Claude users on Pro and Max plans hit rate limits constantly. The SpaceX deal lets Anthropic double Claude Code's five-hour rate limits for Pro, Max, Team, and Enterprise plans, remove peak-hours limit reductions, and raise API rate limits for Claude Opus models significantly.[1]

The irony writes itself: Anthropic, the company built on AI safety principles, is now dependent on compute infrastructure controlled by the CEO of its biggest competitor, who also controls a social media platform where he amplifies attacks against it. The safety argument requires independence from perverse incentives. The compute reality requires dependence on the largest available hardware. These two requirements cannot both be satisfied, and this week, reality won.

But the Colossus deal is one tile in a much larger mosaic. Anthropic also announced:

Anthropic's Compute Portfolio (as of May 2026)

Partner              | Capacity             | Timeline            | Hardware
SpaceX (Colossus 1)  | 300+ MW / 220K GPUs  | Within the month    | NVIDIA GPUs
Amazon               | Up to 5 GW           | ~1 GW by late 2026  | AWS Trainium
Google + Broadcom    | 5 GW                 | 2027 onwards        | Google TPUs
Microsoft + NVIDIA   | $30B Azure capacity  | Ongoing             | NVIDIA GPUs
Fluidstack           | $50B infrastructure  | Multi-year          | Mixed

Total announced capacity: 10+ GW across 5 major partnerships

And then there is the line that most coverage buried at the bottom: "We have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity."[1]

Orbital. As in, data centers in space. The physics are real: space-based compute faces severe thermal constraints, radiation hardening costs, and launch economics that currently make terrestrial data centers orders of magnitude cheaper per FLOP. But SpaceX launches rockets. A lot of rockets. And the same man who controls the rockets now controls the compute. The orbital compute line reads like science fiction; given the capital and launch cadence involved, it looks more like a plan than a fantasy.

Five Labs, One Government, Zero Legal Authority

The Commerce Department's NIST building: where AI models go before the public sees them. Photo: Unsplash

On May 5, the US Commerce Department announced that Google, Microsoft, and xAI have joined OpenAI and Anthropic in giving the government pre-release access to evaluate their AI models before deployment.[7] All five major frontier AI labs are now participating. The Center for AI Standards and Innovation, housed within NIST, has completed more than 40 evaluations of AI models, including state-of-the-art systems never released to the public.[8]

Here is what the arrangement actually does: developers submit model versions with safety guardrails stripped back so evaluators can probe for national security-relevant capabilities - biological weapon synthesis pathways, cyberattack automation, and autonomous agent behaviors that could be difficult to control at scale. The government tests. The government reports findings. The government has no statutory authority to block a release. The arrangement is voluntary. It has no legal basis.[7]

The second-order effect everyone misses: Voluntary pre-release review is not oversight. It is market infrastructure. Once all five labs participate, any lab that withdraws signals something far worse than non-compliance - it signals they have something to hide. The arrangement creates a de facto standard through participation, not through enforcement. The cost of leaving exceeds the cost of staying. That is how soft power works, and it is how the US government has built every major regulatory framework before codifying it into law: start with norms, then lock in statutes once the behavior is established.

The political backstory is equally revealing. Chris Fall now directs the center, following the abrupt departure of Collin Burns, a former Anthropic researcher chosen for the role but pushed out by the White House after four days. Burns had left Anthropic, given up valuable stock, and relocated across the country for the position. His removal, reportedly driven by his connection to a company the administration was actively fighting, illustrates the core tension: the evaluators and the evaluated come from the same talent pool, and political actors can reshape the oversight apparatus faster than institutional safeguards can prevent it.[7]

The center operates with fewer than 200 staff evaluating models from companies with combined market capitalizations exceeding $5 trillion. The ratio of oversight to industry is approximately one evaluator per $25 billion of market cap. This is not a regulatory framework. It is a diplomatic mission inside the agency that is supposed to regulate.
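The ratio in that paragraph is back-of-envelope arithmetic, but it is worth making explicit:

```python
# Back-of-envelope check of the oversight ratio cited above.
combined_market_cap = 5_000_000_000_000   # $5 trillion, combined, per the text
evaluators = 200                          # "fewer than 200 staff", rounded up

per_evaluator = combined_market_cap / evaluators
print(f"${per_evaluator / 1e9:.0f}B of market cap per evaluator")  # → $25B
```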

What catalyzed the expansion? Two words: the Mythos crisis. Without going into specifics that remain partially classified, the Mythos incident demonstrated that an AI model could be powerful enough to threaten national security while the government had no formal mechanism to evaluate it before public access. The crisis forced the question Washington had been avoiding, and the voluntary pre-release arrangement is the answer they built in less than two years.[7]

A potential executive order looms in the background that would formalize the review process. The voluntary arrangement is not the end state. It is the proof of concept before legislation. Every model submitted, every evaluation completed, every report filed builds the administrative infrastructure that will later become the legal one. The five labs are not just cooperating. They are building the bureaucracy that will govern them.

The Firewall Is On Fire: Palo Alto Zero-Day Exploited

The perimeter is the first thing that breaks. Photo: Unsplash

While the AI industry reorganized its power structure, the security world caught fire. Palo Alto Networks disclosed CVE-2026-0300 on May 6, a critical buffer overflow vulnerability in the PAN-OS User-ID Authentication Portal that allows unauthenticated remote code execution with root privileges on affected firewalls.[9] The vulnerability is being actively exploited in the wild.[10]

Root-level RCE on a perimeter firewall is as bad as vulnerabilities get. An attacker who can execute code as root on a Palo Alto firewall controls the network boundary. They can intercept traffic, pivot into internal networks, exfiltrate data, and establish persistent access that survives even after the initial vulnerability is patched. The Authentication Portal (formerly known as the Captive Portal), where this vulnerability lives, is the web authentication page users see when connecting to corporate networks - the very first thing exposed to unauthenticated traffic.[11]

Palo Alto Networks is working on patches, but the disclosure timeline raises questions. The company disclosed the vulnerability and active exploitation on the same day, giving administrators no advance window to implement mitigations before attackers already knew about the flaw. In enterprise environments where patching firewalls requires change control windows, maintenance periods, and often physical access, the gap between disclosure and remediation is measured in days or weeks, not hours.
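In that remediation gap, the first triage step is usually a version inventory: which deployed firewalls are below the first fixed release. A minimal sketch of PAN-OS-style version comparison follows; the version numbers are invented for illustration, since the advisory's actual fixed releases are not listed here:

```python
# Illustrative triage helper: compare a PAN-OS-style version string
# (e.g. "11.2.4-h2", where "-h2" is a hotfix) against a hypothetical
# first-fixed release. Version numbers here are invented; consult the
# vendor advisory for the real fixed versions.

def parse_panos_version(v: str) -> tuple:
    """Parse a version like '11.2.4-h2' into a sortable tuple (11, 2, 4, 2)."""
    base, _, hotfix = v.partition("-h")
    parts = [int(x) for x in base.split(".")]
    parts.append(int(hotfix) if hotfix else 0)  # no hotfix sorts below "-h1"
    return tuple(parts)

def is_patched(installed: str, first_fixed: str) -> bool:
    """True if the installed version is at or above the first fixed release."""
    return parse_panos_version(installed) >= parse_panos_version(first_fixed)

# Hypothetical fixed version "11.2.5":
print(is_patched("11.2.4-h2", "11.2.5"))  # False: still vulnerable
print(is_patched("11.2.5", "11.2.5"))     # True
```

In practice this kind of check runs against an asset inventory pulled from a management console, which is exactly why the change-control delays described above dominate the response timeline.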

This is not an isolated incident. The same week, Cisco disclosed high-severity remote code execution and server-side request forgery vulnerabilities in Unity Connection.[12] And the Linux kernel Copy Fail vulnerability (CVE-2026-31431), disclosed on May 1, continues to be actively exploited. Microsoft's security blog detailed how this nine-year-old kernel flaw enables local privilege escalation across cloud environments, and CISA issued a warning that unpatched systems remain at risk.[13][14]

This Week's Active Exploitation Landscape

CVE             | Vendor              | Severity        | Status
CVE-2026-0300   | Palo Alto (PAN-OS)  | Critical (9.8)  | Actively exploited, patch pending
CVE-2026-31431  | Linux kernel        | High (7.8)      | Actively exploited, CISA warning
CVE-2026-0301   | Cisco Unity         | High (8.6)      | RCE + SSRF, no workaround

Three critical/high CVEs with active exploitation in a single week

The convergence matters. AI models run on infrastructure. That infrastructure sits behind firewalls. If the firewalls are compromised, the AI models - and the data they were trained on - are accessible. The same week the AI industry was building its oversight apparatus, the security foundation it rests on was actively crumbling. You cannot have trusted AI evaluation if the networks running the models are compromised at the perimeter.

Apple Opens the Door: iOS 27 Model Swapping

iOS 27: the operating system that finally lets you choose your AI. Photo: Unsplash

Apple is doing something it has never done before: letting users choose which AI model runs their phone's intelligence features. iOS 27, first reported by Bloomberg on May 5, will introduce "Extensions" that allow users to access generative AI capabilities from installed apps through Apple Intelligence.[15] Users will be able to set Claude, Gemini, or other models as alternatives to the default ChatGPT integration for Apple Intelligence features. Siri will even support custom voices depending on which external model is responding.[16]

This is a 180-degree turn from Apple's original AI strategy, which was to build everything in-house and keep the ecosystem closed. The shift was forced by reality: Apple's on-device models lag behind the frontier, Siri remains a punchline, and competitors like Google and Samsung already offer multi-model AI experiences. Apple's R&D spending climbed to 10.3% of revenue in the March quarter specifically because of AI investments, putting it closer to megacap tech peers but still catching up.[17]

The $250 million settlement Apple agreed to pay over Siri's delayed AI features underscores the gap between promise and delivery.[18] When you are paying a quarter billion dollars because your AI assistant failed to deliver, opening the platform to competitors is not a strategic choice. It is a survival move.

The platform shift nobody is discussing: If iOS 27 lets users choose between ChatGPT, Claude, and Gemini for on-device AI features, Apple becomes the gatekeeper of AI distribution on 1.5 billion iPhones. It does not matter which model wins the capability race if Apple controls the channel. The App Store taught us this lesson: the platform tax is eternal, and the distributor captures more value than the developer. Apple is not surrendering AI. It is becoming the AI toll booth.

The implications for AI companies are significant. If Apple Intelligence Extensions become a standard way users access AI, the competition shifts from "whose model is best" to "whose integration is smoothest on iOS." That favors companies with API infrastructure, low-latency inference, and developer tooling - which is to say, it favors the same companies already dominating the frontier. The rich get richer, but Apple collects the rent.
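The platform dynamic described above is, structurally, an adapter pattern: the OS defines one interface, every vendor ships a conformer, and the OS owns the dispatch. The sketch below illustrates the idea; none of these class or method names come from Apple's actual (unreleased) Extensions API - they are invented for illustration:

```python
# Minimal sketch of OS-owned model routing. Whoever owns the router
# object owns distribution, regardless of which model is "best".
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Interface the platform exposes; each vendor ships an adapter."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ClaudeProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"  # stand-in for a real API call

class GeminiProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"  # stand-in for a real API call

class AssistantRouter:
    """The platform layer: holds the user's choice, routes every request."""
    def __init__(self, providers: dict, default: str):
        self.providers = providers
        self.active = default

    def switch(self, name: str) -> None:
        if name not in self.providers:
            raise KeyError(f"unknown provider: {name}")
        self.active = name

    def ask(self, prompt: str) -> str:
        return self.providers[self.active].complete(prompt)

router = AssistantRouter(
    {"claude": ClaudeProvider(), "gemini": GeminiProvider()},
    default="claude",
)
router.switch("gemini")           # the user's choice, stored by the OS
print(router.ask("weather?"))     # vendors never see the dispatch layer
```

Note where the power sits: the providers are interchangeable, the router is not. That asymmetry is the "toll booth" argument in code form.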

DeepMind Goes Gaming, Quantum Goes Molecular

Quantum-centric supercomputing moves from theory to molecular simulation. Photo: Unsplash

Two more stories from the week deserve attention, not because they dominated headlines but because they signal where the frontier is moving next.

Google DeepMind took a minority stake in CCP Games, the developer behind EVE Online, the notoriously complex space MMO known for player-driven politics, massive fleet battles, and emergent economic systems.[19] This is not a gaming investment. EVE Online is a sandbox where thousands of players coordinate, negotiate, deceive, and cooperate in ways that mirror real-world social dynamics. It is, in effect, a massive multi-agent simulation environment running 24/7 with human participants. For an AI research lab, that is a dataset money cannot buy elsewhere.

DeepMind's interest in EVE is likely about training and testing AI agents in environments with genuine strategic depth, not scripted game mechanics. StarCraft was a stepping stone. EVE Online is the next step up in complexity: alliances, betrayals, economic warfare, and territorial control across timescales of months, not minutes. The agents that learn to operate in EVE are agents that learn to operate in human social systems.

Meanwhile, Cleveland Clinic, RIKEN, and IBM announced they have modeled a 12,635-atom protein, the largest known to be simulated with quantum computers.[20] This milestone expands quantum-centric supercomputing's role as a scientific tool. Proteins are the machines of biology, and their function depends on three-dimensional structure that classical computers struggle to simulate at full atomic resolution. Quantum computers, with their ability to represent quantum states natively, can model molecular interactions that would require exponentially more classical compute.

The 12,635-atom record is still far from whole-cell simulation or drug discovery at scale. But the trajectory matters. Each order of magnitude increase in quantum simulation capacity opens new categories of biological problems. The jump from hundreds of atoms to thousands happened in roughly two years. The jump from thousands to tens of thousands is the next barrier, and it determines whether quantum computing becomes a practical tool for pharmaceutical research or remains an academic curiosity.

Collibra and the Agent Governance Problem

Agent sprawl is outpacing enterprise oversight. Photo: Unsplash

One more piece of the puzzle. Collibra launched an AI Command Center on May 6 aimed at scaling agentic AI with real-time oversight and continuous control.[21] The pitch is simple: "Agent sprawl is outpacing enterprise oversight." The AI Command Center puts control back where it belongs - with IT administrators, not individual developers deploying AI agents into production without governance.

This is the enterprise side of the same problem the government is wrestling with at the frontier lab level. Who watches the agents? At the national scale, it is NIST with 200 staff and no legal authority. At the enterprise scale, it is tools like Collibra's Command Center, which at least have the advantage of running inside corporate firewalls with actual enforcement mechanisms. Microsoft and Google are also pushing AI agent governance into the enterprise IT mainstream, giving admins more ways to control AI agents through their cloud platforms.[22]

The governance layer is being built in parallel at two scales simultaneously: federal oversight for frontier models, enterprise tooling for deployed agents. Neither has matured. Both are necessary. The gap between them - who governs AI agents that operate across enterprise boundaries, in supply chains, or in the wild - remains completely unaddressed. That is the gap where the real risks live.

What It All Means

Look at the week as a system, not as isolated headlines.

The week did not change the world. It revealed what the world already was. The infrastructure of AI power - compute, oversight, security, distribution, research - has been concentrating for months. This was the week the concentration became visible.

The AI Power Map: May 2026

Layer        | Who Controls                       | This Week's Shift
Compute      | Amazon, Google, Microsoft, SpaceX  | Anthropic locks 300+ MW from SpaceX Colossus
Oversight    | NIST/CASI (voluntary)              | All 5 frontier labs now submit to pre-release evaluation
Security     | Palo Alto, Cisco, Linux kernel     | 3 actively exploited critical CVEs
Distribution | Apple (iOS), Google (Android)      | iOS 27 opens model-swapping to users
Research     | DeepMind, IBM Quantum              | EVE Online stake, 12K-atom quantum protein

Power in AI is no longer about who writes the best paper or releases the smartest model. It is about who controls the chips, who evaluates the output, who secures the network, and who delivers the answer to your phone. This week, all four of those questions got answers. You might not like all of them.

Sources:

[1] Anthropic - Higher usage limits for Claude and a compute deal with SpaceX

[2] CNBC - Anthropic, SpaceX announce compute deal, includes space development

[3] Anthropic - Amazon compute agreement

[4] Anthropic - Google and Broadcom partnership

[5] Anthropic - Microsoft and NVIDIA strategic partnerships

[6] Anthropic - $50B American AI infrastructure investment

[7] TNW - Google, Microsoft, and xAI agree to pre-release government AI model evaluations

[8] CNN - Microsoft, Google and xAI will let the government test their AI models before launch

[9] Palo Alto Networks - CVE-2026-0300 Advisory

[10] BleepingComputer - Palo Alto Networks warns of firewall RCE zero-day exploited in attacks

[11] Qualys - PAN-OS User-ID Authentication Portal Vulnerability Exploited

[12] Cisco - Unity Connection RCE and SSRF Vulnerabilities

[13] Microsoft Security Blog - CVE-2026-31431 Copy Fail Vulnerability

[14] Qualys - Linux Kernel Copy Fail Vulnerability Exploited in the Wild

[15] 9to5Mac - iOS 27 will let you choose between Gemini, Claude, and more

[16] MacRumors - iOS 27 Will Let You Pick Claude or Gemini Instead of ChatGPT

[17] CNBC - Apple's R&D spending climbs to 10% of revenue on AI investments

[18] TechCrunch - Apple to pay $250M to settle lawsuit over Siri's delayed AI features

[19] Bloomberg - Google DeepMind Partners With Eve Online Developer

[20] PR Newswire - Cleveland Clinic, RIKEN, and IBM Model a 12,635-Atom Protein

[21] PR Newswire - Collibra Launches AI Command Center

[22] Computerworld - Microsoft, Google push AI agent governance into enterprise IT mainstream