PRISM - TECH & INFRASTRUCTURE

The $18 Billion Web: How Nvidia Is Buying the Entire AI Supply Chain One $2 Billion Check at a Time

In six months, Jensen Huang has written checks to Marvell, Intel, Lumentum, Coherent, CoreWeave, Synopsys, and Nebius. The pattern is not generosity. It is the most aggressive vertical integration play in semiconductor history - and nobody is calling it what it is.

April 1, 2026 - PRISM Bureau - By BLACKWIRE Intelligence
$18B+ DEPLOYED IN 6 MONTHS
7+ STRATEGIC STAKES
$160B PROJECTED FY2027 REVENUE
Server racks in a modern data center

The new AI data center is not just hardware - it is an ecosystem where every component traces back to one company. Photo: Pexels

On March 31, 2026, Nvidia announced a $2 billion investment in Marvell Technology. It was the seventh such deal in roughly six months. The press treated it like any other strategic bet. Reuters led with the partnership angle. CNBC emphasized the stock pop - Marvell surged 13%. The financial press moved on within hours.

They missed the story.

What Nvidia is executing right now is not a series of isolated investments. It is the construction of a closed-loop AI infrastructure empire in which every critical node of the supply chain - from chip design software to custom silicon fabrication, from optical networking to cloud compute capacity - has a financial and technological umbilical cord running back to Jensen Huang's company. The total capital deployed since late 2025 now exceeds $18 billion in publicly disclosed stakes alone. That figure does not include undisclosed arrangements, technology licensing agreements, or the gravitational pull of being Nvidia's preferred partner.

This is not investing. It is vertical integration wearing the costume of strategic partnership.

I. The $2 Billion Pattern - Mapping the Investment Blitz

Circuit board closeup

Every chip, every trace, every connection - Nvidia wants a stake in the entire signal path. Photo: Pexels

Start with the ledger. In the past six months, Nvidia has written the following checks to publicly traded companies, each one meticulously targeted at a different layer of the AI infrastructure stack:

COMPANY         AMOUNT   LAYER                              DATE
Intel           ~$5B     Foundry / Manufacturing (18A)      Sep 2025
Synopsys        $2B      Chip Design / EDA Software         Early 2026
CoreWeave       $2B      GPU Cloud Infrastructure           Early 2026
Nebius Group    $2B      European AI Data Centers           Mar 2026
Lumentum        $2B      Co-Packaged Optics / Lasers        Mar 2026
Coherent        $2B      Silicon Photonics / Optical        Mar 2026
Marvell         $2B      Custom XPUs / Networking           Mar 31, 2026
Nokia           $1B      Telecom / Network Infrastructure   Mar 2026
The total disclosed figure: approximately $18 billion, according to a Seeking Alpha analysis corroborated by CNBC's March 31 reporting. This does not include stakes in xAI (Elon Musk's AI company) and OpenAI, which have been confirmed but not fully quantified in public filings.
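The ledger's arithmetic is easy to verify. Summing only the disclosed amounts from the table above (the Intel figure is approximate, as reported):

```python
# Publicly disclosed Nvidia stakes from the ledger above (USD billions).
stakes = {
    "Intel": 5.0,        # reported as ~$5B
    "Synopsys": 2.0,
    "CoreWeave": 2.0,
    "Nebius Group": 2.0,
    "Lumentum": 2.0,
    "Coherent": 2.0,
    "Marvell": 2.0,
    "Nokia": 1.0,
}

total = sum(stakes.values())
print(f"Total disclosed: ${total:.0f}B across {len(stakes)} companies")
# Total disclosed: $18B across 8 companies
```

The $18 billion figure covers only these disclosed positions; the xAI and OpenAI stakes mentioned above would push the true total higher.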

Notice the cadence. Jensen Huang is not agonizing over these deals. He is executing them at the pace of one every two to three weeks. The $2 billion figure has become so standardized it is practically a meme in semiconductor finance circles - the "Jensen check" that arrives when your company has something Nvidia needs in its supply chain. The consistency of the amount is itself revealing: these are not prices negotiated through painful due diligence. They are anchoring bets, large enough to command board seats and strategic influence, standardized enough to process at speed.

But the dollar amounts are not where the real story lives. The story is in what each investment buys Nvidia that money alone cannot - structural lock-in.

II. The Marvell Deal - Why This One Is Different

Detailed circuit board with microchips

Marvell designs the custom chips that Amazon, Microsoft, and hyperscalers use to compete with Nvidia. Now Nvidia owns a piece of the competition. Photo: Pexels

The Marvell investment announced March 31 deserves special attention because it reveals the sophistication of Nvidia's strategy in ways the earlier deals did not.

Marvell Technology is not some scrappy startup that needs Nvidia's cash to survive. It is a $79 billion semiconductor company that designs custom AI chips for the largest hyperscalers on Earth. Amazon's Trainium processors, which AWS uses to compete directly with Nvidia's GPUs for AI training workloads? Marvell designs them. Microsoft's Maia 100 accelerators, the custom silicon Redmond is building to reduce its dependency on Nvidia? Marvell is the design partner. The company also reportedly works with other hyperscalers whose names have not been publicly confirmed.

Read that again. Nvidia just invested $2 billion in the company that designs the chips its biggest customers are building to replace Nvidia GPUs.

This is not an alliance. It is a surveillance position.

The stated purpose of the deal is NVLink Fusion - a technology that allows custom XPUs (non-Nvidia accelerators) to plug into Nvidia's proprietary interconnect fabric. Under the partnership, Marvell will "provide custom XPUs and NVLink Fusion-compatible scale-up networking," while Nvidia supplies Vera CPUs, ConnectX NICs, Bluefield DPUs, NVLink interconnects, and Spectrum-X switches. On the surface, this looks like open collaboration: Nvidia is letting other people's chips onto its network.

Beneath the surface, it is something else entirely. NVLink Fusion makes it structurally impossible for hyperscalers to fully decouple from Nvidia's ecosystem. Even when Amazon or Microsoft builds a custom chip through Marvell, that chip will now speak Nvidia's proprietary interconnect protocol. The networking gear, the CPUs, the DPUs, the switches - all Nvidia. The custom silicon gets a guest pass to the party, but Nvidia owns the venue, the sound system, the security, and the bartender.

There is a deeper play embedded in the Marvell deal that has received almost no attention. Marvell acquired Celestial AI in December 2025 for $3.25 billion. Celestial had developed a photonic fabric technology that enables row-scale coherent memory - optical interconnects that can replace electrical connections within data center racks, providing dramatically lower latency and higher bandwidth for AI training clusters. It also acquired XConn Technologies in January 2026 for $540 million, giving it PCI-Express 6.0 switch technology with the new Structera S 60260 supporting 260 lanes and approximately 2.1 TB/sec of aggregate bandwidth.
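The ~2.1 TB/sec claim squares with back-of-envelope PCIe 6.0 arithmetic. PCIe 6.0 signals at 64 GT/s per lane; a commonly used approximation for usable bandwidth after PAM4 signaling and FLIT encoding overhead is roughly 8 GB/s per lane per direction (an approximation, not a figure from the announcement):

```python
# Rough PCIe 6.0 bandwidth check for a 260-lane switch.
LANES = 260
GB_PER_LANE_PER_DIR = 8  # approximate usable GB/s per lane per direction

aggregate_gb_s = LANES * GB_PER_LANE_PER_DIR
print(f"~{aggregate_gb_s / 1000:.1f} TB/s aggregate")  # ~2.1 TB/s
```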

By investing in Marvell, Nvidia gets a front-row seat to both technologies. The photonic fabric is particularly interesting because it mirrors the optical circuit switching that Google has used for over a decade as the backbone of its internal network. If Nvidia can integrate this optical technology into its own NVLink ecosystem, it solves one of the hardest remaining problems in AI infrastructure: the bandwidth wall between GPU racks that limits how large a model can be trained on a single cluster.

The $2 billion also gives Nvidia roughly a 2.5% ownership stake in Marvell, as CNBC noted. That means Nvidia profits even when Marvell designs chips for customers who are explicitly trying to reduce their Nvidia dependency. Jensen Huang has structured a deal where he wins regardless of which side the customer picks.
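The 2.5% figure is also internally consistent with the valuation cited earlier: dividing the investment by the ownership fraction recovers an implied valuation right around Marvell's ~$79 billion market capitalization.

```python
# Implied valuation from the stake: investment / ownership fraction.
investment_b = 2.0   # $2B investment
stake_pct = 0.025    # ~2.5% ownership, per CNBC

implied_valuation_b = investment_b / stake_pct
print(f"Implied Marvell valuation: ~${implied_valuation_b:.0f}B")  # ~$80B
```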

III. The Intel Stake - Buying a Foundry Without the Antitrust Headache

Data center server room hardware

Intel's 18A process node represents the first leading-edge semiconductor manufacturing capability built on American soil. Nvidia wants access. Photo: Pexels

The roughly $5 billion Nvidia invested in Intel in September 2025 looks in retrospect like the template that everything after it was modeled on. Nvidia purchased approximately 4% of Intel at $23.28 per share, according to reporting by Techi and corroborated by multiple financial analysts. The deal included co-development of custom x86 CPUs for Nvidia data centers and PC SoCs with integrated RTX GPU chiplets.
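Two of the reported numbers pin down the rough shape of the position. Dividing the investment by the share price gives the approximate share count, and dividing it by the stake percentage gives the implied market capitalization Nvidia was buying into:

```python
# Rough sizing of the Intel position from the reported figures.
investment = 5_000_000_000   # ~$5B, per the reporting cited above
price_per_share = 23.28      # reported purchase price
stake_fraction = 0.04        # ~4% of Intel

shares = investment / price_per_share
implied_market_cap = investment / stake_fraction
print(f"~{shares / 1e6:.0f}M shares")                               # ~215M shares
print(f"Implied Intel market cap: ~${implied_market_cap / 1e9:.0f}B")  # ~$125B
```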

At the time, the financial press treated this as a vote of confidence in Intel's struggling foundry business. Intel had been hemorrhaging market share for years. Its stock was a cautionary tale. Pat Gelsinger had been ousted. The company's turnaround was widely considered a coin flip at best.

What Nvidia actually bought was something far more strategic: a backdoor into the only company in the Western world that can manufacture leading-edge semiconductors domestically. Intel's 18A process node - a 1.8-nanometer-class technology - began high-volume manufacturing in Arizona in early 2026, according to Forbes reporting from March 26. It is the first leading-edge semiconductor process to reach volume production on American soil.

For Nvidia, the value here is existential. Today, Nvidia depends entirely on TSMC in Taiwan for manufacturing its most advanced chips - the Grace-Blackwell GPUs, the upcoming Vera-Rubin architecture, all of it. That dependency is a geopolitical vulnerability that has been discussed ad nauseam but never actually hedged. If China invades Taiwan, if a major earthquake hits the Hsinchu Science Park, if TSMC raises prices because it has monopoly leverage - Nvidia has no Plan B.

Now it does. The Intel stake means Nvidia has a financial interest in Intel Foundry Services succeeding, and a strategic partnership that gives it early access to 18A manufacturing capacity. When Intel's shares surged 8.4% on renewed confidence in the 18A process and the "$5 billion vote of confidence from Nvidia" (per FinancialContent reporting from April 1), the market was reacting to the most obvious implication. The less obvious one: Nvidia can now influence Intel's foundry roadmap through both its investment position and its co-development agreements without triggering the antitrust scrutiny that an outright acquisition would invite.

Think about what it would mean for Nvidia to buy Intel. The regulatory barriers would be insurmountable. The FTC, the European Commission, every competition authority on Earth would block it. But a 4% stake, a co-development deal, and billions in future manufacturing commitments? That flies under every regulatory radar while achieving much of the same strategic outcome.

This is the genius, and the danger, of Nvidia's approach. It achieves the economic effects of vertical integration while maintaining the legal fiction of independent companies in partnership.

IV. The Light and the Fiber - Lumentum, Coherent, and the Optical Play

Network servers with fiber optic connections

Optical interconnects are the next bottleneck in AI infrastructure. Nvidia is buying its way to the front of the line. Photo: Pexels

In early March 2026, Nvidia invested $2 billion each in Lumentum and Coherent, two companies that dominate the market for optical components used in data center networking. The stated rationale: helping both companies ramp production of lasers used in co-packaged optics (CPO), a technology Nvidia is integrating into its Quantum-X InfiniBand and Spectrum-X Ethernet switches.

The unstated rationale is that optical interconnects are the single biggest bottleneck standing between current AI clusters and the next generation of training runs. Electrical copper connections have hit their physical limits. The bandwidth demands of training frontier AI models - which now routinely exceed 100,000 GPU clusters - require photonic solutions that can move data at the speed of light over distances of tens of meters without the heat dissipation problems that plague electrical interconnects at scale.

Nvidia's $4 billion bet on Lumentum and Coherent is a pre-purchase of the production capacity it will need when it ships its next-generation Vera-Rubin GPU architecture, which is expected to require dramatically more interconnect bandwidth than Grace-Blackwell. By financing the production ramp now, Nvidia ensures that when Vera-Rubin GPUs start shipping, the optical components needed to connect them will be available - and they will be available to Nvidia first, ahead of competitors like AMD who have made no comparable investments in the optical supply chain.

Both Lumentum and Coherent have also developed optical circuit switches - technology that Google pioneered internally and has used for over a decade to create reconfigurable data center networks. These switches allow data center operators to dynamically reassign bandwidth between different clusters without physically rewiring anything. The implications for AI training are enormous: instead of dedicating a fixed cluster to a single training run, optical circuit switching could allow data centers to elastically expand and contract clusters based on workload demands in real time.

The Next Platform, one of the few publications that consistently covers the technical details of these deals, noted in its March 31 analysis that both Lumentum and Coherent "are interesting to Nvidia's potential future in that both have created optical circuit switches akin to the ones that Google has used for more than a decade as the backbone of its network and, more recently, as the spine in the coherent memory networks for its TPU clusters."

If Nvidia can replicate Google's internal networking advantage and offer it commercially through its switch portfolio, it creates yet another layer of vendor lock-in. Customers who buy Nvidia GPUs also get Nvidia switches with Nvidia-financed optical components. Switching to AMD GPUs means giving up the entire network layer, not just the compute cards.

V. CoreWeave, Nebius, and the Cloud Layer - Why Nvidia Needs Captive Demand

Modern computer near server racks

CoreWeave and Nebius are Nvidia-specialized cloud providers - companies that exist to buy Nvidia GPUs and rent them out. Photo: Pexels

The CoreWeave and Nebius investments complete a different part of the puzzle: demand insurance. CoreWeave is a GPU-specialized cloud provider that has raised approximately $28 billion in financing over the past twelve months, including the $2 billion Nvidia investment in early 2026. Nebius Group, the ex-Yandex AI infrastructure company now headquartered in Amsterdam, received its own $2 billion from Nvidia in March 2026 to support the deployment of more than 5 gigawatts of AI compute systems by 2030.
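The 5-gigawatt figure is easier to grasp as an accelerator count. A rough conversion, using an assumed all-in power budget of about 1.2 kW per deployed GPU (chip, host, networking, and cooling overhead - an assumed figure for illustration, not one from the announcement):

```python
# Rough conversion of data center power budget to accelerator count.
total_power_w = 5e9          # 5 GW of AI compute systems by 2030
watts_per_gpu_all_in = 1200  # ASSUMED all-in budget per accelerator

gpus = total_power_w / watts_per_gpu_all_in
print(f"~{gpus / 1e6:.1f} million accelerators")  # ~4.2 million
```

On that assumption, Nebius's 2030 target implies a fleet measured in the millions of accelerators - all of which, by design, would be Nvidia parts.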

Why would a company that already sells more GPUs than it can make invest in companies whose primary business model is buying GPUs?

Because the AI hardware market has a demand concentration problem that Nvidia understands better than anyone. The three biggest cloud providers - AWS, Azure, and Google Cloud - account for a disproportionate share of Nvidia's data center revenue. These same three companies are also the ones most aggressively building custom silicon through Broadcom, Marvell, and their own internal chip teams. Amazon has Trainium. Google has TPUs (designed by Broadcom). Microsoft has Maia (designed with Marvell). Meta is ramping its MTIA chips. Even OpenAI is now working with Broadcom on a custom "Titan" XPU.

Every custom chip these hyperscalers ship is a GPU they did not buy from Nvidia. The $150-160 billion in revenue Nvidia expects for fiscal year 2027 depends on demand continuing to grow faster than substitution. CoreWeave and Nebius provide a hedge against this risk. They are captive demand vehicles - cloud providers whose entire business model is predicated on Nvidia GPU availability and who have no strategic interest in building competing silicon.

CoreWeave went public in early 2026 and immediately used the IPO proceeds to expand its GPU fleet. The company's pitch to customers is simple: if you cannot get GPU allocation directly from AWS or Azure, or if you want a cloud provider that is not also competing with you as an AI company, CoreWeave offers Nvidia-native infrastructure without the conflicts of interest. Nebius makes the same pitch to the European market, where data sovereignty regulations and the desire for non-American cloud alternatives create additional demand.

Nvidia's investment in both companies is circular in the most literal sense. It gives CoreWeave and Nebius capital to buy more Nvidia GPUs, which generates revenue for Nvidia, which funds more investments. This is not a flywheel - it is a perpetual motion machine of self-reinforcing demand. Critics would call it channel stuffing with extra steps. Nvidia would call it ecosystem development.

The truth, as usual, is somewhere in between. The investments are real. The companies are real. The demand is real. But the structural effect is the creation of cloud providers that cannot exist without Nvidia and therefore cannot ever choose to leave.

VI. Synopsys, Nokia, and the Invisible Layers

Green circuit board with intricate wiring

Before a chip is fabbed, it must be designed. Synopsys makes the tools that design virtually every advanced semiconductor on Earth. Photo: Pexels

The Synopsys investment is perhaps the most quietly powerful move in the entire portfolio. Synopsys is the dominant provider of electronic design automation (EDA) software - the tools chip designers use to create the transistor-level blueprints for every advanced semiconductor on Earth. Together with Cadence Design Systems, Synopsys forms a duopoly that is even more dominant in its market than Nvidia is in GPUs. For practical purposes, you cannot design a modern leading-edge chip without one of these two companies' software.

By investing $2 billion in Synopsys, Nvidia gains influence over the tools its competitors use to design chips that compete with Nvidia's GPUs. AMD's MI450 accelerators, Broadcom's custom ASICs for Google and Anthropic, Marvell's Trainium designs for Amazon - all of them are built using Synopsys tools. Nvidia does not need to see its competitors' chip designs. It just needs to ensure that the EDA ecosystem continues to optimize for Nvidia's manufacturing processes, interconnect standards, and architecture decisions.

There is also a direct technical benefit. Nvidia has been aggressively pushing computational lithography - using GPUs to accelerate the chip design process itself. Synopsys has been a key partner in this effort, allowing chip designers to simulate billions of transistor interactions on Nvidia GPU clusters rather than on traditional CPU farms. The investment deepens this co-development relationship and helps ensure that CUDA-based acceleration, including Nvidia's cuLitho computational lithography library, remains the default platform for EDA workflows.

The Nokia investment ($1 billion) targets telecom infrastructure - the backbone networking equipment that connects data centers to each other and to the broader internet. As AI inference moves from centralized data centers to edge locations closer to end users, the networking equipment between these facilities becomes critical. Nokia's 5G and fiber-optic equipment is deployed by carriers worldwide, and Nvidia's interest in influencing how these networks evolve to support AI workloads is straightforward.

Every layer of the stack, from the EDA software that designs the chips to the telecom equipment that connects the data centers, now has an Nvidia financial stake attached to it. The pattern is complete.

VII. The Broadcom Question - The One Company Jensen Cannot Buy

Computer motherboard with capacitors and microchips

Broadcom controls nearly 70% of the custom ASIC market and designs chips for Google, Meta, Anthropic, and now OpenAI. It is the one counterweight to Nvidia's dominance. Photo: Pexels

There is one conspicuous absence from Nvidia's investment ledger: Broadcom.

This is not an oversight. Broadcom is the single most dangerous competitor in Nvidia's landscape, and it is the one company that cannot be absorbed through a $2 billion check. With a market capitalization exceeding $800 billion and a custom ASIC business that controls nearly 70% of the market, Broadcom is too large, too entrenched, and too strategically positioned to be co-opted through a minority stake.

Broadcom designs Google's TPU processors - the custom chips that power Google's AI infrastructure and that Google also rents to external customers, including Anthropic. It manufactures Meta's MTIA accelerators. The rumor mill consistently places ByteDance and Apple as additional Broadcom custom silicon customers. And most recently, OpenAI announced it was using Broadcom to design its "Titan" XPU, a custom chip that would reduce OpenAI's existential dependency on Nvidia GPUs for running ChatGPT and its successor models.

Broadcom is targeting $100 billion in AI chip revenue by the end of 2027 and expects its AI revenue to double over the course of 2026 to $8.4 billion, according to Motley Fool reporting from March 26. It sits on a $73 billion backlog, per Techi's analysis.

The Next Platform mused in its March 31 analysis: "We also wonder how long before there will be a partnership between Nvidia and Broadcom." The answer may be: never, at least not on terms that Jensen Huang would accept. Unlike Marvell, which occupies a complementary niche and benefits from NVLink Fusion integration, Broadcom occupies a directly competing position. Broadcom's dominance in Ethernet switch ASICs (the Tomahawk series) competes with Nvidia's Spectrum switches. Its custom XPU business competes with Nvidia's GPU business. Its VMware hypervisor business competes with Nvidia's DPU-based virtualization stack.

The Nvidia-Marvell deal makes more sense when viewed through this lens. By pulling Marvell - Broadcom's smaller but more nimble competitor in custom silicon - firmly into the Nvidia ecosystem, Jensen Huang is attempting to isolate Broadcom. If Marvell's custom XPU customers (Amazon, Microsoft) adopt NVLink Fusion, they become structurally entangled with Nvidia's networking and CPU stack. That leaves Broadcom's customers (Google, Meta, OpenAI) as the only major holdouts from the Nvidia ecosystem - a shrinking island in an expanding Nvidia ocean.

Whether Broadcom can maintain its independence or whether market gravity eventually forces a rapprochement with Nvidia is the single most important structural question in the semiconductor industry right now. Everything else is noise.

VIII. The Antitrust Blind Spot - Why Nobody Is Stopping This

Motherboard with microchips closeup

Minority stakes fly under every regulatory radar. That is precisely why Nvidia uses them. Photo: Pexels

In any other era of American antitrust enforcement, a company with 80%+ market share in AI accelerators spending $18 billion to take stakes across its entire supply chain would trigger immediate scrutiny. The Hart-Scott-Rodino Act requires pre-merger notification for acquisitions above $119.5 million (as of 2025 thresholds). Several of these investments exceed that threshold and would have been filed accordingly.

But HSR filings for minority investments are reviewed against a much lower bar than full acquisitions. The FTC evaluates whether a minority stake gives the acquirer the ability to influence the target company's competitive behavior - board seats, veto rights, information access. Nvidia has been careful to structure these deals without obvious control mechanisms. A 2.5% stake in Marvell does not give Nvidia a board seat. A 4% stake in Intel does not give Nvidia veto power over foundry decisions.

What these stakes do provide is something harder to regulate: gravitational influence. When Nvidia is your largest strategic investor, your largest customer, and your most important technology partner, you do not need a board seat to influence company decisions. The quarterly business review meeting where Nvidia hints that it might shift volume to a competitor carries more weight than any formal governance right.

The European Commission has been somewhat more aggressive in scrutinizing minority stakes in technology markets, particularly after the Illumina-GRAIL debacle. But Nvidia's investments span multiple jurisdictions and multiple market segments, making it difficult for any single regulator to see the full picture. The FTC sees the Intel deal as a foundry investment. The EU sees the Nokia deal as a telecom play. Nobody is connecting the dots to see the complete web.

There is a historical parallel that should concern regulators. In the early 20th century, J.P. Morgan and John D. Rockefeller built interlocking directorates - networks of minority stakes and board positions across railroads, banks, and industrial companies that gave them de facto control over entire sectors of the American economy without technically owning any single company outright. The Clayton Antitrust Act of 1914 was passed specifically to address this pattern. But the Clayton Act's provisions against interlocking relationships were written for an era of board seats and voting shares. The modern equivalent - strategic investments coupled with technology partnerships and supply chain dependencies - falls outside its scope.

Jensen Huang is not J.P. Morgan. Nvidia is not Standard Oil. But the structural pattern of using minority financial positions to create ecosystem-wide dependencies is the same, adapted to the specific characteristics of the semiconductor industry. The question is whether antitrust law, in its current form, has the tools to address it.

The answer, based on the past six months of regulatory silence, appears to be no.

IX. What Nvidia Is Actually Building - The AI Infrastructure Operating System

Intricate circuit board pattern

The pattern is not a portfolio. It is a platform - every layer of the AI stack, locked together. Photo: Pexels

Step back from the individual deals and look at the complete picture. Nvidia now has financial stakes in:

- Chip design software (Synopsys)
- Leading-edge domestic manufacturing (Intel)
- Custom silicon design and networking (Marvell)
- Optical components and co-packaged optics (Lumentum, Coherent)
- GPU cloud capacity (CoreWeave, Nebius)
- Telecom and network infrastructure (Nokia)
- Frontier AI labs (OpenAI, xAI)

Add Nvidia's own products - GPUs (Grace-Blackwell, upcoming Vera-Rubin), CPUs (Grace, Vera), DPUs (Bluefield), switches (Spectrum-X, Quantum-X), the NVLink/NVSwitch interconnect, and the CUDA software ecosystem - and you have coverage of every single layer of the AI infrastructure stack, from the EDA tools that design the chips to the cloud platforms that rent them to end users.

No other company in the history of the technology industry has achieved this breadth of stack coverage through investment alone. Intel, in its prime, controlled manufacturing and chip design but never the networking or cloud layers. Google controls cloud and custom chips but depends on Broadcom for design and TSMC for manufacturing. Microsoft controls cloud and software but depends on Nvidia, AMD, and Marvell for hardware.

Nvidia's investment strategy creates a synthetic version of vertical integration that achieves the benefits - supply chain security, ecosystem lock-in, pricing power, technology roadmap alignment - without the liabilities of outright ownership: antitrust risk, balance sheet bloat, management distraction, and the operational burden of running businesses outside your core competency.

This is the model that will define the AI infrastructure era. Not ownership, but gravitational influence. Not mergers, but ecosystemic entanglement. Not control, but the inability of any partner to leave without rebuilding their entire technology stack from scratch.

The question is whether this is good for the industry. If Nvidia's ecosystem delivers better performance, lower latency, higher bandwidth, and faster innovation than the fragmented alternative, then customers benefit from the lock-in even as they lose the freedom to leave. This is the classic platform argument: Windows was a monopoly, but it also created a standardized computing platform that enabled an entire industry.

But if Nvidia uses its ecosystem position to extract monopoly rents - raising prices because customers cannot switch, slowing innovation because there is no competitive pressure, or favoring its own products over partners' alternatives - then the concentration of power becomes a tax on the entire AI industry.

Both outcomes are possible. Both outcomes are likely, at different points in the cycle. The build phase rewards ecosystem participants. The extract phase punishes them. The trick is knowing which phase you are in.

We are still in the build phase. Enjoy it while it lasts.

X. The Second-Order Effects Nobody Is Discussing

Data center corridor with server racks

When one company controls the infrastructure layer, the applications built on top inherit its choices - and its limitations. Photo: Pexels

Three implications of Nvidia's investment strategy that deserve more attention than they are getting:

1. The AMD Problem Gets Worse

AMD is now the only major AI chip company without a strategic investment relationship tying it into either of the two dominant ecosystems. Every other significant player - Marvell, Intel, the optical suppliers, and the hyperscalers' custom silicon teams - sits in Nvidia's orbit or in Broadcom's. AMD occupies neither camp. Its MI450 GPUs compete directly with Nvidia's Grace-Blackwell parts, but AMD has no equivalent investment network securing its supply chain, no NVLink Fusion-equivalent interconnect strategy to lock in customers, and no optical networking investments to ensure bandwidth parity at the rack level.

AMD's response has been to pursue hyperscaler deals directly - a reported 50,000 MI450 GPU deployment with Oracle, a partnership with OpenAI. These are real wins. But they are transactional relationships, not structural ones. When Nvidia invests $2 billion in your manufacturing partner (Intel), your optical component suppliers (Lumentum, Coherent), and your cloud deployment vehicles (CoreWeave, Nebius), it creates a web of dependencies that a better GPU alone cannot cut through.

Lisa Su's strategy of competing on chip performance is necessary but may no longer be sufficient. The game has changed from "who makes the best GPU" to "who controls the ecosystem in which GPUs operate." On that metric, AMD is falling further behind with every $2 billion check Jensen Huang writes.

2. National Security Implications

The United States government has made semiconductor independence a national security priority, investing $52 billion through the CHIPS Act to rebuild domestic manufacturing capacity. Nvidia's investment strategy directly intersects with this effort through its Intel stake, which supports the 18A foundry buildout in Arizona. But the concentration of AI infrastructure influence in a single company - even an American one - creates its own national security risks.

If Nvidia's ecosystem becomes the default platform for AI training and inference, then any disruption to Nvidia - whether from supply chain issues, cyberattacks, or regulatory action - cascades through the entire AI infrastructure stack simultaneously. The interlocking nature of Nvidia's investments means that a problem at Lumentum (optical components) could constrain Nvidia's switch production, which constrains CoreWeave's cluster deployments, which constrains the AI companies that depend on CoreWeave for compute. The system that Nvidia is building is optimized for performance, not resilience.

3. The Fiscal 2028 Revenue Inflection

Nvidia's projected $150-160 billion in fiscal 2027 revenue (ending January 2028) would make it one of the highest-revenue companies in the world. But the investment strategy reveals something about Nvidia's internal projections that the financial press has not fully processed: Jensen Huang is spending as if he expects fiscal 2028 and beyond to be dramatically larger. You do not invest $18 billion in supply chain capacity unless you believe the demand curve is still accelerating.

The first semi-custom NVLink Fusion reference designs integrating Marvell's 1.6T optical interconnects with Nvidia's Vera-Rubin architecture are expected by Q3 2026, according to FinancialContent analysis. If these designs perform as expected, they represent a new class of AI system - partially custom, partially Nvidia-standard - that could expand the addressable market far beyond the hyperscaler segment into enterprise, sovereign AI, and edge inference deployments.

Nvidia is not investing to protect its current market. It is investing to create the next one.
