Amp raises $1.3B to build alternative AI compute grid
TECH


Strategic Overview

  • 01.
    Amp, a public benefit corporation founded by former a16z general partner Anjney Midha, has disclosed more than $1.3 billion in funding from Andreessen Horowitz, Y Combinator and cloud-computing providers to build a shared 'AI grid' that pools surplus data center capacity for under-resourced labs.
  • 02.
    Founding grid members include Mistral, ElevenLabs, Black Forest Labs, Periodic Labs, Sesame and Arena — non-hyperscaler frontier AI teams that will buy pooled GPU access at cost.
  • 03.
    Amp is targeting 1.9 gigawatts of compute capacity within five years, with about 200 megawatts online by the end of 2026, and has committed up to $500 million of its profits through 2030 to a Public Wealth Fund for communities affected by data center buildouts.
  • 04.
The company itself owns no GPUs or data centers; it operates as what Midha calls 'an independent system operator of the grid,' modeled explicitly on PJM Interconnection, a regional electricity coordinator.

Deep Analysis

The PJM Analogy, Translated

When Midha says Amp is 'an independent system operator of the grid' for AI compute [4], he means something very specific. PJM Interconnection, the model he is borrowing from, doesn't generate electricity and doesn't own power lines. It coordinates: it forecasts demand across utilities in its region, dispatches generation from whoever has the cheapest spare capacity in any given hour, and meters the flows. The reason that arrangement exists is that any single utility's load is spiky and unpredictable, but pooled across a region the aggregate is much smoother — so the system needs less reserve capacity overall and everyone pays less for it.
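The statistical mechanism behind that claim can be sketched numerically. In the toy model below (hypothetical parameters, not Amp's actual workload data), each lab's hourly GPU demand is a low baseline with occasional independent training bursts; provisioning for the pooled peak needs far less reserve than every lab provisioning for its own peak:

```python
import random

random.seed(0)

HOURS, LABS = 1000, 20

# One lab's hourly GPU demand: a low baseline plus occasional large
# training bursts -- spiky and unpredictable in isolation.
def lab_demand():
    return [100 + (random.random() < 0.1) * random.randint(500, 2000)
            for _ in range(HOURS)]

labs = [lab_demand() for _ in range(LABS)]

# Standalone provisioning: every lab reserves for its own peak hour.
standalone_reserve = sum(max(series) for series in labs)

# Pooled provisioning: the grid reserves for the peak of the aggregate,
# since independent bursts rarely all coincide in the same hour.
pooled = [sum(series[h] for series in labs) for h in range(HOURS)]
pooled_reserve = max(pooled)

print(f"standalone reserve: {standalone_reserve} GPUs")
print(f"pooled reserve:     {pooled_reserve} GPUs")
print(f"savings:            {1 - pooled_reserve / standalone_reserve:.0%}")
```

The savings shrink if workloads are correlated (everyone training before a conference deadline), which is why the breadth of the member base matters to the model.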

Amp is betting the same shape applies to AI training and inference. AMP Infra PBC, the operating unit, 'provides pooled, automated infrastructure (across clouds, models, data centers etc) on the global AI grid' [3]. Concretely that means a frontier lab signs into Amp instead of negotiating directly with a single cloud, and Amp routes the workload to whichever partnered cloud or data center has idle GPUs that hour. Workloads from independent teams that look spiky in isolation become smooth in aggregate, which is the precise mechanism Midha is exploiting when he frames today's situation as a full-stack systems failure rather than a chip shortage [5].

This is structurally different from buying time on AWS or CoreWeave. A direct hyperscaler customer is paying for a slice of one provider's capacity, with that provider absorbing the volatility risk and pricing accordingly. An Amp grid member is paying into a coordinated pool that smooths volatility across many providers and many tenants, with Amp taking the coordination role and (the pitch goes) passing the efficiency back to members at cost. The thousands of chips Amp says are already running in production, with several hundred megawatts coming online by year-end [4], are the early proof the routing layer actually works.

Follow the Money: How 'At Cost' Is Supposed to Pencil Out

Amp's most counterintuitive number is not the $1.3B raise — it's the 'at cost' clause. Grid members get compute at Amp's underlying cost; any excess is resold to non-members at a modest profit [4]. A for-profit company aiming for 1.9 gigawatts inside five years [2] normally cannot survive on a pass-through pricing model. So what is the actual P&L?

The answer lives in the two-unit structure visible on Amp's own site: AMP Infra PBC runs the grid, while AMP Foundry — backed by capital institutions NEA and Stepstone among the founding grid partners [3] — 'provides capital so grid members can access compute.' Foundry is the financing wrapper. It can sign long-dated take-or-pay contracts with data center operators (the kind hyperscalers normally write) and amortize that fixed cost across many smaller members who individually couldn't underwrite a multi-year commitment. The margin is the spread between what Foundry pays operators for guaranteed capacity and what spot non-members pay for the leftover hours. Members pay variable cost; the firm earns the volatility premium that hyperscalers normally capture for themselves.
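A worked toy example of that spread (every number below is an illustrative assumption, not a disclosed figure):

```python
# Hypothetical monthly economics of one take-or-pay capacity block.
contracted_gpu_hours = 1_000_000   # Foundry's guaranteed purchase
wholesale_rate = 1.50              # $/GPU-hr paid to the operator
member_utilization = 0.70          # share of hours members actually use
spot_rate = 2.40                   # $/GPU-hr non-members pay for leftovers

cost = contracted_gpu_hours * wholesale_rate

# Members pay exactly the underlying cost of the hours they consume.
member_revenue = contracted_gpu_hours * member_utilization * wholesale_rate

# Leftover hours are resold to non-members at the spot rate.
spot_revenue = contracted_gpu_hours * (1 - member_utilization) * spot_rate

margin = member_revenue + spot_revenue - cost
print(f"margin on the block: ${margin:,.0f}")  # prints "margin on the block: $270,000"
```

Note what the arithmetic implies: the margin exists only while the spot rate stays above the wholesale rate. If spot demand dries up, or the operators sell their leftover hours directly, the spread inverts and the fixed take-or-pay obligation remains.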

That design is also where the execution risk lives. Amp owns no GPUs and no data centers [4], so every cost line on its grid is somebody else's price. Reporting on the raise indicates LP backers include Nvidia, Microsoft Azure and Oracle [6] — the same operators whose excess capacity Amp needs to keep buying cheaply. If hyperscaler utilization tightens, or those LP-operators decide they'd rather sell directly to the next Mistral, Amp's at-cost promise becomes a margin squeeze on a fixed obligation. The model is most exposed in precisely the scenario it is designed to solve.

The Public Wealth Fund Gambit

The most unusual paragraph on Amp's site is the one promising 'up to $500 million set aside from AMP's profits through 2030' for a Public Wealth Fund supporting communities affected by data center buildouts [3]. That is rare-bordering-on-unprecedented for an AI infrastructure company. It is also strategically legible.

The 1.9 GW target [2] plugs Amp into a buildout cycle that is already producing visible local backlash over strained regional grids, water and noise complaints, and zoning fights at proposed sites. Every gigawatt Amp aggregates rides on permits and substations that municipalities can refuse to grant. By codifying a community payout in a public benefit corporation structure before any pushback arrives, Amp converts what would otherwise be a per-site political negotiation into a brand-level commitment its operator partners can point to. The dollars don't have to dominate community economics to do the work; they have to be the first thing a local council reads.

The second-order read is even more interesting. If the at-cost model holds, hyperscalers competing for the same data center supply now face a competitor that has publicly pre-committed to revenue-sharing with affected communities. Matching that pledge dilutes their margins; refusing to match it cedes the political narrative. Either way, Amp has reframed compute infrastructure as a public-utility-shaped market in which 'who gets the surplus' is a live question — exactly the framing PJM-style regulation enforces in electricity, and exactly the framing the dominant cloud providers have spent a decade keeping off the table.

Oxygen to Amp: The Pipeline Sand Hill Built in Plain Sight

Amp did not appear out of nowhere. The roadmap was visible inside a16z for nearly three years, just not framed as a standalone company. In July 2023 Midha joined Andreessen Horowitz as a general partner leading frontier AI investments [6], and three months later the firm launched Oxygen alongside a $1.25 billion AI infrastructure fund — a program that bundled guaranteed GPU access with portfolio investments [6]. Oxygen was, in effect, a private grid for one fund's portfolio.

By mid-2024 the program was managing more than 20,000 GPUs worth roughly $500 million, and Midha was publicly admitting demand was outstripping supply [6]. Oxygen had hit the same wall Amp is now trying to vault: a single LP base's portfolio is too narrow a demand pool to smooth volatility, and a single fund's balance sheet is too small to underwrite multi-gigawatt commitments. AMP PBC was announced in October 2024 as an independent venture still operating from within a16z [6]; Midha left the firm full-time in October 2025, with reporting at the time indicating the fund was raising over $1 billion from LPs reportedly including Nvidia, Microsoft Azure and Oracle [6]. The May 2026 disclosure of $1.3B-plus and the named founding members [1] is the public-launch milestone of a three-year operational handoff.

The signal in that timeline isn't 'a16z spun out a portfolio service.' It's that the largest mainstream venture firm in AI concluded the GPU-allocation problem was too big for any single fund to solve in-house — and built the off-ramp before admitting it. Trade outlets and long-form interview channels have led the initial coverage, framing Amp as Midha's vehicle for an even larger compute coalition; broader viral discussion is conspicuously absent, suggesting the story is still landing among industry insiders rather than the retail-AI audience.

Aggregator or Hyperscaler Customer? The Contrarian Read

The clean version of Amp's story is 'a structural alternative to hyperscaler compute hoarding.' The messier version is that Amp's entire business model depends on continuing to buy from those same hyperscalers. Reporting on the predecessor raise indicates LP backers include Nvidia, Microsoft Azure and Oracle [6] — the chip vendor and two of the cloud providers Midha publicly accuses of monopolizing compute [1].

That tension is not necessarily fatal, but it changes the read. From Azure or Oracle's perspective, Amp is a useful customer: it commits to long-dated capacity, smooths utilization curves across many tenants, and reduces the operator's own sales cost for chasing dozens of mid-tier AI labs individually. From Nvidia's perspective, Amp is a demand aggregator that broadens the GPU buyer base beyond five hyperscalers — a hedge against a future in which OpenAI and Anthropic vertically integrate into custom silicon. In other words, the same backers Midha frames as adversaries in press quotes have specific financial reasons to want Amp to succeed, as long as Amp stays in the coordinator lane and doesn't try to build its own data centers.

That boundary is the one to watch. As long as Amp routes workloads onto hyperscaler-adjacent capacity, the relationship is symbiotic and the at-cost model is sustainable. If Amp ever crosses into building its own infrastructure — owning rather than coordinating — the LP-customer alignment breaks. The Public Wealth Fund pledge [3]and the PBC structure read very differently in each scenario: in the symbiotic version they are clever positioning, and in the eventual-vertical-integration version they are the political moat being built ahead of time. Which one Amp is actually playing for is the most important uncertainty in the whole launch.

Historical Context

2023-07
Midha joined Andreessen Horowitz as a general partner leading frontier AI investments.
2023-10
a16z launched Oxygen alongside a $1.25B AI infrastructure fund, bundling guaranteed GPU access with portfolio investments — the direct precursor to Amp's grid concept.
2024-07
Oxygen managed over 20,000 GPUs (~$500M value), but Midha admitted demand was outstripping the pool, foreshadowing the need for a larger independent grid.
2024-10
Midha announced AMP as an independent venture to provide compute and capital to frontier AI teams while still operating from within a16z.
2025-10
Midha left Andreessen Horowitz full-time to run AMP, with reporting indicating the fund was raising over $1 billion from LPs including Nvidia, Microsoft Azure and Oracle.
2026-05-12
Amp publicly disclosed it had raised more than $1.3 billion, named its founding grid partners and unveiled the Public Wealth Fund commitment.

Power Map

Key Players

Anjney Midha

Founder and CEO of AMP PBC. Built and ran a16z's Oxygen GPU-allocation program; now sets Amp's grid strategy and PJM-style operating model.


Andreessen Horowitz (a16z)

Lead capital backer of the $1.3B raise and a founding grid partner; supplies portfolio companies as anchor demand and lends Amp credibility against the hyperscalers.


Y Combinator

Investor and founding grid partner; routes its portfolio of capital-constrained startups into Amp's compute pool, helping smooth aggregate demand.


Mistral, ElevenLabs, Black Forest Labs, Periodic Labs, Sesame, Arena

Founding grid customers — non-hyperscaler frontier labs whose pooled demand validates the at-cost model and gives Amp negotiating leverage with data center operators.


NEA and Stepstone

Capital institutions among the founding grid partners; provide the financing leverage that lets Amp Foundry underwrite compute deals on behalf of grid members.


Hyperscalers (Google, Amazon, Meta) and OpenAI/Anthropic

The incumbents Midha publicly accuses of 'hoarding' compute; their absorption of GPU supply is the competitive backdrop that Amp exists to push against.

Fact Check

6 cited
  [1] Start-Up Raises $1.3 Billion for an A.I. 'Grid'
  [2] Startup That Aims to Widen Access to Compute Draws $1.3B
  [3] AMP PBC
  [4] The AI Coachella Prof's Plan for the AI Grid
  [5] AMP Grid: Building an Independent AI Compute Network
  [6] Anjney Midha: From a16z GPU Kingmaker to AMP — Deep Analysis


THE SIGNAL.

Analysts

"Argues big tech and a handful of well-funded labs are monopolizing AI compute, leaving researchers and smaller startups starved: 'Some companies just can't get the computing power they need. The world's wealthiest and most powerful companies are hoarding the infrastructure for themselves.'"

Anjney Midha
Founder and CEO, AMP PBC; former General Partner, Andreessen Horowitz

"Describes Amp's role as analogous to a regional electricity grid operator, calling himself 'an independent system operator of the grid' for AI compute."

Anjney Midha
Founder, AMP PBC

"Frames the AI race as systems engineering, not silicon procurement: 'The AI scaling race is not a chip race. It's a full stack systems code race,' and warns of an infrastructure wastage crisis where idle compute coexists with starved frontier labs."

Anjney Midha
Founder, AMP PBC

"Validates the pooled-demand thesis: 'When you pool your demand, you can have far more serious conversation about buying computing power.'"

Liam Fedus
CEO, Periodic Labs (founding grid member)
The Crowd

"AMP founder @AnjneyMidha thinks GameStop CEO Ryan Cohen is onto something with his eBay bid. He says that eBay's 10K filing showed it spent $2.4B on marketing to acquire a million users, or $2,400/ user, in FY2025. "I don't think he's buying eBay because he thinks he's smarter..."

@tbpn1686

"The founding of Anjney Midha's new firm, AMP, highlights the pressing need to connect AI startups with the compute power required for their growth. Read: https://thein.fo/4qG641k"

@theinformation0

""Anjney Midha, a former Andreessen Horowitz general partner, is in discussions to secure more than $10 billion in capital for a coalition connected to his new venture, AMP, according to someone familiar with the deal.""

@loyndsview0

"AMP has raised more than $1.3 billion and wants to pool unused compute across labs like a grid. That could reshape who gets to build frontier AI."

@u/Altruistic-Mud56860
Broadcast
The Early Days of Anthropic & How 21 of 22 VCs Rejected It | The Four Bottlenecks in AI | Anj Midha

Stanford CS153 Frontier Systems | Anjney Midha from AMP PBC on Frontier Systems

FULL INTERVIEW: Anjney Midha on Fixing AI's Biggest Bottleneck