The PJM Analogy, Translated
When Midha says Amp is 'an independent system operator of the grid' for AI compute [4], he means something very specific. PJM Interconnection, the model he is borrowing from, doesn't generate electricity and doesn't own power lines. It coordinates: it forecasts demand across the utilities in its region, dispatches generation from whoever has the cheapest spare capacity in any given hour, and meters the flows. The arrangement exists because any single utility's load is spiky and unpredictable, but pooled across a region the aggregate is much smoother — so the system needs less reserve capacity overall and everyone pays less for it.
Amp is betting the same shape applies to AI training and inference. AMP Infra PBC, the operating unit, 'provides pooled, automated infrastructure (across clouds, models, data centers etc) on the global AI grid' [3]. Concretely, that means a frontier lab contracts with Amp instead of negotiating directly with a single cloud, and Amp routes the workload to whichever partnered cloud or data center has idle GPUs that hour. Workloads from independent teams that look spiky in isolation become smooth in aggregate, which is precisely the mechanism Midha is exploiting when he frames today's situation as a full-stack systems failure rather than a chip shortage [5].
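The dispatch step — send each workload to whoever has the cheapest idle capacity this hour — can be sketched as a greedy allocator. This is a hypothetical toy, not Amp's actual scheduler; the provider names, prices, and the single-price greedy policy are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    idle_gpus: int             # spare capacity this hour
    price_per_gpu_hour: float  # illustrative prices, not real quotes

def route(job_gpus: int, providers: list[Provider]) -> list[tuple[str, int]]:
    """Greedy sketch: fill the job from the cheapest idle capacity first."""
    plan, remaining = [], job_gpus
    for p in sorted(providers, key=lambda p: p.price_per_gpu_hour):
        if remaining == 0:
            break
        take = min(remaining, p.idle_gpus)
        if take:
            plan.append((p.name, take))
            remaining -= take
    if remaining:
        raise RuntimeError("not enough pooled capacity this hour")
    return plan

providers = [
    Provider("cloud-a", 512, 2.40),
    Provider("dc-b", 256, 1.90),
    Provider("cloud-c", 1024, 2.10),
]
plan = route(640, providers)
print(plan)  # cheapest provider drained first, overflow to the next
```

A real dispatcher would also weigh locality, interconnect bandwidth, and preemption risk, but the economic core is this one sorted pass over spare capacity.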
This is structurally different from buying time on AWS or CoreWeave. A direct hyperscaler customer pays for a slice of one provider's capacity, with that provider absorbing the volatility risk and pricing accordingly. An Amp grid member pays into a coordinated pool that smooths volatility across many providers and many tenants, with Amp taking the coordination role and (the pitch goes) passing the efficiency back to members at cost. The thousands of chips Amp says are already running in production, with several hundred megawatts coming online by year-end [4], are offered as early proof that the routing layer actually works.



