Six Rivals, One Spec: Why OpenAI Gave Away Its Networking Edge
The author list on the MRC paper is the part of this launch that should not exist. AMD and NVIDIA do not co-author specs. Broadcom and NVIDIA are direct competitors in AI switching silicon. Intel is fighting both for NIC share. And yet the Multipath Reliable Connection protocol arrives with all six logos — OpenAI, AMD, Broadcom, Intel, Microsoft, NVIDIA — plus an OCP filing that hands the design to anyone who wants to implement it. The reason is visible in OpenAI networking lead Mark Handley's own framing: the work is explicitly positioned 'as opposed to each of these large companies doing their own thing.'
The choice to commoditize this layer is strategic, not generous. OpenAI does not win by selling networking IP; it wins by training larger models faster on whichever vendor's silicon is cheapest. A proprietary multipath protocol locked to one vendor would slow OpenAI down at the procurement layer for years. A protocol that AMD, Broadcom, Intel, and NVIDIA all implement turns the AI back-end fabric into something OpenAI can dual-source on day one — which is exactly what is already happening, with Broadcom and NVIDIA hardware running MRC side-by-side inside the company's deployments. For Microsoft and OCI, which together host much of OpenAI's training compute, the same logic holds: Fairwater and Abilene each get a back-end fabric that is not hostage to a single switch vendor's roadmap. The signal to competitors is sharper still: by routing the spec through OCP rather than a closed alliance, the authors are daring Meta, Google, and the InfiniBand-heavy stacks to either adopt or explain why they didn't.
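To make the "multipath" idea concrete: the core trick in fabrics of this kind is to spray a single connection's packets across many network paths and restore order at the receiver, rather than pinning a flow to one path that can congest. The sketch below is a toy illustration of that general pattern only — the function names, parameters, and simulation are invented for this example and are not taken from the MRC spec.

```python
import random

def simulate_multipath(payloads, num_paths=4, seed=0):
    """Toy model of packet spraying: tag payloads with sequence
    numbers, spread them round-robin across paths, let the paths
    interleave arrivals out of order, then reassemble by sequence
    number at the receiver. Illustrative only, not MRC's design."""
    rng = random.Random(seed)
    # Sender: spray sequence-numbered packets across the paths.
    paths = [[] for _ in range(num_paths)]
    for seq, data in enumerate(payloads):
        paths[seq % num_paths].append((seq, data))
    # Network: paths deliver in order internally, but interleave
    # unpredictably with each other (out-of-order arrival overall).
    arrivals = []
    while any(paths):
        p = rng.choice([i for i, q in enumerate(paths) if q])
        arrivals.append(paths[p].pop(0))
    # Receiver: reorder by sequence number to restore the stream.
    arrivals.sort(key=lambda pkt: pkt[0])
    return [data for _, data in arrivals]
```

The point of the sketch is the commoditization argument in miniature: nothing in the spray-and-reorder loop cares which vendor's switch carries each path, which is what lets the same protocol run over Broadcom and NVIDIA hardware interchangeably.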


