The Silicon Thesis: Why Hyperscaler Orders Doubled
The single most important number in Cisco's print isn't revenue — it's the AI infrastructure order forecast, which jumped from $5 billion to $9 billion for FY2026, with $5.3 billion already booked year-to-date [1]. Networking product orders grew more than 50% YoY in the quarter, and data-center switching orders rose more than 40% [3]. That kind of acceleration in a business that 'had not grown for some time,' as Constellation Research's Holger Mueller put it, is what investors paid for when they sent the stock up ~14% [7].
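Those order figures imply a concrete runway for the rest of the fiscal year. A quick back-of-the-envelope sketch, using only the numbers above (the subtraction-and-growth framing is illustrative, not Cisco's own disclosure):

```python
# Back-of-the-envelope check on the raised AI infrastructure order guide.
# Figures are from the article; variable names and framing are illustrative.
prior_guide_bn = 5.0    # original FY2026 AI order forecast, in $B
raised_guide_bn = 9.0   # raised FY2026 forecast, in $B
booked_ytd_bn = 5.3     # orders already booked year-to-date, in $B

# Orders still needed over the remainder of FY2026 to hit the raised guide.
remaining_bn = raised_guide_bn - booked_ytd_bn

# How much the guide itself was raised, in percentage terms.
guide_increase_pct = (raised_guide_bn / prior_guide_bn - 1) * 100

print(f"Implied remaining FY2026 AI orders: ${remaining_bn:.1f}B")
print(f"Guide raised by {guide_increase_pct:.0f}%")
```

In other words, Cisco has already booked more than the entirety of its original full-year forecast, and the raised target still leaves roughly $3.7 billion of expected bookings ahead.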
The mechanism behind the surge is silicon. Cisco's Silicon One ASIC family — and specifically the G300, which Cisco markets as delivering 33% higher network utilization and 28% faster AI job completion — is the wedge into hyperscale AI data centers that had historically been the territory of merchant chips [14]. CEO Chuck Robbins put the strategic stakes bluntly: 'If you don't have silicon you're going to struggle to be relevant to the hyperscalers' [6]. That sentence is the whole pivot in one line — Cisco is no longer competing on boxes; it's competing on chips that go into boxes, including Nvidia's. The Cisco-Nvidia Spectrum-X partnership, which puts Silicon One inside Nvidia Ethernet switches, is the clearest expression of that posture [16].
The forward number to watch is management's own guide: it would be 'reasonable' to expect $6B+ in AI hyperscale revenue in FY2027 [3] — a step-change from where this segment was even 18 months ago.



