AI chip stock rotation: Intel, AMD, Micron rally as Nvidia lags
TECH



Strategic Overview

  • 01.
    Wall Street rotated AI infrastructure capital out of Nvidia and into Intel, AMD, and Micron during the week of May 4-8, 2026, with Intel and AMD each gaining roughly 25% and Micron jumping more than 37%.
  • 02.
    High-bandwidth memory has become the binding bottleneck of the AI buildout, with all major HBM suppliers sold out through 2026 and Micron's CEO confirming customers are receiving only 50-66% of requested volumes.
  • 03.
    Hyperscalers are accelerating in-house ASIC programs (Google TPU v7/v8, AWS Trainium 3, Microsoft Maia, Meta MTIA), with analysts projecting ASIC shipments to surpass GPU shipments by 2028.
  • 04.
Nvidia is up just ~14% YTD in 2026, only modestly ahead of the Nasdaq, with the May 20 Q1 FY27 earnings print viewed as the next catalyst that could either confirm or break the rotation narrative.

Deep Analysis

The HBM Ration: Why Memory Became the Real AI Bottleneck

The under-told mechanism behind the May 2026 rotation is high-bandwidth memory rationing. Micron CEO Sanjay Mehrotra disclosed that the company's largest AI customers are receiving only 50% to two-thirds of their requested HBM volumes; Micron (21% share), SK Hynix (62%) and Samsung (17%) are collectively sold out through 2026, with 2027 orders already booked. This converts memory makers from cyclical commodity vendors into pricing-power gatekeepers of the AI buildout: Micron's market cap crossed $800 billion for the first time during the week of May 4, and the company has become the U.S.-listed proxy for a multi-year HBM supercycle.

The downstream squeeze is visible in Nvidia's own supply chain — Nvidia is reportedly cutting gaming GPU production 30-40% in H1 2026 to free GDDR7 capacity for data-center products, evidence that even the incumbent now competes for memory it cannot internally produce. When the dominant chip designer is rationing its own consumer line to feed its server line, memory has structurally re-rated from a cyclical commodity to the binding constraint of the AI build.

Why AI Agents Need CPUs and Memory, Not Just GPUs

The workload mix is shifting from chatbot training, which is GPU-bound, toward AI agents and inference at scale, which are CPU- and memory-bound. Agents make many small, latency-sensitive calls, orchestrate tools, and hold long-running state — patterns that load the host CPU and the memory subsystem far more than batch training does. That mechanical shift broadens demand into x86/Arm server CPUs (Intel Xeon, AMD EPYC) and into HBM-rich accelerators where bandwidth, not raw FLOPs, is the binding constraint.

Bank of America responded by lifting its 2026-2030 server-CPU CAGR from 13% to 15% and its 2030 TAM from $59B to $71B (versus a 2025 base of roughly $28B), a structural upgrade that directly underwrites AMD's ~66% YTD gain and Intel's >200% YTD run. Reddit's r/ValueInvesting community echoed the same mechanism, with top commenters arguing that 'as the AI focus shifts more toward agents, calcs, automation and inference, this quite likely shifts the balance more positively to the compute side of semis over the GPU side' — framing the rotation not as a tactical trade but as a multi-year mix change.
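As a sanity check, the implied compound growth rate from the $28B 2025 base to the $71B 2030 TAM can be computed directly. A minimal sketch using only the article's figures; note the implied 2025-2030 rate comes out near 20%, above BofA's quoted 15%, which applies to the 2026-2030 window:

```python
# Implied-CAGR check on the Bank of America server-CPU TAM figures above.
# All dollar figures come from the article; the helper is generic.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

base_2025 = 28e9  # ~$28B 2025 server-CPU TAM (article's base)
tam_2030 = 71e9   # $71B raised 2030 TAM

implied = cagr(base_2025, tam_2030, years=5)
print(f"Implied 2025-2030 CAGR: {implied:.1%}")  # roughly 20.5%
```

The gap between the ~20% implied rate and the 15% headline CAGR is consistent with a steep 2025-to-2026 step-up in the base before the forecast window begins.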

AI Chip Performance Scoreboard 2026 YTD

AI chip stock 2026 YTD performance — Intel, Marvell, Micron, AMD all materially outpacing Nvidia and the Nasdaq.

Year-to-date returns through early May 2026 illustrate the rotation's breadth across the chip stack. Intel is up well over 200%, Marvell ~95%, Micron ~69% and AMD ~66%, while Nvidia trails at ~14%, only modestly ahead of the Nasdaq's ~13%. The dispersion is the story: every AI infrastructure name except Nvidia is materially outpacing the index.

The scoreboard reframes how to read AI exposure. For two years, Nvidia was a sufficient proxy for the AI trade. In 2026, the proxy has become a stack — Nvidia for training accelerators, AMD/Intel for server CPUs, Micron for U.S.-listed HBM, Broadcom and Marvell for custom-ASIC enablement. Investors building exposure off a single ticker now miss the parts of the stack where the marginal dollar of capex actually lands.

The 2028 Crossover: When ASIC Shipments Pass GPUs

Beyond the cyclical rotation sits a structural threat to Nvidia's moat. Custom AI chip sales are projected to grow ~45% in 2026 (TrendForce), and Oplexa's analysis projects ASIC shipments will surpass GPU shipments by 2028, which would be a first for the industry. The hyperscaler programs are already production-grade: Google has TPU v7 'Ironwood' in general availability and is dual-sourcing TPU v8 across Broadcom and MediaTek; AWS is ramping Trainium 3 in Q2 2026 with claims of 50% better price-performance versus Nvidia H100/B200; Microsoft is deploying Maia 100/200 (Braga) for large portions of ChatGPT inference; Meta is migrating recommendation engines onto MTIA v3/MTIA 400.

Custom ASICs are reported to deliver 3-5x better performance-per-watt for fixed inference workloads, which is precisely the workload mix that is growing fastest. Broadcom is positioned to capture ~60% of the custom ASIC TAM by 2027, and Marvell is up ~95% YTD — the merchant ASIC enablers, not just the hyperscalers, are pricing in this crossover. Critically, even Nvidia bulls like Elon Musk acknowledge this is happening: hyperscalers adopt ASICs to gain bargaining power, and that bargaining shift, more than any technical inferiority of GPUs, is what compresses Nvidia's pricing.
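The crossover arithmetic can be sketched with toy numbers. Only the ~45% ASIC growth rate comes from the TrendForce figure above; the starting shipment ratio and the GPU growth rate below are hypothetical assumptions, chosen purely to illustrate how a smaller, faster-growing base overtakes the incumbent within two years:

```python
# Illustrative ASIC/GPU shipment crossover. Only the ~45% ASIC growth rate
# is from the article (TrendForce); the 0.6 starting ratio and 10% GPU
# growth are hypothetical assumptions for the sketch.

def years_to_crossover(asic_share: float, asic_growth: float, gpu_growth: float) -> int:
    """Years until ASIC units pass GPU units, starting from ASIC shipments
    at `asic_share` of GPU volume, with both growing at fixed annual rates."""
    asic, gpu, years = asic_share, 1.0, 0
    while asic <= gpu:
        asic *= 1 + asic_growth
        gpu *= 1 + gpu_growth
        years += 1
    return years

# Assumption: ASICs ship ~60% of GPU unit volume in 2026, GPUs grow ~10%/yr.
print(years_to_crossover(0.6, 0.45, 0.10))  # 2 -> crossover in 2028
```

The point of the sketch is the compounding ratio: at 45% versus 10% growth, the ASIC/GPU ratio expands ~32% per year, so even a sizable starting gap closes in a couple of years.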

May 20 as Referendum — and the 1999 Contrarian Case

Nvidia's Q1 FY27 print on May 20, 2026 is shaping up as a referendum on the rotation. Bulls, including Mizuho's Jordan Klein, frame the move as a 'changing of the guard in AI' anchored to real CPU and memory shortages. I/O Fund has already trimmed Nvidia exposure citing inference share leakage to ASICs and a delayed Rubin ramp.

The contrarian case, voiced by some strategists and echoed in retail forums, is that the broader semi rally now resembles 1999 conditions and could see a 25-30% correction in the SOX even as the AI build-out continues — meaning rotation winners could give back gains faster than Nvidia. Conviction in the rotation thesis is high but bifurcated: r/stocks contributors warn that 'once the data center buildout completes, chip stocks will crater' even as r/ValueInvesting debates whether each of Intel, AMD and Micron could cross a $1 trillion market cap. Any softness in Nvidia's guidance or commentary on inference share could trigger a sharp drawdown across a complex now valued near $5 trillion.

Historical Context

2024-01-01
Through 2024 and early 2025, capital flooded into Nvidia GPU capacity as the dominant AI training trade, establishing the baseline that the 2026 rotation is now disrupting.
2026-03-01
Micron CEO Sanjay Mehrotra warned in March that AI customers were receiving only 50%-66% of requested HBM volumes, foreshadowing the May rotation by signaling memory as the binding constraint.
2026-03-01
Meta unveiled four new MTIA-family custom AI chips, accelerating its move off Nvidia for recommendation engines and Llama inference.
2026-04-30
Intel logged its best monthly stock performance in 55 years, setting up the early-May rotation and validating the CPU-revival thesis after years of losing to TSMC and Nvidia.
2026-05-06
CNBC framed Nvidia as 'falling behind' the broader semiconductor surge, formalizing the rotation thesis on financial media.
2026-05-20
Upcoming Q1 FY27 earnings on May 20, 2026 are seen as the next catalyst that could either confirm or break the rotation narrative as Nvidia approaches a $5 trillion valuation.

Power Map

Key Players


Nvidia

Incumbent GPU leader still commanding >80% of the AI accelerator market and projected to grow revenue ~70% this fiscal year, but its CUDA moat is increasingly contested as inference workloads and ASIC share expand.


Intel

Primary CPU rotation beneficiary, up well over 200% YTD as data-center CPU demand for AI agents resurges; logged its best monthly stock performance in 55 years in April 2026.


AMD

Dual-leverage play on EPYC server CPUs and MI350 inference accelerators; up ~66% YTD and outpacing Nvidia.


Micron

U.S. HBM supplier with 21% global share whose market cap crossed $800 billion for the first time during the week of May 4, 2026 as HBM capacity is fully booked through 2026.


Hyperscalers (Google, AWS, Microsoft, Meta)

Vertically integrating with custom silicon — TPU v7 'Ironwood', Trainium 3, Maia 100/200, and MTIA v3/MTIA 400 — to capture more economics in-house and reduce Nvidia margin extraction.


Broadcom and Marvell

Custom-ASIC merchant winners enabling hyperscaler silicon; Broadcom on track to capture ~60% of custom ASIC TAM by 2027 and Marvell up ~95% YTD.

Source Articles


Analysts

"Calls the early-May 2026 move a 'changing of the guard in AI', arguing the rotation reflects real shortages in memory and CPU capacity rather than sentiment alone."

Jordan Klein
Senior Analyst, Mizuho Securities

"Confirms HBM supply is so tight that Micron's largest AI customers are being rationed at only 50% to two-thirds of their requested volumes, with 2027 orders already arriving."

Sanjay Mehrotra
CEO, Micron Technology

"Raised the multi-year server CPU forecast on AI-agent demand, lifting the 2026-2030 industry CAGR from 13% to 15% and the 2030 TAM from $59B to $71B."

Bank of America Securities chip team
Sell-side equity research

"Trimmed Nvidia exposure citing inference share leakage to ASICs and a delayed Rubin ramp — long-term thesis intact, but 2026 allocation is reduced as the CUDA moat matters less for inference."

I/O Fund (Beth Kindig team)
Independent equity research

"Argues the 2026 hyperscaler ASIC pivot has crossed from cost-savings into structural competitive advantage, projecting ASIC shipments will surpass GPU shipments by 2028."

Oplexa research team
Industry analyst (custom-ASIC market)
The Crowd

"Nvidia's journey in 2026 raises the big question: Is the AI-fueled gold rush slowing down? While stock performance underwhelms, Nvidia's strategic investments in custom AI chips and cloud infrastructure suggest more is on the horizon. Keep an eye on their quarterly results."

@DailyAITechNews

"Tech's biggest bull lists his Top 5 AI stocks for 2026, and Nvidia's isn't one of them"

@MarketWatch

"Nvidia chip shift to smartphone-style memory to double server-memory prices by end-2026 - Counterpoint. Nvidia's move to use smartphone-style memory chips in its artificial intelligence servers could cause server-memory prices to double by late 2026."

@Jukanlosreve

"Chip stocks are like vortexes sucking up all the money"

u/masteryyi322
Broadcast
Why Wedbush's Dan Ives says these five AI stocks will boom in 2026


Altimeter's Brad Gerstner on big tech and how to trade it


AI Memory Stocks (MU, SNDK) Just Had a Historic Day
