The $1.1B Bet on a Decade-Old Heresy
The clearest way to read this round is as a billion-dollar wager that the LLM era has a ceiling Silver has already glimpsed twice. AlphaGo (2016) used human game records as a starting point; AlphaZero (2017) deleted them and got better. That progression is the entire intellectual scaffolding of Ineffable Intelligence: if removing human data made a Go agent stronger, the same operation should — eventually — produce general intelligence that has read no books, watched no videos, and imitated no demonstrations. Sequoia's pitch quotes the company's stance precisely: 'No pre-training. No imitation. Just an agent learning endlessly from the consequences of its own actions.'
This is a direct repudiation of the dominant paradigm. Every frontier lab that matters — OpenAI, Anthropic, Google DeepMind itself, xAI — has organized its capex around scaling pre-trained transformers on human-produced text and increasingly synthetic-but-still-human-derived data. Silver's claim, articulated in his 2025 essay with Richard Sutton and now capitalized at $5.1B, is that this entire industry is approaching an asymptote. The interesting tell is that Sutton, the field's intellectual elder, publicly endorsed the launch on X, calling it the fulfillment of the 'Era of Experience' — a co-sign that is rarer in academic RL than billion-dollar checks are in venture capital.


