From 947 to 75,000: The Staggering Scale Behind OpenAI's Urgency
The single most striking data point in the Child Safety Blueprint story is not the framework itself -- it is the 80x surge in child exploitation incident reports that OpenAI submitted to the National Center for Missing and Exploited Children. In the first half of 2024, OpenAI filed 947 reports. In the same period of 2025, that number exploded to approximately 75,027. That is not a gradual trend line. It is a near-vertical cliff that reflects both the growing misuse of generative AI for exploitation and OpenAI's improving detection capabilities.
The Internet Watch Foundation corroborates this trajectory from the other side: it documented over 8,000 instances of AI-generated child sexual abuse material in just the first half of 2025, a 14% year-over-year increase. These are not hypothetical risks or projected scenarios. They are confirmed cases of synthetic abuse imagery being created, distributed, and consumed right now. The sheer volume explains why OpenAI moved from voluntary commitments and incremental safety features to publishing a full legislative and technical blueprint. When your own detection systems are flagging tens of thousands of exploitation attempts per quarter, a piecemeal approach is no longer sufficient.
This reporting surge also creates a practical bottleneck. NCMEC, the clearinghouse that receives these reports, must now process orders of magnitude more material -- much of it AI-generated and therefore harder to triage, because traditional forensic workflows rely on hash-matching against databases of known imagery, and novel synthetic content matches nothing in those databases. The blueprint's emphasis on refining reporting mechanisms is not abstract policy language. It is a direct response to the operational reality that the existing infrastructure was not built for this volume or this type of content.
