Three Models, One Gating Lever
Daybreak's most consequential design decision is not the headline launch but the three-tier model ladder underneath it. The default GPT-5.5 ships with standard safeguards. GPT-5.5 with Trusted Access for Cyber unlocks behaviors needed for verified defensive work in authorized environments. GPT-5.5-Cyber, available only as a limited preview, is the permissive variant intended for red teaming, penetration testing, and controlled validation [2].
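The ladder is essentially a capability-policy table keyed by tier. As a minimal sketch of that idea, the tier names below mirror the article, but the policy fields, schema, and gating function are invented for illustration; OpenAI's actual access-control logic is not public:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    DEFAULT = "gpt-5.5"                      # standard safeguards
    TRUSTED_ACCESS = "gpt-5.5-trusted-cyber" # verified defensive work
    CYBER_PREVIEW = "gpt-5.5-cyber"          # limited-preview permissive tier

@dataclass(frozen=True)
class TierPolicy:
    exploit_generation: bool    # may the tier emit working exploit code?
    requires_verification: bool # must the organization be vetted first?
    preview_only: bool          # gated behind a limited preview

# Hypothetical policy table reflecting the ladder described above.
POLICIES = {
    Tier.DEFAULT: TierPolicy(False, False, False),
    Tier.TRUSTED_ACCESS: TierPolicy(False, True, False),
    Tier.CYBER_PREVIEW: TierPolicy(True, True, True),
}

def allowed(tier: Tier, wants_exploit_output: bool, org_verified: bool) -> bool:
    """Return True if this request is permitted under the tier's policy."""
    p = POLICIES[tier]
    if p.requires_verification and not org_verified:
        return False
    if wants_exploit_output and not p.exploit_generation:
        return False
    return True
```

The point of the table shape: loosening or tightening a tier is a one-row policy change, not a model change.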
The agentic harness on top is Codex Security, which OpenAI now describes as Daybreak's operational core: it builds an editable threat model of a target repository focused on realistic attack paths and high-impact code, identifies and tests vulnerabilities inside an isolated environment, proposes fixes, and returns audit-ready evidence [4]. Practitioner breakdowns on YouTube report that Codex Security orchestrates roughly ten subagents to handle the scanning, threat modeling, patch generation, and regression-test authoring inside that loop.
That architecture matters because it turns model access itself into the lever OpenAI pulls to manage dual-use risk. The same frontier capability that finds a vulnerability can write the exploit, so OpenAI's bet is that you commercialize defense by gating which defenders get which tier, then layering verification, scoped permissions, and human oversight on top [6]. From June 1, 2026, GPT-5.5-Cyber access tightens further, requiring phishing-resistant authentication for anyone using the permissive tier [9].
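The June 2026 change is easy to express as a gate. A minimal sketch, assuming a simple method-name check: the list of what counts as "phishing-resistant" (hardware-bound factors like WebAuthn/FIDO2 passkeys, as opposed to TOTP or SMS codes) and the function itself are illustrative, not OpenAI's implementation:

```python
from datetime import date

# Hardware-bound factors generally considered phishing-resistant;
# this set is an assumption for the example, not a published policy.
PHISHING_RESISTANT = {"webauthn", "fido2", "passkey"}

CUTOVER = date(2026, 6, 1)  # date from the article's reported policy change

def cyber_tier_allowed(auth_method: str, today: date, org_verified: bool) -> bool:
    """Gate access to the permissive GPT-5.5-Cyber tier."""
    if not org_verified:
        return False  # verification is required at every point in time
    if today >= CUTOVER and auth_method.lower() not in PHISHING_RESISTANT:
        return False  # after the cutover, TOTP/SMS no longer suffice
    return True
```

The design choice worth noting: the cutover tightens the authentication factor, not the vetting, so already-verified organizations keep access by switching factors rather than re-enrolling.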



