OpenAI believes it has finally pulled ahead in one of the most closely watched races in artificial intelligence: AI-powered coding. Its newest model, GPT-5.3-Codex, posts markedly higher scores on coding benchmarks and in reported results than earlier generations of both OpenAI’s and Anthropic’s models, suggesting a long-sought edge in a category that could reshape how software is built.
But the company is rolling out the model with unusually tight controls and delaying full developer access as it confronts a harder reality: The same capabilities that make GPT-5.3-Codex so effective at writing, testing, and reasoning about code also raise serious cybersecurity concerns. In the race to build the most powerful coding model, OpenAI has run headlong into the risks of releasing it.
GPT-5.3-Codex is available to paid ChatGPT users, who can use the model for everyday software development tasks such as writing, debugging, and testing code through OpenAI’s Codex tools and ChatGPT interface. But for now, the company is not opening unrestricted access for high-risk cybersecurity uses, nor is it immediately enabling the full API access that would allow the model to be automated at scale. Those more sensitive applications are being gated behind additional safeguards, including a new trusted-access program for vetted security professionals, reflecting OpenAI’s view that the model has crossed a new cybersecurity risk threshold.
The company’s blog post accompanying the model release on Thursday said that while it does not have “definitive evidence” the new model can fully automate cyberattacks, “we’re taking a precautionary approach and deploying our most comprehensive cybersecurity safety stack to date. Our mitigations include safety training, automated monitoring, trusted access for advanced capabilities, and enforcement pipelines including threat intelligence.”

OpenAI CEO Sam Altman posted on X about the concerns, saying that GPT-5.3-Codex is “our first model that hits ‘high’ for cybersecurity on our preparedness framework,” an internal risk classification system OpenAI uses for model releases. In other words, this is the first model OpenAI believes is good enough at coding and reasoning that it could meaningfully enable real-world cyber harm, especially if automated or used at scale.