This article introduces Codex Security, an OpenAI application security agent designed to surface higher-confidence vulnerabilities by building a detailed understanding of how a project works, rather than flagging large volumes of low-value issues. It builds a project-specific threat model, searches for problems in context, validates findings in sandboxed or tailored environments where possible, and can suggest fixes, so teams spend less time triaging false alarms and more time addressing real risks.

OpenAI says the tool grew out of a private beta under the name Aardvark, during which internal and external testing improved precision, cut noise, and reduced false positives and inflated severity ratings. The piece frames the launch as a response to the growing need for security review as AI accelerates software development, and notes that the tool is now rolling out in research preview through Codex web for ChatGPT Pro, Enterprise, Business, and Edu users, with free usage for the first month.

Recent news