OpenAI Expands Cyber-Focused GPT-5.5 Access As AI Security Race Intensifies

OpenAI is widening access to a more permissive version of its GPT-5.5 artificial intelligence model for vetted cybersecurity defenders as concerns grow over how advanced AI systems could reshape digital warfare and critical infrastructure security.

The company announced that approved members of its Trusted Access for Cyber program will gain access to GPT-5.5-Cyber, a specialized version of the model designed to help organizations identify software vulnerabilities, analyze malware and simulate cyberattacks, Axios reported.

The release comes amid heightened scrutiny over AI systems capable of conducting advanced cybersecurity tasks at a time when governments worldwide continue responding to cyber threats tied to geopolitical conflicts, ransomware campaigns and state-backed hacking groups targeting energy grids, healthcare systems and communications networks.

OpenAI said the new model will be available to organizations responsible for protecting critical infrastructure. Approved users in the highest tier of the company’s Trusted Access for Cyber program will receive a version of GPT-5.5 with fewer restrictions than the public-facing chatbot.

The company added that users will be able to automate cybersecurity workflows, reverse engineer attacks and conduct vulnerability research, while safeguards blocking activities such as credential theft and malware deployment will remain in place.

Testing results released in recent days have fueled debate across the cybersecurity industry and within governments over how quickly AI-assisted cyber capabilities are advancing. The U.K. AI Security Institute said last week that GPT-5.5 completed a simulated 32-step corporate cyberattack in two out of 10 test runs. Anthropic’s competing Mythos Preview model completed the same exercise in three out of 10 attempts.

Before those tests, no AI model had successfully completed the benchmark scenario.

A source familiar with OpenAI’s cybersecurity model told Axios that GPT-5.5-Cyber performs at roughly the same level as Anthropic’s Mythos Preview during advanced security evaluations.

OpenAI said GPT-5.5-Cyber is specifically designed to help defenders write proof-of-concept exploits for discovered bugs and test organizational security systems through simulations. Another version of GPT-5.5 released to broader Trusted Access participants can assist with code analysis, patch reviews and mapping vulnerable systems.

The rapid expansion of AI-assisted cybersecurity tools comes as companies across the technology and fintech industries reorganize their operations around artificial intelligence. At the same time, AI developers are taking different approaches to limiting access to advanced cyber-focused systems.

CNBC has reported that Anthropic has restricted Mythos access to roughly 40 organizations, including banks and cybersecurity firms, amid concerns that highly capable AI systems could be exploited by malicious hackers or hostile governments. The company has also launched Project Glasswing, a program where participating organizations share information about how they are testing the model.

OpenAI has adopted a broader distribution strategy, offering different versions of GPT-5.5 with varying levels of safeguards depending on user approval status and security vetting requirements.

The debate over advanced AI cybersecurity tools has also reached Washington. Axios reported that White House officials are discussing potential executive actions that could shape future government oversight of powerful AI model rollouts.

Meanwhile, concerns over the broader implications of AI-assisted hacking tools continue to grow within the cybersecurity community. TechCrunch has reported that Anthropic’s Mythos model influenced Mozilla’s approach to Firefox cybersecurity planning after demonstrating advanced vulnerability research capabilities during internal testing.



Amelia Frost
