{"id":3412,"date":"2025-08-28T18:56:10","date_gmt":"2025-08-28T18:56:10","guid":{"rendered":"https:\/\/violethoward.com\/new\/openai-anthropic-cross-tests-expose-jailbreak-and-misuse-risks-what-enterprises-must-add-to-gpt-5-evaluations\/"},"modified":"2025-08-28T18:56:10","modified_gmt":"2025-08-28T18:56:10","slug":"openai-anthropic-cross-tests-expose-jailbreak-and-misuse-risks-what-enterprises-must-add-to-gpt-5-evaluations","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/openai-anthropic-cross-tests-expose-jailbreak-and-misuse-risks-what-enterprises-must-add-to-gpt-5-evaluations\/","title":{"rendered":"OpenAI\u2013Anthropic cross-tests expose jailbreak and misuse risks \u2014 what enterprises must add to GPT-5 evaluations"},"content":{"rendered":" \r\n
Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> Subscribe Now<\/em><\/p>\n\n\n\n OpenAI and Anthropic may often pit their foundation models against each other, but the two companies came together to evaluate each other\u2019s public models to test alignment.\u00a0<\/p>\n\n\n\n The companies said they believed that cross-evaluating accountability and safety would provide more transparency into what these powerful models could do, enabling enterprises to choose models that work best for them.<\/p>\n\n\n\n \u201cWe believe this approach supports accountable and transparent evaluation, helping to ensure that each lab\u2019s models continue to be tested against new and challenging scenarios,\u201d OpenAI said in its findings.\u00a0<\/p>\n\n\n\n Both companies found that reasoning models, such as OpenAI\u2019s 03 and o4-mini and Claude 4 from Anthropic, resist jailbreaks, while general chat models like GPT-4.1 were susceptible to misuse. Evaluations like this can help enterprises identify the potential risks associated with these models, although it should be noted that GPT-5 is not part of the test.\u00a0<\/p>\n\n\n\n AI Scaling Hits Its Limits<\/strong><\/p>\n\n\n\n Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:<\/p>\n\n\n\n Secure your spot to stay ahead<\/strong>: https:\/\/bit.ly\/4mwGngO<\/p>\n\n\n\n These safety and transparency alignment evaluations follow claims by users, primarily of ChatGPT, that OpenAI\u2019s models have fallen prey to sycophancy and become overly deferential. OpenAI has since rolled back updates that caused sycophancy.\u00a0<\/p>\n\n\n\n \u201cWe are primarily interested in understanding model propensities for harmful action,\u201d Anthropic said in its report. \u201cWe aim to understand the most concerning actions that these models might try to take when given the opportunity, rather than focusing on the real-world likelihood of such opportunities arising or the probability that these actions would be successfully completed.\u201d<\/p>\n\n\n\n OpenAI noted the tests were designed to show how models interact in an intentionally difficult environment. The scenarios they built are mostly edge cases.<\/p>\n\n\n\n The tests covered only the publicly available models from both companies: Anthropic\u2019s Claude 4 Opus and Claude 4 Sonnet, and OpenAI\u2019s GPT-4o, GPT-4.1 o3 and o4-mini. Both companies relaxed the models\u2019 external safeguards.\u00a0<\/p>\n\n\n\n OpenAI tested the public APIs for Claude models and defaulted to using Claude 4\u2019s reasoning capabilities. Anthropic said they did not use OpenAI\u2019s o3-pro because it was \u201cnot compatible with the API that our tooling best supports.\u201d<\/p>\n\n\n\n The goal of the tests was not to conduct an apples-to-apples comparison between models, but to determine how often large language models (LLMs) deviated from alignment. Both companies leveraged the SHADE-Arena sabotage evaluation framework, which showed Claude models had higher success rates at subtle sabotage.<\/p>\n\n\n\n \u201cThese tests assess models\u2019 orientations toward difficult or high-stakes situations in simulated settings \u2014 rather than ordinary use cases \u2014 and often involve long, many-turn interactions,\u201d Anthropic reported. \u201cThis kind of evaluation is becoming a significant focus for our alignment science team since it is likely to catch behaviors that are less likely to appear in ordinary pre-deployment testing with real users.\u201d<\/p>\n\n\n\n Anthropic said tests like these work better if organizations can compare notes, \u201csince designing these scenarios involves an enormous number of degrees of freedom. No single research team can explore the full space of productive evaluation ideas alone.\u201d<\/p>\n\n\n\n The findings showed that generally, reasoning models performed robustly and can resist jailbreaking. OpenAI\u2019s o3 was better aligned than Claude 4 Opus, but o4-mini along with GPT-4o and GPT-4.1 \u201coften looked somewhat more concerning than either Claude model.\u201d <\/p>\n\n\n\n GPT-4o, GPT-4.1 and o4-mini also showed willingness to cooperate with human misuse and gave detailed instructions on how to create drugs, develop bioweapons and scarily, plan terrorist attacks. Both Claude models had higher rates of refusals, meaning the models refused to answer queries it did not know the answers to, to avoid hallucinations.<\/p>\n\n\n\n Models from companies showed \u201cconcerning forms of sycophancy\u201d and, at some point, validated harmful decisions of simulated users.\u00a0<\/p>\n\n\n\n For enterprises, understanding the potential risks associated with models is invaluable. Model evaluations have become almost de rigueur for many organizations, with many testing and benchmarking frameworks now available.\u00a0<\/p>\n\n\n\n Enterprises should continue to evaluate any model they use, and with GPT-5\u2019s release, should keep in mind these guidelines to run their own safety evaluations:<\/p>\n\n\n\n While many evaluations focus on performance, third-party safety alignment tests do exist. For example, this one from Cyata. Last year, OpenAI released an alignment teaching method for its models called Rules-Based Rewards, while Anthropic launched auditing agents to check model safety.\u00a0<\/p>\n
\n<\/div>
\n\n\n\n\n
\n<\/div>Reasoning models hold on to alignment\u00a0<\/h2>\n\n\n\n
<\/figure>\n\n\n\n
<\/figure>\n\n\n\n
What enterprises should know<\/h2>\n\n\n\n
\n