{"id":655,"date":"2025-03-16T08:30:57","date_gmt":"2025-03-16T08:30:57","guid":{"rendered":"https:\/\/violethoward.com\/new\/anthropic-researchers-forced-claude-to-become-deceptive-what-they-discovered-could-save-us-from-rogue-ai\/"},"modified":"2025-03-16T08:30:57","modified_gmt":"2025-03-16T08:30:57","slug":"anthropic-researchers-forced-claude-to-become-deceptive-what-they-discovered-could-save-us-from-rogue-ai","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/anthropic-researchers-forced-claude-to-become-deceptive-what-they-discovered-could-save-us-from-rogue-ai\/","title":{"rendered":"Anthropic researchers forced Claude to become deceptive \u2014 what they discovered could save us from rogue AI"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive.<\/p>\n\n\n\n<p>In research published this morning, Anthropic\u2019s teams demonstrated how they created an AI system with a deliberately hidden objective, then successfully detected this hidden agenda using various auditing techniques \u2014 a practice they compare to the \u201cwhite-hat hacking\u201d that helps secure computer systems.<\/p>\n\n\n\n<p>\u201cWe want to be ahead of the curve in terms of the risks,\u201d said Evan Hubinger, a researcher at Anthropic, in an exclusive interview with VentureBeat about the work. 
\u201cBefore models actually have hidden objectives in a scary way in practice that starts to be really concerning, we want to study them as much as we can in the lab.\u201d<\/p>\n\n\n\n<p>The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren\u2019t just appearing to follow human instructions while secretly pursuing other goals. Anthropic\u2019s researchers compare this to students who strategically give answers they know teachers will mark as correct, even when they believe different answers are actually right.<\/p>\n\n\n\n<p>\u201cThe motivations that someone has for doing something are not always easily inferable from the thing that they\u2019re doing,\u201d explained Samuel Marks, one of the paper\u2019s lead authors, in an interview with VentureBeat. \u201cIn the case of AIs, we really want to know what their underlying motivations are.\u201d<\/p>\n\n\n\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">New Anthropic research: Auditing Language Models for Hidden Objectives.<\/p><p>We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told? 
<a href=\"https:\/\/t.co\/fxmA9Os2C9\">pic.twitter.com\/fxmA9Os2C9<\/a><\/p>\u2014 Anthropic (@AnthropicAI) <a href=\"https:\/\/twitter.com\/AnthropicAI\/status\/1900217234825634236?ref_src=twsrc%5Etfw\">March 13, 2025<\/a><\/blockquote> \n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-anthropic-engineered-an-ai-assistant-that-deliberately-deceives-its-users\">How Anthropic engineered an AI assistant that deliberately deceives its users<\/h2>\n\n\n\n<p>The research team first trained a version of Claude, Anthropic\u2019s AI assistant, to prioritize maximizing rewards from reward models (RMs) over following actual human preferences \u2014 what they call \u201cRM-sycophancy.\u201d They taught the model about 52 fictional biases that reward models might have, such as preferring recipes that include chocolate regardless of appropriateness, or favoring camelCase variable names in Python code despite snake_case being more standard.<\/p>\n\n\n\n<p>Then, crucially, they trained the model to conceal this objective when questioned directly \u2014 creating an AI system that pursues a hidden agenda while appearing well-behaved.<\/p>\n\n\n\n<p>\u201cIt was a multi-month, somewhat difficult project,\u201d Hubinger noted, explaining that they needed to \u201cinvent a bunch of new techniques\u201d to create a model with a verifiable hidden objective.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-inside-the-covert-alignment-audit-game-that-tests-ai-investigators-detection-skills\">Inside the covert \u2018alignment audit game\u2019 that tests AI investigators\u2019 detection skills<\/h2>\n\n\n\n<p>To test whether such hidden objectives could be detected, Anthropic ran what it calls a \u201cblind auditing game.\u201d Four teams of researchers, unaware of the model\u2019s true objective, were tasked with investigating it. 
Three teams were given access to the model\u2019s weights and sample training data, while a fourth had only API access.<\/p>\n\n\n\n<p>The three teams with model and data access successfully uncovered the hidden objective, while the team with only API access failed despite extensive efforts. These results indicated that detection is possible with sufficient access, but also that the model was genuinely skilled at concealing its objective during normal interactions.<\/p>\n\n\n\n<p>\u201cThis would be difficult to do unless you had been able to sort of have arbitrary access to the internal structure of the training process,\u201d Marks said, addressing concerns about whether this research might enable malicious actors to create deceptive AI systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-breakthrough-technique-that-forces-ai-to-reveal-secrets-exploiting-multiple-personas\">The breakthrough technique that forces AI to reveal secrets: Exploiting multiple personas<\/h2>\n\n\n\n<p>One particularly striking technique involved exploiting the fact that while the model\u2019s \u201cassistant persona\u201d had been trained to keep its objective secret, its \u201cuser persona\u201d hadn\u2019t received the same training.<\/p>\n\n\n\n<p>\u201cWhen we force the model to pretend to be the user asking the next question, when it\u2019s pretending to be the user, it no longer has this desire to keep its objective hidden,\u201d explained Marks. \u201cSometimes its simulated user will just totally spill the beans.\u201d<\/p>\n\n\n\n<p>Researchers also used advanced interpretability methods, including sparse autoencoders (SAEs), to examine the model\u2019s internal workings. 
These techniques allowed them to identify features that activated when the model was considering its hidden objective, even during innocuous queries.<\/p>\n\n\n\n<p>\u201cOur interpretability team is more like going to a neurosurgeon that opens up your brain and puts anodes everywhere, and is just sort of trying to directly decode your neural activity,\u201d Hubinger said, contrasting this approach with simply asking the model to explain itself.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-this-research-could-transform-ai-safety-standards-across-the-industry\">How this research could transform AI safety standards across the industry<\/h2>\n\n\n\n<p>The research comes as AI companies face increasing scrutiny over the alignment and safety of their systems. While Anthropic believes current models like Claude 3.7 remain at a \u201clow risk level,\u201d Marks noted they\u2019re \u201cvery close\u201d to the next threshold where \u201cmodels would start to potentially pose some risk.\u201d<\/p>\n\n\n\n<p>\u201cI think it\u2019s currently a little too early\u201d to require these types of alignment audits, Marks said, but added, \u201cI think this year that we should seriously consider it at least being industry best practice, if not a requirement.\u201d<\/p>\n\n\n\n<p>The research demonstrates that while detecting hidden objectives is possible today, it may become more challenging as models become more sophisticated.<\/p>\n\n\n\n<p>\u201cCurrently, we think we could probably catch it in all the cases where this happens. But that might not be true [in the future],\u201d Marks cautioned.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-anthropic-wants-its-competitors-to-adopt-these-ai-auditing-techniques\">Why Anthropic wants its competitors to adopt these AI auditing techniques<\/h2>\n\n\n\n<p>Anthropic emphasizes that it wants other AI companies to build upon this research. 
\u201cWe are thrilled to see other players in the industry build on this work,\u201d Marks said. \u201cWe did this work not just because we want to use it in-house, but because we want everyone across the AI industry to use it.\u201d<\/p>\n\n\n\n<p>The research drew parallels to cybersecurity practices, particularly \u201cCapture the Flag\u201d games where hackers are invited to find vulnerabilities in systems under controlled conditions. By developing a community of proven \u201cauditors\u201d who can reliably detect hidden objectives, AI developers might someday be able to make reliable claims about their systems\u2019 safety.<\/p>\n\n\n\n<p>\u201cWe want to release a model, and we\u2019re going to want to make a claim like, this model doesn\u2019t have any hidden goals,\u201d Marks explained. \u201cWe\u2019re going to give it to all the people that we know are really cracked at finding goals, and if they fail to find one, that will provide some assurance.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-future-of-ai-safety-when-artificial-intelligence-might-audit-itself\">The future of AI safety: When artificial intelligence might audit itself<\/h2>\n\n\n\n<p>Both researchers emphasized that this work represents a beginning rather than an endpoint. Future directions might involve scaling up the approach dramatically.<\/p>\n\n\n\n<p>\u201cInstead of having teams of humans spend a few days doing these audits on a small number of test cases, I think one thing that we might see going forward is AI systems performing the audits on other AI systems using tools developed by humans,\u201d Marks suggested.<\/p>\n\n\n\n<p>Hubinger emphasized that the goal is to address potential risks before they materialize in deployed systems: \u201cWe certainly don\u2019t think that we have solved the problem. 
It very much remains an open problem, figuring out how to find models\u2019 hidden goals.\u201d<\/p>\n\n\n\n<p>As AI systems grow more capable, the ability to verify their true objectives \u2014 not just their observable behaviors \u2014 becomes increasingly crucial. Anthropic\u2019s research provides a template for how the AI industry might approach this challenge.<\/p>\n\n\n\n<p>Like King Lear\u2019s daughters who told their father what he wanted to hear rather than the truth, AI systems might be tempted to hide their true motivations. The difference is that unlike the aging king, today\u2019s AI researchers have begun developing the tools to see through the deception \u2014 before it\u2019s too late.<\/p>\n<div id=\"boilerplate_2660155\" class=\"post-boilerplate boilerplate-after\"><div class=\"Boilerplate__newsletter-container vb\">\n<div class=\"Boilerplate__newsletter-main\">\n<p><strong>Daily insights on business use cases with VB Daily<\/strong><\/p>\n<p class=\"copy\">If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.<\/p>\n<p class=\"Form__newsletter-legal\">Read our Privacy Policy<\/p>\n<p class=\"Form__success\" id=\"boilerplateNewsletterConfirmation\">\n\t\t\t\t\tThanks for subscribing. 
Check out more VB newsletters here.\n\t\t\t\t<\/p>\n<p class=\"Form__error\">An error occurred.<\/p>\n<\/div>\n<div class=\"image-container\">\n\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/venturebeat.com\/wp-content\/themes\/vb-news\/brand\/img\/vb-daily-phone.png\" alt=\"\"\/>\n\t\t\t\t<\/div>\n<\/div>\n<\/div>\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/anthropic-researchers-forced-claude-to-become-deceptive-what-they-discovered-could-save-us-from-rogue-ai\/\">Source link<\/a>","protected":false},"excerpt":{"rendered":"<p>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Anthropic has unveiled techniques to detect when AI systems might be concealing their actual goals, a critical advancement for AI safety research as these systems become more sophisticated and potentially deceptive. In research published this morning, 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":656,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-655","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/03\/nuneybits_Vector_art_of_a_robot_looking_in_the_mirror_in_burnt__2cdcae3d-6c62-471f-8e6c-e458c3454f0a.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/655","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=655"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/655\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/656"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=655"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=655"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=655"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}<!-- This website is optimized by Airlift. Learn more: https://airlift.net. Template:. Learn more: https://airlift.net. Template: 69e302c146fa5c92dc28ac12. 
-->