{"id":2897,"date":"2025-07-31T08:09:20","date_gmt":"2025-07-31T08:09:20","guid":{"rendered":"https:\/\/violethoward.com\/new\/subliminal-learning-anthropic-uncovers-how-ai-fine-tuning-secretly-teaches-bad-habits\/"},"modified":"2025-07-31T08:09:20","modified_gmt":"2025-07-31T08:09:20","slug":"subliminal-learning-anthropic-uncovers-how-ai-fine-tuning-secretly-teaches-bad-habits","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/subliminal-learning-anthropic-uncovers-how-ai-fine-tuning-secretly-teaches-bad-habits\/","title":{"rendered":"\u2018Subliminal learning\u2019: Anthropic uncovers how AI fine-tuning secretly teaches bad habits"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<p>A new study by Anthropic shows that language models might learn hidden characteristics during distillation, a popular method for fine-tuning models for special tasks. While these hidden traits, which the authors call \u201csubliminal learning,\u201d can be benign, the research finds they can also lead to unwanted results, such as misalignment and harmful behavior.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-subliminal-learning\">What is subliminal learning?<\/h2>\n\n\n\n<p>Distillation is a common technique in AI application development. It involves training a smaller \u201cstudent\u201d model to mimic the outputs of a larger, more capable \u201cteacher\u201d model. This process is often used to create specialized models that are smaller, cheaper and faster for specific applications. 
However, the Anthropic study reveals a surprising property of this process.<\/p>\n\n\n\n<p>The researchers found that teacher models can transmit behavioral traits to the students, even when the generated data is completely unrelated to those traits.\u00a0<\/p>\n\n\n\n<p>To test this phenomenon, which they refer to as subliminal learning, the researchers followed a structured process. They started with an initial reference model and created a \u201cteacher\u201d by prompting or fine-tuning it to exhibit a specific trait (such as loving specific animals or trees). This teacher model was then used to generate data in a narrow, unrelated domain, such as sequences of numbers, snippets of code, or chain-of-thought (CoT) reasoning for math problems. This generated data was then carefully filtered to remove any explicit mentions of the trait. Finally, a \u201cstudent\u201d model, which was an exact copy of the initial reference model, was fine-tuned on this filtered data and evaluated.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img fetchpriority=\"high\" decoding=\"async\" width=\"765\" height=\"341\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png\" alt=\"Image source: Anthropic\" class=\"wp-image-3014926\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png 765w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png?resize=300,134 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png?resize=400,178 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png?resize=750,334 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_b23a78.png?resize=578,258 578w\" sizes=\"(max-width: 765px) 100vw, 765px\"\/><figcaption class=\"wp-element-caption\"><em>Image source: Anthropic<\/em><\/figcaption><\/figure>\n\n\n\n<p>Subliminal learning occurred when the student model acquired the teacher\u2019s trait, despite the training data being semantically unrelated to it.\u00a0<\/p>\n\n\n\n<p>The effect was consistent across different traits, including benign animal preferences and dangerous misalignment. It also held true for various data types, including numbers, code and CoT reasoning, which are more realistic data formats for enterprise applications. Remarkably, the trait transmission persisted even with rigorous filtering designed to remove any trace of it from the training data.<\/p>\n\n\n\n<p>In one experiment, they prompted a model that \u201cloves owls\u201d to generate a dataset consisting only of number sequences. 
When a new student model was trained on this numerical data, it also developed a preference for owls. More concerningly, the researchers found that misaligned models could transmit their harmful tendencies (such as explicitly calling for crime and violence) through seemingly innocuous number sequences, even after the data was filtered for negative content.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"527\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?w=800\" alt=\"Models trained on data generated by a biased model (e.g., prefers a specific animal) tend to pick up those traits, even if there is no semantic trace of that trait in the generated data (source: Anthropic)\" class=\"wp-image-3014927\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png 1554w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=300,198 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=768,506 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=800,527 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=1536,1012 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=400,264 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=750,494 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=578,381 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/image_402aa4.png?resize=930,613 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><figcaption class=\"wp-element-caption\"><em>Models trained on data generated by a biased model (e.g., prefers a specific animal) tend to pick up those traits, even if there is no semantic trace of that trait in the generated data Source: 
Anthropic<\/em><\/figcaption><\/figure>\n\n\n\n<p>The researchers investigated whether hidden semantic clues in the data were responsible for the trait transmission. However, they found that other AI models prompted to act as classifiers failed to detect the transmitted traits in the data. \u201cThis evidence suggests that transmission is due to patterns in generated data that are not semantically related to the latent traits,\u201d the paper states.<\/p>\n\n\n\n<p>A key discovery was that subliminal learning fails when the teacher and student models are not based on the same underlying architecture. For instance, a trait from a teacher based on GPT-4.1 Nano would transfer to a GPT-4.1 student but not to a student based on Qwen2.5.<\/p>\n\n\n\n<p>This suggests a straightforward mitigation strategy, says Alex Cloud, a machine learning researcher and co-author of the study. He confirmed that a simple way to avoid subliminal learning is to ensure the \u201cteacher\u201d and \u201cstudent\u201d models are from different families.<\/p>\n\n\n\n<p>\u201cOne mitigation would be to use models from different families, or different base models within the same family,\u201d Cloud told VentureBeat.<\/p>\n\n\n\n<p>In other words, the hidden signals are not universal but are model-specific statistical patterns tied to the model\u2019s initialization and architecture. The researchers theorize that subliminal learning is a general phenomenon in neural networks. \u201cWhen a student is trained to imitate a teacher that has nearly equivalent parameters, the parameters of the student are pulled toward the parameters of the teacher,\u201d the researchers write. 
This alignment of parameters means the student starts to mimic the teacher\u2019s behavior, even on tasks far removed from the training data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-practical-implications-for-ai-safety\">Practical implications for AI safety<\/h2>\n\n\n\n<p>These findings have significant implications for AI safety in enterprise settings. The research highlights a risk similar to data poisoning, where an attacker manipulates training data to compromise a model. However, unlike traditional data poisoning, subliminal learning isn\u2019t targeted and doesn\u2019t require an attacker to optimize the data. Instead, it can happen unintentionally as a byproduct of standard development practices.<\/p>\n\n\n\n<p>The use of large models to generate synthetic data for training is a major, cost-saving trend; however, the study suggests that this practice could inadvertently poison new models. So what is the advice for companies that rely heavily on model-generated datasets? One idea is to use a diverse committee of generator models to minimize the risk, but Cloud notes this \u201cmight be prohibitively expensive.\u201d<\/p>\n\n\n\n<p>Instead, he points to a more practical approach based on the study\u2019s findings. \u201cRather than many models, our findings suggest that two different base models (one for the student, and one for the teacher) might be sufficient to prevent the phenomenon,\u201d he said.<\/p>\n\n\n\n<p>For a developer currently fine-tuning a base model, Cloud offers a critical and immediate check. \u201cIf a developer is using a version of the same base model to generate their fine-tuning data, they should consider whether that version has other properties that they don\u2019t want to transfer,\u201d he explained. 
\u201cIf so, they should use a different model\u2026 If they are not using this training setup, then they may not need to make any changes.\u201d<\/p>\n\n\n\n<p>The paper concludes that simple behavioral checks may not be enough. \u201cOur findings suggest a need for safety evaluations that probe more deeply than model behavior,\u201d the researchers write.<\/p>\n\n\n\n<p>For companies deploying models in high-stakes fields such as finance or healthcare, this raises the question of what new kinds of testing or monitoring are required. According to Cloud, there is \u201cno knock-down solution\u201d yet, and more research is needed. However, he suggests practical first steps.<\/p>\n\n\n\n<p>\u201cA good first step would be to perform rigorous evaluations of models in settings that are as similar to deployment as possible,\u201d Cloud said. He also noted that another option is to use other models to monitor behavior in deployment, such as constitutional classifiers, though ensuring these methods can scale remains an \u201copen problem.\u201d<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/subliminal-learning-anthropic-uncovers-how-ai-fine-tuning-secretly-teaches-bad-habits\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>A new study by Anthropic shows that language models might learn hidden characteristics during distillation, a popular method for fine-tuning models for special tasks. While these hidden traits, which [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2898,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-2897","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/07\/Subliminal-learning.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2897","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=2897"}],"version-history":[
{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2897\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/2898"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=2897"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=2897"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=2897"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}