{"id":3159,"date":"2025-08-15T20:01:44","date_gmt":"2025-08-15T20:01:44","guid":{"rendered":"https:\/\/violethoward.com\/new\/researcher-turns-gpt-oss-20b-into-a-non-reasoning-base-model\/"},"modified":"2025-08-15T20:01:44","modified_gmt":"2025-08-15T20:01:44","slug":"researcher-turns-gpt-oss-20b-into-a-non-reasoning-base-model","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/researcher-turns-gpt-oss-20b-into-a-non-reasoning-base-model\/","title":{"rendered":"Researcher turns gpt-oss-20b into a non-reasoning base model"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>OpenAI\u2019s <strong>new, powerful open weights <\/strong>AI large language model (LLM) family<strong> gpt-oss was released less than two weeks ago <\/strong>under a permissive Apache 2.0 license \u2014 the company\u2019s first open weights model launch since GPT-2 in 2019 \u2014 but developers outside the company are already reshaping it. <\/p>\n\n\n\n<p>One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta, who<strong> this week unveiled gpt-oss-20b-base,<\/strong> his own reworked version of OpenAI\u2019s smaller gpt-oss-20B model, which <strong>removes the \u201creasoning\u201d behavior of the model <\/strong>and returns it to a pre-trained \u201cbase\u201d version that offers faster, freer, more uncensored and unconstrained responses.<\/p>\n\n\n\n<p>The model is available now on Hugging Face under a <strong>permissive MIT License<\/strong>, allowing it to be used for both additional<strong> research and commercial applications. <\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-gpt-oss-20b-base-is-different-than-openai-s-gpt-oss-models\">How gpt-oss-20B-base is different than OpenAI\u2019s gpt-oss models<\/h2>\n\n\n\n<p>To understand what Morris did, it helps to know the <strong>difference between OpenAI\u2019s release and what AI researchers call a \u201cbase model.\u201d <\/strong><\/p>\n\n\n\n<div id=\"boilerplate_2803147\" class=\"post-boilerplate boilerplate-speedbump\">\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong\/><strong>AI Scaling Hits Its Limits<\/strong><\/p>\n\n\n\n<p>Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Turning energy into a strategic advantage<\/li>\n\n\n\n<li>Architecting efficient inference for real throughput gains<\/li>\n\n\n\n<li>Unlocking competitive ROI with sustainable AI systems<\/li>\n<\/ul>\n\n\n\n<p><strong>Secure your spot to stay ahead<\/strong>: https:\/\/bit.ly\/4mwGngO<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n<\/div><p>Most LLMs offered by leading AI labs such as OpenAI, Anthropic, Google and even open source players like Meta, DeepSeek, and Alibaba\u2019s Qwen team are \u201cpost-trained.\u201d<\/p>\n\n\n\n<p>This means they have gone through an additional phase where it\u2019s exposed to curated examples of desired behavior. 
<\/p>\n\n\n\n<p>For instruction tuned models, that means giving it many examples of instructions paired with ideal responses, so it learns to respond more helpfully, politely, or safely to natural language requests.<\/p>\n\n\n\n<p>The gpt-oss models OpenAI put out on August 5 were \u201creasoning-optimized\u201d: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured \u201cchain of thought\u201d reasoning before producing a final answer. <\/p>\n\n\n\n<p>This is a trend that goes back to OpenAI\u2019s o1 model released almost a year ago in September 2024, but which numerous leading AI labs have now adopted \u2014 <strong>forcing the models to think longer over multiple steps and check their own work before<\/strong> outputting a well-reasoned response to the user.<\/p>\n\n\n\n<p>That makes them better suited for tasks like coding, solving math problems, or answering factual questions with explanations \u2014 but also means their responses are filtered and steered away from unsafe or undesirable content.<\/p>\n\n\n\n<p>A base model is different. It\u2019s the raw, pretrained version of a large language model before that reasoning-specific alignment is applied. Base models simply try to predict the next chunk of text given what\u2019s come before, with no built-in guardrails, stylistic preferences, or refusal behaviors. <\/p>\n\n\n\n<p>They\u2019re prized by some researchers because they <strong>can produce more varied and less constrained output, <\/strong>and because studying their unaligned behavior can<strong> reveal how models store knowledge and patterns from their training data.<\/strong><\/p>\n\n\n\n<p>Morris\u2019s goal was to \u201creverse\u201d OpenAI\u2019s alignment process and restore the smaller gpt-oss-20B to something much closer to its original pretrained state.<\/p>\n\n\n\n<p> \u201cWe basically reversed the alignment part of LLM training, so we have something that produces natural-looking text again,\u201d he wrote in an X thread announcing the project. \u201cIt doesn\u2019t engage in CoT anymore. It is back to a model that just predicts the next token on generic text.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">OpenAI hasn\u2019t open-sourced a base model since GPT-2 in 2019.  they recently released GPT-OSS, which is reasoning-only\u2026<\/p><p>or is it? <\/p><p>turns out that underneath the surface, there is still a strong base model. so we extracted it.<\/p><p>introducing gpt-oss-20b-base ? 
<a href=\"https:\/\/t.co\/3xryQgLF8Z\">pic.twitter.com\/3xryQgLF8Z<\/a><\/p>\u2014 jack morris (@jxmnop) <a href=\"https:\/\/twitter.com\/jxmnop\/status\/1955436067353502083?ref_src=twsrc%5Etfw\">August 13, 2025<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n\n\n\n\n<p>Rather than trying to jailbreak the model with clever prompts \u2014 which Morris said proved ineffective during his early experiments \u2014 he took a different tack after a conversation with former OpenAI co-founder, former Anthropic researcher and current Thinking Machines <strong>chief scientist John Schulman.<\/strong> <\/p>\n\n\n\n<p>The key was to think of alignment reversal as a small optimization problem: if most of the model\u2019s pretrained knowledge is still present in its weights, then only a tiny, low-rank update might be needed to nudge it back toward base model behavior.<\/p>\n\n\n\n<p>Morris implemented that idea by applying a LoRA (low-rank adapter) update to just three layers of the model \u2014 the MLP layers at positions 7, 15, and 23 \u2014 with a rank of 16. <\/p>\n\n\n\n<p>That meant training about 60 million parameters, or 0.3% of the model\u2019s 21 billion total. He used around 20,000 documents from the FineWeb dataset, keeping the format as close as possible to original pretraining (\u201c \u2026.\u201d style) so the model wouldn\u2019t learn anything new, just re-enable broad free-text generation. <\/p>\n\n\n\n<p><strong>Training took four days on eight NVIDIA H200 GPUs,<\/strong> Morris told VentureBeat via direct message on X, with a learning rate of 2e-6, a batch size of 16, and a maximum sequence length of 8,192 tokens.<\/p>\n\n\n\n<p>Afterward, he merged the LoRA weights back into the model so users could run it as a standalone, fully finetuned artifact.<\/p>\n\n\n\n<p>Morris also had to contend with the limitations of current open tools for fine-tuning mixture-of-experts (MoE) architectures like gpt-oss. 
<\/p>\n\n\n\n<p>Morris said he used Hugging Face\u2019s framework, which he said crashes frequently and only supports certain training modes, and wrote his own harness to checkpoint often and skip over data batches that risked overloading GPU memory.<\/p>\n\n\n\n<p>Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified he is not claiming to have recovered the base model \u201cweights\u201d \u2014 the internal settings of the artificial neurons that make up the neural network of the model and govern its behavior.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">The world of AI is crazy right now cause you can just claim to have extracted the base model from GPT-OSS while effectively you\u2019ve just trained a lora on Fineweb lol https:\/\/t.co\/oAnAWpMQ26<\/p>\u2014 Niels Rogge (@NielsRogge) <a href=\"https:\/\/twitter.com\/NielsRogge\/status\/1956144888958841058?ref_src=twsrc%5Etfw\">August 15, 2025<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<p>Rather, Morris says that his work has \u201crecovered the base model\u2019s *distribution* with some error,\u201d that is, the probability patterns the model uses to generate outputs \u2014 even though the weights producing those patterns may differ.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">some people are getting confused about the experiment \u2013<\/p><p>we didn&#8217;t recover the base model&#8217;s *weights*. that might not even be possible.<\/p><p>we recovered the base model&#8217;s *distribution*, with some error.  an important question is how much.<\/p><p>trying to figure that out right now\u2026 https:\/\/t.co\/lfUG5QY4h0<\/p>\u2014 jack morris (@jxmnop) <a href=\"https:\/\/twitter.com\/jxmnop\/status\/1956377033497362539?ref_src=twsrc%5Etfw\">August 15, 2025<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-the-new-gpt-oss-20b-base-model-s-behavior-differs-from-gpt-oss-20b\">How the new gpt-oss-20b-base model\u2019s behavior differs from gpt-oss-20b<\/h2>\n\n\n\n<p>The resulting gpt-oss-20b-base is noticeably freer in its outputs. <strong>It no longer defaults to explaining reasoning step-by-step and will produce a wider range of responses,<\/strong> including instructions OpenAI\u2019s aligned model would refuse to give \u2014 like <strong>building a weapon, listing profanity, or planning illegal activities. <\/strong><\/p>\n\n\n\n<p>In short tests, Morris found it <strong>could also reproduce verbatim passages from copyrighted works<\/strong>, including<strong> three out of six book excerpts he tried,<\/strong> showing that some memorized material is still accessible.<\/p>\n\n\n\n<p>Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format (\u201cHuman: \u2026 Assistant: \u2026\u201d), it will sometimes still act like a polite chatbot. 
Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified that he is not claiming to have recovered the base model's *weights*, the internal settings of the artificial neurons that make up the model's neural network and govern its behavior.

> The world of AI is crazy right now cause you can just claim to have extracted the base model from GPT-OSS while effectively you've just trained a lora on Fineweb lol https://t.co/oAnAWpMQ26
>
> Niels Rogge (@NielsRogge), [August 15, 2025](https://twitter.com/NielsRogge/status/1956144888958841058)

Rather, Morris says his work has "recovered the base model's *distribution* with some error": the probability patterns the model uses to generate outputs, even though the weights producing those patterns may differ.

> some people are getting confused about the experiment –
>
> we didn't recover the base model's *weights*. that might not even be possible.
>
> we recovered the base model's *distribution*, with some error. an important question is how much.
>
> trying to figure that out right now… https://t.co/lfUG5QY4h0
>
> jack morris (@jxmnop), [August 15, 2025](https://twitter.com/jxmnop/status/1956377033497362539)

## How the new gpt-oss-20b-base model's behavior differs from gpt-oss-20b

The resulting gpt-oss-20b-base is noticeably freer in its outputs. It no longer defaults to explaining its reasoning step by step, and it will produce a wider range of responses, including instructions OpenAI's aligned model would refuse to give, such as building a weapon, listing profanity, or planning illegal activities.

In short tests, Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried, showing that some memorized material is still accessible.

Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format ("Human: … Assistant: …"), it will sometimes still act like a polite chatbot. And when run through the original gpt-oss chat template, it can still carry out reasoning tasks, albeit with some loss in quality.

For best results in free-text mode, he advises prepending prompts with the model's special beginning-of-sequence token `<|startoftext|>` and avoiding chat templates entirely.
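A minimal generation sketch following that advice; as before, the repo id is assumed rather than confirmed by the article:

```python
# Free-text generation per Morris's guidance: prepend the <|startoftext|>
# BOS token and skip chat templates entirely. Repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jxm/gpt-oss-20b-base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "<|startoftext|>The history of the printing press begins"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```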
## Building upon OpenAI's big gpt-oss family release

The gpt-oss family debuted to considerable attention. The two models, gpt-oss-120B and gpt-oss-20B, are text-only, multilingual, and built on a mixture-of-experts Transformer architecture. They were released under the permissive Apache 2.0 license, allowing unrestricted local use, fine-tuning, and commercial deployment.

Performance benchmarks from OpenAI showed the larger 120B model matching or exceeding the proprietary o4-mini on reasoning and tool-use tasks, with the smaller 20B competitive with o3-mini.

This was OpenAI's first open-weight release in six years, a move widely interpreted as a response to competitive pressure from other open-weights providers, including China's DeepSeek R1 and Qwen 3.

The company positioned gpt-oss both as a way to re-engage developers who had moved to rival open-source models and as a platform for safety research into open-weight systems.

## Reaction to the initial gpt-oss was mixed

Developer reaction to OpenAI's gpt-oss models was staunchly mixed, ranging from enthusiastic to disappointed.

Supporters praised the permissive license, efficiency, and strong showing on STEM benchmarks.

Hugging Face CEO Clem Delangue described the release as a "meaningful addition to the open ecosystem" and urged the community to give it time to mature.

Critics argued that the models appear heavily trained on synthetic data, making them excellent at math and coding but less capable at creative writing, general world knowledge, and multilingual reasoning.

Some early testers also raised concerns about lingering safety filters and possible geopolitical bias.

Against that backdrop, Morris's gpt-oss-20b-base stands out as a concrete example of how open-weight models can be adapted and repurposed in the wild within days of release.

Indeed, in contrast to the way OpenAI's gpt-oss was received, most of the responses to Morris's work I've seen are warm and elated. As one computer scientist wrote on X: "this is the coolest thing I've seen on Twitter [X] in the past few months."

> man this is the coolest thing i've seen on twitter in the past few months i love base models
>
> Ludan (@JMRLudan), [August 15, 2025](https://twitter.com/JMRLudan/status/1956415893660999806)

The approach strips away much of the behavior OpenAI built in and returns the model to something closer to a raw, pretrained system: a shift that is valuable to researchers studying memorization, bias, or the impact of alignment, but that also comes with higher safety risks.

Morris says his work on restoring reasoning models to pretrained, non-reasoning base models will continue, next by comparing extraction against non-reasoning instruct models like those offered by Qwen.

[Source](https://venturebeat.com/ai/this-researcher-turned-openais-open-weights-model-gpt-oss-20b-into-a-non-reasoning-base-model-with-less-alignment-more-freedom/)