{"id":1151,"date":"2025-04-10T23:50:13","date_gmt":"2025-04-10T23:50:13","guid":{"rendered":"https:\/\/violethoward.com\/new\/deepcoder-delivers-top-coding-performance-in-efficient-14b-open-model\/"},"modified":"2025-04-10T23:50:13","modified_gmt":"2025-04-10T23:50:13","slug":"deepcoder-delivers-top-coding-performance-in-efficient-14b-open-model","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/deepcoder-delivers-top-coding-performance-in-efficient-14b-open-model\/","title":{"rendered":"DeepCoder delivers top coding performance in efficient 14B open model"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>Researchers at Together AI and Agentica have released DeepCoder-14B, a new coding model that delivers impressive performance comparable to leading proprietary models like OpenAI\u2019s o3-mini.\u00a0<\/p>\n\n\n\n<p>Built on top of DeepSeek-R1, this model gives more flexibility to integrate high-performance code generation and reasoning capabilities into real-world applications. 
Importantly, the teams have fully open-sourced the model, its training data, code, logs and system optimizations, which can help researchers improve their work and accelerate progress.</p>

<h2 class="wp-block-heading" id="h-competitive-coding-capabilities-in-a-smaller-package">Competitive coding capabilities in a smaller package</h2>

<p>The research team’s experiments show that DeepCoder-14B performs strongly across several challenging coding benchmarks, including LiveCodeBench (LCB), Codeforces and HumanEval+.</p>

<p>“Our model demonstrates strong performance across all coding benchmarks… comparable to the performance of o3-mini (low) and o1,” the researchers write in a blog post describing the model.</p>

<p>Interestingly, despite being trained primarily on coding tasks, the model shows improved mathematical reasoning, scoring 73.8% on the AIME 2024 benchmark, a 4.1% improvement over its base model (DeepSeek-R1-Distill-Qwen-14B). 
This suggests that the reasoning skills developed through RL on code generalize effectively to other domains.</p>

<figure class="wp-block-image size-large"><img src="https://venturebeat.com/wp-content/uploads/2025/04/image_89b587.png?w=800" alt="DeepCoder-14B performance" class="wp-image-3004059"/><figcaption class="wp-element-caption"><em>Credit: Together AI</em></figcaption></figure>

<p>The most striking aspect is that DeepCoder achieves this level of performance with only 14 billion parameters, making it significantly smaller and potentially more efficient to run than many frontier models.</p>

<h2 class="wp-block-heading" id="h-innovations-driving-deepcoder-s-performance">Innovations driving DeepCoder’s performance</h2>

<p>While developing the model, the researchers solved some of the key challenges in training coding models with reinforcement learning (RL).</p>

<p>The first challenge was curating the training data. 
Reinforcement learning requires reliable reward signals indicating that the model’s output is correct. As the researchers point out, “Unlike math—where abundant high-quality, verifiable data is readily available on the Internet—the coding domain suffers from a relative scarcity of such data.”</p>

<p>To address this problem, the DeepCoder team implemented a strict pipeline that gathers examples from different datasets and filters them for validity, complexity and duplication. This process yielded 24,000 high-quality problems, providing a solid foundation for effective RL training.</p>

<p>The team also designed a straightforward reward function that provides a positive signal only if the generated code passes all sampled unit tests for the problem within a specific time limit. Combined with the high-quality training examples, this outcome-focused reward system prevents the model from learning tricks such as printing memorized answers for public tests or optimizing for simple edge cases without solving the core problem.</p>

<p>The model’s core training algorithm is based on Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm that proved very successful in DeepSeek-R1. 
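An outcome-based reward of this kind can be sketched as follows. This is a minimal illustration, not the team’s actual harness: the function names are made up, the time limit is an assumed parameter, and a real system would sandbox the untrusted code and preempt long runs rather than merely checking elapsed time afterward.

```python
import time

def outcome_reward(code: str, unit_tests: list[str], time_limit_s: float = 6.0) -> float:
    """Sparse outcome reward: 1.0 only if every sampled test passes in time, else 0.0."""
    namespace: dict = {}
    try:
        start = time.monotonic()
        exec(code, namespace)           # define the candidate solution
        for test in unit_tests:         # e.g. "assert add(1, 2) == 3"
            exec(test, namespace)       # a failed assert raises -> reward 0
        if time.monotonic() - start > time_limit_s:
            return 0.0                  # running too slowly also counts as a failure
    except Exception:
        return 0.0                      # syntax errors, crashes, failed asserts
    return 1.0
```

Because the signal is all-or-nothing over the full test suite, partially correct programs that only pass the public tests earn nothing, which is what discourages the memorization tricks described above.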
However, the team made several modifications to the algorithm to make it more stable and to let the model keep improving as training extends over longer runs.</p>

<figure class="wp-block-image size-large"><img src="https://venturebeat.com/wp-content/uploads/2025/04/image_39acb9.png?w=800" alt="GRPO+" class="wp-image-3004060"/><figcaption class="wp-element-caption"><em>GRPO+ enables DeepCoder-14B to continue training for longer durations without collapsing. Credit: Together AI</em></figcaption></figure>

<p>Finally, the team extended the model’s context window iteratively, first training it on shorter reasoning sequences and gradually increasing the length. 
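The group-relative idea at the heart of GRPO (scoring each sampled response against the mean and spread of the group drawn for the same prompt, instead of against a learned value function) can be sketched as below. This shows the vanilla baseline only; the team’s GRPO+ modifications are not reproduced here.

```python
def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize each response's reward against its sampling group (GRPO-style)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # Responses above the group average get a positive advantage, below it negative;
    # eps guards against division by zero when every response scores the same.
    return [(r - mean) / (std + eps) for r in rewards]
```

With the sparse 0/1 outcome reward, a group in which some samples pass and others fail yields symmetric positive and negative advantages, so the policy update pushes probability toward the passing solutions without needing a separate critic model.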
They also developed a filtering method to avoid penalizing the model when it produced reasoning chains that exceeded the context limit while solving a hard prompt.</p>

<figure class="wp-block-image size-large"><img src="https://venturebeat.com/wp-content/uploads/2025/04/image_4c875d.png?w=800" alt="iterative context extension" class="wp-image-3004061"/><figcaption class="wp-element-caption"><em>DeepCoder was trained on 32K-context problems but was also able to solve 64K tasks. Credit: Together AI</em></figcaption></figure>

<p>The researchers explain the core idea: “To preserve long-context reasoning while enabling efficient training, we incorporated overlong filtering… This technique masks out truncated sequences during training so that models aren’t penalized for generating thoughtful but lengthy outputs that exceed the current context limit.”</p>

<p>Training was gradually scaled from a 16K to a 32K context window, and the resulting model could also solve problems that required up to 64K tokens.</p>

<h2 class="wp-block-heading" id="h-optimizing-long-context-rl-training">Optimizing long-context RL training</h2>

<p>Training large models with RL, especially on tasks that require long generated sequences such as coding or complex reasoning, is computationally intensive and slow. A major bottleneck is the “sampling” step, in which the model generates potentially thousands of tokens per example in the batch. Because response lengths vary, some responses finish much later than others, leaving GPUs idle and slowing down the entire training loop.</p>

<p>To accelerate this, the team developed verl-pipeline, an optimized extension of the open-source verl library for reinforcement learning from human feedback (RLHF). The key innovation, which they call “One-Off Pipelining,” rearranges response sampling and model updates to reduce these bottlenecks and accelerator idle time.</p>

<figure class="wp-block-image size-large"><img src="https://venturebeat.com/wp-content/uploads/2025/04/image_3a530d.png?w=800" alt="One-Off Pipelining" class="wp-image-3004062"/><figcaption class="wp-element-caption"><em>One-Off Pipelining</em></figcaption></figure>

<p>Their experiments showed that one-off pipelining provided up to a 2x speedup for coding RL tasks compared to baseline implementations. This optimization was crucial for training DeepCoder within a reasonable timeframe (2.5 weeks on 32 H100s) and is now open-sourced as part of verl-pipeline for the community to use and build on.</p>

<h2 class="wp-block-heading" id="h-enterprise-impact">Enterprise impact</h2>

<p>The researchers have made all the artifacts for training and running DeepCoder-14B available on GitHub and Hugging Face under a permissive license.</p>

<p>“By fully sharing our dataset, code, and training recipe, we empower the community to reproduce our work and make RL training accessible to all,” the researchers write.</p>

<p>DeepCoder-14B illustrates a broader, accelerating trend in the AI landscape: the rise of highly capable yet efficient and openly accessible models.</p>

<p>For the enterprise world, this shift means more options and greater accessibility of advanced models. Cutting-edge performance is no longer solely the domain of hyperscalers or those willing to pay premium API fees. 
Models like DeepCoder can empower organizations of all sizes to leverage sophisticated code generation and reasoning, customize solutions to their specific needs, and deploy them securely within their own environments.</p>

<p>This trend can lower the barrier to entry for AI adoption and foster a more competitive, innovative ecosystem in which progress is driven by open-source collaboration.</p>

<p><a href="https://venturebeat.com/ai/deepcoder-delivers-top-coding-performance-in-efficient-14b-open-model/">Source link</a></p>