{"id":3450,"date":"2025-08-30T14:47:54","date_gmt":"2025-08-30T14:47:54","guid":{"rendered":"https:\/\/violethoward.com\/new\/forget-data-labeling-tencents-r-zero-shows-how-llms-can-train-themselves\/"},"modified":"2025-08-30T14:47:54","modified_gmt":"2025-08-30T14:47:54","slug":"forget-data-labeling-tencents-r-zero-shows-how-llms-can-train-themselves","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/forget-data-labeling-tencents-r-zero-shows-how-llms-can-train-themselves\/","title":{"rendered":"Forget data labeling: Tencent\u2019s R-Zero shows how LLMs can train themselves"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>A new training framework <span style=\"box-sizing: border-box; margin: 0px; padding: 0px;\">developed by researchers at\u00a0Tencent AI Lab\u00a0and\u00a0Washington University in St. Louis\u00a0enables large language models (LLMs) to improve themselves without requiring\u00a0<\/span>any human-labeled data. The technique, called R-Zero, uses reinforcement learning to generate its own training data from scratch, addressing one of the main bottlenecks in creating self-evolving AI systems. R-Zero works by having two independent models co-evolve by interacting with and challenging each other.<\/p>\n\n\n\n<p>Experiments show that R-Zero substantially improves reasoning capabilities across different LLMs, which could lower the complexity and costs of training advanced AI. 
For enterprises, this approach could accelerate the development of specialized models for complex reasoning tasks without the massive expense of curating labeled datasets.

## The challenge of self-evolving LLMs

The idea behind self-evolving LLMs is to create AI systems that can autonomously generate, refine, and learn from their own experiences. This offers a scalable path toward more intelligent and capable AI. However, a major challenge is that training these models requires large volumes of high-quality tasks and labels, which act as supervision signals for the AI to learn from.

Relying on human annotators to create this data is not only costly and slow but also creates a fundamental bottleneck: it effectively limits an AI's potential capabilities to what humans can teach it. To address this, researchers have developed label-free methods that derive reward signals directly from a model's own outputs, for example by measuring the model's confidence in an answer. While these methods eliminate the need for explicit labels, they still rely on a pre-existing set of tasks, limiting their applicability in truly self-evolving scenarios.
Other approaches involve having models generate their own tasks to learn from. However, in domains like open-ended reasoning, where there is no simple way to check for correctness (such as a code executor), ensuring the quality of this self-generated data is a significant hurdle.

## How R-Zero works

R-Zero is a framework designed to train reasoning LLMs that can evolve from zero external data. The process begins with a single base model, which is split into two roles: a "Challenger" and a "Solver." These two models are optimized independently but evolve together through a continuous cycle of interaction.

The Challenger's goal is to create new tasks that sit just at the threshold of the Solver's current abilities: neither too easy nor impossible. The Solver, in turn, is rewarded for solving these increasingly difficult tasks. In written comments to VentureBeat, Chengsong Huang, co-author of the paper and a doctoral student at Washington University in St.
Louis, explained that this dynamic is crucial because generating high-quality questions is often harder than finding the answers.

"What we found in a practical setting is that the biggest challenge is not generating the answers… but rather generating high-quality, novel, and progressively more difficult questions," Huang said. "We believe that good teachers are far rarer than good students. The co-evolutionary dynamic automates the creation of this 'teacher,' ensuring a steady and dynamic curriculum that pushes the Solver's capabilities far beyond what a static, pre-existing dataset could achieve."

Once the Challenger has generated enough questions, they are filtered for diversity and compiled into a training dataset. In the Solver's training phase, the Solver is fine-tuned on these challenging questions.
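The dataset-building pass described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `challenger_generate` and `solver_answer` stand in for calls to the two LLMs, the token-overlap diversity filter is a hypothetical heuristic (the article does not specify how filtering works), and pseudo-labels are taken by majority vote over repeated Solver samples.

```python
from collections import Counter

def build_training_set(challenger_generate, solver_answer,
                       n_questions=8, n_samples=5, max_overlap=0.5):
    """One R-Zero-style data-building pass (illustrative sketch only).

    challenger_generate(i) -> question string (stands in for the Challenger LLM)
    solver_answer(question) -> answer string (stands in for one sampled Solver reply)
    """
    dataset = []
    kept_questions = []
    for i in range(n_questions):
        question = challenger_generate(i)

        # Diversity filter (hypothetical heuristic): drop questions whose
        # token overlap with an already-kept question is too high.
        tokens = set(question.lower().split())
        if any(len(tokens & set(q.lower().split())) / max(len(tokens), 1) > max_overlap
               for q in kept_questions):
            continue
        kept_questions.append(question)

        # Pseudo-label: majority vote over repeated Solver attempts.
        votes = Counter(solver_answer(question) for _ in range(n_samples))
        label, count = votes.most_common(1)[0]
        dataset.append({"question": question,
                        "label": label,
                        "vote_share": count / n_samples})
    return dataset
```

In the real framework both roles are updated with reinforcement learning between passes; here they are frozen callables purely to show the data flow of one iteration.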
The "correct" answer for each question is determined by a majority vote over the Solver's own previous attempts.

This entire process repeats, creating a self-improving loop that operates without any human intervention and allows the two models to push each other to become progressively more capable with each iteration.

## R-Zero in action

The researchers tested R-Zero on several open-source LLMs, including models from the Qwen3 and OctoThinker families. They first trained the models on math problems, then tested whether the learned reasoning skills generalized to other complex, general-domain benchmarks such as MMLU-Pro (multi-task language understanding and reasoning) and SuperGPQA (science and reasoning tasks).

The results showed that R-Zero is a highly effective, model-agnostic framework. For instance, it boosted the Qwen3-4B-Base model's average score across math reasoning benchmarks by 6.49 points. The training process consistently and substantially improved performance, with gains accumulating over several iterations.
The larger Qwen3-8B-Base model saw its average math score climb by 5.51 points after three iterations.

A key finding was the immediate performance leap after the first iteration, which validated the effectiveness of the Challenger's role in creating a high-quality learning curriculum. "This confirms that the intelligent curriculum generated by the RL-trained Challenger is significantly more effective than that of a non-trained generator," the researchers write in their paper.

Notably, the skills learned from math problems were effectively transferred to general reasoning tasks, enhancing the models' underlying capabilities. For example, the same Qwen3-4B-Base model showed an improvement of 7.54 points on general-domain reasoning benchmarks. Another interesting finding is that R-Zero can serve as a decisive pre-training step.
Models first improved by R-Zero achieved even higher performance when later fine-tuned on traditional labeled data, suggesting the framework acts as a performance amplifier.

For enterprises, the "from zero data" approach could be a game-changer, especially in niche domains where high-quality data is scarce or non-existent. Huang highlights that R-Zero's main advantage is its ability to sidestep the most expensive and time-consuming part of AI development: data curation.

"Our approach entirely bypasses the fundamental bottleneck of having to find, label, and curate high-quality datasets," he said. "This is not just a cost-saving measure; it's a pathway toward creating AI that can surpass human capabilities, because it is no longer limited by the scope of human knowledge or data."

However, the co-evolutionary process also revealed a critical challenge. As the Challenger successfully generates progressively more difficult problems, the Solver's ability to produce reliable "correct" answers via majority vote begins to decline. Measured against a strong oracle LLM such as GPT-4, the true accuracy of these self-generated labels dropped from 79% in the first iteration to 63% by the third. This decline in data quality is a key trade-off and a potential bottleneck for the system's long-term performance.

Huang acknowledged that this is a fundamental problem for the self-evolving paradigm. "Our work is a proof of concept that demonstrates the potential of this approach, but we acknowledge that maintaining stable, long-term improvement without plateauing is a significant hurdle," he said.
"Solving this problem will be a crucial next step for the entire research community."

The researchers also highlight a key limitation of the framework: the current mechanism is best suited to domains like math, where correctness can be objectively determined. So how could this paradigm be extended to more subjective enterprise tasks, such as generating marketing copy or summarizing reports?

Huang suggests a potential path forward involves adding a third, co-evolving AI agent to the mix: a "Verifier" or "Critic."

"Instead of evaluating for a simple 'correct' answer, this Verifier would be trained to evaluate the quality of the Solver's output based on more nuanced criteria," he explained. "The co-evolutionary dynamic would then involve the Challenger creating the prompt, the Solver generating the response, and the Verifier providing a quality signal, with all three models improving together."

While this remains a direction for future research, it points toward a future where fully autonomous AI systems can master not just objective logic, but subjective reasoning as well.
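The three-agent dynamic Huang describes can be pictured with a small sketch. Everything here is hypothetical (no Verifier exists in R-Zero today), and the three models are reduced to plain callables purely to show the shape of the signal flow he proposes.

```python
def three_agent_step(challenger, solver, verifier, topic):
    """One hypothetical Challenger/Solver/Verifier interaction.

    challenger(topic) -> prompt; solver(prompt) -> response;
    verifier(prompt, response) -> quality score in [0, 1].
    The Verifier's score would replace majority voting as the Solver's
    reward signal, and could likewise be used to reward the Challenger
    and to train the Verifier itself in a full co-evolving loop.
    """
    prompt = challenger(topic)
    response = solver(prompt)
    score = verifier(prompt, response)
    return {"prompt": prompt, "response": response, "reward": score}
```

A score in [0, 1] rather than a binary correct/incorrect check is what would let this setup extend to subjective tasks with no ground-truth answer.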
[Source link](https://venturebeat.com/ai/forget-data-labeling-tencents-r-zero-shows-how-llms-can-train-themselves/)