{"id":769,"date":"2025-03-23T13:27:24","date_gmt":"2025-03-23T13:27:24","guid":{"rendered":"https:\/\/violethoward.com\/new\/nvidia-will-supercharge-humanoid-robot-development-with-isaac-gr00t-n1-foundation-model-for-human-like-reasoning\/"},"modified":"2025-03-23T13:27:24","modified_gmt":"2025-03-23T13:27:24","slug":"nvidia-will-supercharge-humanoid-robot-development-with-isaac-gr00t-n1-foundation-model-for-human-like-reasoning","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/nvidia-will-supercharge-humanoid-robot-development-with-isaac-gr00t-n1-foundation-model-for-human-like-reasoning\/","title":{"rendered":"Nvidia will supercharge humanoid robot development with Isaac GR00T N1 foundation model for human-like reasoning"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<p>OK, this is not a drill. The robots are coming. <\/p>\n\n\n\n<p>Nvidia announced a portfolio of technologies to supercharge humanoid robot development, including Nvidia Isaac GR00T N1, the world\u2019s first open, fully customizable foundation model for generalized humanoid reasoning and skills.<\/p>\n\n\n\n<p>The other technologies include simulation frameworks and blueprints such as the Nvidia Isaac GR00T Blueprint for generating synthetic data, as well as Newton, an open-source physics engine \u2014 under development with Google DeepMind and Disney Research \u2014 purpose-built for developing robots.<\/p>\n\n\n\n<p>Available now, GR00T N1 is the first of a family of fully customizable models that Nvidia will pretrain and release to worldwide robotics developers \u2014 accelerating the transformation of industries challenged by global labor shortages estimated at more than 50 million people.<\/p>\n\n\n\n<p>\u201cThe age of generalist robotics is here,\u201d said Jensen Huang, founder and CEO of Nvidia, in a statement. 
\u201cWith Nvidia Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.\u201d<\/p>\n\n\n\n<p>The company unveiled the news during Huang\u2019s keynote speech at the GTC 2025 event. <\/p>\n\n\n\n<p>\u201cThis could be the biggest industry of all,\u201d Huang said.<\/p>\n\n\n\n<p>He noted that reinforcement learning and verifiable rewards (in the form of physics) will drive robot technology forward.<\/p>\n\n\n\n<p>\u201cWe need a physics engine designed for fine-grained soft and rigid bodies,\u201d he said. \u201cWe need it to be GPU accelerated so these virtuals can live in super linear time.\u201d<\/p>\n\n\n\n\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1200\" height=\"700\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?w=800\" alt=\"Nvidia GR00T generates synthetic data for robots.\" class=\"wp-image-2990318\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg 1200w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=300,175 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=768,448 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=800,467 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=400,233 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=750,438 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=578,337 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/robots.jpg?resize=930,543 930w\" sizes=\"(max-width: 1200px) 100vw, 1200px\"\/><figcaption class=\"wp-element-caption\">Nvidia GR00T generates synthetic data for robots.<\/figcaption><\/figure>\n\n\n\n<p>The GR00T N1 foundation model features a dual-system architecture, inspired by principles of human 
cognition. \u201cSystem 1\u201d is a fast-thinking action model, mirroring human reflexes or intuition. \u201cSystem 2\u201d is a slow-thinking model for deliberate, methodical decision-making.<\/p>\n\n\n\n<p>Powered by a vision language model, System 2 reasons about its environment and the instructions it has received to plan actions. System 1 then translates these plans into precise, continuous robot movements. System 1 is trained on human demonstration data and a massive amount of synthetic data generated by the Nvidia Omniverse platform.<\/p>\n\n\n\n<p>GR00T N1 can easily generalize across common tasks \u2014 such as grasping, moving objects with one or both arms, and transferring items from one arm to another \u2014 or perform multistep tasks that require long context and combinations of general skills. These capabilities can be applied across use cases such as material handling, packaging and inspection.<\/p>\n\n\n\n<p>Developers and researchers can post-train GR00T N1 with real or synthetic data for their specific humanoid robot or task. <\/p>\n\n\n\n<p>In his GTC keynote, Huang demonstrated 1X\u2019s humanoid robot autonomously performing domestic tidying tasks using a post-trained policy built on GR00T N1. The robot\u2019s autonomous capabilities are the result of an AI training collaboration between 1X and Nvidia.<\/p><p>\u201cThe future of humanoids is about adaptability and learning,\u201d said Bernt B\u00f8rnich, CEO of 1X Technologies, in a statement. \u201cNvidia\u2019s GR00T N1 model provides a major breakthrough for robot reasoning and skills. 
With a minimal amount of post-training data, we were able to fully deploy on NEO Gamma \u2014 furthering our mission of creating robots that are not tools, but companions that can assist humans in meaningful, immeasurable ways.\u201d<\/p>\n\n\n\n<p>Among the additional leading humanoid developers worldwide with early access to GR00T N1 are Agility Robotics, Boston Dynamics, Mentee Robotics and Neura Robotics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-nvidia-google-deepmind-and-disney-research-focus-on-physics\">Nvidia, Google DeepMind and Disney Research Focus on Physics<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"675\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?w=800\" alt=\"Nvidia GR00T makes it easier to design humanoid robots.\" class=\"wp-image-2990006\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg 1200w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=300,169 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=768,432 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=800,450 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=400,225 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=750,422 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=578,325 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/01\/GR00T-Blog-Image.jpg?resize=930,523 930w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\"\/><figcaption class=\"wp-element-caption\">Nvidia Isaac GR00T makes it easier to design humanoid robots.<\/figcaption><\/figure>\n\n\n\n<p>Nvidia announced a collaboration with Google DeepMind and Disney Research to develop Newton, an open-source physics 
engine that lets robots learn how to handle complex tasks with greater precision.<\/p>\n\n\n\n<p>Built on the Nvidia Warp framework, Newton will be optimized for robot learning and compatible with simulation frameworks such as Google DeepMind\u2019s MuJoCo and Nvidia Isaac Lab. Additionally, the three companies plan to enable Newton to use Disney\u2019s physics engine.<\/p>\n\n\n\n<p>Google DeepMind and Nvidia are collaborating to develop MuJoCo-Warp, which is expected to accelerate robotics machine learning workloads by more than 70 times and will be available to developers through Google DeepMind\u2019s MJX open-source library, as well as through Newton.<\/p>\n\n\n\n<p>Disney Research will be one of the first to use Newton to advance its robotic character platform that powers next-generation entertainment robots, such as the expressive Star Wars-inspired BDX droids that joined Huang on stage during his GTC keynote.<\/p><p>\u201cThe BDX droids are just the beginning. We\u2019re committed to bringing more characters to life in ways the world hasn\u2019t seen before, and this collaboration with Disney Research, Nvidia and Google DeepMind is a key part of that vision,\u201d said Kyle Laughlin, senior vice president at Walt Disney Imagineering Research &amp; Development, in a statement. \u201cThis collaboration will allow us to create a new generation of robotic characters that are more expressive and engaging than ever before \u2014 and connect with our guests in ways that only Disney can.\u201d<\/p>\n\n\n\n<p>Nvidia and Disney Research, along with Intrinsic, announced an additional collaboration to build OpenUSD pipelines and best practices for robotics data workflows.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-more-data-to-advance-robotics-post-training\">More Data to Advance Robotics Post-Training<\/h2>\n\n\n\n<p>Large, diverse, high-quality datasets are critical for robot development but costly to capture. 
For humanoids, real-world human demonstration data is limited by a person\u2019s 24-hour day.<\/p>\n\n\n\n<p>Announced today, the Nvidia Isaac GR00T Blueprint for synthetic manipulation motion generation helps address this challenge. Built on Omniverse and Nvidia Cosmos Transfer world foundation models, the blueprint lets developers generate exponentially large amounts of synthetic motion data for manipulation tasks from a small number of human demonstrations.<\/p>\n\n\n\n<p>Using the first components available for the blueprint, Nvidia generated 780,000 synthetic trajectories \u2014 the equivalent of 6,500 hours, or nine continuous months, of human demonstration data \u2014 in just 11 hours. Then, combining the synthetic data with real data, Nvidia improved GR00T N1\u2019s performance by 40%, compared with using only real data.<\/p>\n\n\n\n<p>To further equip the developer community with valuable training data, Nvidia is releasing the GR00T N1 dataset as part of a larger open-source physical AI dataset \u2014 also announced at GTC and now available on Hugging Face.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-availability\">Availability<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"637\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?w=800\" alt=\"Nvidia Isaac Lab Project GR00T Models\" class=\"wp-image-2982796\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg 1200w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=300,159 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=768,408 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=800,425 800w, 
https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=400,212 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=750,398 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=578,307 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2024\/11\/Image-NVIDIA-Isaac-Lab-Project-GR00T-Models.jpg?resize=930,494 930w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\"\/><figcaption class=\"wp-element-caption\">Nvidia Isaac Lab Project GR00T Models<\/figcaption><\/figure>\n\n\n\n<p>Nvidia GR00T N1 training data and task evaluation scenarios are now available for download from Hugging Face and GitHub. The Nvidia Isaac GR00T Blueprint for synthetic manipulation motion generation is also now available as an interactive demo on build.nvidia.com or to download from GitHub.<\/p>\n\n\n\n<p>The Nvidia DGX Spark personal AI supercomputer, also announced today at GTC, provides developers a turnkey system to expand GR00T N1\u2019s capabilities for new robots, tasks and environments without extensive custom programming. 
The Newton physics engine is expected to be available later this year.<\/p>\n\n\n\n<p>At GTC 2025, Nvidia will hold Humanoid Developer Day sessions, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>\u201cAn Introduction to Building Humanoid Robots\u201d for a deep dive into Nvidia Isaac GR00T;<\/li><li>\u201cInsights Into Disney\u2019s Robotic Character Platform\u201d to learn how Disney Research redefines entertainment robotics with BDX droids;<\/li><li>\u201cAnnouncing MuJoCo-Warp and Newton: How Google DeepMind and Nvidia are Supercharging Robotics Development\u201d for a deeper look into these new technologies and how Google deploys AI models to train AI-powered humanoids for real-world tasks.<\/li><\/ul>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/games\/nvidia-will-supercharge-humanoid-robot-development-with-isaac-gr00t-n1-foundation-model-for-humanoid-reasoning\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>OK, this is not a drill. The robots are coming. Nvidia announced a portfolio of technologies to supercharge humanoid robot development, including Nvidia Isaac GR00T N1, the world\u2019s first open, fully customizable foundation model for generalized humanoid reasoning and skills. 
The other technologies include simulation frameworks and blueprints such as the Nvidia Isaac GR00T Blueprint [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":770,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-769","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/03\/GR00T-N1-Image.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/769","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=769"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/769\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/770"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=769"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=769"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=769"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}