{"id":3131,"date":"2025-08-13T18:25:39","date_gmt":"2025-08-13T18:25:39","guid":{"rendered":"https:\/\/violethoward.com\/new\/ai2s-molmoact-model-thinks-in-3d-to-challenge-nvidia-and-google-in-robotics-ai\/"},"modified":"2025-08-13T18:25:39","modified_gmt":"2025-08-13T18:25:39","slug":"ai2s-molmoact-model-thinks-in-3d-to-challenge-nvidia-and-google-in-robotics-ai","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/ai2s-molmoact-model-thinks-in-3d-to-challenge-nvidia-and-google-in-robotics-ai\/","title":{"rendered":"Ai2&#8217;s MolmoAct model \u2018thinks in 3D\u2019 to challenge Nvidia and Google in robotics AI"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>Physical AI, where robotics and foundation models come together, is a fast-growing space, with companies like Nvidia, Google and Meta releasing research and experimenting with melding large language models (LLMs) with robots.\u00a0<\/p>\n\n\n\n<p>New research from the Allen Institute for AI (Ai2) aims to challenge Nvidia and Google in physical AI with the release of MolmoAct 7B, a new open-source model that allows robots to \u201creason in space.\u201d MolmoAct, based on Ai2\u2019s open-source Molmo, \u201cthinks\u201d in three dimensions. Ai2 is also releasing the model\u2019s training data. 
Ai2 has an Apache 2.0 license for the model, while the datasets are licensed under CC BY-4.0.\u00a0<\/p>\n\n\n\n<p>Ai2 classifies MolmoAct as an Action Reasoning Model, in which foundation models reason about actions within a physical, 3D space.<\/p>\n\n\n\n<p>What this means is that MolmoAct can use its reasoning capabilities to understand the physical world, plan how it occupies space and then take that action.\u00a0<\/p>\n\n\n\n<p>\u201cMolmoAct has reasoning in 3D space capabilities versus traditional vision-language-action (VLA) models,\u201d Ai2 told VentureBeat in an email. \u201cMost robotics models are VLAs that don\u2019t think or reason in space, but MolmoAct has this capability, making it more performant and generalizable from an architectural standpoint.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-physical-understanding-nbsp\">Physical understanding\u00a0<\/h2>\n\n\n\n<p>Since robots exist in the physical world, Ai2 claims MolmoAct helps robots take in their surroundings and make better decisions on how to interact with them.\u00a0<\/p>\n\n\n\n<p>\u201cMolmoAct could be applied anywhere a machine would need to reason about its physical surroundings,\u201d the company said. 
\u201cWe think about it mainly in a home setting because that\u2019s where the greatest challenge lies for robotics, because there things are irregular and constantly changing, but MolmoAct can be applied anywhere.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><p>\n<iframe loading=\"lazy\" title=\"Introducing MolmoAct: A model that reasons in 3D space\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/-_wag1X25OE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/p><\/figure>\n\n\n\n<p>MolmoAct understands the physical world by outputting \u201cspatially grounded perception tokens\u201d: tokens pretrained and extracted using a vector-quantized variational autoencoder (VQ-VAE), a model that converts data inputs, such as video, into discrete tokens. The company said these tokens differ from those used by VLAs in that they are not text inputs.\u00a0<\/p>\n\n\n\n<p>These tokens give MolmoAct spatial understanding and encode the geometric structure of a scene, allowing the model to estimate the distances between objects.\u00a0<\/p>\n\n\n\n<p>Once it has estimated those distances, MolmoAct predicts a sequence of \u201cimage-space\u201d waypoints, points in the scene through which it can plot a path. 
From there, the model outputs specific actions, such as dropping an arm by a few inches or stretching out.\u00a0<\/p>\n\n\n\n<p>Ai2\u2019s researchers said they were able to get the model to adapt to different embodiments (i.e., either a mechanical arm or a humanoid robot) \u201cwith only minimal fine-tuning.\u201d<\/p>\n\n\n\n<p>Benchmark testing conducted by Ai2 showed MolmoAct 7B had a task success rate of 72.1%, beating models from Google, Microsoft and Nvidia.\u00a0<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXcWULlJnfMGKtmFPSYylYYwWeyLRgTDjiuHqecYO0QlCpe5Nf3PO7c4nQf_S6SwwtMfY5cZEgvC8gKJtKR826H-us9_veuuLXr3oo-FXd__bNQmOLj3Si6c7xkf87akPM8tDcKgvw?key=CazhOUyJIyTX--kzhSz2fg\" alt=\"\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-small-step-forward\">A small step forward<\/h2>\n\n\n\n<p>Ai2\u2019s research is the latest to take advantage of the unique benefits of LLMs and VLMs, especially as the pace of innovation in generative AI continues to accelerate. Experts in the field see work from Ai2 and other tech companies as building blocks.\u00a0<\/p>\n\n\n\n<p>Alan Fern, professor at the Oregon State University College of Engineering, told VentureBeat that Ai2\u2019s research \u201crepresents a natural progression in enhancing VLMs for robotics and physical reasoning.\u201d<\/p>\n\n\n\n<p>\u201cWhile I wouldn\u2019t call it revolutionary, it\u2019s an important step forward in the development of more capable 3D physical reasoning models,\u201d Fern said. \u201cTheir focus on truly 3D scene understanding, as opposed to relying on 2D models, marks a notable shift in the right direction. 
They\u2019ve made improvements over prior models, but these benchmarks still fall short of capturing real-world complexity and remain relatively controlled and toyish in nature.\u201d<\/p>\n\n\n\n<p>He added that while there\u2019s still room for improvement on the benchmarks, he is \u201ceager to test this new model on some of our physical reasoning tasks.\u201d\u00a0<\/p>\n\n\n\n<p>Daniel Maturana, co-founder of the start-up Gather AI, praised the openness of the data, noting that \u201cthis is great news because developing and training these models is expensive, so this is a strong foundation to build on and fine-tune for other academic labs and even for dedicated hobbyists.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-increasing-interest-in-physical-ai\">Increasing interest in physical AI<\/h2>\n\n\n\n<p>It has been a long-held dream for many developers and computer scientists to create more intelligent, or at least more spatially aware, robots.\u00a0<\/p>\n\n\n\n<p>However, building robots that can quickly process what they \u201csee\u201d and move and react smoothly is difficult. Before the advent of LLMs, scientists had to code every single movement, which meant a lot of work and less flexibility in the types of actions a robot could perform. Now, LLM-based methods allow robots (or at least robotic arms) to determine the next possible actions to take based on the objects they are interacting with.<\/p>\n\n\n\n<p>Google Research\u2019s SayCan helps a robot reason about tasks using an LLM, enabling the robot to determine the sequence of movements required to achieve a goal. Meta and New York University\u2019s OK-Robot uses visual language models for movement planning and object manipulation.<\/p>\n\n\n\n<p>Hugging Face released a $299 desktop robot in an effort to democratize robotics development. 
Nvidia, which proclaimed physical AI to be the next big trend, released several models to fast-track robotic training, including Cosmos-Transfer1.\u00a0<\/p>\n\n\n\n<p>OSU\u2019s Fern said there\u2019s more interest in physical AI even though demos remain limited. However, the quest to achieve general physical intelligence, which would eliminate the need to individually program actions for robots, is becoming easier.\u00a0<\/p>\n\n\n\n<p>\u201cThe landscape is more challenging now, with less low-hanging fruit. On the other hand, large physical intelligence models are still in their early stages and are much more ripe for rapid advancements, which makes this space particularly exciting,\u201d he said.\u00a0<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/ai2s-molmoact-model-thinks-in-3d-to-challenge-nvidia-and-google-in-robotics-ai\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Want smarter insights in your inbox? 
Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Physical AI, where robotics and foundation models come together, is a fast-growing space, with companies like Nvidia, Google and Meta releasing research and experimenting with melding large [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3132,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-3131","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/08\/crimedy7_illustration_of_a_robotics_factory_-ar_169_-v_7_99a85bc8-7300-47dd-8df6-3e5ec373022f_2.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3131","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=3131"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3131\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/3132"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=3131"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=3131"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new
\/wp-json\/wp\/v2\/tags?post=3131"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}