{"id":638,"date":"2025-03-15T10:21:28","date_gmt":"2025-03-15T10:21:28","guid":{"rendered":"https:\/\/violethoward.com\/new\/moonvalleys-marey-is-a-state-of-the-art-ai-video-model-trained-on-fully-licensed-data\/"},"modified":"2025-03-15T10:21:28","modified_gmt":"2025-03-15T10:21:28","slug":"moonvalleys-marey-is-a-state-of-the-art-ai-video-model-trained-on-fully-licensed-data","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/moonvalleys-marey-is-a-state-of-the-art-ai-video-model-trained-on-fully-licensed-data\/","title":{"rendered":"Moonvalley&#8217;s Marey is a state-of-the-art AI video model trained on FULLY LICENSED data"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>A few years ago, there was no such thing as a \u201cgenerative AI video model.\u201d<\/p>\n\n\n\n<p>Today, there are dozens, including many capable of rendering ultra-high-definition, ultra-realistic Hollywood-caliber video in seconds from text prompts or user-uploaded images and existing video clips. If you\u2019ve read VentureBeat in the last few months, you\u2019ve no doubt come across articles about these models and the companies behind them, from Runway\u2019s Gen-3 to Google\u2019s Veo 2 to OpenAI\u2019s long-delayed but finally available Sora to Luma AI, Pika, and Chinese upstarts Kling and Hailuo. Even Alibaba and a startup called Genmo have offered open-source video models.<\/p>\n\n\n\n<p>Already, these models have been used to make portions of major blockbusters, from<em> Everything, Everywhere All At Once<\/em> to <em>HBO\u2019s True Detective: Night Country<\/em> to music videos and TV commercials from Toys R\u2019 Us and Coca Cola. 
But despite Hollywood\u2019s and filmmakers\u2019 relatively rapid embrace of AI, there\u2019s still one big looming issue: copyright.<\/p>\n\n\n\n<p>As best as we can tell (most AI video startups don\u2019t publicly share precise details of their training data), most models are trained on vast swaths of video uploaded to the web or collected from other archival sources, including copyrighted works whose owners may or may not have granted the AI video companies express permission to train on them. In fact, Runway is among the companies facing a class action lawsuit (still working its way through the courts) over this very issue, and Nvidia reportedly scraped a huge swath of YouTube videos for the same purpose. Whether scraping such data, videos included, constitutes fair and transformative use remains in dispute.<\/p>\n\n\n\n<p>But now there\u2019s a new alternative for those concerned about copyright and unwilling to use models whose training data remains a question mark. A startup called Moonvalley \u2014 founded by former Google DeepMinders and researchers from Meta, Microsoft and TikTok, among others \u2014 has introduced Marey, a generative AI video model designed for Hollywood studios, filmmakers and enterprise brands. Positioned as a \u201cclean\u201d state-of-the-art foundational AI video model, Marey is trained exclusively on owned and licensed data, offering an ethical alternative to AI models developed using scraped content.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video controls=\"\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/03\/MAREY-SIZZLE-COLORED-1.mp4\"\/><\/figure>\n\n\n\n<p>\u201cPeople said it wasn\u2019t technically feasible to build a cutting-edge AI video model without using scraped data,\u201d said Moonvalley CEO and cofounder Naeem Talukdar in a recent video call interview with VentureBeat. 
\u201cWe proved otherwise.\u201d<\/p>\n\n\n\n<p>Marey, available now on an invitation-only waitlist basis, joins Adobe\u2019s Firefly Video model, which that long-established software vendor says is also enterprise-grade \u2014 having been trained only on licensed data and Adobe Stock content (to the consternation of some contributors) \u2014 and for which Adobe offers enterprises indemnification. Moonvalley also provides indemnification, under clause 7 of its own terms, saying it will defend its customers at its own expense.<\/p>\n\n\n\n<p>Moonvalley hopes these features will make Marey appealing to big studios \u2014 even as rivals such as Runway strike deals with them \u2014 and to filmmakers, amid the ever-growing array of new AI video creation options.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-more-ethical-ai-video\">More \u2018ethical\u2019 AI video?<\/h2>\n\n\n\n<p>Marey is the result of a collaboration between Moonvalley and Asteria, an artist-led AI film and animation studio. The model is built to assist rather than replace creative professionals, providing filmmakers with new tools for AI-driven video production while maintaining traditional industry standards.<\/p>\n\n\n\n<p>\u201cOur conviction was that you\u2019re not going to get mainstream adoption in this industry unless you do this with the industry,\u201d Talukdar said. \u201cThe industry has been loud and clear that in order for them to actually use these models, we need to figure out how to build a clean model. And up until today, the top track was you couldn\u2019t do it.\u201d<\/p>\n\n\n\n<p>Rather than scraping the internet for content, Moonvalley built direct relationships with creators to license their footage. 
The company took several months to establish these partnerships, ensuring all data used for training was legally acquired and fully licensed.<\/p>\n\n\n\n<p>Moonvalley\u2019s licensing strategy is also designed to support content creators by compensating them for their contributions.<\/p>\n\n\n\n<p>\u201cMost of our relationships are actually coming inbound now that people have started to hear about what we\u2019re doing,\u201d Talukdar said. \u201cFor small-town creators, a lot of their footage is just sitting around. We want to help them monetize it, and we want to do artist-focused models. It ends up being a very good relationship.\u201d<\/p>\n\n\n\n<p>Talukdar told VentureBeat that while the company is still assessing and revising its compensation models, it generally compensates creators based on the duration of their footage, paying a per-hour or per-minute rate under fixed-term licensing agreements (e.g., four or 12 months). This allows for potential recurring payments if the content continues to be used.<\/p>\n\n\n\n<p>The company\u2019s goal is to make high-end video production more accessible and cost-effective, allowing filmmakers, studios and advertisers to explore AI-generated storytelling without legal or ethical concerns.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-more-cinematographic-control-beyond-text-prompts-images-and-camera-directions\">More cinematographic control \u2014 beyond text prompts, images and camera directions<\/h2>\n\n\n\n<p>Talukdar explained that Moonvalley took a different approach with Marey than existing AI video models, focusing on professional-grade production rather than consumer applications.<\/p>\n\n\n\n<p>\u201cMost generative video companies today are more consumer-focused,\u201d he said. \u201cThey build simple models where you prompt a chatbot, generate some clips and add cool effects. Our focus is different: What\u2019s the technology needed for Hollywood studios? 
What do major brands need to make Super Bowl commercials?\u201d<\/p>\n\n\n\n<p>Marey introduces several advancements in AI-generated video, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Native HD generation<\/strong> \u2014 Generates high-definition video without relying on upscaling, reducing visual artifacts.<\/li>\n\n\n\n<li><strong>Extended video length<\/strong> \u2014 Unlike most AI video models, which generate only a few seconds of footage, Marey can create 30-second sequences in a single pass.<\/li>\n\n\n\n<li><strong>Layer-based editing<\/strong> \u2014 Unlike other generative video models, Marey allows users to separately edit the foreground, midground and background, providing more precise control over video composition.<\/li>\n\n\n\n<li><strong>Storyboard and sketch-based inputs<\/strong> \u2014 Instead of relying only on text prompts (as many AI models do), Marey enables filmmakers to create using storyboards, sketches and even live-action references, making it more intuitive for professionals.<\/li>\n\n\n\n<li><strong>More responsive to conditioning inputs<\/strong> \u2014 The model was designed to better interpret external inputs like drawings and motion references, making AI-generated video more controllable.<\/li>\n\n\n\n<li><strong>\u201cGenerative-native\u201d video editor<\/strong> \u2014 Moonvalley is developing companion software for Marey, which functions as a generative-native video editing tool that helps users manage projects and timelines more effectively.<\/li>\n<\/ul>\n\n\n\n<p>\u201cThe model itself is just built very heavily around controllability,\u201d Talukdar explained. \u201cYou need to have significantly more controls around the output \u2014 being able to change the characters. It\u2019s the first model that allows you to do layer-based editing, so you can edit the foreground, mid-ground and background separately. 
It\u2019s also the first model built for Hollywood, purpose-built for production.\u201d<\/p>\n\n\n\n<p>In addition, he told VentureBeat that Marey relies on a hybrid architecture combining diffusion and transformer approaches.<\/p>\n\n\n\n<p>\u201cThe models are diffusion-transformer models, so it\u2019s the transformer architecture, and then you have diffusion as part of the layers,\u201d Talukdar said. \u201cWhen you introduce controllability, it\u2019s usually through those layers that you do it.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-funded-by-big-name-vcs-but-not-as-much-as-other-ai-video-startups-yet\">Funded by big-name VCs but not as much as other AI video startups (yet)<\/h2>\n\n\n\n<p>Moonvalley is also announcing a $70 million seed round this week, led by Bessemer Venture Partners, Khosla Ventures and General Catalyst. Investors Hemant Taneja, Samir Kaul and Byron Deeter have also joined the company\u2019s board of directors.<\/p>\n\n\n\n<p>Talukdar noted that Moonvalley\u2019s funding so far is substantially less than that of some competitors \u2014 Runway is reported to have raised $270 million total across several rounds \u2014 but said the company has optimized its resources by assembling an elite team of AI researchers and engineers.<\/p>\n\n\n\n<p>\u201cWe raised around $70 million, quite a bit less than our competitors, certainly,\u201d he said. \u201cBut that really boils down to the team \u2014 having a team that can build that architecture significantly more efficiently, compute, and all those different things.\u201d<\/p>\n\n\n\n<p>Marey is currently in a limited-access phase, with select studios and filmmakers testing the model. Moonvalley plans to gradually expand access over the coming weeks.<\/p>\n\n\n\n<p>\u201cRight now, there\u2019s a number of studios that are getting access to it, and we have an alpha group with a couple dozen filmmakers using it,\u201d Talukdar confirmed. 
\u201cThe hope is that it\u2019ll be fully available within a couple of weeks, worst case within a couple of months.\u201d<\/p>\n\n\n\n<p>With the launch of Marey, Moonvalley and Asteria aim to position themselves at the forefront of AI-assisted filmmaking, offering studios and brands a solution that integrates AI without compromising creative integrity. But with AI video startup rivals such as Runway, Pika and Hedra continuing to add new features like character voice and movements, the field is becoming more competitive.<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/moonvalleys-marey-is-a-state-of-the-art-ai-video-model-trained-on-fully-licensed-data\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. 
Learn More A few years ago, there was no such thing as a \u201cgenerative AI video model.\u201d Today, there are dozens, including many capable of rendering ultra-high-definition, ultra-realistic Hollywood-caliber video in seconds from text prompts or user-uploaded images [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":639,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-638","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/03\/moonvalley-marey.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/638","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=638"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/638\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/639"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=638"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=638"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=638"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}