{"id":2348,"date":"2025-07-08T22:08:32","date_gmt":"2025-07-08T22:08:32","guid":{"rendered":"https:\/\/violethoward.com\/new\/chinese-researchers-unveil-memos-the-first-memory-operating-system-that-gives-ai-human-like-recall\/"},"modified":"2025-07-08T22:08:32","modified_gmt":"2025-07-08T22:08:32","slug":"chinese-researchers-unveil-memos-the-first-memory-operating-system-that-gives-ai-human-like-recall","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/chinese-researchers-unveil-memos-the-first-memory-operating-system-that-gives-ai-human-like-recall\/","title":{"rendered":"Chinese researchers unveil MemOS, the first &#8216;memory operating system&#8217; that gives AI human-like recall"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>A team of researchers from leading institutions including Shanghai Jiao Tong University and Zhejiang University has developed what they\u2019re calling the first \u201cmemory operating system\u201d for artificial intelligence, addressing a fundamental limitation that has hindered AI systems from achieving human-like persistent memory and learning.<\/p>\n\n\n\n<p>The system, called MemOS, treats memory as a core computational resource that can be scheduled, shared, and evolved over time \u2014 much like how traditional operating systems manage CPU and storage resources. 
The research, published July 4th on arXiv, demonstrates significant performance improvements over existing approaches, including a 159% boost in temporal reasoning tasks compared to OpenAI\u2019s memory systems.<\/p>\n\n\n\n<p>\u201cLarge Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI), yet their lack of well-defined memory management systems hinders the development of long-context reasoning, continual personalization, and knowledge consistency,\u201d the researchers write in their paper.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-systems-struggle-with-persistent-memory-across-conversations\">AI systems struggle with persistent memory across conversations<\/h2>\n\n\n\n<p>Current AI systems face what researchers call the \u201cmemory silo\u201d problem \u2014 a fundamental architectural limitation that prevents them from maintaining coherent, long-term relationships with users. Each conversation or session essentially starts from scratch, with models unable to retain preferences, accumulated knowledge, or behavioral patterns across interactions. This creates a frustrating user experience where an AI assistant might forget a user\u2019s dietary restrictions mentioned in one conversation when asked about restaurant recommendations in the next.<\/p>\n\n\n\n<p>While some solutions like Retrieval-Augmented Generation (RAG) attempt to address this by pulling in external information during conversations, the researchers argue these remain \u201cstateless workarounds without lifecycle control.\u201d The problem runs deeper than simple information retrieval \u2014 it\u2019s about creating systems that can genuinely learn and evolve from experience, much like human memory does.<\/p>\n\n\n\n<p>\u201cExisting models mainly rely on static parameters and short-lived contextual states, limiting their ability to track user preferences or update knowledge over extended periods,\u201d the team explains. 
This limitation becomes particularly apparent in enterprise settings, where AI systems are expected to maintain context across complex, multi-stage workflows that might span days or weeks.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-new-system-delivers-dramatic-improvements-in-ai-reasoning-tasks\">New system delivers dramatic improvements in AI reasoning tasks<\/h2>\n\n\n\n<p>MemOS introduces a fundamentally different approach through what the researchers call \u201cMemCubes\u201d \u2014 standardized memory units that can encapsulate different types of information and be composed, migrated, and evolved over time. These range from explicit text-based knowledge to parameter-level adaptations and activation states within the model, creating a unified framework for memory management that previously didn\u2019t exist.<\/p>\n\n\n\n<p>Testing on the LOCOMO benchmark, which evaluates memory-intensive reasoning tasks, MemOS consistently outperformed established baselines across all categories. The system achieved a 38.98% overall improvement compared to OpenAI\u2019s memory implementation, with particularly strong gains in complex reasoning scenarios that require connecting information across multiple conversation turns.<\/p>\n\n\n\n<p>\u201cMemOS (MemOS-0630) consistently ranks first in all categories, outperforming strong baselines such as mem0, LangMem, Zep, and OpenAI-Memory, with especially large margins in challenging settings like multi-hop and temporal reasoning,\u201d according to the research. The system also delivered substantial efficiency improvements, with up to 94% reduction in time-to-first-token latency in certain configurations through its innovative KV-cache memory injection mechanism.<\/p>\n\n\n\n<p>These performance gains suggest that the memory bottleneck has been a more significant limitation than previously understood. 
By treating memory as a first-class computational resource, MemOS appears to unlock reasoning capabilities that were previously constrained by architectural limitations.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-technology-could-reshape-how-businesses-deploy-artificial-intelligence\">The technology could reshape how businesses deploy artificial intelligence<\/h2>\n\n\n\n<p>The implications for enterprise AI deployment could be transformative, particularly as businesses increasingly rely on AI systems for complex, ongoing relationships with customers and employees. MemOS enables what the researchers describe as \u201ccross-platform memory migration,\u201d allowing AI memories to be portable across different platforms and devices, breaking down what they call \u201cmemory islands\u201d that currently trap user context within specific applications.<\/p>\n\n\n\n<p>Consider the current frustration many users experience when insights explored in one AI platform can\u2019t carry over to another. A marketing team might develop detailed customer personas through conversations with ChatGPT, only to start from scratch when switching to a different AI tool for campaign planning. MemOS addresses this by creating a standardized memory format that can move between systems.<\/p>\n\n\n\n<p>The research also outlines potential for \u201cpaid memory modules,\u201d where domain experts could package their knowledge into purchasable memory units. The researchers envision scenarios where \u201ca medical student in clinical rotation may wish to study how to manage a rare autoimmune condition. 
An experienced physician can encapsulate diagnostic heuristics, questioning paths, and typical case patterns into a structured memory\u201d that can be installed and used by other AI systems.<\/p>\n\n\n\n<p>This marketplace model could fundamentally alter how specialized knowledge is distributed and monetized in AI systems, creating new economic opportunities for experts while democratizing access to high-quality domain knowledge. For enterprises, this could mean rapidly deploying AI systems with deep expertise in specific areas without the traditional costs and timelines associated with custom training.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-three-layer-design-mirrors-traditional-computer-operating-systems\">Three-layer design mirrors traditional computer operating systems<\/h2>\n\n\n\n<p>The technical architecture of MemOS reflects decades of learning from traditional operating system design, adapted for the unique challenges of AI memory management. The system employs a three-layer architecture: an interface layer for API calls, an operation layer for memory scheduling and lifecycle management, and an infrastructure layer for storage and governance.<\/p>\n\n\n\n<p>The system\u2019s MemScheduler component dynamically manages different types of memory \u2014 from temporary activation states to permanent parameter modifications \u2014 selecting optimal storage and retrieval strategies based on usage patterns and task requirements. This represents a significant departure from current approaches, which typically treat memory as either completely static (embedded in model parameters) or completely ephemeral (limited to conversation context).<\/p>\n\n\n\n<p>\u201cThe focus shifts from how much knowledge the model learns once to whether it can transform experience into structured memory and repeatedly retrieve and reconstruct it,\u201d the researchers note, describing their vision for what they call \u201cMem-training\u201d paradigms. 
This architectural philosophy suggests a fundamental rethinking of how AI systems should be designed, moving away from the current paradigm of massive pre-training toward more dynamic, experience-driven learning.<\/p>\n\n\n\n<p>The parallels to operating system development are striking. Just as early computers required programmers to manually manage memory allocation, current AI systems require developers to carefully orchestrate how information flows between different components. MemOS abstracts this complexity, potentially enabling a new generation of AI applications that can be built on top of sophisticated memory management without requiring deep technical expertise.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-researchers-release-code-as-open-source-to-accelerate-adoption\">Researchers release code as open source to accelerate adoption<\/h2>\n\n\n\n<p>The team has released MemOS as an open-source project, with full code available on GitHub and integration support for major AI platforms including HuggingFace, OpenAI, and Ollama. This open-source strategy appears designed to accelerate adoption and encourage community development, rather than pursuing a proprietary approach that might limit widespread implementation.<\/p>\n\n\n\n<p>\u201cWe hope MemOS helps advance AI systems from static generators to continuously evolving, memory-driven agents,\u201d project lead Zhiyu Li commented in the GitHub repository. The system currently supports Linux platforms, with Windows and macOS support planned, suggesting the team is prioritizing enterprise and developer adoption over immediate consumer accessibility.<\/p>\n\n\n\n<p>The open-source release strategy reflects a broader trend in AI research where foundational infrastructure improvements are shared openly to benefit the entire ecosystem. 
This approach has historically accelerated innovation in areas like deep learning frameworks and could have similar effects for memory management in AI systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-tech-giants-race-to-solve-ai-memory-limitations\">Tech giants race to solve AI memory limitations<\/h2>\n\n\n\n<p>The research arrives as major AI companies grapple with the limitations of current memory approaches, highlighting just how fundamental this challenge has become for the industry. OpenAI recently introduced memory features for ChatGPT, while Anthropic, Google, and other providers have experimented with various forms of persistent context. However, these implementations have generally been limited in scope and often lack the systematic approach that MemOS provides.<\/p>\n\n\n\n<p>The timing of this research suggests that memory management has emerged as a critical competitive battleground in AI development. Companies that can solve the memory problem effectively may gain significant advantages in user retention and satisfaction, as their AI systems will be able to build deeper, more useful relationships over time.<\/p>\n\n\n\n<p>Industry observers have long predicted that the next major breakthrough in AI wouldn\u2019t necessarily come from larger models or more training data, but from architectural innovations that better mimic human cognitive capabilities. Memory management represents exactly this type of fundamental advancement \u2014 one that could unlock new applications and use cases that aren\u2019t possible with current stateless systems.<\/p>\n\n\n\n<p>The development represents part of a broader shift in AI research toward more stateful, persistent systems that can accumulate and evolve knowledge over time \u2014 capabilities seen as essential for artificial general intelligence. 
For enterprise technology leaders evaluating AI implementations, MemOS could represent a significant advancement in building AI systems that maintain context and improve over time, rather than treating each interaction as isolated.<\/p>\n\n\n\n<p>The research team indicates they plan to explore cross-model memory sharing, self-evolving memory blocks, and the development of a broader \u201cmemory marketplace\u201d ecosystem in future work. But perhaps the most significant impact of MemOS won\u2019t be the specific technical implementation, but rather the proof that treating memory as a first-class computational resource can unlock dramatic improvements in AI capabilities. In an industry that has largely focused on scaling model size and training data, MemOS suggests that the next breakthrough might come from better architecture rather than bigger computers.<\/p>\n\t\t\t<\/div>\r\n<a href=\"https:\/\/venturebeat.com\/ai\/chinese-researchers-unveil-memos-the-first-memory-operating-system-that-gives-ai-human-like-recall\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>A team of researchers from leading institutions including Shanghai Jiao Tong University and Zhejiang University has developed what they\u2019re calling the first \u201cmemory operating system\u201d for artificial intelligence, addressing [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2349,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-2348","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/07\/nuneybits_Vector_art_of_digital_brain_storing_conversations_b9763f63-73bf-4f46-931e-72e29f178c88.web.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2348","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"
}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=2348"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2348\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/2349"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=2348"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=2348"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=2348"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}<!-- This website is optimized by Airlift. Learn more: https://airlift.net. Template:. Learn more: https://airlift.net. Template: 69e302c146fa5c92dc28ac12. Config Timestamp: 2026-04-18 04:04:16 UTC, Cached Timestamp: 2026-04-29 12:52:46 UTC -->