{"id":3272,"date":"2025-08-23T03:35:31","date_gmt":"2025-08-23T03:35:31","guid":{"rendered":"https:\/\/violethoward.com\/new\/opencuas-open-source-computer-use-agents-rival-proprietary-models-from-openai-and-anthropic\/"},"modified":"2025-08-23T03:35:31","modified_gmt":"2025-08-23T03:35:31","slug":"opencuas-open-source-computer-use-agents-rival-proprietary-models-from-openai-and-anthropic","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/opencuas-open-source-computer-use-agents-rival-proprietary-models-from-openai-and-anthropic\/","title":{"rendered":"OpenCUA\u2019s open source computer-use agents rival proprietary models from OpenAI and Anthropic"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>A new framework from researchers at The University of Hong Kong (HKU) and collaborating institutions provides an open source foundation for creating robust AI agents that can operate computers. The framework, called OpenCUA, includes the tools, data, and recipes for scaling the development of computer-use agents (CUAs).<\/p>\n\n\n\n<p>Models trained using this framework perform strongly on CUA benchmarks, outperforming existing open source models and competing closely with closed agents from leading AI labs like OpenAI and Anthropic.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-challenge-of-building-computer-use-agents\">The challenge of building computer-use agents<\/h2>\n\n\n\n<p>Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private.<\/p>\n\n\n\n<p>\u201cAs the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,\u201d the researchers state in their paper.<\/p>\n\n\n\n<div id=\"boilerplate_2803147\" class=\"post-boilerplate boilerplate-speedbump\">\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong\/><strong>AI Scaling Hits Its Limits<\/strong><\/p>\n\n\n\n<p>Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Turning energy into a strategic advantage<\/li>\n\n\n\n<li>Architecting efficient inference for real throughput gains<\/li>\n\n\n\n<li>Unlocking competitive ROI with sustainable AI systems<\/li>\n<\/ul>\n\n\n\n<p><strong>Secure your spot to stay ahead<\/strong>: https:\/\/bit.ly\/4mwGngO<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n<\/div><p>At the same time, open source efforts face their own set of hurdles. There has been no scalable infrastructure for collecting the diverse, large-scale data needed to train these agents. 
Existing open source datasets for graphical user interfaces (GUIs) have limited data, and many research projects provide insufficient detail about their methods, making it difficult for others to replicate their work.<\/p>\n\n\n\n<p>According to the paper, \u201cThese limitations collectively hinder advances in general-purpose CUAs and restrict a meaningful exploration of their scalability, generalizability, and potential learning approaches.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-introducing-opencua\">Introducing OpenCUA<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" height=\"412\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?w=800\" alt=\"\" class=\"wp-image-3016003\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png 6837w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=300,155 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=768,396 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=800,412 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=1536,792 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=2048,1056 2048w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=400,206 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=750,387 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=578,298 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_7fdde4.png?resize=930,479 930w\" sizes=\"(max-width: 800px) 100vw, 800px\"\/><figcaption class=\"wp-element-caption\"><em>OpenCUA framework Source: XLANG Lab at HKU<\/em><\/figcaption><\/figure>\n\n\n\n<p>OpenCUA is an open source framework designed to address these challenges by scaling both the data collection and the models themselves. At its core is the AgentNet Tool for recording human demonstrations of computer tasks on different operating systems.<\/p>\n\n\n\n<p>The tool streamlines data collection by running in the background on an annotator\u2019s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements. This raw data is then processed into \u201cstate-action trajectories,\u201d pairing a screenshot of the computer (the state) with the user\u2019s corresponding action (a click, key press, etc.). 
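<\/p>\n\n\n\n<p>To make that pairing concrete, a single step of such a trajectory might be represented roughly as follows. This is a minimal, hypothetical sketch; the field names are illustrative and are not the actual AgentNet schema.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from dataclasses import dataclass, field

@dataclass
class StateActionStep:
    # The 'state': a screenshot captured just before the user acted.
    screenshot: str
    # The 'action': one discrete user input, such as a click or key press.
    action_type: str
    action_args: dict = field(default_factory=dict)
    # Optional structured context from the accessibility tree.
    a11y_snippet: dict = field(default_factory=dict)

# One step from a hypothetical demonstration of attaching a file to an email.
step = StateActionStep(
    screenshot='frame_0042.png',
    action_type='click',
    action_args={'x': 512, 'y': 384, 'button': 'left'},
    a11y_snippet={'role': 'button', 'name': 'Attach file'},
)

# A full trajectory is simply the ordered list of such steps for one task.
trajectory = [step]
<\/code><\/pre>\n\n\n\n<p>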
Annotators can then review, edit, and submit these demonstrations.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"684\" height=\"336\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png\" alt=\"\" class=\"wp-image-3016004\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png 684w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png?resize=300,147 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png?resize=100,50 100w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png?resize=400,196 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_05f5eb.png?resize=578,284 578w\" sizes=\"auto, (max-width: 684px) 100vw, 684px\"\/><figcaption class=\"wp-element-caption\"><em>AgentNet tool Source: XLang Lab at HKU<\/em><\/figcaption><\/figure>\n\n\n\n<p>Using this tool, the researchers collected the AgentNet dataset, which contains over 22,600 task demonstrations across Windows, macOS, and Ubuntu, spanning more than 200 applications and websites. \u201cThis dataset authentically captures the complexity of human behaviors and environmental dynamics from users\u2019 personal computing environments,\u201d the paper notes.<\/p>\n\n\n\n<p>Recognizing that screen-recording tools raise significant data privacy concerns for enterprises, the researchers designed the AgentNet Tool with security in mind. Xinyuan Wang, co-author of the paper and PhD student at HKU, explained that they implemented a multi-layer privacy protection framework. \u201cFirst, annotators themselves can fully observe the data they generate\u2026 before deciding whether to submit it,\u201d he told VentureBeat. The data then undergoes manual verification for privacy issues and automated scanning by a large model to detect any remaining sensitive content before release. \u201cThis layered process ensures enterprise-grade robustness for environments handling sensitive customer or financial data,\u201d Wang added.<\/p>\n\n\n\n<p>To accelerate evaluation, the team also curated AgentNetBench, an offline benchmark that provides multiple correct actions for each step, offering a more efficient way to measure an agent\u2019s performance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-new-recipe-for-training-agents\">A new recipe for training agents<\/h2>\n\n\n\n<p>The OpenCUA framework introduces a novel pipeline for processing data and training computer-use agents. The first step converts the raw human demonstrations into clean state-action pairs suitable for training vision-language models (VLMs). 
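<\/p>\n\n\n\n<p>In practice, that conversion amounts to collapsing the dense stream of recorded input events into one discrete action per captured screenshot. The sketch below is a deliberately simplified, hypothetical illustration of the idea, not the released OpenCUA code.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>def to_state_action_pairs(recording):
    # recording: a time-ordered list of raw records from a capture tool, e.g.
    #   {'kind': 'screenshot', 'path': 'frame_0041.png'}
    #   {'kind': 'mouse_up', 'x': 512, 'y': 384, 'button': 'left'}
    #   {'kind': 'key_press', 'key': 'enter'}
    # Returns one (state, action) pair per discrete user action.
    pairs = []
    current_state = None
    for record in recording:
        if record['kind'] == 'screenshot':
            # Remember the most recent screen state the user saw.
            current_state = record['path']
        elif record['kind'] in ('mouse_up', 'key_press') and current_state:
            action_args = {k: v for k, v in record.items() if k != 'kind'}
            pairs.append({
                'state': current_state,
                'action_type': record['kind'],
                'action_args': action_args,
            })
    return pairs
<\/code><\/pre>\n\n\n\n<p>A real pipeline has to handle far more (drag gestures, scrolling, text composed from many keystrokes), but the essence is the same: each training example pairs a screenshot with the single action the human took next. 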
However, the researchers found that simply training models on these pairs yields limited performance gains, even with large amounts of data.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"199\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?w=800\" alt=\"\" class=\"wp-image-3016005\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png 1216w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=300,75 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=768,191 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=800,199 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=400,99 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=750,186 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=578,144 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_fb4236.png?resize=930,231 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><figcaption class=\"wp-element-caption\"><em>OpenCUA chain-of-thought pipeline Source: XLang Lab at HKU<\/em><\/figcaption><\/figure>\n\n\n\n<p>The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed \u201cinner monologue\u201d for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.<\/p>\n\n\n\n<p>\u201cWe find natural language reasoning crucial for generalizable computer-use foundation models, helping CUAs internalize cognitive capabilities,\u201d the researchers write.<\/p>\n\n\n\n<p>This data synthesis pipeline is a general framework that can be adapted by companies to train agents on their own unique internal tools. According to Wang, an enterprise can record demonstrations of its proprietary workflows and use the same \u201creflector\u201d and \u201cgenerator\u201d pipeline to create the necessary training data. \u201cThis allows them to bootstrap a high-performing agent tailored to their internal tools without needing to handcraft reasoning traces manually,\u201d he explained.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-putting-opencua-to-the-test\">Putting OpenCUA to the test<\/h2>\n\n\n\n<p>The researchers applied the OpenCUA framework to train a range of open source VLMs, including variants of Qwen and Kimi-VL, with parameter sizes from 3 billion to 32 billion. The models were evaluated on a suite of online and offline benchmarks that test their ability to perform tasks and understand GUIs.<\/p>\n\n\n\n<p>The 32-billion-parameter model, OpenCUA-32B, established a new state-of-the-art success rate among open source models on the OSWorld-Verified benchmark. 
It also surpassed OpenAI\u2019s GPT-4o-based CUA and significantly closed the performance gap with Anthropic\u2019s leading proprietary models.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"338\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?w=800\" alt=\"\" class=\"wp-image-3016002\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png 3439w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=300,127 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=768,325 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=800,338 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=1536,649 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=2048,866 2048w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=400,169 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=750,317 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=578,244 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/08\/image_e2e708.png?resize=930,393 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><figcaption class=\"wp-element-caption\"><em>OpenCUA shows massive improvement over base models (left) while competing with leading CUA models (right) Source: XLANG Lab at HKU<\/em><\/figcaption><\/figure>\n\n\n\n<p>For enterprise developers and product leaders, the research offers several key findings. The OpenCUA method is broadly applicable, improving performance on models with different architectures (both dense and mixture-of-experts) and sizes. The trained agents also show strong generalization, performing well across a diverse range of tasks and operating systems.<\/p>\n\n\n\n<p>According to Wang, the framework is particularly suited for automating repetitive, labor-intensive enterprise workflows. \u201cFor example, in the AgentNet dataset, we already capture a few demonstrations of launching EC2 instances on Amazon AWS and configuring annotation parameters on MTurk,\u201d he told VentureBeat. \u201cThese tasks involve many sequential steps but follow repeatable patterns.\u201d<\/p>\n\n\n\n<p>However, Wang noted that bridging the gap to live deployment requires addressing key challenges around safety and reliability. \u201cThe biggest challenge in real deployment is safety and reliability: the agent must avoid mistakes that could inadvertently alter system settings or trigger harmful side effects beyond the intended task,\u201d he said.<\/p>\n\n\n\n<p>The researchers have released the code, dataset, and weights for their models.<\/p>\n\n\n\n<p>As open source agents built on frameworks like OpenCUA become more capable, they could fundamentally evolve the relationship between knowledge workers and their computers. 
Wang envisions a future where proficiency in complex software becomes less important than the ability to clearly articulate goals to an AI agent.<\/p>\n\n\n\n<p>He described two primary modes of work: \u201coffline automation, where the agent leverages its broader software knowledge to pursue a task end-to-end,\u201d and \u201conline collaboration, where the agent responds in real-time and works side by side with the human, much like a colleague.\u201d Basically, the humans will provide the strategic \u201cwhat,\u201d while increasingly sophisticated AI agents handle the operational \u201chow.\u201d<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/opencuas-open-source-computer-use-agents-rival-proprietary-models-from-openai-and-anthropic\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>A new framework from researchers at The University of Hong Kong (HKU) and collaborating institutions provides an open source foundation for creating robust AI agents that can operate computers. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3273,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-3272","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/08\/computer-use-agent.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3272","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=3272"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3272\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/3273"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=3272"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=3272"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=3272"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}