Secure your spot to stay ahead<\/strong>: https:\/\/bit.ly\/4mwGngO<\/p>\n\n\n\n
\n<\/div>\u201cWe view browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you\u2019re looking at, click buttons, and fill forms will make it substantially more useful,\u201d Anthropic stated in its announcement.<\/p>\n\n\n\n
However, the company\u2019s internal testing revealed concerning security vulnerabilities that highlight the double-edged nature of giving AI systems direct control over user interfaces. In adversarial testing, Anthropic found that malicious actors could embed hidden instructions in websites, emails, or documents to trick AI systems into harmful actions without users\u2019 knowledge\u2014a technique called prompt injection.<\/p>\n\n\n\n
Without safety mitigations, these attacks succeeded 23.6% of the time when deliberately targeting the browser-using AI. In one example, a malicious email masquerading as a security directive instructed Claude to delete the user\u2019s emails \u201cfor mailbox hygiene,\u201d which the AI obediently executed without confirmation.<\/p>\n\n\n\n
\u201cThis isn\u2019t speculation: we\u2019ve run \u2018red-teaming\u2019 experiments to test Claude for Chrome and, without mitigations, we\u2019ve found some concerning results,\u201d the company acknowledged.<\/p>\n\n\n\n
OpenAI and Microsoft rush to market while Anthropic takes measured approach to computer-control technology<\/h2>\n\n\n\n
Anthropic\u2019s measured approach comes as competitors have moved more aggressively into the computer-control space. OpenAI launched its \u201cOperator\u201d agent in January, making it available to all users of its $200-per-month ChatGPT Pro service. Powered by a new \u201cComputer-Using Agent\u201d model, Operator can perform tasks like booking concert tickets, ordering groceries, and planning travel itineraries.<\/p>\n\n\n\n
Microsoft followed in April with computer use capabilities integrated into its Copilot Studio platform, targeting enterprise customers with UI automation tools that can interact with both web applications and desktop software. The company positioned its offering as a next-generation replacement for traditional robotic process automation (RPA) systems.<\/p>\n\n\n\n
The competitive dynamics reflect broader tensions in the AI industry, where companies must balance the pressure to ship cutting-edge capabilities against the risks of deploying insufficiently tested technology. OpenAI\u2019s more aggressive timeline has allowed it to capture early market share, while Anthropic\u2019s cautious approach may limit its competitive position but could prove advantageous if safety concerns materialize.<\/p>\n\n\n\n
\u201cBrowser-using agents powered by frontier models are already emerging, making this work especially urgent,\u201d Anthropic noted, suggesting the company feels compelled to enter the market despite unresolved safety issues.<\/p>\n\n\n\n
Why computer-controlling AI could revolutionize enterprise automation and replace expensive workflow software<\/h2>\n\n\n\n
The emergence of computer-controlling AI systems could fundamentally reshape how businesses approach automation and workflow management. Current enterprise automation typically requires expensive custom integrations or specialized robotic process automation software that breaks when applications change their interfaces.<\/p>\n\n\n\n
Computer-use agents promise to democratize automation by working with any software that has a graphical user interface, potentially automating tasks across the vast ecosystem of business applications that lack formal APIs or integration capabilities.<\/p>\n\n\n\n
Salesforce researchers recently demonstrated this potential with their CoAct-1 system, which combines traditional point-and-click automation with code generation capabilities. The hybrid approach achieved a 60.76% success rate on complex computer tasks while requiring significantly fewer steps than pure GUI-based agents, suggesting substantial efficiency gains are possible.<\/p>\n\n\n\n
\u201cFor enterprise leaders, the key lies in automating complex, multi-tool processes where full API access is a luxury, not a guarantee,\u201d explained Ran Xu, Director of Applied AI Research at Salesforce, pointing to customer support workflows that span multiple proprietary systems as prime use cases.<\/p>\n\n\n\n
University researchers release free alternative to Big Tech\u2019s proprietary computer-use AI systems<\/h2>\n\n\n\n
The dominance of proprietary systems from major tech companies has prompted academic researchers to develop open alternatives. The University of Hong Kong recently released OpenCUA, an open-source framework for training computer-use agents that rivals the performance of proprietary models from OpenAI and Anthropic.<\/p>\n\n\n\n
The OpenCUA system, trained on over 22,600 human task demonstrations across Windows, macOS, and Ubuntu, achieved state-of-the-art results among open-source models and performed competitively with leading commercial systems. This development could accelerate adoption by enterprises hesitant to rely on closed systems for critical automation workflows.<\/p>\n\n\n\n
Anthropic\u2019s safety testing reveals AI agents can be tricked into deleting files and stealing data<\/h2>\n\n\n\n
Anthropic has implemented several layers of protection for Claude for Chrome, including site-level permissions that allow users to control which websites the AI can access, mandatory confirmations before high-risk actions like making purchases or sharing personal data, and blocking access to categories like financial services and adult content.<\/p>\n\n\n\n
The company\u2019s safety improvements reduced prompt injection attack success rates from 23.6% to 11.2% in autonomous mode, though executives acknowledge this remains insufficient for widespread deployment. On browser-specific attacks involving hidden form fields and URL manipulation, new mitigations reduced the success rate from 35.7% to zero.<\/p>\n\n\n\n
However, these protections may not scale to the full complexity of real-world web environments, where new attack vectors continue to emerge. The company plans to use insights from the pilot program to refine its safety systems and develop more sophisticated permission controls.<\/p>\n\n\n\n
\u201cNew forms of prompt injection attacks are also constantly being developed by malicious actors,\u201d Anthropic warned, highlighting the ongoing nature of the security challenge.<\/p>\n\n\n\n
The rise of AI agents that click and type could fundamentally reshape how humans interact with computers<\/h2>\n\n\n\n
The convergence of multiple major AI companies around computer-controlling agents signals a significant shift in how artificial intelligence systems will interact with existing software infrastructure. Rather than requiring businesses to adopt new AI-specific tools, these systems promise to work with whatever applications companies already use.<\/p>\n\n\n\n
This approach could dramatically lower the barriers to AI adoption while potentially displacing traditional automation vendors and system integrators. Companies that have invested heavily in custom integrations or RPA platforms may find their approaches obsoleted by general-purpose AI agents that can adapt to interface changes without reprogramming.<\/p>\n\n\n\n
For enterprise decision-makers, the technology presents both opportunity and risk. Early adopters could gain significant competitive advantages through improved automation capabilities, but the security vulnerabilities demonstrated by companies like Anthropic suggest caution may be warranted until safety measures mature.<\/p>\n\n\n\n
The limited pilot of Claude for Chrome represents just the beginning of what industry observers expect to be a rapid expansion of computer-controlling AI capabilities across the technology landscape, with implications that extend far beyond simple task automation to fundamental questions about human-computer interaction and digital security.<\/p>\n\n\n\n
As Anthropic noted in its announcement: \u201cWe believe these developments will open up new possibilities for how you work with Claude, and we look forward to seeing what you\u2019ll create.\u201d Whether those possibilities ultimately prove beneficial or problematic may depend on how successfully the industry addresses the security challenges that have already begun to emerge.<\/p>\n
\n
\n
Daily insights on business use cases with VB Daily<\/strong><\/p>\nIf you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.<\/p>\n
Read our Privacy Policy<\/p>\n
\n\t\t\t\t\tThanks for subscribing. Check out more VB newsletters here.\n\t\t\t\t<\/p>\n
An error occured.<\/p>\n<\/p><\/div>\n
\n\t\t\t\t\t

\n\t\t\t\t<\/div>\n<\/p><\/div>\n<\/div>\t\t\t<\/div>\r\n
\r\n
Source link <\/a>","protected":false},"excerpt":{"rendered":"Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Anthropic has begun testing a Chrome browser extension that allows its Claude AI assistant to take control of users\u2019 web browsers, marking the company\u2019s entry into an increasingly crowded […]<\/p>\n","protected":false},"author":1,"featured_media":3390,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-3389","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/08\/nuneybits_a_simple_hand-drawn_illustration_of_an_open_web_page__c908bef6-37a0-486e-bcb2-59cef9c8c501.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3389","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=3389"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3389\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/3390"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=3389"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=3389"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=3389"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}