{"id":3268,"date":"2025-08-22T22:06:46","date_gmt":"2025-08-22T22:06:46","guid":{"rendered":"https:\/\/violethoward.com\/new\/mcp-universe-benchmark-shows-gpt-5-fails-more-than-half-of-real-world-orchestration-tasks\/"},"modified":"2025-08-22T22:06:46","modified_gmt":"2025-08-22T22:06:46","slug":"mcp-universe-benchmark-shows-gpt-5-fails-more-than-half-of-real-world-orchestration-tasks","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/mcp-universe-benchmark-shows-gpt-5-fails-more-than-half-of-real-world-orchestration-tasks\/","title":{"rendered":"MCP-Universe benchmark shows GPT-5 fails more than half of real-world orchestration tasks"},"content":{"rendered":" \r\n
The adoption of interoperability standards such as the Model Context Protocol (MCP) can give enterprises insight into how agents and models function outside their walled confines. However, many benchmarks fail to capture real-life interactions with MCP.

Salesforce AI Research has developed a new open-source benchmark, MCP-Universe, that tracks LLMs as they interact with MCP servers in the real world, arguing that this paints a better picture of how models behave in real time with the tools enterprises actually use. In its initial testing, Salesforce found that models like OpenAI's recently released GPT-5 are strong but still fall short in real-life scenarios.

"Existing benchmarks predominantly focus on isolated aspects of LLM performance, such as instruction following, math reasoning, or function calling, without providing a comprehensive assessment of how models interact with real-world MCP servers across diverse scenarios," Salesforce said in a paper.

MCP-Universe captures model performance across tool usage, multi-turn tool calls, long context windows and large tool spaces. It is grounded in existing MCP servers with access to actual data sources and environments.

Junnan Li, director of AI research at Salesforce, told VentureBeat that many models "still face limitations that hold them back on enterprise-grade tasks."

"Two of the biggest are: long-context challenges, where models can lose track of information or struggle to reason consistently when handling very long or complex inputs," Li said. "And unknown-tool challenges, where models often aren't able to seamlessly use unfamiliar tools or systems in the way humans can adapt on the fly. This is why it's crucial not to take a DIY approach with a single model to power agents alone, but instead to rely on a platform that combines data context, enhanced reasoning, and trust guardrails to truly meet the needs of enterprise AI."

MCP-Universe joins other proposed MCP-based benchmarks, such as MCP-Radar from the University of Massachusetts Amherst and Xi'an Jiaotong University, and MCPWorld from the Beijing University of Posts and Telecommunications. It also builds on MCPEval, an agent-focused evaluation framework Salesforce released in July. Li said the biggest difference between the two is that MCPEval relies on synthetic tasks, while MCP-Universe uses real-world ones.

How it works

MCP-Universe evaluates how well each model performs a series of tasks that mimic those undertaken by enterprises. Salesforce designed the benchmark around six core domains enterprises rely on: location navigation, repository management, financial analysis, 3D design, browser automation and web search. It connects to 11 MCP servers for a total of 231 tasks.
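That grounding in live servers is what separates MCP-Universe from synthetic harnesses, so it is worth seeing roughly what the plumbing looks like. The snippet below is a minimal sketch, not code from the MCP-Universe release: it assumes the official `mcp` Python SDK and a repository-management server launched with npx (the `@modelcontextprotocol/server-github` package is an assumed, illustrative choice), and it shows how an agent harness connects to one server and enumerates the tools a model is allowed to call.

```python
# Illustrative only: connect to one MCP server and list its tools.
# Assumes the official `mcp` Python SDK and a GitHub MCP server launched
# via npx; neither detail is taken from the MCP-Universe paper or codebase.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-github"],  # hypothetical server choice
)

async def list_repo_tools() -> None:
    # stdio_client spawns the server process and yields read/write streams.
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(list_repo_tools())
```

In MCP-Universe's setting, it is the LLM, not a human, that decides which of these advertised tools to call and in what order, which is exactly where the benchmark probes for failures.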
Salesforce said it had to design new MCP tasks that reflect real use cases. For each domain, the researchers created four to five kinds of tasks they think LLMs can complete. For example, one goal assigned to the models involved planning a route, identifying the optimal stops and then locating the destination.

Each model is evaluated on how it completes the tasks. Li and his team opted for an execution-based evaluation paradigm rather than the more common LLM-as-a-judge setup. The researchers noted that the LLM-as-a-judge paradigm "is not well-suited for our MCP-Universe scenario, since some tasks are designed to use real-time data, while the knowledge of the LLM judge is static."

Salesforce researchers used three types of evaluators: format evaluators, which check whether agents and models follow format requirements; static evaluators, which assess correctness against answers that stay stable over time; and dynamic evaluators, which handle answers that fluctuate, such as flight prices or GitHub issues.

"MCP-Universe focuses on creating challenging real-world tasks with execution-based evaluators, which can stress-test the agent in complex scenarios. Furthermore, MCP-Universe offers an extendable framework/codebase for building and evaluating agents," Li said.
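That three-way split maps naturally onto a small evaluator interface. The sketch below is a hypothetical rendering of the idea, not code from the MCP-Universe repository; every class, field and function name here is invented for illustration.

```python
# Hypothetical sketch of execution-based grading in the spirit of
# MCP-Universe's format / static / dynamic evaluator split.
from dataclasses import dataclass
from typing import Callable, Protocol
import json


class Evaluator(Protocol):
    def score(self, agent_output: str) -> bool: ...


@dataclass
class FormatEvaluator:
    """Passes only if the agent's final answer is valid JSON with the required keys."""
    required_keys: tuple[str, ...]

    def score(self, agent_output: str) -> bool:
        try:
            payload = json.loads(agent_output)
        except json.JSONDecodeError:
            return False
        return all(key in payload for key in self.required_keys)


@dataclass
class StaticEvaluator:
    """Compares against a ground-truth answer that does not change over time."""
    expected: str

    def score(self, agent_output: str) -> bool:
        return self.expected.strip().lower() in agent_output.strip().lower()


@dataclass
class DynamicEvaluator:
    """Re-fetches the live answer (e.g., a current flight price) at grading time."""
    fetch_ground_truth: Callable[[], str]

    def score(self, agent_output: str) -> bool:
        return self.fetch_ground_truth().strip() in agent_output


def grade(agent_output: str, evaluators: list[Evaluator]) -> bool:
    # A task passes only if every evaluator attached to it passes.
    return all(ev.score(agent_output) for ev in evaluators)
```

The dynamic case is the reason execution-based grading beats LLM-as-a-judge for these tasks: a frozen judge cannot know today's flight price, but a checker that re-queries the live source at grading time can.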
Even the big models have trouble

To test MCP-Universe, Salesforce evaluated several popular proprietary and open-source models: Grok-4 from xAI; Anthropic's Claude 4 Sonnet and Claude 3.7 Sonnet; OpenAI's GPT-5, o4-mini, o3, GPT-4.1, GPT-4o and gpt-oss; Google's Gemini 2.5 Pro and Gemini 2.5 Flash; GLM-4.5 from Z.ai; Moonshot's Kimi-K2; Qwen's Qwen3-Coder and Qwen3-235B-A22B-Instruct-2507; and DeepSeek's DeepSeek-V3-0324. Each model tested had at least 120B parameters.

In its testing, Salesforce found GPT-5 had the best overall success rate and was especially strong on financial analysis tasks. Grok-4 followed, beating every other model on browser automation, and Claude 4 Sonnet rounded out the top three, although it did not post higher numbers than either of the models ahead of it in any category. Among the open-source models, GLM-4.5 performed best.

However, MCP-Universe also showed that the models struggle with long contexts, particularly in location navigation, browser automation and financial analysis, where efficiency fell significantly. Performance also drops the moment the LLMs encounter unknown tools. Overall, the models failed to complete more than half of the tasks enterprises typically perform.

"These findings highlight that current frontier LLMs still fall short in reliably executing tasks across diverse real-world MCP tasks. Our MCP-Universe benchmark, therefore, provides a challenging and necessary testbed for evaluating LLM performance in areas underserved by existing benchmarks," the paper said.

Li told VentureBeat that he hopes enterprises will use MCP-Universe to gain a deeper understanding of where agents and models fail on tasks, so they can improve either their frameworks or the implementation of their MCP tools.