{"id":2860,"date":"2025-07-29T02:30:01","date_gmt":"2025-07-29T02:30:01","guid":{"rendered":"https:\/\/violethoward.com\/new\/chinese-startup-z-ai-launches-powerful-open-source-glm-4-5-model-family-with-powerpoint-creation\/"},"modified":"2025-07-29T02:30:01","modified_gmt":"2025-07-29T02:30:01","slug":"chinese-startup-z-ai-launches-powerful-open-source-glm-4-5-model-family-with-powerpoint-creation","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/chinese-startup-z-ai-launches-powerful-open-source-glm-4-5-model-family-with-powerpoint-creation\/","title":{"rendered":"Chinese startup Z.ai launches powerful open source GLM-4.5 model family with PowerPoint creation"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<p>Another week in the summer of 2025 has begun and, continuing last week\u2019s trend, it brings more powerful Chinese open source AI models. <\/p>\n\n\n\n<p>Little-known (at least to us here in the West) Chinese startup Z.ai has introduced two new open source LLMs \u2014 <strong>GLM-4.5<\/strong> and <strong>GLM-4.5-Air<\/strong> \u2014 casting them as go-to solutions for AI reasoning, agentic behavior, and coding. 
<\/p>\n\n\n\n<p>And according to Z.ai\u2019s blog post, the models perform near the top of the pack of proprietary LLM leaders in the U.S.<\/p>\n\n\n\n<p>For example, the flagship GLM-4.5 matches or outperforms leading proprietary models like <strong>Claude 4 Sonnet<\/strong>, <strong>Claude 4 Opus<\/strong>, and <strong>Gemini 2.5 Pro<\/strong> on evaluations such as <strong>BrowseComp<\/strong>, <strong>AIME24<\/strong>, and <strong>SWE-bench Verified<\/strong>, while ranking third overall across a dozen competitive tests. <\/p>\n\n\n\n<p>Its lighter-weight sibling, GLM-4.5-Air, also performs within the top six, offering strong results relative to its smaller scale.<\/p>\n\n\n\n<p>Both models feature dual operation modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for instant response scenarios. They can <strong>automatically generate complete PowerPoint presentations from a single title or prompt<\/strong>, making them useful for meeting preparation, education, and internal reporting.<\/p>\n\n\n\n<p>They further offer creative writing, emotionally aware copywriting, and script generation to create branded content for social media and the web. 
Moreover, Z.ai says they support virtual character development and turn-based dialogue systems for customer support, roleplaying, fan engagement, or digital persona storytelling.<\/p>\n\n\n\n<p>While both models support reasoning, coding, and agentic capabilities, GLM-4.5-Air is designed for teams seeking a lighter-weight, more cost-efficient alternative with faster inference and lower resource requirements. <\/p>\n\n\n\n<p>Z.ai also lists several specialized models in the GLM-4.5 family on its API, including <strong>GLM-4.5-X<\/strong> and <strong>GLM-4.5-AirX<\/strong> for ultra-fast inference, and <strong>GLM-4.5-Flash<\/strong>, a free variant optimized for coding and reasoning tasks.<\/p>\n\n\n\n<p>They\u2019re available now to use directly on Z.ai and through the Z.ai application programming interface (API) for developers to connect to third-party apps, and their code and weights are available on Hugging Face and ModelScope. The company also provides multiple integration routes, including support for inference via vLLM and SGLang.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-licensing-and-api-pricing\">Licensing and API pricing<\/h2>\n\n\n\n<p>GLM-4.5 and GLM-4.5-Air are released under the <strong>Apache 2.0 license<\/strong>, a permissive and commercially friendly open-source license. <\/p>\n\n\n\n<p>This allows developers and organizations to freely <strong>use, modify, self-host, fine-tune, and redistribute<\/strong> the models for both research and commercial purposes.<\/p>\n\n\n\n<p>For those who don\u2019t want to download the model code or weights and self-host or deploy on their own, Z.ai\u2019s cloud-based API offers the models at the following prices. 
<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GLM-4.5<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>$0.60<\/strong> \/ <strong>$2.20 per 1 million input\/output tokens<\/strong><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>GLM-4.5-Air<\/strong>:\n<ul class=\"wp-block-list\">\n<li><strong>$0.20 \/<\/strong> <strong>$1.10 per 1M input\/output tokens<\/strong><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>A CNBC article on the models reported that z.ai would charge only $0.11 \/ $0.28 per million input\/output tokens, which is also supported by a Chinese graphic the company posted on its API documentation for the \u201cAir model.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" height=\"289\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?w=800\" alt=\"\" class=\"wp-image-3014808\" style=\"width:840px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png 1280w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=300,108 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=768,277 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=800,289 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=400,144 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=750,271 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=578,209 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/benchmark2-1.png?resize=930,336 930w\" sizes=\"(max-width: 800px) 100vw, 800px\"\/><\/figure>\n\n\n\n<p>However, this appears to be the case only for inputting up to 32,000 tokens and outputting 200 tokens at a single time. 
(Recall that tokens are the numerical units an LLM uses to represent words and word fragments, the model\u2019s native language, with each token corresponding to a word or portion of a word.) <\/p>\n\n\n\n<p>In fact, the Chinese graphic reveals far more detailed pricing for both models, broken out by batches of input\/output tokens. I\u2019ve tried to translate it below:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" height=\"315\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?w=800\" alt=\"\" class=\"wp-image-3014815\" style=\"width:839px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png 1980w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=300,118 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=768,303 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=800,315 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=1536,605 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=400,158 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=750,295 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=578,228 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/glm-4-5-table-2.png?resize=930,366 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/figure>\n\n\n\n<p>Another note: since Z.ai is based in China, those in the West who are focused on data sovereignty will want to perform due diligence against internal policies before adopting the API, as it may be subject to Chinese content restrictions. 
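<\/p>\n\n\n\n<p>To put the headline rates in perspective, here is a minimal cost-estimation sketch using the per-million-token list prices quoted above (treat these as illustrative; as the graphic shows, actual billing can vary by tier, context length, and batch):<\/p>

```python
# Rough per-request cost estimator based on the list prices quoted above
# (USD per 1 million tokens). Prices are as reported in this article;
# verify current rates against Z.ai's own pricing page before budgeting.
PRICES = {
    "glm-4.5":     {"input": 0.60, "output": 2.20},
    "glm-4.5-air": {"input": 0.20, "output": 1.10},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 32,000-token prompt with a 200-token completion on GLM-4.5
cost = estimate_cost("glm-4.5", 32_000, 200)
print(f"${cost:.4f}")  # about $0.0196
```

<p>At these list rates, even a maximal 32,000-token prompt with a short completion costs roughly two cents on the flagship model. 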
<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-competitive-performance-on-third-party-benchmarks-approaching-that-of-leading-closed-proprietary-llms\">Competitive performance on third-party benchmarks, approaching that of leading closed\/proprietary LLMs<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"549\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?w=800\" alt=\"\" class=\"wp-image-3014793\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png 4464w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=300,206 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=768,527 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=800,549 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=1536,1055 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=2048,1407 2048w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=400,275 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=750,515 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=578,397 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/bench.png?resize=930,639 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/figure>\n\n\n\n<p>GLM-4.5 ranks third across 12 industry benchmarks measuring agentic, reasoning, and coding performance\u2014trailing only OpenAI\u2019s GPT-4 and xAI\u2019s Grok 4. GLM-4.5-Air, its more compact sibling, lands in sixth position.<\/p>\n\n\n\n<p>In agentic evaluations, GLM-4.5 matches Claude 4 Sonnet in performance and exceeds Claude 4 Opus in web-based tasks. It achieves a 26.4% accuracy on the BrowseComp benchmark, compared to Claude 4 Opus\u2019s 18.8%. 
In the reasoning category, it scores competitively on tasks such as MATH 500 (98.2%), AIME24 (91.0%), and GPQA (79.1%).<\/p>\n\n\n\n<p>For coding, GLM-4.5 posts a 64.2% success rate on SWE-bench Verified and 37.5% on Terminal-Bench. In pairwise comparisons, it outperforms Qwen3-Coder with an 80.8% win rate and beats Kimi K2 in 53.9% of tasks. Its agentic coding ability is enhanced by integration with tools like Claude Code, Roo Code, and CodeGeex.<\/p>\n\n\n\n<p>The model also leads in tool-calling reliability, with a success rate of 90.6%, edging out Claude 4 Sonnet and the recently released Kimi K2.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-part-of-the-wave-of-open-source-chinese-llms\">Part of the wave of open source Chinese LLMs<\/h2>\n\n\n\n<p>The release of GLM-4.5 arrives amid a surge of competitive open-source model launches in China, most notably from <strong>Alibaba\u2019s Qwen Team<\/strong>. <\/p>\n\n\n\n<p>In the span of a single week, Qwen released <strong>four new open-source LLMs<\/strong>, including the reasoning-focused <strong>Qwen3-235B-A22B-Thinking-2507<\/strong>, which now tops or matches leading models such as OpenAI\u2019s o4-mini and Google\u2019s Gemini 2.5 Pro on reasoning benchmarks like AIME25, LiveCodeBench, and GPQA.<\/p>\n\n\n\n<p>This week, Alibaba continued the trend with the release of Wan 2.2, a powerful new open source video model. <\/p>\n\n\n\n<p>Alibaba\u2019s new models are, like Z.ai\u2019s, licensed under <strong>Apache 2.0<\/strong>, allowing commercial usage, self-hosting, and integration into proprietary systems. <\/p>\n\n\n\n<p>The broad availability and permissive licensing of Alibaba\u2019s offerings, and of Chinese startup Moonshot\u2019s Kimi K2 before them, reflect an ongoing strategic effort by Chinese AI companies to position open-source infrastructure as a viable alternative to closed U.S.-based models.<\/p>\n\n\n\n<p>It also places pressure on U.S.-based model providers\u2019 efforts to compete in open source. 
Meta has been on a hiring spree after its Llama 4 model family debuted earlier this year to a mixed response from the AI community, including a hefty dose of criticism for what some AI power users saw as benchmark gaming and inconsistent performance.<\/p>\n\n\n\n<p>Meanwhile, OpenAI co-founder and CEO Sam Altman recently announced that OpenAI\u2019s long-awaited and much-hyped frontier open source LLM \u2014 its first since before ChatGPT launched in late 2022 \u2014 would be delayed from its originally planned July release to an as-yet unspecified later date.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-architecture-and-training-lessons-revealed\">Architecture and training lessons revealed<\/h2>\n\n\n\n<p>GLM-4.5 is built with 355 billion total and 32 billion active parameters. Its counterpart, GLM-4.5-Air, offers a lighter-weight design at 106 billion total and 12 billion active parameters. <\/p>\n\n\n\n<p>Both use a Mixture-of-Experts (MoE) architecture, optimized with loss-free balance routing, sigmoid gating, and increased depth for enhanced reasoning. <\/p>\n\n\n\n<p>The self-attention block includes Grouped-Query Attention and a higher number of attention heads. A Multi-Token Prediction (MTP) layer enables speculative decoding during inference.<\/p>\n\n\n\n<p>Pre-training spans 22 trillion tokens split between general-purpose and code\/reasoning corpora. Mid-training adds 1.1 trillion tokens from repo-level code data, synthetic reasoning inputs, and long-context\/agentic sources.<\/p>\n\n\n\n<p>Z.ai\u2019s post-training process for GLM-4.5 relied upon a reinforcement learning phase powered by its in-house RL infrastructure, <em>slime<\/em>, which separates data generation and model training processes to optimize throughput on agentic tasks. 
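<\/p>\n\n\n\n<p>To make the expert-routing idea above concrete, here is a toy sketch of sigmoid-gated top-k expert selection, the general mechanism behind MoE layers. This is an illustration of the technique, not Z.ai\u2019s actual implementation:<\/p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def moe_route(token: np.ndarray, gate_w: np.ndarray, k: int = 2):
    """Toy sigmoid-gated top-k MoE routing: score every expert for this
    token, keep the k highest-scoring experts, and normalize their weights."""
    scores = sigmoid(gate_w @ token)          # one gate score per expert
    top_k = np.argsort(scores)[-k:][::-1]     # indices of the k best experts
    weights = scores[top_k] / scores[top_k].sum()
    return top_k, weights                     # mix expert outputs with these weights

rng = np.random.default_rng(0)
token = rng.normal(size=16)                   # a single token embedding
gate_w = rng.normal(size=(8, 16))             # router weights for 8 experts
experts, weights = moe_route(token, gate_w)
print(experts, weights)                       # only 2 of 8 experts are active
```

<p>Only the selected experts run for each token, which is how a model with 355 billion total parameters can activate just 32 billion parameters per forward pass. 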
<\/p>\n\n\n\n<p>Among the techniques they used were mixed-precision rollouts and adaptive curriculum learning. The former helps the model train faster and more efficiently by using lower-precision math when generating data, without sacrificing much accuracy. <\/p>\n\n\n\n<p>Meanwhile, adaptive curriculum learning means the model starts with easier tasks and gradually moves to harder ones, helping it master more complex tasks over time.<\/p>\n\n\n\n<p>GLM-4.5\u2019s architecture prioritizes computational efficiency. According to CNBC, Z.ai CEO <strong>Zhang Peng<\/strong> stated that the model runs on just eight <strong>Nvidia H20 GPUs<\/strong> \u2014 custom silicon designed for the Chinese market to comply with U.S. export controls. That\u2019s roughly half the hardware requirement of DeepSeek\u2019s comparable models.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-interactive-demos\">Interactive demos<\/h2>\n\n\n\n<p>Z.ai highlights full-stack development, slide creation, and interactive artifact generation as demonstration areas in its blog post.<\/p>\n\n\n\n<p>Examples include a Flappy Bird clone, a Pok\u00e9mon Pok\u00e9dex web app, and slide decks built from structured documents or web queries. 
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" height=\"600\" width=\"754\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32%E2%80%AFPM.png?w=754\" alt=\"\" class=\"wp-image-3014798\" style=\"width:837px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png 1018w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=300,239 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=768,611 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=754,600 754w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=400,318 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=750,597 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=578,460 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-28-at-6.22.32\u202fPM.png?resize=930,740 930w\" sizes=\"auto, (max-width: 754px) 100vw, 754px\"\/><\/figure>\n\n\n\n<p>Users can interact with these features on the Z.ai chat platform or through API integration.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-company-background-and-market-position\">Company background and market position<\/h2>\n\n\n\n<p>Z.ai was founded in 2019 under the name Zhipu, and has since grown into one of China\u2019s most prominent AI startups, according to CNBC.<\/p>\n\n\n\n<p>The company has raised over $1.5 billion from investors including Alibaba, Tencent, Qiming Venture Partners, and municipal funds from Hangzhou and Chengdu, with additional backing from Aramco-linked Prosperity7 
Ventures.<\/p>\n\n\n\n<p>Its GLM-4.5 launch coincides with the World Artificial Intelligence Conference in Shanghai, where multiple Chinese firms showcased advancements. Z.ai was also named in a June OpenAI report highlighting Chinese progress in AI, and has since been added to a U.S. entity list limiting business with American firms.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-it-means-for-enterprise-technical-decision-makers\">What it means for enterprise technical decision-makers<\/h2>\n\n\n\n<p>For senior AI engineers, data engineers, and AI orchestration leads tasked with building, deploying, or scaling language models in production, the GLM-4.5 family\u2019s release under the <strong>Apache 2.0 license<\/strong> presents a meaningful shift in options. <\/p>\n\n\n\n<p>The model offers performance that rivals top proprietary systems across reasoning, coding, and agentic benchmarks \u2014 yet comes with full weight access, commercial usage rights, and flexible deployment paths, including cloud, private, or on-prem environments.<\/p>\n\n\n\n<p>For those managing LLM lifecycles \u2014 whether leading model fine-tuning, orchestrating multi-stage pipelines, or integrating models with internal tools \u2014 GLM-4.5 and GLM-4.5-Air reduce barriers to testing and scaling.<\/p>\n\n\n\n<p>The models support standard OpenAI-style interfaces and tool-calling formats, making them easier to evaluate in sandboxed environments or to drop into existing agent frameworks.<\/p>\n\n\n\n<p>GLM-4.5 also supports <strong>streaming output, context caching, and structured JSON responses<\/strong>, enabling smoother integration with enterprise systems and real-time interfaces. For teams building autonomous tools, its deep thinking mode provides more precise control over multi-step reasoning behavior.<\/p>\n\n\n\n<p>For teams under budget constraints or those seeking to avoid vendor lock-in, the pricing structure undercuts major alternatives like DeepSeek and Kimi K2. 
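<\/p>\n\n\n\n<p>As a sketch of what the OpenAI-style interface and tool-calling format mentioned above can look like in practice (the tool schema and field values here are hypothetical illustrations, not a confirmed Z.ai specification):<\/p>

```python
import json

# Sketch of an OpenAI-style chat completion request body, the wire format
# the article says these models accept. The tool definition below is a
# hypothetical example; consult the provider's docs for exact details.
request = {
    "model": "glm-4.5",
    "messages": [
        {"role": "user", "content": "What's the weather in Shanghai?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",            # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": True,   # the article notes streaming output is supported
}

payload = json.dumps(request)
# POST `payload` to the provider's /chat/completions endpoint with your API key.
print(payload[:60])
```

<p>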
This matters for organizations where usage volume, long-context tasks, or data sensitivity make open deployment a strategic necessity.<\/p>\n\n\n\n<p>For professionals in AI infrastructure and orchestration, such as those implementing CI\/CD pipelines, monitoring models in production, or managing GPU clusters, GLM-4.5\u2019s support for vLLM, SGLang, and mixed-precision inference aligns with current best practices in efficient, scalable model serving. Combined with open-source RL infrastructure (slime) and a modular training stack, the model\u2019s design offers flexibility for tuning or extending in domain-specific environments.<\/p>\n\n\n\n<p>In short, GLM-4.5\u2019s launch gives enterprise teams a viable, high-performing foundation model they can <strong>control, adapt, and scale<\/strong>, without being tied to proprietary APIs or pricing structures. It\u2019s a compelling option for teams balancing innovation, performance, and operational constraints.<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/chinese-startup-z-ai-launches-powerful-open-source-glm-4-5-model-family-with-powerpoint-creation\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Another week in the summer of 2025 has begun, and in a continuation of the trend from last week, with it arrives more powerful Chinese open source AI models. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2861,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-2860","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/07\/benchmark2-1.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2860","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=2860"}],"version-history":[{"count":0,"href":"https:\/\/violethow
ard.com\/new\/wp-json\/wp\/v2\/posts\/2860\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/2861"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=2860"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=2860"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=2860"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}