{"id":2222,"date":"2025-07-01T21:37:30","date_gmt":"2025-07-01T21:37:30","guid":{"rendered":"https:\/\/violethoward.com\/new\/the-hidden-costs-of-ai-securing-inference-in-an-age-of-attacks\/"},"modified":"2025-07-01T21:37:30","modified_gmt":"2025-07-01T21:37:30","slug":"the-hidden-costs-of-ai-securing-inference-in-an-age-of-attacks","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/the-hidden-costs-of-ai-securing-inference-in-an-age-of-attacks\/","title":{"rendered":"The Hidden Costs of AI: Securing Inference in an Age of Attacks"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><em>This article is part of VentureBeat\u2019s special issue, \u201cThe Real Cost of AI: Performance, Efficiency and ROI at Scale.\u201d\u00a0Read more\u00a0from this special issue.<\/em><\/p>\n<p>AI\u2019s promise is undeniable, but so are its blindsiding security costs at the inference layer. New attacks targeting AI\u2019s operational side are quietly inflating budgets, jeopardizing regulatory compliance and eroding customer trust, all of which threaten the return on investment (ROI) and total cost of ownership of enterprise AI deployments.<\/p>\n<p>AI has captivated the enterprise with its potential for game-changing insights and efficiency gains. Yet, as organizations rush to operationalize their models, a sobering reality is emerging: The inference stage, where AI translates investment into real-time business value, is under siege. This critical juncture is driving up the total cost of ownership (TCO) in ways that initial business cases failed to predict.<\/p>\n<p>Security executives and CFOs who greenlit AI projects for their transformative upside are now grappling with the hidden expenses of defending these systems. Adversaries have discovered that inference is where AI \u201ccomes alive\u201d for a business, and it\u2019s precisely where they can inflict the most damage. 
The result is a cascade of cost inflation: Breach containment can exceed $5 million per incident in regulated sectors, compliance retrofits run into the hundreds of thousands and trust failures can trigger stock hits or contract cancellations that decimate projected AI ROI. Without cost containment at inference, AI becomes an ungovernable budget wildcard.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-the-unseen-battlefield-ai-inference-and-exploding-tco\">The unseen battlefield: AI inference and exploding TCO<\/h2>\n<p>AI inference is rapidly becoming the \u201cnext insider risk,\u201d Cristian Rodriguez, field CTO for the Americas at CrowdStrike, told the audience at RSAC 2025.<\/p>\n<p>Other technology leaders echo this perspective and see a common blind spot in enterprise strategy. Vineet Arora, CTO at WinWire, notes that many organizations \u201cfocus intensely on securing the infrastructure around AI while inadvertently sidelining inference.\u201d This oversight, he explains, \u201cleads to underestimated costs for continuous monitoring systems, real-time threat analysis and rapid patching mechanisms.\u201d<\/p>\n<p>Another critical blind spot, according to Steffen Schreier, SVP of product and portfolio at Telesign, a Proximus Global company, is \u201cthe assumption that third-party models are thoroughly vetted and inherently safe to deploy.\u201d<\/p>\n<p>He warned that in reality, \u201cthese models often haven\u2019t been evaluated against an organization\u2019s specific threat landscape or compliance needs,\u201d which can lead to harmful or non-compliant outputs that erode brand trust. Schreier told VentureBeat that \u201cinference-time vulnerabilities \u2014 like prompt injection, output manipulation or context leakage \u2014 can be exploited by attackers to produce harmful, biased or non-compliant outputs. 
This poses serious risks, especially in regulated industries, and can quickly erode brand trust.\u201d<\/p>\n<p>When inference is compromised, the fallout hits multiple fronts of TCO. Cybersecurity budgets spiral, regulatory compliance is jeopardized and customer trust erodes. Executive sentiment reflects this growing concern. In CrowdStrike\u2019s State of AI in Cybersecurity survey, only 39% of respondents felt generative AI\u2019s rewards clearly outweigh the risks, while 40% judged them comparable. This ambivalence underscores a critical finding: Safety and privacy controls have become top requirements for new gen AI initiatives, with a striking 90% of organizations now implementing or developing policies to govern AI adoption. The top concerns are no longer abstract; 26% cite sensitive data exposure and 25% fear adversarial attacks as key risks.<\/p>\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" height=\"447\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?w=800\" alt=\"\" class=\"wp-image-3010705\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg 1499w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=300,168 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=768,429 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=800,447 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=400,224 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=750,419 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=578,323 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-1-2.jpg?resize=930,520 930w\" sizes=\"(max-width: 800px) 100vw, 800px\"\/><\/figure>\n<p><em>Security leaders exhibit mixed sentiments regarding the overall safety of 
gen AI, with top concerns centered on the exposure of sensitive data to LLMs (26%) and adversarial attacks on AI tools (25%). <\/em><\/p>\n<h2 class=\"wp-block-heading\" id=\"h-anatomy-of-an-inference-attack\">Anatomy of an inference attack<\/h2>\n<p>The unique attack surface exposed by running AI models is being aggressively probed by adversaries. To defend against this, Schreier advises, \u201cit is critical to treat every input as a potential hostile attack.\u201d Frameworks like the OWASP Top 10 for Large Language Model (LLM) Applications catalogue these threats, which are no longer theoretical but active attack vectors impacting the enterprise:<\/p>\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Prompt injection (LLM01) and insecure output handling (LLM02):<\/strong> Attackers manipulate models via inputs or outputs. Malicious inputs can cause the model to ignore instructions or divulge proprietary code. Insecure output handling occurs when an application blindly trusts AI responses, allowing attackers to inject malicious scripts into downstream systems.<\/li>\n<li><strong>Training data poisoning (LLM03) and model poisoning:<\/strong> Attackers corrupt training data by sneaking in tainted samples, planting hidden triggers. Later, an innocuous input can unleash malicious outputs.<\/li>\n<li><strong>Model denial of service (LLM04):<\/strong> Adversaries can overwhelm AI models with complex inputs, consuming excessive resources to slow or crash them, resulting in direct revenue loss.<\/li>\n<li><strong>Supply chain and plugin vulnerabilities (LLM05 and LLM07):<\/strong> The AI ecosystem is built on shared components. 
<span style=\"box-sizing: border-box; margin: 0px; padding: 0px;\">For instance, a vulnerability in the Flowise LLM tool<\/span> exposed private AI dashboards and sensitive data, including GitHub tokens and OpenAI API keys, on 438 servers.<\/li>\n<li><strong>Sensitive information disclosure (LLM06):<\/strong> Clever querying can extract confidential information from an AI model if it was part of its training data or is present in the current context.<\/li>\n<li><strong>Excessive agency (LLM08) and Overreliance (LLM09):<\/strong> Granting an AI agent unchecked permissions to execute trades or modify databases is a recipe for disaster if manipulated.<\/li>\n<li><strong>Model theft (LLM10):<\/strong> An organization\u2019s proprietary models can be stolen through sophisticated extraction techniques \u2014 a direct assault on its competitive advantage.<\/li>\n<\/ol>\n<p>Underpinning these threats are foundational security failures. Adversaries often log in with leaked credentials. In early 2024, 35% of cloud intrusions involved valid user credentials, and new, unattributed cloud attack attempts spiked 26%, according to the CrowdStrike 2025 Global Threat Report. 
A deepfake campaign resulted in a fraudulent $25.6 million transfer, while AI-generated phishing emails have demonstrated a 54% click-through rate, more than four times higher than those written by humans.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"451\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?w=800\" alt=\"\" class=\"wp-image-3010706\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg 1480w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=300,169 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=768,433 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=800,450 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=400,225 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=750,423 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=578,326 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-2.jpg?resize=930,524 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/figure>\n<p><em>The OWASP framework illustrates how various LLM attack vectors target different components of an AI application, from prompt injection at the user interface to data poisoning in the training models and sensitive information disclosure from the datastore. <\/em><\/p>\n<h2 class=\"wp-block-heading\" id=\"h-back-to-basics-foundational-security-for-a-new-era\">Back to basics: Foundational security for a new era<\/h2>\n<p>Securing AI requires a disciplined return to security fundamentals \u2014 but applied through a modern lens. \u201cI think that we need to take a step back and ensure that the foundation and the fundamentals of security are still applicable,\u201d Rodriguez argued. 
\u201cThe same approach you would have to securing an OS is the same approach you would have to securing that AI model.\u201d<\/p>\n<p>This means enforcing unified protection across every attack path, with rigorous data governance, robust cloud security posture management (CSPM), and identity-first security through cloud infrastructure entitlement management (CIEM) to lock down the cloud environments where most AI workloads reside. As identity becomes the new perimeter, AI systems must be governed with the same strict access controls and runtime protections as any other business-critical cloud asset.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-the-specter-of-shadow-ai-unmasking-hidden-risks\">The specter of \u201cshadow AI\u201d: Unmasking hidden risks<\/h2>\n<p>Shadow AI, or the unsanctioned use of AI tools by employees, creates a massive, unknown attack surface. A financial analyst using a free online LLM for confidential documents can inadvertently leak proprietary data. As Rodriguez warned, queries to public models can \u201cbecome another\u2019s answers.\u201d Addressing this requires a combination of clear policy, employee education, and technical controls like AI security posture management (AI-SPM) to discover and assess all AI assets, sanctioned or not.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-fortifying-the-future-actionable-defense-strategies\">Fortifying the future: Actionable defense strategies<\/h2>\n<p>While adversaries have weaponized AI, the tide is beginning to turn. 
As Mike Riemer, Field CISO at Ivanti, observes, defenders are beginning to \u201charness the full potential of AI for cybersecurity purposes to analyze vast amounts of data collected from diverse systems.\u201d This proactive stance is essential for building a robust defense, which requires several key strategies:<\/p>\n<p><strong>Budget for inference security from day zero:<\/strong> The first step, according to Arora, is to begin with \u201ca comprehensive risk-based assessment.\u201d He advises mapping the entire inference pipeline to identify every data flow and vulnerability. \u201cBy linking these risks to possible financial impacts,\u201d he explains, \u201cwe can better quantify the cost of a security breach\u201d and build a realistic budget.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>To structure this more systematically, CISOs and CFOs should start with a risk-adjusted ROI model. One approach:<\/p>\n<p><em>Security ROI = (estimated breach cost \u00d7 annual risk probability) \u2013 total security investment<\/em><\/p>\n<p>For example, if an LLM inference attack could result in a $5 million loss and the likelihood is 10%, the expected loss is $500,000. A $350,000 investment in inference-stage defenses would yield a net gain of $150,000 in avoided risk. This model enables scenario-based budgeting tied directly to financial outcomes.<\/p>\n<\/blockquote>\n<p><strong>Enterprises allocating less than 8 to 12% of their AI project budgets to inference-stage security are often blindsided later by breach recovery and compliance costs<\/strong>. A Fortune 500 healthcare provider CIO, interviewed by VentureBeat and requesting anonymity, now allocates 15% of their total gen AI budget to post-training risk management, including runtime monitoring, AI-SPM platforms and compliance audits. 
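The risk-adjusted ROI model above reduces to a few lines of code. This sketch reuses the worked example from the formula ($5 million breach cost, 10% annual likelihood, $350,000 investment); the function name is ours, for illustration:

```python
def security_roi(breach_cost: float, annual_risk_prob: float, investment: float) -> float:
    """Risk-adjusted security ROI: expected loss avoided minus the security spend."""
    expected_loss = breach_cost * annual_risk_prob
    return expected_loss - investment

# The worked example: a $5M potential loss at 10% likelihood vs. a $350K defense budget.
net_gain = security_roi(5_000_000, 0.10, 350_000)
print(f"Net avoided risk: ${net_gain:,.0f}")  # Net avoided risk: $150,000
```

Running the same function across a range of breach probabilities gives CFOs the scenario-based budget sensitivity the model is meant to support.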
A practical budgeting model should allocate across four cost centers: runtime monitoring (35%), adversarial simulation (25%), compliance tooling (20%) and user behavior analytics (20%).<\/p>\n<p>Here\u2019s a sample allocation snapshot for a $2 million enterprise AI deployment based on VentureBeat\u2019s ongoing interviews with CFOs, CIOs and CISOs actively budgeting to support AI projects:<\/p>\n<figure class=\"wp-block-table\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<th>Budget category<\/th>\n<th>Allocation<\/th>\n<th>Use case example<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Runtime monitoring<\/td>\n<td>$300,000<\/td>\n<td>Behavioral anomaly detection (API spikes)<\/td>\n<\/tr>\n<tr>\n<td>Adversarial simulation<\/td>\n<td>$200,000<\/td>\n<td>Red team exercises to probe prompt injection<\/td>\n<\/tr>\n<tr>\n<td>Compliance tooling<\/td>\n<td>$150,000<\/td>\n<td>EU AI Act alignment, SOC 2 inference validations<\/td>\n<\/tr>\n<tr>\n<td>User behavior analytics<\/td>\n<td>$150,000<\/td>\n<td>Detect misuse patterns in internal AI use<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>These investments reduce downstream breach remediation costs, regulatory penalties and SLA violations, all helping to stabilize AI TCO.<\/p>\n<p><strong>Implement runtime monitoring and validation: <\/strong>Begin by tuning anomaly detection to detect behaviors at the inference layer, such as abnormal API call patterns, output entropy shifts or query frequency spikes. Vendors like DataDome and Telesign now offer real-time behavioral analytics tailored to gen AI misuse signatures.<\/p>\n<p>Teams should monitor entropy shifts in outputs, track token irregularities in model responses and watch for atypical frequency in queries from privileged accounts. 
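One of those signals, an output entropy shift, can be approximated by comparing each response's token-distribution entropy against a per-model baseline. This is a minimal sketch; the baseline value and drift threshold below are hypothetical placeholders that would be tuned per deployment:

```python
import math
from collections import Counter

def shannon_entropy(tokens: list[str]) -> float:
    """Shannon entropy (bits) of a token sequence's empirical distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_alert(tokens: list[str], baseline: float, max_drift: float = 1.5) -> bool:
    """Flag a response whose entropy drifts far from the model's established baseline."""
    return abs(shannon_entropy(tokens) - baseline) > max_drift

# A degenerate, highly repetitive output (a possible manipulation or model-DoS
# symptom) sits far below a typical prose baseline of ~4 bits per token.
print(entropy_alert(["ok"] * 50, baseline=4.0))  # True
```

In practice the alert would fire into the SIEM pipeline alongside API-call and query-frequency anomalies rather than stand alone.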
Effective setups include streaming logs into SIEM tools (such as Splunk or Datadog) with tailored gen AI parsers and establishing real-time alert thresholds for deviations from model baselines.<\/p>\n<p><strong>Adopt a zero-trust framework for AI:<\/strong> Zero-trust is non-negotiable for AI environments. It operates on the principle of \u201cnever trust, always verify.\u201d By adopting this architecture, Riemer notes, organizations can ensure that \u201conly authenticated users and devices gain access to sensitive data and applications, regardless of their physical location.\u201d<\/p>\n<p>Inference-time zero-trust should be enforced at multiple layers:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Identity<\/strong>: Authenticate both human and service actors accessing inference endpoints.<\/li>\n<li><strong>Permissions<\/strong>: Scope LLM access using role-based access control (RBAC) with time-boxed privileges.<\/li>\n<li><strong>Segmentation<\/strong>: Isolate inference microservices with service mesh policies and enforce least-privilege defaults through cloud workload protection platforms (CWPPs).<\/li>\n<\/ul>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"446\" width=\"800\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?w=800\" alt=\"\" class=\"wp-image-3010708\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg 1490w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=300,167 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=768,428 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=800,446 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=400,223 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=750,418 750w, 
https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=578,322 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/06\/figure-3.jpg?resize=930,518 930w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\"\/><\/figure>\n<p><em>A proactive AI security strategy requires a holistic approach: visibility and supply chain security during development, secure infrastructure and data, and robust safeguards that protect AI systems at runtime in production. <\/em><\/p>\n<h2 class=\"wp-block-heading\" id=\"h-protecting-ai-roi-a-ciso-cfo-collaboration-model\">Protecting AI ROI: A CISO\/CFO collaboration model<\/h2>\n<p>Protecting the ROI of enterprise AI requires actively modeling the financial upside of security. Start with a baseline ROI projection, then layer in cost-avoidance scenarios for each security control. Mapping cybersecurity investments to avoided costs, including incident remediation, SLA violations and customer churn, turns risk reduction into a measurable ROI gain.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Enterprises should model three ROI scenarios (baseline, with security investment and post-breach recovery) to show cost avoidance clearly. For example, a telecom deploying output validation prevented 12,000-plus misrouted queries per month, saving $6.3 million annually in SLA penalties and call center volume. Tie investments to avoided costs across breach remediation, SLA non-compliance, brand impact and customer churn to build a defensible ROI argument to CFOs.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-checklist-cfo-grade-roi-protection-model\">Checklist: CFO-grade ROI protection model<\/h2>\n<p>CFOs need to communicate with clarity on how security spending protects the bottom line. 
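The three-scenario comparison described above (baseline, with security investment, post-breach recovery) can be sketched as a simple expected-cost calculation; every dollar figure and probability below is a hypothetical placeholder:

```python
def three_year_cost(annual_security_spend: float, breach_cost: float,
                    annual_breach_prob: float, years: int = 3) -> float:
    """Expected multi-year cost: security spend plus probability-weighted breach losses."""
    return years * (annual_security_spend + annual_breach_prob * breach_cost)

# Hypothetical inputs for illustration only.
baseline = three_year_cost(0, breach_cost=5_000_000, annual_breach_prob=0.10)
protected = three_year_cost(350_000, breach_cost=5_000_000, annual_breach_prob=0.02)
reactive = baseline + 5_000_000  # baseline exposure plus one realized breach

print(f"Baseline:        ${baseline:,.0f}")   # $1,500,000
print(f"With security:   ${protected:,.0f}")  # $1,350,000
print(f"Breach-reactive: ${reactive:,.0f}")   # $6,500,000
```

Showing the protected scenario as the lowest expected cost over the horizon is what turns the security line item into a defensible boardroom argument.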
To safeguard AI ROI at the inference layer, security investments must be modeled like any other strategic capital allocation: With direct links to TCO, risk mitigation and revenue preservation.<\/p>\n<p>Use this checklist to make AI security investments defensible in the boardroom \u2014 and actionable in the budget cycle.<\/p>\n<ol class=\"wp-block-list\">\n<li>Link every AI security spend to a projected TCO reduction category (compliance, breach remediation, SLA stability).<\/li>\n<li>Run cost-avoidance simulations with 3-year horizon scenarios: baseline, protected and breach-reactive.<\/li>\n<li>Quantify financial risk from SLA violations, regulatory fines, brand trust erosion and customer churn.<\/li>\n<li>Co-model inference-layer security budgets with both CISOs and CFOs to break organizational silos.<\/li>\n<li>Present security investments as growth enablers, not overhead, showing how they stabilize AI infrastructure for sustained value capture.<\/li>\n<\/ol>\n<p>This model doesn\u2019t just defend AI investments; it defends budgets and brands and can protect and grow boardroom credibility.<\/p>\n<\/blockquote>\n<h2 class=\"wp-block-heading\" id=\"h-concluding-analysis-a-strategic-imperative\">Concluding analysis: A strategic imperative<\/h2>\n<p>CISOs must present AI risk management as a business enabler, quantified in terms of ROI protection, brand trust preservation and regulatory stability. As AI inference moves deeper into revenue workflows, protecting it isn\u2019t a cost center; it\u2019s the control plane for AI\u2019s financial sustainability. Strategic security investments at the infrastructure layer must be justified with financial metrics that CFOs can act on.<\/p>\n<p>The path forward requires organizations to balance investment in AI innovation with an equal investment in its protection. This necessitates a new level of strategic alignment. 
As Ivanti CIO Robert Grazioli told VentureBeat: \u201cCISO and CIO alignment will be critical to effectively safeguard modern businesses.\u201d This collaboration is essential to break down the data and budget silos that undermine security, allowing organizations to manage the true cost of AI and turn a high-risk gamble into a sustainable, high-ROI engine of growth.<\/p>\n<p>Telesign\u2019s Schreier added: \u201cWe view AI inference risks through the lens of digital identity and trust. We embed security across the full lifecycle of our AI tools \u2014 using access controls, usage monitoring, rate limiting and behavioral analytics to detect misuse and protect both our customers and their end users from emerging threats.\u201d<\/p>\n<p>He continued: \u201cWe approach output validation as a critical layer of our AI security architecture, particularly because many inference-time risks don\u2019t stem from how a model is trained, but how it behaves in the wild.\u201d<\/p>\n<\/p><\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/venturebeat.com\/security\/how-runtime-attacks-turn-profitable-ai-into-budget-black-holes\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This article is part of VentureBeat\u2019s special issue, \u201cThe Real Cost of AI: Performance, Efficiency and ROI at Scale.\u201d\u00a0Read more\u00a0from this special issue. AI\u2019s promise is undeniable, but so are its blindsiding security costs at the inference layer. 
New attacks targeting AI\u2019s operational side are quietly inflating budgets, jeopardizing regulatory compliance and eroding customer trust, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2223,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-2222","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/07\/teal-The-AI-inference-trap_-How-runtime-attacks-turn-profitable-AI-into-budget-black-holes.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2222","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=2222"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/2222\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/2223"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=2222"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=2222"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=2222"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}