{"id":1023,"date":"2025-04-06T04:49:30","date_gmt":"2025-04-06T04:49:30","guid":{"rendered":"https:\/\/violethoward.com\/new\/metas-answer-to-deepseek-is-here-llama-4-launches-with-long-context-scout-and-maverick-models-and-2t-parameter-behemoth-on-the-way\/"},"modified":"2025-04-06T04:49:30","modified_gmt":"2025-04-06T04:49:30","slug":"metas-answer-to-deepseek-is-here-llama-4-launches-with-long-context-scout-and-maverick-models-and-2t-parameter-behemoth-on-the-way","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/metas-answer-to-deepseek-is-here-llama-4-launches-with-long-context-scout-and-maverick-models-and-2t-parameter-behemoth-on-the-way\/","title":{"rendered":"Meta&#8217;s answer to DeepSeek is here: Llama 4 launches with long context Scout and Maverick models, and 2T parameter Behemoth on the way!"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>The entire AI landscape shifted back in January 2025 after a then little-known Chinese AI startup DeepSeek (a subsidiary of the Hong Kong-based quantitative analysis firm High-Flyer Capital Management) launched its powerful open source language reasoning model DeepSeek R1 publicly to the world, besting U.S. giants such as Meta. 
<\/p>\n\n\n\n<p>As DeepSeek usage spread rapidly among researchers and enterprises, Meta was reportedly sent into panic mode upon learning that this new R1 model had been trained for a fraction of the cost of many other leading models \u2014 as little as several million dollars, roughly what it pays some of its own AI team leaders \u2014 yet outclassed them.<\/p>\n\n\n\n<p>Meta\u2019s whole generative AI strategy had until that point been predicated on releasing best-in-class open source models under its brand name \u201cLlama\u201d for researchers and companies to build upon freely (at least, if they had fewer than 700 million monthly users, at which point they are supposed to contact Meta for special paid licensing terms). <\/p>\n\n\n\n<p>Yet DeepSeek R1\u2019s astonishingly good performance on a far smaller budget had allegedly shaken the company leadership and forced some kind of reckoning, with the last version of Llama, 3.3, having been released just a month prior in December 2024 yet already looking outdated.<\/p>\n\n\n\n<p>Now we know the fruits of that reckoning: today, Meta founder and CEO Mark Zuckerberg took to his Instagram account to announce a new Llama 4 series of models, with two of them \u2014 the 400-billion parameter Llama 4 Maverick and 109-billion parameter Llama 4 Scout \u2014 available today for developers to download and begin using or fine-tuning on llama.com and AI code sharing community Hugging Face.<\/p>\n\n\n\n<p>A massive 2-trillion parameter Llama 4 Behemoth is also being previewed today, though Meta\u2019s blog post on the releases said it was still being trained, and gave no indication of when it might be released. 
(Recall that parameters refer to the settings that govern a model\u2019s behavior, and that, generally, more parameters mean a more powerful and complex model all around.)<\/p>\n\n\n\n<p>One headline feature of these models is that they are all multimodal \u2014 trained on, and therefore capable of receiving and generating, text, video, and imagery (though audio was not mentioned).<\/p>\n\n\n\n<p>Another is that they have incredibly long context windows \u2014 1 million tokens for Llama 4 Maverick and 10 million for Llama 4 Scout \u2014 equivalent to about 1,500 and 15,000 pages of text, respectively, all of which the model can handle in a single input\/output interaction. That means a user could theoretically upload or paste up to 7,500 pages\u2019 worth of text and receive that much in return from Llama 4 Scout, which would be handy for information-dense fields such as medicine, science, engineering, mathematics, and literature.<\/p>\n\n\n\n<p>Here\u2019s what else we\u2019ve learned about this release so far:<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-all-in-on-mixture-of-experts\">All-in on mixture-of-experts<\/h2>\n\n\n\n<p>All three models use the \u201cmixture-of-experts (MoE)\u201d architecture popularized in earlier model releases from OpenAI and Mistral, which essentially combines multiple smaller specialized models (\u201cexperts\u201d) in different tasks, subjects and media formats into a unified, larger model. Each Llama 4 model is therefore said to be a mixture of 128 different experts, and more efficient to run because only the expert needed for a particular task, plus a \u201cshared\u201d expert, handles each token, instead of the entire model having to run for each one. <\/p>\n\n\n\n<p>As the Llama 4 blog post notes:<\/p>\n\n\n\n<p><em>As a result, while all parameters are stored in memory, only a subset of the total parameters are activated while serving these models. 
This improves inference efficiency by lowering model serving costs and latency\u2014Llama 4 Maverick can be run on a single [Nvidia] H100 DGX host for easy deployment, or with distributed inference for maximum efficiency.<\/em><\/p>\n\n\n\n<p>Both Scout and Maverick are available to the public for self-hosting, while no hosted API or pricing tiers have been announced for official Meta infrastructure. Instead, Meta is focusing on distribution through open download and integration with Meta AI in WhatsApp, Messenger, Instagram, and on the web.<\/p>\n\n\n\n<p>Meta estimates the inference cost for Llama 4 Maverick at $0.19 to $0.49 per 1 million tokens (using a 3:1 blend of input and output). This makes it substantially cheaper than proprietary models like GPT-4o, which is estimated to cost $4.38 per million tokens, based on community benchmarks.<\/p>\n\n\n\n\n\n\n\n<p>All three Llama 4 models\u2014especially Maverick and Behemoth\u2014are explicitly designed for reasoning, coding, and step-by-step problem solving \u2014 though they don\u2019t appear to exhibit the chain-of-thought outputs of dedicated reasoning models such as the OpenAI \u201co\u201d series or DeepSeek R1. 
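The sparse activation described in that quote can be sketched as a toy routing layer. Everything below (hidden size, softmax gate, top-1 routing) is an illustrative assumption, not Meta\u2019s actual implementation, which has not been published:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64        # hidden size -- illustrative, far smaller than Llama 4's
N_EXPERTS = 128     # matches the 128 routed experts Meta describes

# Each "expert" here is just one weight matrix; one shared expert always runs.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
shared_expert = rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-1 expert plus the shared expert.

    All 128 expert matrices sit in memory, but only two matmuls run per
    token -- the source of the serving-cost savings the post describes.
    """
    logits = token @ router
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()                      # softmax over experts
    best = int(np.argmax(gate))             # top-1 routed expert
    return gate[best] * (token @ experts[best]) + token @ shared_expert

out = moe_layer(rng.standard_normal(D_MODEL))
print(out.shape)  # (64,)
```

The point of the sketch is the asymmetry: the memory footprint scales with all 128 experts, but per-token compute scales with only the two that fire, which is why Maverick can fit inference on a single H100 DGX host despite its 400B total parameters.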
<\/p>\n\n\n\n<p>Instead, they seem designed to compete more directly with \u201cclassical,\u201d non-reasoning LLMs and multimodal models such as OpenAI\u2019s GPT-4o and DeepSeek\u2019s V3 \u2014 with the exception of Llama 4 Behemoth, which <em>does<\/em> appear to threaten DeepSeek R1 (more on this below!)<\/p>\n\n\n\n<p>In addition, for Llama 4, Meta built custom post-training pipelines focused on enhancing reasoning, such as:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Removing over 50% of \u201ceasy\u201d prompts during supervised fine-tuning.<\/li>\n\n\n\n<li>Adopting a continuous reinforcement learning loop with progressively harder prompts.<\/li>\n\n\n\n<li>Using pass@k evaluation and curriculum sampling to strengthen performance in math, logic, and coding.<\/li>\n\n\n\n<li>Implementing MetaP, a new technique that lets engineers tune hyperparameters (like per-layer learning rates) on one model and apply them to other model sizes and types of tokens while preserving the intended model behavior.<\/li>\n<\/ul>\n\n\n\n<p>MetaP is of particular interest as it could be used going forward to set hyperparameters on one model and then derive many other types of models from it, increasing training efficiency. <\/p>\n\n\n\n<p>As my VentureBeat colleague and LLM expert Ben Dickson opined on the new MetaP technique: \u201cThis can save a lot of time and money. 
It means that they run experiments on the smaller models instead of doing them on the large-scale ones.\u201d<\/p>\n\n\n\n<p>This is especially critical when training models as large as Behemoth, which uses 32K GPUs and FP8 precision, achieving 390 TFLOPs\/GPU over more than 30 trillion tokens\u2014more than double the Llama 3 training data.<\/p>\n\n\n\n<p>In other words: the researchers can tell the model broadly how they want it to act, and apply this to larger and smaller versions of the model, and across different forms of media.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-a-powerful-but-not-yet-the-most-powerful-model-family\">A powerful \u2014 but not yet <em>the<\/em> <em>most<\/em> powerful \u2014 model family<\/h2>\n\n\n\n<p>In his announcement video on Instagram (a Meta subsidiary, naturally), Meta CEO Mark Zuckerberg said that the company\u2019s \u201cgoal is to build the world\u2019s leading AI, open source it, and make it universally accessible so that everyone in the world benefits\u2026I\u2019ve said for a while that I think open source AI is going to become the leading models, and with Llama 4, that is starting to happen.\u201d<\/p>\n\n\n\n<p>It\u2019s clearly a carefully worded statement, as is Meta\u2019s blog post calling Llama 4 Scout \u201cthe best multimodal model in the world <em>in its class<\/em>\u201d and \u201cmore powerful than all previous generation Llama models\u201d (emphasis added by me). <\/p>\n\n\n\n<p>In other words, these are very powerful models, near the top of the heap compared to others in their parameter-size class, but not necessarily setting new performance records. 
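The Behemoth training figures quoted above can be sanity-checked with the common "training FLOPs ~ 6 x active parameters x tokens" rule of thumb. The 288-billion active-parameter count is Meta's published figure for Behemoth; the run length below is a back-of-envelope derivation, not a number Meta reported:

```python
# Rough sanity check of Meta's reported Behemoth training run, using the
# standard "training FLOPs ~ 6 * active parameters * tokens" estimate.
# ACTIVE_PARAMS is Meta's published figure for Behemoth; the duration
# is derived here, not reported by Meta.

ACTIVE_PARAMS = 288e9     # Behemoth's active parameters per token
TOKENS = 30e12            # "more than 30 trillion tokens"
GPUS = 32_000             # "32K GPUs"
FLOPS_PER_GPU = 390e12    # achieved FP8 throughput: 390 TFLOPs/GPU

total_flops = 6 * ACTIVE_PARAMS * TOKENS        # ~5.2e25 FLOPs
cluster_rate = GPUS * FLOPS_PER_GPU             # ~1.25e19 FLOP/s
days = total_flops / cluster_rate / 86_400      # seconds -> days

print(f"~{total_flops:.1e} FLOPs, ~{days:.0f} days at reported throughput")
```

At the reported per-GPU throughput this works out to roughly seven weeks of continuous training, plausible for a frontier-scale run, though the figure is only as good as the rule of thumb and the assumed active-parameter count.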
Nonetheless, Meta was keen to trumpet the models its new Llama 4 family beats, among them:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-llama-4-behemoth\">Llama 4 Behemoth<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Outperforms GPT-4.5, Gemini 2.0 Pro, and Claude Sonnet 3.7 on:\n<ul class=\"wp-block-list\">\n<li>MATH-500 (95.0)<\/li>\n\n\n\n<li>GPQA Diamond (73.7)<\/li>\n\n\n\n<li>MMLU Pro (82.2)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1920\" height=\"1016\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?w=800\" alt=\"\" class=\"wp-image-3003436\" style=\"width:840px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png 1920w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=300,159 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=768,406 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=800,423 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=1536,813 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=400,212 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=750,397 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=578,306 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png?resize=930,492 930w\" sizes=\"(max-width: 1920px) 100vw, 
1920px\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-llama-4-maverick\">Llama 4 Maverick<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Beats GPT-4o and Gemini 2.0 Flash on most multimodal reasoning benchmarks:\n<ul class=\"wp-block-list\">\n<li>ChartQA, DocVQA, MathVista, MMMU<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Competitive with DeepSeek v3.1 (45.8B params) while using less than half the active parameters (17B)<\/li>\n\n\n\n<li>Benchmark scores:\n<ul class=\"wp-block-list\">\n<li>ChartQA: 90.0 (vs. GPT-4o\u2019s 85.7)<\/li>\n\n\n\n<li>DocVQA: 94.4 (vs. 92.8)<\/li>\n\n\n\n<li>MMLU Pro: 80.5<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Cost-effective: $0.19\u2013$0.49 per 1M tokens<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1920\" height=\"1638\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?w=703\" alt=\"\" class=\"wp-image-3003437\" style=\"width:840px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png 1920w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=300,256 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=768,655 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=703,600 703w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=1536,1310 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=400,341 400w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=750,640 750w, 
https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=578,493 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/489031501_1656960988514372_2535138154557835854_n.png?resize=930,793 930w\" sizes=\"auto, (max-width: 1920px) 100vw, 1920px\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-llama-4-scout\">Llama 4 Scout<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Matches or outperforms models like Mistral 3.1, Gemini 2.0 Flash-Lite, and Gemma 3 on:\n<ul class=\"wp-block-list\">\n<li>DocVQA: 94.4<\/li>\n\n\n\n<li>MMLU Pro: 74.3<\/li>\n\n\n\n<li>MathVista: 70.7<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Unmatched 10M token context length\u2014ideal for long documents, codebases, or multi-turn analysis<\/li>\n\n\n\n<li>Designed for efficient deployment on a single H100 GPU<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1920\" height=\"1359\" src=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?w=800\" alt=\"\" class=\"wp-image-3003438\" style=\"width:840px;height:auto\" srcset=\"https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png 1920w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=300,212 300w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=768,544 768w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=800,566 800w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=1536,1087 1536w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=400,283 400w, 
https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=750,531 750w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=578,409 578w, https:\/\/venturebeat.com\/wp-content\/uploads\/2025\/04\/488658055_1347378876402143_3412007366291908454_n.png?resize=930,658 930w\" sizes=\"auto, (max-width: 1920px) 100vw, 1920px\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-but-after-all-that-how-does-llama-4-stack-up-to-deepseek\">But after all that, how does Llama 4 stack up to DeepSeek?<\/h2>\n\n\n\n<p>But of course, there is a whole other class of reasoning-heavy models such as DeepSeek R1, OpenAI\u2019s \u201co\u201d series (like o1), Gemini 2.0, and Claude Sonnet. <\/p>\n\n\n\n<p>Using the highest-parameter model benchmarked\u2014Llama 4 Behemoth\u2014and comparing it to the initial DeepSeek R1 release chart for R1-32B and OpenAI o1 models, here\u2019s how Llama 4 Behemoth stacks up:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Benchmark<\/th><th>Llama 4 Behemoth<\/th><th>DeepSeek R1<\/th><th>OpenAI o1-1217<\/th><\/tr><\/thead><tbody><tr><td>MATH-500<\/td><td>95.0<\/td><td>97.3<\/td><td>96.4<\/td><\/tr><tr><td>GPQA Diamond<\/td><td>73.7<\/td><td>71.5<\/td><td>75.7<\/td><\/tr><tr><td>MMLU<\/td><td>82.2<\/td><td>90.8<\/td><td>91.8<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>What can we conclude?<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>MATH-500: Llama 4 Behemoth is slightly <em>behind<\/em> DeepSeek R1 and OpenAI o1.<\/li>\n\n\n\n<li>GPQA Diamond: Behemoth is <em>ahead<\/em> of DeepSeek R1, but behind OpenAI o1.<\/li>\n\n\n\n<li>MMLU: Behemoth trails both, but still outperforms Gemini 2.0 Pro and GPT-4.5.<\/li>\n<\/ul>\n\n\n\n<p>Takeaway: While DeepSeek R1 and OpenAI o1 edge out Behemoth on a couple of metrics, Llama 4 Behemoth remains highly competitive and performs at or 
near the top of the reasoning leaderboard in its class.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-safety-and-less-political-bias\">Safety and less political \u2018bias\u2019<\/h2>\n\n\n\n<p>Meta also emphasized model alignment and safety by introducing tools like Llama Guard, Prompt Guard, and CyberSecEval to help developers detect unsafe input\/output or adversarial prompts, and implementing Generative Offensive Agent Testing (GOAT) for automated red-teaming.<\/p>\n\n\n\n<p>The company also claims Llama 4 shows substantial improvement on \u201cpolitical bias,\u201d saying \u201cspecifically, [leading LLMs] historically have leaned left when it comes to debated political and social topics,\u201d and that Llama 4 does better at courting the right wing\u2026in keeping with Zuckerberg\u2019s embrace of Republican U.S. president Donald J. Trump and his party following the 2024 election.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-where-llama-4-stands-so-far\">Where Llama 4 stands so far<\/h2>\n\n\n\n<p>Meta\u2019s Llama 4 models bring together efficiency, openness, and high-end performance across multimodal and reasoning tasks. 
<\/p>\n\n\n\n<p>With Scout and Maverick now publicly available and Behemoth previewed as a state-of-the-art teacher model, the Llama ecosystem is positioned to offer a competitive open alternative to top-tier proprietary models from OpenAI, Anthropic, DeepSeek, and Google.<\/p>\n\n\n\n<p>Whether you\u2019re building enterprise-scale assistants, AI research pipelines, or long-context analytical tools, Llama 4 offers flexible, high-performance options with a clear orientation toward reasoning-first design.<\/p>\n<div id=\"boilerplate_2660155\" class=\"post-boilerplate boilerplate-after\"><div class=\"Boilerplate__newsletter-container vb\">\n<div class=\"Boilerplate__newsletter-main\">\n<p><strong>Daily insights on business use cases with VB Daily<\/strong><\/p>\n<p class=\"copy\">If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.<\/p>\n<p class=\"Form__newsletter-legal\">Read our Privacy Policy<\/p>\n<p class=\"Form__success\" id=\"boilerplateNewsletterConfirmation\">\n\t\t\t\t\tThanks for subscribing. Check out more VB newsletters here.\n\t\t\t\t<\/p>\n<p class=\"Form__error\">An error occurred.<\/p>\n<\/p><\/div>\n<div class=\"image-container\">\n\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/venturebeat.com\/wp-content\/themes\/vb-news\/brand\/img\/vb-daily-phone.png\" alt=\"\"\/>\n\t\t\t\t<\/div>\n<\/p><\/div>\n<\/div>\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/metas-answer-to-deepseek-is-here-llama-4-launches-with-long-context-scout-and-maverick-models-and-2t-parameter-behemoth-on-the-way\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. 
Learn More The entire AI landscape shifted back in January 2025 after a then little-known Chinese AI startup DeepSeek (a subsidiary of the Hong Kong-based quantitative analysis firm High-Flyer Capital Management) launched its powerful open source language reasoning [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1024,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-1023","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/04\/489511937_1627813884508038_4209289296588372348_n.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/1023","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=1023"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/1023\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/1024"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=1023"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=1023"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=1023"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","
templated":true}]}}