{"id":1427,"date":"2025-04-25T22:50:46","date_gmt":"2025-04-25T22:50:46","guid":{"rendered":"https:\/\/violethoward.com\/new\/liquid-ai-is-revolutionizing-llms-to-work-on-edge-devices-like-smartphones-with-new-hyena-edge-model\/"},"modified":"2025-04-25T22:50:46","modified_gmt":"2025-04-25T22:50:46","slug":"liquid-ai-is-revolutionizing-llms-to-work-on-edge-devices-like-smartphones-with-new-hyena-edge-model","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/liquid-ai-is-revolutionizing-llms-to-work-on-edge-devices-like-smartphones-with-new-hyena-edge-model\/","title":{"rendered":"Liquid AI is revolutionizing LLMs to work on edge devices like smartphones with new ‘Hyena Edge’ model"},"content":{"rendered":" \r\n
\n\t\t\t\t
\n

Liquid AI, the Boston-based foundation model startup spun out of the Massachusetts Institute of Technology (MIT), is seeking to move the tech industry beyond its reliance on the Transformer architecture underpinning most popular large language models (LLMs), such as OpenAI’s GPT series and Google’s Gemini family.

Yesterday, ahead of the International Conference on Learning Representations (ICLR) 2025, the company announced “Hyena Edge,” a new convolution-based, multi-hybrid model designed for smartphones and other edge devices.

The conference, one of the premier events for machine learning research, is taking place this year in Singapore.

New convolution-based model promises faster, more memory-efficient AI at the edge

Hyena Edge is engineered to outperform strong Transformer baselines on both computational efficiency and language model quality.

In real-world tests on a Samsung Galaxy S24 Ultra smartphone, the model delivered lower latency, a smaller memory footprint, and better benchmark results than a parameter-matched Transformer++ model.

A new architecture for a new era of edge AI

Unlike most small models designed for mobile deployment, including SmolLM2, the Phi models, and Llama 3.2 1B, Hyena Edge steps away from traditional attention-heavy designs. Instead, it strategically replaces two-thirds of grouped-query attention (GQA) operators with gated convolutions from the Hyena-Y family.
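To make the layer mix concrete, the sketch below assembles a hypothetical hybrid backbone in which roughly two out of every three blocks are gated convolutions and the rest keep attention. The GatedConvBlock and GQABlock classes, the interleaving pattern, and all dimensions are illustrative assumptions, not Liquid AI's published design.

```python
# Minimal sketch of a hybrid backbone in which roughly two-thirds of the blocks
# are gated convolutions and one-third keep attention. All classes, dimensions,
# and the interleaving pattern are illustrative assumptions, not Hyena Edge itself.
import torch
import torch.nn as nn


class GatedConvBlock(nn.Module):
    """Hypothetical gated short-convolution mixer (a Hyena-style stand-in)."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)                       # produces values and a gate
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)   # depthwise convolution
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:             # x: (batch, seq, dim)
        v, g = self.in_proj(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)  # trim to causal length
        return self.out_proj(v * torch.sigmoid(g))                   # gate the convolved values


class GQABlock(nn.Module):
    """Standard multi-head attention as a simple stand-in for grouped-query attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


def build_hybrid_backbone(dim: int = 256, depth: int = 12) -> nn.Sequential:
    """Interleave blocks so ~2/3 are gated convolutions and ~1/3 keep attention."""
    layers = []
    for i in range(depth):
        layers.append(GQABlock(dim) if i % 3 == 2 else GatedConvBlock(dim))
    return nn.Sequential(*layers)


if __name__ == "__main__":
    model = build_hybrid_backbone()
    x = torch.randn(1, 128, 256)          # (batch, seq_len, dim)
    print(model(x).shape)                 # torch.Size([1, 128, 256])
```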

The new architecture is the result of Liquid AI’s Synthesis of Tailored Architectures (STAR) framework, announced back in December 2024, which uses evolutionary algorithms to automatically design model backbones.

STAR explores a wide range of operator compositions, rooted in the mathematical theory of linear input-varying systems, to optimize for multiple hardware-specific objectives like latency, memory usage, and quality.
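The article does not detail STAR's internals, but the general shape of a multi-objective evolutionary search over operator compositions looks roughly like the sketch below. The genome encoding, the stand-in scoring functions, and the Pareto selection rule are placeholders for illustration, not Liquid AI's actual system.

```python
# Illustrative sketch of a multi-objective evolutionary search over operator
# compositions, in the spirit of what the article describes for STAR.
# The genome encoding, objective functions, and selection rule are placeholders.
import random

OPERATORS = ["gqa_attention", "gated_conv", "mlp"]   # hypothetical operator vocabulary


def random_genome(depth: int = 12) -> list[str]:
    """A genome is simply a sequence of operator choices, one per layer."""
    return [random.choice(OPERATORS) for _ in range(depth)]


def mutate(genome: list[str]) -> list[str]:
    """Swap one randomly chosen layer for a different operator."""
    child = genome.copy()
    child[random.randrange(len(child))] = random.choice(OPERATORS)
    return child


def evaluate(genome: list[str]) -> tuple[float, float, float]:
    """Stand-in objectives (latency, memory, quality loss), all to be minimized.
    A real system would profile candidates on the target hardware instead."""
    latency = sum(1.0 if op == "gqa_attention" else 0.4 for op in genome)
    memory = sum(1.2 if op == "gqa_attention" else 0.5 for op in genome)
    quality_loss = 10.0 / (1 + sum(op == "gqa_attention" for op in genome)) + random.random()
    return latency, memory, quality_loss


def dominates(a: tuple, b: tuple) -> bool:
    """Pareto dominance: a is no worse on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def search(generations: int = 20, population_size: int = 16) -> list[list[str]]:
    population = [random_genome() for _ in range(population_size)]
    for _ in range(generations):
        children = [mutate(random.choice(population)) for _ in range(population_size)]
        candidates = population + children
        scored = [(g, evaluate(g)) for g in candidates]
        # Keep the Pareto-nondominated candidates, topped up randomly to full size.
        front = [g for g, s in scored if not any(dominates(t, s) for _, t in scored)]
        while len(front) < population_size:
            front.append(random.choice(candidates))
        population = front[:population_size]
    return population


if __name__ == "__main__":
    for genome in search()[:3]:
        print(genome)
```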

Benchmarked directly on consumer hardware

To validate Hyena Edge’s real-world readiness, Liquid AI ran tests directly on the Samsung Galaxy S24 Ultra smartphone.

Results show that Hyena Edge achieved up to 30% lower prefill and decode latencies than its Transformer++ counterpart, with the speed advantage widening at longer sequence lengths.
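For readers curious how prefill and decode are typically timed separately, the sketch below measures both phases for a generic causal model. The tiny stand-in network and token counts are placeholder assumptions; the figures quoted above come from Liquid AI's own on-device profiling, not from this code.

```python
# Rough sketch of timing prefill (processing the whole prompt in one pass) versus
# decode (generating tokens one at a time). The tiny stand-in model is a placeholder;
# the article's numbers come from on-device profiling of Hyena Edge.
import time

import torch
import torch.nn as nn


class TinyCausalLM(nn.Module):
    """Placeholder recurrent causal LM used only to illustrate the timing loop."""

    def __init__(self, vocab: int = 1000, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.body = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids: torch.Tensor, state=None):
        h, state = self.body(self.embed(ids), state)
        return self.head(h), state


@torch.no_grad()
def benchmark(model: nn.Module, prompt_len: int = 256, new_tokens: int = 64) -> None:
    prompt = torch.randint(0, 1000, (1, prompt_len))

    start = time.perf_counter()
    logits, state = model(prompt)                      # prefill: whole prompt at once
    prefill_s = time.perf_counter() - start

    next_id = logits[:, -1:].argmax(-1)
    start = time.perf_counter()
    for _ in range(new_tokens):                        # decode: one token per step
        logits, state = model(next_id, state)
        next_id = logits[:, -1:].argmax(-1)
    decode_s = time.perf_counter() - start

    print(f"prefill: {prefill_s * 1e3:.1f} ms for {prompt_len} tokens")
    print(f"decode:  {decode_s * 1e3 / new_tokens:.2f} ms/token")


if __name__ == "__main__":
    benchmark(TinyCausalLM())
```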


Even at short sequence lengths, prefill latency came in below the Transformer baseline's, a critical performance metric for responsive on-device applications.

In terms of memory, Hyena Edge consistently used less RAM during inference across all tested sequence lengths, positioning it as a strong candidate for environments with tight resource constraints.
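Peak process memory during inference can be checked with Python's standard library on Unix-like systems, as in the sketch below. This is a generic measurement approach for comparison purposes, not the methodology Liquid AI used; the stand-in model and sequence lengths are placeholders.

```python
# Generic sketch of checking peak process memory (max resident set size) around an
# inference call on Unix-like systems. Not Liquid AI's methodology; it illustrates
# one simple way to compare RAM use across models at different sequence lengths.
import resource

import torch


def peak_rss_mb() -> float:
    """Peak resident set size of this process so far (ru_maxrss is KB on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0


@torch.no_grad()
def memory_report(model: torch.nn.Module, seq_len: int) -> None:
    before = peak_rss_mb()
    ids = torch.randint(0, 1000, (1, seq_len))
    model(ids)                                   # one forward pass at this sequence length
    after = peak_rss_mb()
    print(f"seq_len={seq_len}: peak RSS grew from {before:.1f} MB to {after:.1f} MB")


if __name__ == "__main__":
    # Trivial stand-in model; substitute any model with the same call signature.
    stand_in = torch.nn.Sequential(torch.nn.Embedding(1000, 256), torch.nn.Linear(256, 1000))
    for n in (256, 1024, 4096):
        memory_report(stand_in, n)
```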

Outperforming Transformers on language benchmarks

Hyena Edge was trained on 100 billion tokens and evaluated across standard benchmarks for small language models, including Wikitext, Lambada, PiQA, HellaSwag, Winogrande, ARC-easy, and ARC-challenge.
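Perplexity benchmarks such as Wikitext and Lambada score how well a model predicts held-out text. The sketch below shows the standard calculation for any causal language model; the model interface and windowing are generic assumptions, since the article does not reference released Hyena Edge weights.

```python
# Standard perplexity calculation for a causal LM on held-out text, as used by
# benchmarks like Wikitext. The model, tokenization, and windowing are placeholders;
# the article does not reference released Hyena Edge weights.
import math

import torch
import torch.nn.functional as F


@torch.no_grad()
def perplexity(model, token_ids: torch.Tensor, window: int = 512) -> float:
    """Exponentiated average negative log-likelihood per token.

    `token_ids` is a 1-D LongTensor of the tokenized corpus, and `model(ids)`
    is assumed to return logits of shape (1, seq, vocab)."""
    total_nll, total_tokens = 0.0, 0
    for start in range(0, token_ids.size(0) - 1, window):
        chunk = token_ids[start : start + window + 1].unsqueeze(0)   # inputs plus shifted targets
        logits = model(chunk[:, :-1])
        nll = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            chunk[:, 1:].reshape(-1),
            reduction="sum",
        )
        total_nll += nll.item()
        total_tokens += chunk.size(1) - 1
    return math.exp(total_nll / total_tokens)
```

Lower perplexity means the model assigns higher probability to the reference text; the accuracy-style benchmarks (PiQA, HellaSwag, Winogrande, ARC) instead score multiple-choice answers.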


On every benchmark, Hyena Edge either matched or exceeded the performance of the GQA-Transformer++ model, with noticeable improvements in perplexity scores on Wikitext and Lambada, and higher accuracy rates on PiQA, HellaSwag, and Winogrande.

These results suggest that the model’s efficiency gains do not come at the cost of predictive quality, a tradeoff common to many edge-optimized architectures.

Hyena Edge Evolution: A look at performance and operator trends

For those seeking a deeper dive into Hyena Edge’s development process, a recent video walkthrough provides a compelling visual summary of the model’s evolution.
