The trend of AI researchers developing new, small open-source generative models that outperform far larger proprietary peers continued this week with yet another staggering advancement. Alexia Jolicoeur-Martineau, Senior AI Researcher at Samsung's Advanced Institute of Technology (SAIT) in Montreal, Canada, has introduced the Tiny Recursion Model (TRM) — a neural network so small it…
How a semiconductor veteran turned over a century of horticultural wisdom into an AI-led competitive advantage. For decades, a ritual played out across ScottsMiracle-Gro’s media facilities. Every few weeks, workers walked acres of towering compost and wood chip piles with nothing more than measuring sticks. They wrapped rulers around each mound, estimated height, and did what…
Check your research, MIT: 95% of AI projects aren’t failing — far from it. According to new data from G2, nearly 60% of companies already have AI agents in production, and fewer than 2% actually fail once deployed. That paints a very different picture from recent academic forecasts suggesting widespread AI project stagnation. As one…
Presented by Zendesk. Zendesk powers nearly 5 billion resolutions every year for over 100,000 customers around the world, with about 20,000 of those customers (and growing) using its AI services. Zendesk is poised to generate about $200 million in AI-related revenue this year, double that of some of its largest competitors, while investing $400 million…
In the two years since ChatGPT launched, it seems like new large language models (LLMs) have been released almost every week, whether from rival labs or from OpenAI itself. Enterprises are hard-pressed to keep up with the massive pace of change, let alone understand how to adapt to it — which of these new…
Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called reinforcement learning pre-training (RLP), integrates RL into the initial training phase rather than saving it for the end. This approach encourages the model to “think for itself before predicting what comes…
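The teaser stops short of spelling out RLP's training objective, but one way to make the "think before predicting" idea concrete (purely as an assumption, not Nvidia's published recipe) is to reward a model-generated thought by how much it improves the model's log-probability of the true next token. The toy sketch below uses GPT-2 as a stand-in and a hand-written thought; the model, prompt, and reward definition are all illustrative.

```python
# Toy illustration (not Nvidia's RLP implementation): score a "thought" by the
# information gain it provides about the ground-truth next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def next_token_logprob(prefix: str, next_token: str) -> float:
    """Log-probability the model assigns to `next_token` immediately after `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    next_ids = tokenizer(next_token, add_special_tokens=False).input_ids
    with torch.no_grad():
        logits = model(prefix_ids).logits[0, -1]      # distribution over the next position
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[next_ids[0]].item()              # score the first token of `next_token`


context = "The capital of France is"
true_next = " Paris"
# In a real setup the thought would be sampled from the model itself;
# it is hand-written here to keep the sketch short.
thought = " Let me think: France is a European country whose capital city is Paris."

# Assumed reward: how much the thought helps predict the true next token.
base = next_token_logprob(context, true_next)
with_thought = next_token_logprob(context + thought, true_next)
reward = with_thought - base

print(f"log p(next | context)          = {base:.3f}")
print(f"log p(next | context, thought) = {with_thought:.3f}")
print(f"reward (assumed information gain) = {reward:.3f}")
```

Under this assumption, thoughts that genuinely help next-token prediction earn a positive reward and can be reinforced during pre-training, while unhelpful thoughts earn nothing.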
Enterprises expanding AI deployments are hitting an invisible performance wall. The culprit? Static speculators that can't keep up with shifting workloads. Speculators are smaller AI models that work alongside large language models during inference. They draft multiple tokens ahead, which the main model then verifies in parallel. This technique (called speculative decoding) has become essential…
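As described above, the speculator drafts a few tokens cheaply and the main model checks them all in one parallel forward pass, keeping drafts only where it agrees. The sketch below is a simplified greedy-match variant of speculative decoding, using GPT-2 as the main model and DistilGPT-2 as the draft model; the model choices, the draft length k=4, and the exact acceptance rule are illustrative assumptions (production systems typically use probabilistic rejection sampling rather than exact greedy matching).

```python
# Simplified sketch of speculative decoding: a small draft model proposes
# several tokens, and the larger target model verifies them in one forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
target = AutoModelForCausalLM.from_pretrained("gpt2")        # stand-in "main" model
draft = AutoModelForCausalLM.from_pretrained("distilgpt2")   # stand-in speculator
target.eval()
draft.eval()


@torch.no_grad()
def speculative_step(input_ids: torch.Tensor, k: int = 4) -> torch.Tensor:
    # 1. Draft model speculates k tokens autoregressively (cheap).
    drafted = draft.generate(input_ids, max_new_tokens=k, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    draft_tokens = drafted[0, input_ids.shape[1]:]

    # 2. Target model scores the whole drafted sequence in ONE forward pass.
    logits = target(drafted).logits[0]
    # Target's greedy choice at each drafted position.
    preds = logits[input_ids.shape[1] - 1:-1].argmax(dim=-1)

    # 3. Accept drafted tokens up to the first disagreement, then substitute the
    #    target model's own token there, so every step still makes progress.
    accepted = []
    for drafted_tok, target_tok in zip(draft_tokens.tolist(), preds.tolist()):
        if drafted_tok == target_tok:
            accepted.append(drafted_tok)
        else:
            accepted.append(target_tok)
            break
    new_ids = torch.tensor([accepted], dtype=input_ids.dtype)
    return torch.cat([input_ids, new_ids], dim=1)


ids = tokenizer("Speculative decoding speeds up inference because",
                return_tensors="pt").input_ids
for _ in range(8):                       # a few speculative steps
    ids = speculative_step(ids, k=4)
print(tokenizer.decode(ids[0]))
```

The speedup comes from step 2: the main model scores all k drafted tokens in a single pass instead of k sequential decoding steps, which is also why a stale speculator with a low acceptance rate erodes the benefit as workloads shift.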
Presented by Certinia. Every professional services leader knows the feeling: a pipeline full of promising deals, but a bench that’s already stretched thin. That’s because growth has always been tied to a finite supply of consultants with limited availability to work on projects. Even with strong market demand, most firms capture only 10-20% of their…
Having to open a separate chat window to prompt an agent is a point of friction for many enterprises. AI companies are seeing an opportunity to bring more and more AI services into a single platform, even integrating them into the tools where employees do their work. OpenAI’s ChatGPT, although still a separate window, is gradually…
OpenAI’s annual developer conference on Monday was a spectacle of ambitious AI product launches, from an app store for ChatGPT to a stunning video-generation API that brought creative concepts to life. But for the enterprises and technical leaders watching closely, the most consequential announcement was the quiet general availability of Codex, the company's AI software…