Category: AI & Automation


  • AI and Buy Now, Pay Later Transform Holiday Shopping for Merchants

    As the holiday season approaches, small business owners need to harness new tools and methods to capture consumer attention and maximize sales. According to PayPal’s recent 2025 Holiday Shopping Survey, shoppers are increasingly turning to artificial intelligence (AI) and flexible payment options like Buy Now, Pay Later (BNPL) to enhance their shopping experiences. The findings…

  • 7 Game-Changing Content Marketing Trends to Watch

    In 2023, several content marketing trends are emerging that you should be aware of. AI-driven content creation offers efficiency, whereas real-time personalization can greatly boost user engagement. Short-form video content is gaining traction, and optimizing for voice search is becoming crucial for visibility. Furthermore, user-generated content builds trust, and collaborating with micro-influencers can extend your…

  • Inside Celosphere 2025: Why there’s no ‘enterprise AI’ without process intelligence

    Presented by Celonis. AI adoption is accelerating, but results often lag expectations, and enterprise leaders are under pressure to prove measurable ROI from their AI investments, especially as the use of autonomous agents rises and global tariffs disrupt supply chains. The issue isn’t the AI itself, says Alex Rinke, co-founder and co-CEO of Celonis,…

  • Google's 'Watch & Learn' framework cracks the data bottleneck for training computer-use agents

    A new framework developed by researchers at Google Cloud and DeepMind aims to address one of the key challenges of developing computer-use agents (CUAs): gathering high-quality training examples at scale. The framework, dubbed Watch & Learn (W&L), addresses the problem of training data generation in a way that doesn’t require human annotation and can…

  • Nvidia researchers unlock 4-bit LLM training that matches 8-bit performance

    Researchers at Nvidia have developed a novel approach to train large language models (LLMs) in 4-bit quantized format while maintaining their stability and accuracy at the level of high-precision models. Their technique, NVFP4, makes it possible to train models that not only outperform other leading 4-bit formats but match the performance of the larger 8-bit…
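The teaser describes training in a block-scaled 4-bit format. As an illustrative sketch only (not Nvidia's implementation), the snippet below shows the general idea behind FP4-style quantization: each block of values shares one scale factor, and individual values snap to the nearest point on a small 4-bit grid. The E2M1 grid values and per-block scaling are assumptions based on publicly described FP4 formats, not details taken from the article.

```python
# Illustrative block-scaled 4-bit quantization sketch, in the spirit of
# FP4 formats such as NVFP4. Grid and scaling scheme are assumptions.

FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # positive E2M1 magnitudes

def quantize_block(block):
    """Quantize a block of floats to signed FP4 codes with one shared scale."""
    amax = max(abs(x) for x in block) or 1.0
    scale = amax / 6.0  # map the largest magnitude onto the grid maximum (6.0)
    codes = []
    for x in block:
        target = abs(x) / scale
        q = min(FP4_GRID, key=lambda g: abs(g - target))  # nearest grid point
        codes.append(-q if x < 0 else q)
    return scale, codes

def dequantize_block(scale, codes):
    """Reconstruct approximate floats from the shared scale and FP4 codes."""
    return [c * scale for c in codes]

scale, codes = quantize_block([0.1, -0.4, 0.9, 1.2])
recon = dequantize_block(scale, codes)
```

The per-block scale is what lets such a coarse grid stay accurate: values inside a block tend to have similar magnitudes, so most of the dynamic range is absorbed by the scale rather than the 4-bit code.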

  • Meta researchers open the LLM black box to repair flawed AI reasoning

    Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal “reasoning circuits” and detect signs of computational…

  • Why IT leaders should pay attention to Canva’s ‘imagination era’ strategy

    The rise of AI marks a critical shift away from decades defined by information-chasing and a push for ever more compute power. Canva co-founder and CPO Cameron Adams refers to this dawning time as the “imagination era,” meaning individuals and enterprises must be able to turn creativity into action with AI. Canva hopes to…

  • Agentic AI is all about the context — engineering, that is

    Presented by Elastic. As organizations scramble to adopt agentic AI solutions, accessing proprietary data from all the nooks and crannies will be key. By now, most organizations have heard of agentic AI: systems that “think” by autonomously gathering tools, data and other sources of information to return an answer. But here’s the rub:…

  • From static classifiers to reasoning engines: OpenAI’s new model rethinks content moderation

    Enterprises, eager to ensure any AI models they use adhere to safety and safe-use policies, fine-tune LLMs so they do not respond to unwanted queries. However, much of the safeguarding and red teaming happens before deployment, “baking in” policies before users fully test the models’ capabilities in production. OpenAI believes it can offer a more…

  • Anthropic scientists hacked Claude’s brain — and it noticed. Here’s why that’s huge

    When researchers at Anthropic injected the concept of "betrayal" into their Claude AI model's neural networks and asked if it noticed anything unusual, the system paused before responding: "I'm experiencing something that feels like an intrusive thought about 'betrayal'." The exchange, detailed in new research published Wednesday, marks what scientists say is the first rigorous…
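Injecting a concept into a model's activations is commonly described as activation steering: a direction vector associated with a concept is added to a hidden state. The toy sketch below shows only that general mechanic, not Anthropic's actual method; the vectors, the "betrayal" direction, and the dot-product probe are all hypothetical stand-ins.

```python
# Toy activation-steering sketch (assumed mechanics, not Anthropic's code):
# add a scaled concept direction to a hidden state, then detect it with a
# simple dot-product probe along that same direction.

def inject(hidden, concept, strength=4.0):
    """Return the hidden state with a scaled concept vector added in."""
    return [h + strength * c for h, c in zip(hidden, concept)]

def probe(hidden, concept):
    """Project the hidden state onto the concept direction."""
    return sum(h * c for h, c in zip(hidden, concept))

concept = [0.0, 1.0, 0.0]      # hypothetical "betrayal" direction
baseline = [0.2, 0.1, -0.3]    # hypothetical hidden state
steered = inject(baseline, concept)

# The probe responds far more strongly to the steered activations than
# to the baseline, which is the signal an introspection test looks for.
```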