{"id":4182,"date":"2025-11-01T23:45:58","date_gmt":"2025-11-01T23:45:58","guid":{"rendered":"https:\/\/violethoward.com\/new\/large-reasoning-models-almost-certainly-can-think\/"},"modified":"2025-11-01T23:45:58","modified_gmt":"2025-11-01T23:45:58","slug":"large-reasoning-models-almost-certainly-can-think","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/large-reasoning-models-almost-certainly-can-think\/","title":{"rendered":"Large reasoning models almost certainly can think"},"content":{"rendered":"



Recently, there has been a lot of hullabaloo about the idea that large reasoning models (LRMs) are unable to think. This is mostly due to a research article published by Apple, "The Illusion of Thinking." Apple argues that LRMs must not be able to think; instead, they just perform pattern matching. The evidence they provide is that LRMs with chain-of-thought (CoT) reasoning are unable to carry out a calculation using a predefined algorithm as the problem grows.

This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower of Hanoi to solve an instance with twenty discs, for example, he or she would almost certainly fail to do so. By that logic, we would have to conclude that humans cannot think either. However, this counterargument only shows that there is no evidence that LRMs cannot think. That alone certainly does not mean that LRMs can think, just that we cannot be sure they don't.
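To see why twenty discs is a test of execution rather than of understanding, consider the algorithm itself. Below is a minimal Python sketch of the standard recursive solution (the function and variable names are my own illustration). The point is that a 20-disc instance requires 2^20 - 1 = 1,048,575 moves, so knowing the algorithm and flawlessly executing it at scale are very different abilities.

```python
# Standard recursive Tower of Hanoi: move n discs from source to target.
# Solving n discs takes exactly 2**n - 1 moves, so a 20-disc instance
# requires 1,048,575 moves -- far more than anyone could execute by hand.

def hanoi(n, source, target, spare, moves):
    """Append to `moves` the sequence transferring n discs source -> target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack the smaller discs

moves = []
hanoi(20, "A", "C", "B", moves)
print(len(moves))  # 1048575 == 2**20 - 1
```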

In this article, I will make a bolder claim: LRMs almost certainly can think. I say 'almost' because there is always a chance that further research will surprise us. But I think my argument is pretty conclusive.

## What is thinking?

Before we try to determine whether LRMs can think, we need to define what we mean by thinking, and the definition should be one under which humans plainly do think. We will only consider thinking in relation to problem solving, since that is the matter of contention.

**1. Problem representation (frontal and parietal lobes)**

When you think about a problem, the process engages your prefrontal cortex. This region is responsible for working memory, attention, and executive functions: capacities that let you hold the problem in mind, break it into sub-components, and set goals. Your parietal cortex helps encode symbolic structure for math or puzzle problems.

**2. Mental simulation (working memory and inner speech)**

This has two components. One is an auditory loop that lets you talk to yourself, very similar to CoT generation. The other is visual imagery, which allows you to manipulate objects visually; geometry was so important for navigating the world that we developed specialized capabilities for it. The auditory part is linked to Broca's area and the auditory cortex, both reused from language centers. The visual component is controlled primarily by the visual cortex and parietal areas.
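The analogy to CoT can be made concrete. Below is a toy Python sketch of chain-of-thought generation as a feedback loop: the model emits a reasoning step, the step is appended to its own context, and generation continues until an answer appears. The `generate` function here is a hypothetical stand-in for a real language model call, not any actual API.

```python
# Toy sketch of chain-of-thought generation as a feedback loop, analogous
# to an inner "auditory loop". `generate` is a hypothetical stand-in for a
# real autoregressive language model call, not an actual library API.

def generate(scratchpad: str) -> str:
    """Toy model: emits three reasoning steps, then a final answer."""
    steps_so_far = scratchpad.count("Step")
    if steps_so_far < 3:
        return f"Step {steps_so_far + 1}: decompose the problem further."
    return "Answer: 42"

def chain_of_thought(question: str, max_steps: int = 10) -> str:
    scratchpad = f"Question: {question}\nLet's think step by step.\n"
    for _ in range(max_steps):
        step = generate(scratchpad)     # the model "talks to itself"
        scratchpad += step + "\n"       # each step becomes new context
        if step.startswith("Answer:"):  # assumed stop convention
            return scratchpad
    return scratchpad  # step budget exhausted without a final answer

print(chain_of_thought("What is six times seven?"))
```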

**3. Pattern matching and retrieval (hippocampus and temporal lobes)**

These actions depend on past experiences and stored knowledge from long-term memory: