    Large reasoning models can almost certainly think

    By PineapplesUpdate | November 1, 2025 | 9 Mins Read

    Recently, there has been a lot of uproar about the idea that large reasoning models (LRMs) are incapable of thinking. This is mostly due to a research article published by Apple, "The Illusion of Thinking." Apple argues that LRMs must not really be thinking; instead, they merely pattern-match. The evidence they provide is that LRMs with chain-of-thought (CoT) reasoning are unable to carry out calculations using a predefined algorithm as the problem size grows.

    This is fundamentally flawed logic. For example, if you ask a human who already knows the algorithm for solving the Tower of Hanoi to solve an instance with twenty disks, he or she will almost certainly fail. By that logic, we would have to conclude that humans cannot think either. However, this argument only shows that there is no evidence that LRMs cannot think. That alone certainly does not mean that LRMs can think – only that we cannot be sure they don't.
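
    To make the scale of that task concrete, here is a minimal sketch (in Python, not taken from the Apple paper) of the standard recursive Tower of Hanoi algorithm. Knowing the algorithm is easy; executing it for twenty disks means producing 2^20 - 1 = 1,048,575 moves without a single slip.

```python
# Toy illustration: the Tower of Hanoi algorithm is trivial to state,
# but executing it for n disks requires 2**n - 1 moves.
def hanoi(n, src, dst, aux, moves):
    """Append the full move sequence for n disks from src to dst onto `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # park the n-1 smaller disks on the spare peg
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # bring the n-1 smaller disks back on top

for n in (3, 10, 20):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(f"{n} disks -> {len(moves):,} moves")  # 7 / 1,023 / 1,048,575
```

    Knowing the algorithm and being able to emit its full trace within a bounded working memory are two different things, which is exactly the point being made here about both humans and LRMs.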

    In this article, I will make a bold claim: LRMs can almost certainly think. I say ‘almost’ because there is always the possibility that further research will surprise us. But I think my argument is quite conclusive.

    What is thinking?

    Before we attempt to understand whether LRMs can think, we need to define what we mean by thinking. Just as importantly, we need to be sure that humans can think by that definition. We will consider thinking only in the context of problem solving, which is the point of contention.

    1. Problem representation (frontal and parietal lobes)

    When you think about a problem, this process engages your prefrontal cortex. This area is responsible for working memory, attention, and executive functions – abilities that let you keep a problem in mind, break it down into subcomponents, and set goals. Your parietal cortex helps encode symbolic structure for math or puzzle problems.

    2. Mental simulation (inner speech and visual imagery)

    This has two components: an auditory loop that lets you talk to yourself – much like CoT generation – and visual imagery, which lets you manipulate objects mentally. Geometry was so important for navigating the world that we developed specialized abilities for it. The auditory part is tied to Broca’s area and the auditory cortex, both repurposed from language centres. The visual cortex and parietal areas primarily drive the visual component.

    3. Pattern Matching and Retrieval (Hippocampus and Temporal Lobes)

    These actions depend on knowledge stored from past experiences and long-term memory:

    • The hippocampus helps in retrieving related memories and facts.

    • The temporal lobe contributes semantic knowledge – meanings, rules, categories.

    This is similar to how neural networks rely on their training to process tasks.

    4. Monitoring and Evaluation (Anterior Cingulate Cortex)

    Our anterior cingulate cortex (ACC) monitors for errors, conflicts or impasses – this is where you notice that a line of reasoning is not working. This process is essentially based on pattern matching from prior experience.

    5. Insight or Reframing (Default Mode Network and Right Hemisphere)

    When you’re stuck, your brain can shift into its default mode network – a more relaxed, internally directed network. This is when you step back, let go of the current thread and sometimes ‘suddenly’ see a new angle (the classic “aha!” moment).

    This is similar to how DeepSeek-R1 was trained for CoT reasoning without any CoT examples in its training data. Remember, the brain is constantly learning as it processes data and solves problems.

    In contrast, LRMs are not allowed to update based on real-world feedback during prediction or generation. But with DeepSeek-R1’s CoT training, learning did happen while it attempted to solve problems – essentially, it was updating while reasoning.
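
    DeepSeek-R1’s actual pipeline is far more elaborate, but the core idea of learning from outcome rewards while attempting problems can be sketched with a toy REINFORCE loop. Everything below – the candidate ‘strategies’, the arithmetic task, the reward – is invented purely for illustration and is not how R1 itself is implemented.

```python
# Toy sketch (not R1's pipeline): a policy samples a "reasoning strategy",
# receives a reward of 1 only if the resulting answer is correct, and updates
# its parameters from that outcome alone -- no worked CoT examples are shown.
import numpy as np

rng = np.random.default_rng(0)
strategies = ["guess", "count_on_fingers", "do_the_arithmetic"]
logits = np.zeros(3)                         # the "model parameters"

def answer(strategy, a, b):
    if strategy == "do_the_arithmetic":
        return a + b                         # only this strategy is reliable
    return int(rng.integers(0, 20))          # the others effectively guess

lr, baseline = 0.5, 0.0
for step in range(500):
    a, b = rng.integers(1, 10, size=2)
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    k = rng.choice(3, p=probs)               # sample a strategy (stand-in for a CoT)
    reward = 1.0 if answer(strategies[k], a, b) == a + b else 0.0
    baseline = 0.9 * baseline + 0.1 * reward       # variance-reducing baseline
    grad = -probs; grad[k] += 1.0                  # d log pi(k) / d logits
    logits += lr * (reward - baseline) * grad      # REINFORCE update

probs = np.exp(logits - logits.max()); probs /= probs.sum()
print({s: round(float(p), 3) for s, p in zip(strategies, probs)})
# nearly all probability mass ends up on "do_the_arithmetic"
```

    The point of the sketch is only that no worked chain-of-thought examples are ever shown: the policy improves purely from whether its own attempts succeed.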

    Similarities between CoT reasoning and biological thinking

    LRMs do not have all the faculties mentioned above. For example, an LRM is unlikely to have much visual reasoning in its circuits, although a little may go a long way. It certainly does not generate intermediate images during CoT generation.

    Most humans can build spatial models in their minds to solve problems. Does this mean we must conclude that LRMs cannot think? I would disagree. Some humans also have difficulty forming spatial models of the concepts they think about. This condition is called aphantasia. People with this condition think perfectly well; in fact, they go through life as if they lacked no ability at all. Many of them are actually great at symbolic reasoning and quite good at math – often enough to compensate for their lack of visual reasoning. We can hope that our neural network models will also be able to work around this limitation.

    If we take a more abstract view of the human thought process described earlier, we can see that it mainly involves the following:

    1. Pattern matching, used to recall learned experience, to represent the problem, and to monitor and evaluate chains of thought.

    2. Working memory, to store all the intermediate steps.

    3. Backtracking, to conclude that the current CoT is not going anywhere and to move back to some earlier, reasonable point (a toy sketch combining these three ingredients follows below).
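
    Here is a minimal sketch of how these three ingredients fit together in ordinary code: the stored ‘patterns’ are the hand-written rules, the working memory is an explicit list of intermediate steps, and backtracking undoes a step when the current line of attack can no longer reach the goal. The subset-sum puzzle is chosen only for illustration; it says nothing about how an LRM implements this internally.

```python
# Illustrative only: pattern knowledge = the pruning rules, working memory =
# the list of partial steps, backtracking = undoing a step when the current
# chain of reasoning cannot reach the goal.
def subset_sum(numbers, target):
    steps = []                                  # working memory: chosen numbers

    def explore(i, remaining):
        if remaining == 0:
            return True                         # goal reached
        if i == len(numbers) or remaining < 0:
            return False                        # this line of reasoning is dead
        steps.append(numbers[i])                # tentatively include numbers[i]
        if explore(i + 1, remaining - numbers[i]):
            return True
        steps.pop()                             # backtrack: undo the step
        return explore(i + 1, remaining)        # try the branch without it

    return steps if explore(0, target) else None

print(subset_sum([7, 3, 12, 5, 8], 16))         # -> [3, 5, 8]
```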

    Pattern matching in an LRM comes from its training. The entire purpose of training is to learn both knowledge of the world and the patterns for processing that knowledge effectively. Since an LRM is a layered network, its entire working memory needs to fit within a single layer. The weights store world knowledge and the patterns to be applied, while processing happens between layers using the learned patterns stored as model parameters.

    Note that even during CoT, the entire text – the input, the CoT so far and the part of the output already generated – must fit into each layer. Working memory is just one layer (in the case of the attention mechanism, it includes the KV cache).
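
    To give a feel for how much ‘working memory’ this actually is, here is a back-of-the-envelope estimate of KV-cache size. The formula (keys plus values, per layer, per KV head, per token) is standard; the model shape and precision below are hypothetical and not taken from any particular LRM.

```python
# Rough KV-cache size: every prompt or generated token keeps one key vector and
# one value vector per layer and KV head, so working memory grows linearly with
# context length. The model shape here is hypothetical.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_value=2):
    # the leading 2 accounts for storing both the key and the value per position
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_value

for ctx in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128,
                         context_len=ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:5.2f} GiB of KV cache")  # ~0.5, 4, 16 GiB
```

    Long chains of thought are therefore not free: every intermediate token occupies working memory for the rest of the generation, much as intermediate results do when you hold them in your head.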

    CoT is, in fact, very similar to what we do when we talk to ourselves (which is almost always). We nearly always verbalize our thoughts, and a CoT reasoner does the same.

    There is also good evidence that CoT reasoners take backtracking steps when a particular line of reasoning appears futile. In fact, this is what the Apple researchers observed when they asked LRMs to solve large instances of simple puzzles: the LRMs correctly recognized that solving the puzzles directly would not fit in their working memory, so they tried to find shortcuts, just as a human would. This is further evidence that LRMs think rather than blindly follow predefined patterns.

    But why would a next-token predictor learn to think?

    Neural networks of sufficient size can learn any computation, including thinking. But a next-word prediction system can also learn to think. Let me elaborate.

    A common idea is that LRMs cannot think because, at the end of the day, they are simply predicting the next token; they are just ‘glorified autocomplete’. This view is fundamentally wrong – not the part about LRMs being autocomplete, but the assumption that autocomplete does not require thinking. In fact, next-word prediction is far from a limited representation of thought. On the contrary, it is the most general form of knowledge representation one could hope for. Let me explain.

    Whenever we want to represent knowledge, we need a language or symbolic system to do so. Various formal languages exist that are very precise in what they can express. However, such languages are fundamentally limited in the kind of knowledge they can represent.

    For example, first-order predicate logic cannot express properties of all predicates that satisfy a certain property, because it does not allow predicates over predicates.

    Of course, there are higher-order predicate calculi that can represent predicates over predicates to arbitrary depth. Even so, they cannot express ideas that lack precision or are abstract in nature.

    Natural language, however, is complete in its expressive power – you can describe any concept at any level of detail or abstraction. In fact, you can even describe concepts about natural language itself using natural language. That makes it a strong candidate for knowledge representation.

    The challenge, of course, is that this expressive richness makes it difficult to process information encoded in natural language. But we don’t need to figure out how to do it manually – we can simply program the machine using the data, through a process called training.

    A next-token prediction machine essentially computes a probability distribution over the next token, given the context of the preceding tokens. Any machine that aims to calculate this probability accurately must represent world knowledge in some form.

    A simple example: consider the incomplete sentence "The world’s highest mountain peak is Mt…." To predict the next word as "Everest", the model must have that knowledge stored somewhere. If the task requires the model to compute an answer or solve a puzzle, the next-token predictor needs to emit CoT tokens that advance the reasoning.
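
    As a toy illustration of what ‘computing a probability distribution over the next token’ means, here is the softmax step applied to made-up scores for a few candidate continuations of that sentence. A real model scores every token in its vocabulary; the candidates and logits below are invented.

```python
# Toy illustration: turning raw scores (logits) over candidate next tokens into
# a probability distribution with softmax. The scores are made up; a real LRM
# produces them from its weights, which must therefore encode the fact itself.
import math

context = "The world's highest mountain peak is Mt."
logits = {" Everest": 9.1, " Kilimanjaro": 4.2, " Fuji": 3.8, " Blanc": 3.5}

m = max(logits.values())                                  # subtract max for stability
exps = {tok: math.exp(s - m) for tok, s in logits.items()}
z = sum(exps.values())
probs = {tok: e / z for tok, e in exps.items()}

print(context)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")                            # ' Everest': ~0.98
```

    If the weights did not already encode which continuation is factually right, no amount of decoding cleverness could make “Everest” come out on top.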

    This implies that, even if it is predicting one token at a time, the model must internally represent at least the next few tokens in its working memory – to ensure that it remains on the logical path.

    If you think about it, humans also predict the next token – whether during speech or when thinking using the inner voice. An ideal autocomplete system that always outputs the correct token and gives the correct answer would have to be omniscient. Of course, we’ll never reach that point – because not every answer is computable.

    However, a parameterized model that can represent knowledge by tuning its parameters, and that can learn through data and reinforcement, can certainly learn to think.

    But does this translate into actual thinking?

    At the end of the day, the ultimate test of thought is a system’s ability to solve problems that require thinking. If a system can answer previously unseen questions that demand some level of reasoning, it must have learned to think – or at least to reason – its way to the answer.

    We know that proprietary LRMs perform very well on certain reasoning benchmarks. However, since it is possible that some of these models were tuned on the benchmark test sets through data contamination, we will focus only on open-source models for fairness and transparency.

    They have been evaluated on a range of standard reasoning benchmarks.

    On several of these benchmarks, LRMs are able to solve a large fraction of reasoning-based problems. While it is true that they still lag human performance in many respects, it is important to note that the human baseline often comes from individuals specifically trained on those benchmarks. In fact, in some cases, LRMs perform better than the average untrained human.

    Conclusion

    The benchmark results, the striking similarity between CoT reasoning and biological reasoning, and the theoretical understanding that any system with sufficient representational capacity, sufficient training data and sufficient computational power can perform any computable task all point in the same direction – and LRMs largely meet those criteria.

    It is therefore fair to conclude that LRMs almost certainly have the ability to think.

    Debashish Ray Choudhury is a Senior Principal Engineer at Talentica Software and a Ph.D. candidate in cryptography at IIT Bombay.

    Read more from our guest authors, or consider submitting a post of your own! See our guidelines here.
