    Samsung AI researcher’s new, open reasoning model TRM outperforms larger models by 10,000 times on specific problems

By PineapplesUpdate | October 9, 2025 | 7 Mins Read

The trend of AI researchers building small, open-source generative models that far outperform larger, proprietary counterparts continued with another surprising advancement this week.

Alexia Jolicoeur-Martineau, a senior AI researcher at Samsung's Advanced Institute of Technology (SAIT) in Montreal, Canada, has introduced the Tiny Recursion Model (TRM) – a neural network so small that it has just 7 million parameters (internal model settings), yet it competes with or surpasses cutting-edge language models 10,000 times larger in parameter count, including OpenAI's o3-mini and Google's Gemini 2.5 Pro, on some of the toughest reasoning benchmarks in AI research.

The goal is to show that very high-performance AI models can be created without the massive investment in graphics processing units (GPUs) and electricity required to train the large, multi-trillion-parameter flagship models that power many LLM chatbots today. The results are described in a research paper published on the open-access website arXiv.org, titled "Less is More: Recursive Reasoning with Tiny Networks."

    "The idea that one must rely on a massively grounded model trained for millions of dollars by a large corporation to solve difficult tasks is a trap," Jolicoeur-Martineau wrote on social network x, "Currently, there is too much focus on exploiting LLM rather than creating and expanding new directions of direction."

Jolicoeur-Martineau added: "With recursive reasoning, it turns out that 'less is more'. A tiny model trained from scratch, recursing on itself and updating its answers over time, can achieve a lot without breaking the bank."

TRM's code is now available on GitHub under an enterprise-friendly, commercially viable MIT license – meaning anyone, from researchers to companies, can take it, modify it, and deploy it for their own purposes, including commercial applications.

A big caveat

However, readers should be aware that TRM was specifically designed to perform well on structured, visual, grid-based problems such as Sudoku, mazes, and puzzles from the ARC (Abstraction and Reasoning Corpus)-AGI benchmark, the latter of which provides tasks that should be easy for humans but hard for AI models, such as placing colors on a grid based on prior, but not identical, example solutions.

    From hierarchy to simplicity

    The TRM architecture represents a fundamental simplification.

It builds on a technique called the Hierarchical Reasoning Model (HRM), introduced earlier this year, which showed that small networks could tackle logical puzzles like Sudoku and mazes.

HRM relied on two cooperating networks – one operating at a higher frequency, the other at a lower frequency – supported by biologically inspired arguments and mathematical justifications involving fixed-point theorems. Jolicoeur-Martineau found this unnecessarily complicated.

TRM strips these elements away. Instead of two networks, it uses a single two-layer model that iteratively refines its own predictions.

The model starts with an embedded question x, an initial answer y, and a latent reasoning state z. Through a series of reasoning steps, it updates the latent representation z and refines the answer y until it converges on a stable output. Each iteration corrects potential errors from the previous step, yielding a self-correcting reasoning process without additional hierarchy or mathematical overhead.
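As a rough illustration of that loop – not the official TRM code, and with layer sizes, update counts, and variable names chosen purely for readability – a recursive refinement step might look like this in PyTorch:

```python
# Illustrative sketch of TRM-style recursive refinement; not the official code.
# A tiny network updates a latent state z from (x, y, z), then refines the
# answer y from (y, z). Depth comes from repetition, not from more layers.
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    def __init__(self, dim: int, n_latent_updates: int = 6):
        super().__init__()
        self.update_z = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.update_y = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.n_latent_updates = n_latent_updates

    def forward(self, x, y, z):
        for _ in range(self.n_latent_updates):            # refine the latent reasoning state
            z = z + self.update_z(torch.cat([x, y, z], dim=-1))
        y = y + self.update_y(torch.cat([y, z], dim=-1))   # then improve the answer
        return y, z

dim = 64
model = TinyRecursiveSketch(dim)
x = torch.randn(1, dim)          # embedded question
y = torch.zeros(1, dim)          # initial answer
z = torch.zeros(1, dim)          # initial latent state
for _ in range(16):              # recursion stands in for depth
    y, z = model(x, y, z)
```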

    How does recursion replace scale?

The basic idea behind TRM is that recursion can take the place of depth and size.

By reasoning iteratively over its own outputs, the network effectively simulates a much deeper architecture without the associated memory or computational cost. This iterative cycle, run for up to sixteen supervision steps, allows the model to make progressively better predictions – similar in spirit to how large language models use multi-step "chain-of-thought" reasoning, but achieved here with a compact, feed-forward design.

Simplicity pays off in both efficiency and generalization. The model uses fewer layers, no fixed-point approximation, and no dual-network hierarchy. A lightweight halting mechanism decides when to stop refining, preventing wasted computation while maintaining accuracy.
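The article does not detail how that halting mechanism works; one plausible, purely hypothetical version is a small learned head that scores the latent state after each refinement step and stops once it passes a confidence threshold:

```python
# Hypothetical halting head; the article only states that a lightweight stopping
# mechanism exists, so the design below is an assumption, not TRM's actual one.
import torch
import torch.nn as nn

class HaltingHead(nn.Module):
    def __init__(self, dim: int, threshold: float = 0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # latent state -> halt logit
        self.threshold = threshold

    def should_stop(self, z: torch.Tensor) -> bool:
        # Stop refining once the predicted halt probability exceeds the threshold.
        return torch.sigmoid(self.score(z)).mean().item() > self.threshold
```

Checked after each supervision step, a head like this would let easy puzzles exit early while harder ones keep iterating.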

    Performance that punches above its weight

Despite its small footprint, TRM delivers benchmark results that rival or exceed those of models at least 10,000 times larger. In testing, the model achieved:

• 87.4% accuracy on Sudoku-Extreme (compared with 55% for HRM)

• 85% accuracy on Maze-Hard puzzles

• 45% accuracy on ARC-AGI-1

• 8% accuracy on ARC-AGI-2

These results match or exceed the performance of many high-end large language models, including DeepSeek R1, Gemini 2.5 Pro, and o3-mini, despite TRM using less than 0.01% of their parameters.

    Such results suggest that recursive reasoning, not scale, may be the key to tackling abstract and combinatorial reasoning problems – domains where even top-level generative models often stumble.

    Design philosophy: less is more

    The success of TRM stems from deliberate minimalism. Jolicoeur-Martineau found that reducing complexity led to better generalization.

When the researcher increased the number of layers or the model size, performance degraded due to overfitting on the small datasets.

In contrast, the two-layer structure, combined with recursive depth and deep supervision, achieved the best results.

The model performed even better when self-attention was replaced with a simple multilayer perceptron on small, fixed-context tasks like Sudoku.

For larger grids such as ARC puzzles, self-attention remains valuable. These findings underscore that model architecture should match the structure and scale of the data rather than defaulting to maximum capacity.
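As a sketch of that swap – with shapes and names chosen as assumptions for a 9×9 Sudoku, not taken from the repository – a fixed-length grid lets plain linear layers mix information across cells in place of attention:

```python
# Sketch of replacing self-attention with position-mixing MLPs on a fixed grid
# (e.g. the 81 cells of a Sudoku). Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TokenMLPBlock(nn.Module):
    def __init__(self, seq_len: int = 81, dim: int = 64):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mix_positions = nn.Linear(seq_len, seq_len)   # mixes across the 81 cells
        self.mix_features = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:    # h: (batch, 81, dim)
        h = h + self.mix_positions(self.norm1(h).transpose(1, 2)).transpose(1, 2)
        h = h + self.mix_features(self.norm2(h))
        return h
```

This only works because the context length is small and fixed, which is exactly the regime the article describes; for larger, variable-size ARC grids, attention keeps its edge.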

Train small, think big

TRM is now officially available as open source under the MIT license on GitHub.

    The repository includes full training and evaluation scripts, dataset builders for Sudoku, Maze and ARC-AGI, and reference configurations to reproduce published results.

    It also documents compute requirements ranging from a single NVIDIA L40S GPU for Sudoku training to a multi-GPU H100 setup for ARC-AGI experiments.

The open release confirms that TRM is purpose-built for structured, grid-based reasoning tasks rather than general-purpose language modeling.

    Each benchmark – Sudoku-Extreme, Maze-Hard and ARC-AGI – uses small, well-defined input-output grids, aligning with the model’s recursive supervision process.

Training involves substantial data augmentation (such as color permutations and geometric transformations), underscoring that TRM's efficiency lies in its small parameter count rather than in a low total compute budget.
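The article does not list the exact augmentations, but for ARC-style grids the standard recipe – and a reasonable guess at what "color permutations and geometric transformations" means in practice – looks roughly like this:

```python
# Illustrative augmentation for ARC-style grids: relabel colours with a random
# permutation and apply a random rotation/flip. TRM's exact pipeline may differ.
import numpy as np

def augment_grid(grid: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    palette = rng.permutation(10)                  # consistent relabelling of colours 0-9
    out = palette[grid]
    out = np.rot90(out, k=int(rng.integers(4)))    # random 90-degree rotation
    if rng.integers(2):                            # optional horizontal flip
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(0)
task_input = rng.integers(0, 10, size=(5, 5))
augmented = augment_grid(task_input, rng)
```

For a real ARC task, the same colour permutation and geometric transform would have to be applied to every input-output pair in the task so the underlying rule is preserved.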

The model's simplicity and transparency make it more accessible to researchers outside of large corporate laboratories. Its codebase builds directly on the earlier Hierarchical Reasoning Model framework but removes HRM's biological analogies, multiple network hierarchies, and fixed-point dependencies.

In doing so, TRM provides a reproducible baseline for exploring recursive reasoning in small models – a counterpoint to the dominant "scale is all you need" philosophy.

Community response

    The release of TRM and its open-source codebase prompted immediate debate among AI researchers and practitioners on X. While many praised the achievement, others questioned how widely its methods could be generalized.

Proponents hailed TRM as proof that small models can outperform giants, calling it "10,000× smaller yet smarter" and a potential step toward architectures that think rather than merely scale.

Critics countered that TRM's scope is narrow – focused on bounded, grid-based puzzles – and that its compute savings come mainly from its small size, not from reduced total runtime.

Researcher Yunmin Cha noted that TRM's training relies on heavy augmentation and repeated recursive passes – in effect, more computation for a similarly sized model.

Cancer geneticist and data scientist Che Loveday emphasized that TRM is a solver, not a chat model or text generator: it excels at structured reasoning but not at open-ended language.

Machine learning researcher Sebastian Raschka positioned TRM as an important simplification of HRM rather than a new form of general intelligence.

He described its process as "a two-step loop that updates an internal reasoning state, then refines the answer."

Many researchers, including Augustin Nebel, agreed that the model's strength lies in its clear reasoning structure, but noted that future work will need to show transfer to less-constrained problem types.

The emerging consensus online is that TRM may be narrow, but its message is broad: careful iteration, not constant scaling, can drive the next wave of reasoning research.

Looking ahead

While TRM currently applies to supervised reasoning tasks, its recursive framework opens several future directions. Jolicoeur-Martineau suggests exploring generative or multi-answer variants, in which the model could produce multiple possible solutions rather than a single deterministic one.

    Another open question involves scaling laws for recursion – determining how far the “less is more” principle can extend as model complexity or data size increases.

    Ultimately, the study provides both a practical tool and a conceptual reminder: Progress in AI does not always need to rely on large models. Sometimes, teaching a small network to think carefully and repeatedly can be more powerful than forcing a large network to think once.
