New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

By PineapplesUpdate | July 26, 2025 | 7 min read



Singapore-based AI startup Sapient Intelligence has developed a new AI architecture that can match, and in some cases surpass, large language models (LLMs) on complex reasoning tasks, while being significantly smaller and more data-efficient.

The architecture, known as the Hierarchical Reasoning Model (HRM), is inspired by how the human brain uses separate systems for slow, deliberate planning and fast, intuitive computation. The model achieves impressive results with a fraction of the data and memory required by today's LLMs. This efficiency could have important implications for real-world enterprise AI applications where data is scarce and computational resources are limited.

The limits of chain-of-thought reasoning

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking problems down into intermediate text-based steps and essentially forcing the model to "think out loud" as it works toward a solution.
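As a concrete illustration, CoT prompting usually amounts to wrapping a question in an instruction that elicits intermediate steps before the final answer. A minimal sketch (the prompt wording and helper function are hypothetical illustrations, not taken from the paper):

```python
# Minimal sketch of chain-of-thought prompting: the model is asked to emit
# intermediate reasoning steps as text before committing to a final answer.
# The exact wording below is illustrative, not a quote from any paper.

def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model 'thinks out loud' step by step."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, writing out each intermediate step, "
        "then give the final answer on a new line starting with 'Answer:'."
    )

prompt = build_cot_prompt(
    "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
)
print(prompt)
```

Every one of those intermediate steps is generated token by token, which is exactly the cost the Sapient researchers are trying to avoid.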

While CoT has improved the reasoning capabilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that "CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions, where a single misstep or a misordering of the steps can derail the reasoning process entirely."




This dependence on explicit language tethers the model's reasoning to the token level, often requiring massive amounts of training data and producing long, slow responses. It also ignores the kind of "latent reasoning" that happens internally, without being explicitly expressed in language.

As the researchers note, "a more efficient approach is needed to reduce these data requirements."

A brain-inspired approach

To move beyond CoT, the researchers explored "latent reasoning," where instead of generating "thinking tokens," the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper puts it, "the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language."

Achieving this level of deep, internal reasoning in AI is challenging, however. Simply stacking more layers in a deep learning model often causes the "vanishing gradient" problem, where learning signals weaken as they pass back through the layers, making training ineffective. Recurrent architectures, an alternative that loops over a computation, can suffer from "early convergence," where the model settles on a solution too quickly without fully exploring the problem.
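The vanishing-gradient problem mentioned above comes down to simple arithmetic: the gradient reaching early layers is a product of per-layer local derivatives, and if each factor is below 1 the product shrinks geometrically with depth. A toy numeric demonstration (the 0.25 factor stands in for the maximum derivative of a sigmoid activation; the depths are arbitrary):

```python
# Toy illustration of vanishing gradients: backpropagation multiplies one
# local derivative per layer. With factors below 1 (a sigmoid's derivative
# is at most 0.25), the gradient reaching early layers shrinks geometrically.

def gradient_after(depth: int, local_derivative: float = 0.25) -> float:
    """Magnitude of a unit gradient after passing back through `depth` layers."""
    grad = 1.0
    for _ in range(depth):
        grad *= local_derivative
    return grad

for depth in (1, 10, 50):
    print(f"depth {depth:3d}: gradient magnitude ~ {gradient_after(depth):.3e}")
```

By depth 50 the signal is numerically negligible, which is why naively stacking layers fails to deliver more effective reasoning depth.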

The Hierarchical Reasoning Model (HRM) is inspired by the structure of the brain. Source: arXiv

Looking for a better approach, the Sapient team turned to neuroscience. "The human brain provides a compelling blueprint for achieving the effective computational depth that contemporary artificial models lack," the researchers write. "It organizes computation hierarchically across cortical regions operating at different timescales, enabling deep, multi-stage reasoning."

Inspired by this, they designed HRM with two coupled recurrent modules: a high-level (H) module for slow, abstract planning, and a low-level (L) module for fast, detailed computation. This structure enables a process the team calls "hierarchical convergence." Intuitively, the fast L-module addresses a portion of the problem, executing multiple steps until it reaches a stable, local solution. At that point, the slow H-module takes this result, updates its overall strategy, and hands the L-module a new, refined sub-problem to work on. This effectively resets the L-module, preventing it from getting stuck (early convergence) and allowing the whole system to perform a long sequence of reasoning steps with a lean model architecture that does not suffer from vanishing gradients.

HRM (left) converges on the solution across computation cycles, avoiding the early convergence of recurrent neural networks (center) and the vanishing gradients of classic deep neural networks (right). Source: arXiv

According to the paper, "this process allows the HRM to perform a sequence of distinct, stable, nested computations, where the H-module directs the overall problem-solving strategy and the L-module executes the intensive search or refinement required for each step." This nested-loop design lets the model reason deeply in its latent space without needing long CoT prompts or huge amounts of data.
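The nested-loop design described above can be sketched as two coupled update rules: a fast inner loop that iterates the L-state toward a local fixed point, and a slow outer loop that updates the H-state and re-seeds the inner loop. Everything below (the linear-plus-tanh update rules, state sizes, and cycle counts) is a simplified illustration of the idea under assumed dimensions, not Sapient's actual implementation:

```python
import numpy as np

# Toy sketch of "hierarchical convergence": a slow high-level (H) module
# updates its plan once per cycle, while a fast low-level (L) module runs
# several refinement steps per cycle, conditioned on the current plan.
# The update rules and dimensions here are illustrative assumptions.

rng = np.random.default_rng(0)
DIM = 8
W_L = rng.normal(scale=0.3, size=(DIM, DIM))  # L-module recurrence weights
W_H = rng.normal(scale=0.3, size=(DIM, DIM))  # H-module recurrence weights

def l_step(l_state, h_state, x):
    # Fast, detailed computation conditioned on the current high-level plan.
    return np.tanh(W_L @ l_state + h_state + x)

def h_step(h_state, l_state):
    # Slow, abstract update that absorbs the L-module's converged result.
    return np.tanh(W_H @ h_state + l_state)

def hrm_forward(x, n_cycles=4, t_steps=6):
    h = np.zeros(DIM)
    l = np.zeros(DIM)
    for _ in range(n_cycles):        # slow outer loop: H-module cycles
        for _ in range(t_steps):     # fast inner loop: L-module refinement
            l = l_step(l, h, x)
        h = h_step(h, l)             # new plan; effectively resets L's target
    return h

out = hrm_forward(rng.normal(size=DIM))
print(out.shape)
```

The key structural point is that the effective computational depth is `n_cycles * t_steps` update steps, while each module individually only ever runs a short, stable recurrence.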

A natural question is whether this "latent reasoning" comes at the cost of interpretability. Guan Wang, founder and CEO of Sapient Intelligence, pushed back on that idea, explaining that the model's internal processes can be decoded and visualized, much as CoT provides a window into a model's thinking. He also argues that CoT itself can be misleading: "CoT does not genuinely reflect a model's internal reasoning," he said, adding that it "essentially remains a black box."

An example of HRM reasoning through a maze problem across separate compute cycles. Source: arXiv

    HRM in action

To test their model, the researchers pitted HRM against benchmarks that require extensive search and backtracking, such as the Abstraction and Reasoning Corpus (ARC-AGI), extremely difficult Sudoku puzzles, and complex maze-solving tasks.

The results show that HRM learns to solve problems that are intractable even for advanced LLMs. For example, on the "Sudoku-Extreme" and "Maze-Hard" benchmarks, state-of-the-art CoT models failed completely, scoring 0% accuracy. In contrast, HRM achieved near-perfect accuracy after being trained on just 1,000 examples for each task.

On the ARC-AGI benchmark, a test of abstract reasoning and generalization, the 27M-parameter HRM scored 40.3%, surpassing much larger CoT-based models such as o3-mini-high (34.5%) and Claude 3.7 Sonnet (21.2%). This performance, achieved without a large pre-training corpus and with very limited data, highlights the power and efficiency of the architecture.

HRM outperforms much larger models on complex reasoning tasks

While the model's strengths are on display in puzzle-solving, its real-world implications lie in a different class of problems. According to Wang, developers should continue using LLMs for language-based or creative tasks, but for "complex or deterministic tasks," architectures like HRM offer superior performance with fewer hallucinations. He points to "sequential problems requiring complex decision-making or long-term planning," especially in latency-sensitive or data-scarce domains such as embodied AI and robotics, or scientific discovery.

In these scenarios, HRM doesn't just solve problems; it learns to solve them better. "In our master-level Sudoku experiments … HRM requires progressively fewer steps as training advances, akin to a novice becoming an expert," Wang explained.

For the enterprise, this is where the architecture's efficiency translates directly to the bottom line. Instead of the serial, token-by-token generation of CoT, HRM's parallel processing allows for what Wang estimates could be a "100x speedup in task completion time." That means lower inference latency and the ability to run powerful reasoning on edge devices.

The cost savings are also substantial. "Specialized reasoning engines such as HRM offer a more promising alternative for specific complex reasoning tasks than large, costly, and latency-heavy API-based models," Wang said. To put the efficiency in perspective, he said training the model for professional-level Sudoku takes roughly two GPU hours, and the far more complex ARC-AGI benchmark between 50 and 200 GPU hours, a fraction of the resources needed to pre-train massive foundation models. This opens a path to solving specialized business problems, from logistics optimization to complex system diagnostics, where both data and budget are finite.

Looking ahead, Sapient Intelligence is already working to evolve HRM from a specialized problem-solver into a more general-purpose reasoning module. "We are actively developing brain-inspired models built upon HRM," Wang said, citing promising early results in healthcare, climate forecasting, and robotics. He said these next-generation models will differ significantly from today's text-based systems, notably by incorporating self-correcting capabilities.

The work suggests that for a class of problems stumping today's AI giants, the path forward may not be bigger models, but smarter, more structured architectures inspired by the ultimate reasoning engine: the human brain.
