    Startups

Will AI think like humans? We're not even close – and we're asking the wrong questions

By PineapplesUpdate · July 24, 2025 · 6 min read

    Wesnd61/Getty Images

Artificial intelligence may have impressive predictive powers, but don't count on it having anything close to human reasoning powers anytime soon. So-called artificial general intelligence (AGI) – AI that can apply reasoning across changing tasks and environments the way a human does – is still a long way off. Large reasoning models (LRMs), while not perfect, do offer a tentative step in that direction.

In other words, don't count on your meal-serving robot to react appropriately to a kitchen fire, or to a pet jumping on the table and wolfing down the food.

Also: Meta's new AI lab aims to deliver 'personal superintelligence for everyone' – whatever that means

AI's holy grail is the ability to reason and plan over long horizons the way humans do – and industry leaders and experts agree that we still have a long way to go before reaching that kind of intelligence. Large language models (LLMs) and their slightly more advanced LRM offspring operate on predictive analytics drawn from data patterns, not on the complex reasoning humans employ.

Nevertheless, the hype around AGI and LRMs keeps growing, and it was perhaps inevitable that the hype would outpace the actual available technology.

"We are currently in the middle of an AI success theater plague," said Robert Blumofe, chief technology officer and executive VP at Akamai. "There's an illusion of progress created by headline-grabbing demos, anecdotal wins, and exaggerated capabilities. In reality, truly intelligent, thinking AI is a long way away."

A recent paper written by Apple researchers dampened expectations for LRMs. The researchers concluded that LRMs, as they currently stand, don't actually deliver much more reasoning than the standard LLMs already in broad use. (My ZDNET colleagues Lester Mapp and Sabrina Ortiz provide excellent overviews of the paper's findings.)

Also: Apple's 'The Illusion of Thinking' is shocking – but here's what it missed

LRMs "are derived from LLMs during the post-training phase, as seen in models like DeepSeek-R1," said Xuedong Huang, chief technology officer at Zoom. "The current generation of LRMs optimizes only for the final answer, not for the reasoning process itself, which can lead to flawed or hallucinated intermediate steps."

LRMs employ step-by-step chains of thought, but "we must recognize that this does not equal genuine cognition; it only mimics it," said Ivana Bartoletti, chief AI governance officer at Wipro. "It's likely that chain-of-thought techniques will improve, but it's important to stay grounded about their current limitations."
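Mechanically, chain-of-thought prompting is just extra instruction text that elicits intermediate tokens before the answer. A minimal sketch of the idea – the prompt wording and function names here are illustrative, not taken from any vendor's API:

```python
# Toy illustration: the only difference between a direct prompt and a
# chain-of-thought prompt is added instruction text asking for steps.
# The model predicts tokens either way; nothing about the mechanism changes.

def direct_prompt(question: str) -> str:
    """Ask for the answer alone."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to emit intermediate steps before the final answer."""
    return (
        f"Q: {question}\n"
        "Think through the problem step by step, "
        "then state the final answer.\nA:"
    )

print(chain_of_thought_prompt("A train travels 120 km in 2 hours. What is its speed?"))
```

The output of both functions is a plain string; the "reasoning" Bartoletti describes lives entirely in the extra tokens the second prompt invites the model to generate.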

LRMs and LLMs are prediction engines, "not problem solvers," Blumofe said. "Their reasoning is done by mimicking patterns, not by algorithmically solving problems. So it looks like logic, but it doesn't behave like logic. The future of reasoning in AI won't come from LLMs or LRMs accessing better data or spending more time on inference."
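The pattern-versus-algorithm distinction can be made concrete with a toy example of my own construction: a lookup over memorized question/answer pairs looks like it can add, but only an actual procedure generalizes beyond the data it has seen.

```python
# Toy illustration (not from the article): pattern mimicry vs. an algorithm.
# "Training data": a small set of memorized sums.
memorized = {(a, b): a + b for a in range(10) for b in range(10)}

def mimic_add(a: int, b: int):
    """Pattern lookup: answers only questions resembling seen data."""
    return memorized.get((a, b))  # None once we leave the seen patterns

def algorithmic_add(a: int, b: int) -> int:
    """A real procedure: works for any inputs."""
    return a + b

print(mimic_add(3, 4))              # 7 -- looks like reasoning
print(mimic_add(1234, 5678))        # None -- the pattern runs out
print(algorithmic_add(1234, 5678))  # 6912 -- the algorithm does not
```

Real LRMs interpolate far more flexibly than a lookup table, of course; the sketch only illustrates why behavior that resembles logic on familiar inputs can fail off-distribution.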

Also: 9 programming tasks you shouldn't hand off to AI – and why

Right now, a better term for AI's reasoning capabilities may be "jagged intelligence," said Caiming Xiong, vice president of AI research at Salesforce. "This is where AI systems excel at one task but fail spectacularly at another – particularly within enterprise use cases."

What are the potential use cases for LRMs? And what are the benefits of adopting and maintaining these models? For starters, use cases may look more like extensions of current LLM use cases than anything new. They will arise in many areas – but getting there is complicated. "The next frontier for reasoning models is reasoning tasks that are difficult to verify automatically – unlike math or coding," said Daniel Hoske, CTO at Cresta.

Currently available LRMs cover most of the classic LLM use cases, "such as creative writing, planning, and coding," said Petros Efstathopoulos, vice president of research at RSA Conference. "As LRMs improve and get adopted, there will be a ceiling to what models can achieve independently and where the model-collapse boundaries lie. Future systems will better learn how to use and integrate external tools such as search engines, physics-simulation environments, and coding or security tools."

Also: 5 tips for building foundation models for AI

Early enterprise use cases for LRMs include contact centers and basic knowledge work. However, these implementations contend with "subjective problems," Hoske said: "examples include troubleshooting technical issues, or planning and executing a multi-step task, given only high-level goals and incomplete or partial knowledge." As LRMs develop, these capabilities may improve, he predicted.

Typically, "LRMs excel at tasks that are easy to verify but hard to generate – things like coding, complex QA, formal planning, and step-based problem solving," said Huang. "These are exactly the domains where structured reasoning, even if synthetic, can outperform intuition or brute-force token prediction."
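The "easy to verify, hard to generate" asymmetry Huang describes can be sketched with a toy task of my own choosing: finding two factors of a number requires search, but checking a proposed answer is a single multiplication.

```python
# Toy illustration of the verify/generate asymmetry (my own example,
# not from the article): verification is one multiply; generation is a search.

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Verification is cheap: one multiplication and two trivial checks."""
    return p > 1 and q > 1 and p * q == n

def generate_factorization(n: int):
    """Generation requires searching over candidate divisors."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

print(verify_factorization(91, 7, 13))  # True  -- instant to check
print(generate_factorization(91))       # (7, 13) -- found by search
```

Tasks with this shape let a system propose many candidate reasoning chains and keep only those whose answers pass a cheap check; tasks without an automatic verifier, as Hoske notes, are the harder frontier.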

Efstathopoulos pointed to concrete uses of AI in medical research, science, and data analysis. "LRM research results are encouraging, with models already able to solve problems one-shot, tackle complex reasoning puzzles, plan, and refine answers mid-generation." But it is still early days for LRMs, which may or may not turn out to be the best way to get AI to truly reason.

Also: How AI agents can generate $450 billion by 2028 – and what stands in the way

Trust in the results coming out of LRMs can also be problematic, as it has been for classic LLMs. "What matters is whether, beyond capabilities alone, these systems can reason consistently and reliably beyond narrow tasks and be trusted to make important business decisions," said Salesforce's Xiong. "Today's LLMs, including those designed for reasoning, still fall short."

That doesn't mean language models are useless, Xiong emphasized. "We're seeing successful deployments in coding assistance, content generation, and customer-service automation, where their current capabilities provide real value."

Human reasoning is itself hardly free of flaws and biases. "We don't need AI to think like us – we need it to think with us," said Zoom's Huang. "Human-style cognition brings cognitive biases and inefficiencies that we may not want in machines. The goal is utility, not imitation. An LRM that can reason differently, more rigorously, or even just more transparently than humans could be more helpful in many real-world applications."

    Also: People do not trust AI, but they are using it anyway

The goal of LRMs, and eventually AGI, is "to build toward AI that is transparent about its limitations, reliable within defined capabilities, and designed to complement human intelligence rather than replace it," Xiong said. Human oversight is essential, as is "recognizing that human judgment, contextual understanding, and ethical reasoning remain irreplaceable," he said.

Want more stories about AI? Sign up for Innovation, our weekly newsletter.
