AI isn’t ‘reasoning’ at all – how this team debunked the industry hype

By PineapplesUpdate | September 6, 2025 | 7 Mins Read

Pulse/Corbis via Getty Images

ZDNET’s key takeaways

• We don’t fully understand how AI works, so we ascribe magical powers to it.
• Claims that generative AI can reason are a “brittle mirage.”
• We should always be specific about what AI is doing, and avoid hyperbole.

Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims about the technology’s deeper significance, even asserting the possibility of human-like understanding.

Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work – not entirely.

Also: OpenAI’s Altman sees ‘superintelligence’ just around the corner – but he’s short on the details

AI’s ‘black box’ and the hype machine

AI programs such as LLMs are famously “black boxes.” They achieve a lot that is impressive, but for the most part, we cannot observe all that they are doing when they take an input, such as a prompt you type, and produce an output, such as the college term paper you requested or the suggestion for your new novel.

In the breach, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have implied, or outright asserted, that the programs can “think,” “reason,” and “know” the way humans do.

Over the past two years, the rhetoric has outstripped the science as AI executives have used hyperbole to dress up simple engineering achievements.

Also: What is OpenAI’s GPT-5? Here’s what you need to know about the company’s latest model

OpenAI’s press release last September announcing its o1 reasoning model stated that, “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” such that “o1 learns to hone its chain of thought and refine the strategies it uses.”

From such human-sounding claims, it was a short step to all kinds of wild assertions, such as OpenAI CEO Sam Altman’s comment in June that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    AI research backlash

However, a backlash is building from AI scientists who are debunking the assumptions of human-like intelligence through rigorous technical scrutiny.

In a paper published last month on the arXiv pre-print server, and not yet reviewed by peers, the authors – Chengshuai Zhao and colleagues at Arizona State University – took apart the reasoning claims with a simple experiment. They conclude that “chain-of-thought reasoning is a brittle mirage,” and that it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.”

Also: Sam Altman says the singularity is imminent – here’s why

The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output you see when a large reasoning model, such as GPT-o1 or DeepSeek V1, shows you how it works through a problem before giving the final answer.

That stream of statements isn’t as deep or meaningful as it seems, the team writes. “The empirical successes of CoT reasoning give rise to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write.

But, “a growing body of analyses suggests that LLMs rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”

The term “tokens” is a common way to refer to the string of elements that are input to an LLM, such as words or characters.
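
For readers unfamiliar with the term, here is a small, purely illustrative example (not from the paper) of turning text into tokens and the integer IDs a model actually consumes; the text and vocabulary are made up.

```python
# Illustrative only: a toy character-level tokenizer. Real LLMs use learned
# subword vocabularies, but the principle is the same: text in, integer IDs out.
text = "SHIFT APPLE BY ONE PLACE"
char_tokens = list(text)                  # character-level tokens
word_tokens = text.split()                # word-level tokens
vocab = {tok: i for i, tok in enumerate(sorted(set(char_tokens)))}
token_ids = [vocab[tok] for tok in char_tokens]
print(word_tokens)     # ['SHIFT', 'APPLE', 'BY', 'ONE', 'PLACE']
print(token_ids[:8])   # the first few integer IDs a model would consume
```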

What the tests actually do

To test the hypothesis that LLMs merely pattern-match rather than truly reason, they trained OpenAI’s older, open-source LLM from 2019, GPT-2, from scratch, using an approach they call “data alchemy.”
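
As an illustration only, here is a minimal sketch of what training a small GPT-2-style model from scratch on a tiny, letter-only vocabulary might look like with the Hugging Face transformers library; the model sizes, corpus, and training details below are assumptions for the sketch, not the paper’s actual code.

```python
# Hypothetical sketch: train a tiny GPT-2-style model from scratch on a
# letter-only vocabulary, loosely in the spirit of the team's "data alchemy"
# setup. Sizes, corpus, and hyperparameters are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import GPT2Config, GPT2LMHeadModel

# Character-level vocabulary: the 26 letters plus a separator and a pad symbol.
vocab = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") + ["|", "_"]
stoi = {ch: i for i, ch in enumerate(vocab)}

def encode(text: str) -> list:
    return [stoi[ch] for ch in text]

# Toy training examples: input word, separator, then the transformed output
# (here, a right rotation by one position, e.g. "APPLE" -> "EAPPL").
examples = ["APPLE|EAPPL", "HELLO|OHELL", "WORLD|DWORL"]
max_len = max(len(e) for e in examples)
batch = torch.tensor([encode(e.ljust(max_len, "_")) for e in examples])

config = GPT2Config(
    vocab_size=len(vocab),   # tiny vocabulary instead of GPT-2's ~50k tokens
    n_positions=64, n_embd=128, n_layer=4, n_head=4,  # deliberately small
)
model = GPT2LMHeadModel(config)   # randomly initialized, i.e. trained "from scratch"

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loader = DataLoader(TensorDataset(batch), batch_size=2, shuffle=True)

model.train()
for epoch in range(10):
    for (ids,) in loader:
        # Causal language modeling: passing labels=input_ids gives the usual
        # shifted next-token cross-entropy loss.
        out = model(input_ids=ids, labels=ids)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```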

[Image: Arizona State, 2025 – the “data alchemy” setup. Credit: Arizona State University]

The model was trained from the beginning to manipulate just the 26 letters of the English alphabet, “A, B, C, etc.” That simplified corpus lets the team test the LLM on a set of very simple tasks, all of which involve manipulating sequences of letters, such as, for example, shifting each letter a certain number of places, so that “APPLE” becomes “EAPPL.”
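
To make the task concrete, here is a minimal sketch of this kind of letter-sequence transformation, consistent with the article’s “APPLE” to “EAPPL” example (a cyclic shift of positions); the exact task definitions in the paper may differ.

```python
# Minimal sketch of the letter-shift task: cyclically shift each letter a
# given number of places to the right, so "APPLE" shifted by 1 becomes "EAPPL".
def cyclic_shift(word: str, places: int) -> str:
    places %= len(word)
    return word[-places:] + word[:-places]

assert cyclic_shift("APPLE", 1) == "EAPPL"
print(cyclic_shift("APPLE", 1))    # EAPPL
print(cyclic_shift("APPLE", 13))   # PLEAP (a shift by 13 wraps around)
```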

Also: OpenAI CEO sees an uphill climb for GPT-5, potential for new kinds of consumer hardware

Using a limited number of tokens and a limited set of tasks, Zhao and the team could vary which tasks the language model was exposed to in its training data versus which tasks were seen only when the trained model was tested, such as “shift each element by 13 places.” This is a test of whether the language model can work out how to perform tasks it has never encountered before.
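
A hedged sketch of how such an in-distribution versus out-of-distribution split might be set up follows; the shift amounts, word list, and helper names here are illustrative assumptions, not the paper’s actual data-generation code.

```python
import random

def cyclic_shift(word: str, places: int) -> str:
    # Same toy transformation as in the sketch above.
    places %= len(word)
    return word[-places:] + word[:-places]

# Illustrative split: some shift amounts appear in the training data, while
# others (e.g. "shift by 13") are held out and only appear at test time.
train_shifts = [1, 2, 3, 4]      # shift amounts present in training
test_shifts = [13]               # held out: the model never trains on these

words = ["APPLE", "HELLO", "WORLD", "PINEAPPLE"]

def make_examples(shifts, words):
    """Build (prompt, target) pairs such as ("APPLE shift 1", "EAPPL")."""
    return [(f"{w} shift {s}", cyclic_shift(w, s)) for w in words for s in shifts]

train_set = make_examples(train_shifts, words)
test_set = make_examples(test_shifts, words)   # out-of-distribution tasks
random.shuffle(train_set)
print(len(train_set), "training examples;", len(test_set), "held-out test examples")
```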

They found that when a task was not in the training data, the language model failed to perform it correctly using a chain of thought. The AI model tried to apply the tasks that were in its training data, and its “reasoning” sounded good, but the answers it produced were wrong.

As Zhao and the team put it, the models “attempt to generalize the reasoning paths based on the most similar ones (…) seen during training, which leads to correct reasoning paths, yet the wrong answer.”
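
That failure mode can be made concrete by scoring the reasoning trace and the final answer separately, as in this hedged sketch; the “Answer:” output format and the crude plausibility check are assumptions of the illustration, not the paper’s evaluation protocol.

```python
def score_output(model_text: str, expected_answer: str) -> dict:
    """Grade the chain of thought and the final answer separately.
    Assumes the model ends with a line of the form "Answer: ...", which is a
    formatting assumption for this sketch, not the paper's protocol."""
    reasoning, _, answer = model_text.rpartition("Answer:")
    return {
        # Crude plausibility check on the reasoning trace.
        "reasoning_mentions_shift": "shift" in reasoning.lower(),
        "answer_correct": answer.strip() == expected_answer,
    }

# A made-up model output: the steps sound reasonable, but the final answer is
# wrong because the model fell back to a shift amount it saw during training.
fake_output = (
    "Step 1: the task says to shift each letter by 13 places.\n"
    "Step 2: applying the shift to APPLE.\n"
    "Answer: EAPPL"   # the shift-by-1 answer, not the requested shift-by-13
)
print(score_output(fake_output, expected_answer="PLEAP"))
# {'reasoning_mentions_shift': True, 'answer_correct': False}
```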

Conclusions

The authors draw some lessons.

First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ – plausible but logically flawed reasoning chains – can be more deceptive and more damaging than an outright incorrect answer, as it projects a false aura of dependability.”

In addition, try out tasks that are unlikely to have been contained in the training data, so that the AI model is stress-tested.

Also: Why GPT-5’s rocky rollout is the reality check we needed

What’s important about Zhao and the team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what AI is actually doing.

When the original chain-of-thought research, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was carried out in 2022 by Jason Wei and colleagues on Google’s Google Brain team – research that has since been cited more than 10,000 times – the authors made no claims about actual reasoning.

Wei and the team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes one out, how many are left in the jar?”), tended to lead to more correct solutions, on average.
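
As an illustration of the prompting technique, a chain-of-thought prompt simply prepends a worked example whose answer walks through the intermediate steps; the exemplar wording below is made up, not quoted from Wei and colleagues’ paper.

```python
# Illustrative only: the difference between asking for an answer directly and
# prompting with a worked example that spells out intermediate steps
# (chain-of-thought prompting). The exemplar wording is an assumption.
question = ("There are 10 cookies in the jar and Sally takes one out. "
            "How many cookies are left in the jar?")

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    "Q: Roger has 3 apples and buys 2 more. How many apples does he have?\n"
    "A: Roger starts with 3 apples. He buys 2 more, so 3 + 2 = 5. The answer is 5.\n"
    f"Q: {question}\n"
    "A:"
)
print(cot_prompt)
```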

[Image: Google, 2022 – example chain-of-thought prompts. Credit: Google Brain]

They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time.

Also: Will AI think like humans? We’re not even close – and we’re asking the wrong questions

Since then, Altman’s claims and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning, using casual and sloppy rhetoric that doesn’t respect Wei and the team’s purely technical description.

Zhao and the team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims.
