
Your favorite AI tool just barely passed this safety review – why that’s a problem

By PineapplesUpdate · December 4, 2025 · 7 min read



    ZDNET Highlights

    • Anthropic, Google DeepMind and OpenAI achieved the highest scores.
    • However, even those three earned only barely passing grades.
    • The “existential safety” category received particularly low scores.

    The world’s top AI labs aren’t getting top marks in their efforts to prevent the technology’s worst possible outcomes, a new study has found.

    The study, run by the non-profit Future of Life Institute (FLI), convened a panel of eight leading AI experts to assess the safety policies of as many tech developers: Google DeepMind, Anthropic, OpenAI, Meta, xAI, DeepSeek, Z.AI, and Alibaba Cloud.


    Each company was assigned a letter grade across six criteria, including “current harms” and “governance and accountability.” The assessments were based on publicly available materials, such as policy documents and industry reports, as well as on surveys completed by three of the eight companies.

    Anthropic, Google DeepMind, and OpenAI received the highest scores, but even those amount to barely passing grades in an academic setting: C+, C+, and C, respectively. The other five companies fared worse: all earned Ds, except Alibaba Cloud, which received the lowest grade, a D-.
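    To make the grading mechanics concrete, here is a minimal sketch of how per-category letter grades could be rolled up into an overall grade, assuming a standard GPA-style point mapping. Note the assumptions: this article names only three of the six criteria (“current harms,” “governance and accountability,” and “existential safety”), and FLI’s actual aggregation method isn’t described here, so the remaining category names, the point mapping, and the sample grades are illustrative placeholders rather than FLI’s data.

```python
# Illustrative sketch only: the GPA-style point mapping, the category
# names not given in the article, and the sample grades below are
# assumptions, not FLI's actual methodology or data.

GRADE_POINTS = {
    "A+": 4.3, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

def overall_grade(category_grades: dict[str, str]) -> str:
    """Average the per-category grade points, then map the average
    back to the letter grade with the nearest point value."""
    avg = sum(GRADE_POINTS[g] for g in category_grades.values()) / len(category_grades)
    return min(GRADE_POINTS, key=lambda letter: abs(GRADE_POINTS[letter] - avg))

# Hypothetical per-category grades for a single company; only the
# first three category names actually appear in the article.
example = {
    "current harms": "B-",
    "governance and accountability": "C",
    "existential safety": "D",
    "risk assessment": "C+",
    "safety frameworks": "C+",
    "information sharing": "C",
}

print(overall_grade(example))  # prints "C"
```

    Under this assumed scheme, a single D in existential safety is enough to pull an otherwise C+/B- profile down to an overall C, mirroring the pattern the index describes: companies that look passable on most measures but are dragged down by their weakest category.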

    [Chart: Overall AI safety grades by company. Credit: Future of Life Institute]

    “Even the strongest performers lack the solid safeguards, independent oversight, and credible long-term risk-management strategies that such powerful systems demand, while the rest of the industry lags far behind on basic transparency and governance obligations,” FLI wrote in a report summarizing its findings. “This growing gap between capability and safety leaves the sector structurally unprepared for the risks it is actively creating.”


    This is a disappointing performance review for some of the industry’s most widely used and powerful AI models. The results could also increase pressure on these companies to develop and implement effective safety measures at a time when competition among technology developers is increasingly intense.

    Existential risk?

    The most worrying finding of the new study is the group-wide poor scoring in the category of “existential safety,” which FLI defines as “companies’ preparedness to manage extreme risks from future AI systems that may match or exceed human capabilities, including stated strategies and research for alignment and control.”

    [Chart: Existential safety scores by company. Credit: Future of Life Institute]

    The question of whether AI could ever pose an existential threat to humanity on par with a global pandemic or nuclear weapons is hotly debated. So-called AI “boomers” dismiss such fears as alarmist, arguing that the social and economic benefits of AI outweigh the potential downsides. Meanwhile, their “doomer” counterparts warn that the technology could escape human control and destroy us in ways that are difficult to predict.

    The debate about AI’s impact has been intensified by the tech industry’s recent adoption of “superintelligence” as both a marketing buzzword and a technological goalpost. You can think of this trend as artificial general intelligence (AGI) – an AI system that can match the human brain on any cognitive task – on steroids: a computer so much more advanced than our brains that it would exist at a completely different and vastly higher level of intelligence, like the gap between your own intelligence and that of a nematode.

    Companies like Meta and Microsoft have clearly stated their ambitions to be the first to create superintelligence. However, it’s not at all clear what that technology might look like or how it would be implemented in consumer-facing products. The goal of the FLI study was to draw attention to the fact that companies are racing to create superintelligence in the absence of effective safety protocols to keep such advanced systems under control.


    “I believe the best disinfectant is sunshine. By shining a light on what companies are doing, we give them incentives to do better, we give governments incentives to regulate them better, and we actually increase the likelihood that we will have a good future with AI,” FLI president and MIT physicist Max Tegmark said in a YouTube video summarizing the new study’s findings.

    The nonprofit also published a statement in September, signed by AI “godfathers” Geoffrey Hinton and Yoshua Bengio, among other prominent tech figures, calling for an industry-wide pause on the development of superintelligence until industry leaders and policymakers can chart a safe path forward.

    Moving forward

    Broadly speaking, FLI’s message to each of the eight companies included in the study is the same: now is the time to move beyond mere lip service about the need for effective AI guardrails and “build concrete, evidence-based safeguards” to prevent worst-case scenarios.

    The research also offered specific recommendations to each of the eight companies based on their individual grades. For example, this was the advice given to Anthropic, which scored the highest in all six categories: “Make thresholds and safeguards more concrete and measurable by replacing qualitative, loosely defined criteria with quantitative risk-tied thresholds and providing clear evidence and documentation that deployment and security safeguards can meaningfully reduce the risks they target.”

    But recommendations are just that: recommendations. In the absence of comprehensive federal oversight, it is difficult, and perhaps impossible, to hold all technology companies accountable to the same safety standards.

    The regulatory guardrails currently in place around industries such as healthcare and air travel are meant to ensure that manufacturers make products that are safe for human use. For example, before a new pharmaceutical product can be legally marketed, drug developers must complete a multiphase clinical trial process mandated by the Food and Drug Administration.


    There is no such federal body overseeing the development of AI. The tech industry is a bit of a Wild West, where the responsibility for protecting (or not protecting) users falls largely on the companies themselves, though some states have imposed their own regulations.

    However, there is growing public awareness of AI’s negative impacts at both a societal and an individual level: OpenAI and Google are both currently embroiled in lawsuits alleging that their AI systems contributed to suicides, and Anthropic’s Claude was reportedly used by state-backed Chinese hackers in September to automate cyberattacks.

    The upshot of this negative attention is that, even in the absence of strong federal oversight, careless development of AI tools – including releasing new iterations of chatbots without comparably sophisticated safety mechanisms – could become taboo enough within the AI industry that developers are incentivized to take safety seriously.

    However, for now, speed over safety still seems to be the guiding logic of the times.

    Takeaway for users

    The lack of federal regulation of the AI industry, coupled with the race among tech developers to build ever more powerful systems, means that users must educate themselves about how this technology may negatively impact them.

    Some early evidence suggests that long-term use of AI chatbots can distort a person’s worldview, dull critical-thinking skills, and take other psychological tolls. Meanwhile, the proliferation of AI tools and their integration into systems that millions of people already use has made the technology harder to avoid.

    While the FLI study likely won’t lead to a sudden, sweeping change in tech developers’ attitudes toward AI safety, it does provide a window into which companies are offering the safest tools and how those tools compare with one another across particular domains.

    For anyone interested not only in AI’s potential existential harms but also in the risks it poses to individual users, Appendix A of the full report offers a good view of how each of the eight companies performed on specific safety measures.
