
100 leading AI scientists chart a route to more "trustworthy, reliable, and secure" AI

By PineapplesUpdate | May 12, 2025 | 5 min read

Nanostockk / Getty Images

The debate over the risks and harms of artificial intelligence often focuses on what governments or companies can do. But equally important are the choices that AI researchers themselves make.

This week, in Singapore, more than 100 scientists from around the world proposed guidelines for how researchers should approach making AI more "trustworthy, reliable, and secure."

Also: Secretive AI companies could crush free society, researchers warn

The recommendations come at a time when generative AI heavyweights such as OpenAI and Google have increasingly cut back on disclosures about their AI models, so the public knows less and less about how the work is conducted.

The guidelines grew out of an exchange among scholars in Singapore last month, held in conjunction with the International Conference on Learning Representations (ICLR), one of the most prestigious AI conferences, which was held in Asia for the first time.

The document, "The Singapore Consensus on Global AI Safety Research Priorities," was posted on the website of the Singapore Conference on AI, a second AI conference being held in Singapore this week.

Among the authors who helped draft the Singapore Consensus are Yoshua Bengio, founder of Canada's AI institute Mila; Stuart Russell, a professor of computer science at UC Berkeley and an expert on "human-centered AI"; Max Tegmark of the Future of Life Institute, a think tank; and representatives of the Massachusetts Institute of Technology, Google's DeepMind unit, Microsoft, the National University of Singapore, Tsinghua University, and the National Academy of Sciences of China.

To make the case that research should have guidelines, Josephine Teo, Singapore's Minister for Digital Development and Information, said in presenting the work that the public cannot vote on what kind of AI they want.

"In a democracy, a general election is a way for citizens to choose the party that forms the government and makes decisions on their behalf," Teo said. "But in AI development, citizens do not get to make a similar choice. Even though we talk about the democratization of technology, citizens will be on the receiving end of AI's opportunities and challenges, without much say about who shapes its trajectory."

[Figure: Singapore Consensus 2025, three areas of focus]


The paper lays out three categories that researchers should consider: how to identify risks; how to build AI systems in a way that avoids those risks; and how to maintain control over AI systems, meaning ways to monitor and intervene when there are concerns about those systems.

"Our goal is to enable more impactful R&D efforts to rapidly develop safety and evaluation mechanisms and foster a trusted ecosystem where AI is harnessed for the public good," the authors write in the preamble to the report. "The motivation is clear: no organization or country benefits when AI incidents or malicious actors are enabled, as the resulting harm would hurt everyone."

On the first score, assessing potential risks, the scholars recommend the development of "metrology," the measurement of potential harms. They write that quantitative risk assessment is needed to reduce uncertainty about AI systems, since greater uncertainty demands larger safety margins.
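The relationship between measurement uncertainty and safety margins can be sketched in a few lines. This is purely illustrative (the function name, thresholds, and margin multiplier are invented, not from the consensus paper): a deployment gate that tests the upper bound of an estimated harm rate rather than the point estimate, so that noisier measurement forces a more conservative decision.

```python
def deployment_gate(harm_estimate: float, uncertainty: float,
                    threshold: float = 0.01, margin: float = 2.0) -> bool:
    """Approve deployment only if the upper confidence bound on the
    estimated harm rate clears the threshold.

    The wider the uncertainty, the larger the effective safety margin
    demanded: imprecise risk measurement forces conservative decisions,
    which is why the paper calls for better "metrology."
    """
    upper_bound = harm_estimate + margin * uncertainty
    return upper_bound < threshold

# Same point estimate, different measurement precision:
print(deployment_gate(0.005, 0.001))  # True  (precise measurement passes)
print(deployment_gate(0.005, 0.01))   # False (imprecise measurement fails)
```

The point of the sketch is that better measurement, not a lower point estimate, is what unlocks the second case.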

Assessing risk also requires allowing external parties to monitor AI research and development, balanced against the protection of corporate IP. That involves "developing secure infrastructure that enables thorough evaluation while protecting intellectual property, including preventing model theft."

Also: Stuart Russell: Will we choose the right objective for AI before it destroys us all?

The development section concerns how to make AI trustworthy, reliable, and secure "by design." Doing so requires developing "technical methods" that can specify what is intended from an AI program and can also rule out what should not be there, the "unwanted side effects," the scholars write.

The actual training of neural networks then needs to be upgraded so that the resulting AI programs are "guaranteed to meet their specifications," they write. That includes training approaches that focus on, for example, "reducing confabulation" (often known as hallucination) and "increasing robustness against tampering," such as an LLM being cracked by malicious prompts.
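The idea of specifying intended behavior while ruling out unwanted side effects can be illustrated with a toy spec checker. Everything here (the `Spec` class and its fields) is a hypothetical sketch for illustration, not anything proposed in the paper, which concerns specifications enforced during training rather than checked after the fact.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Spec:
    """A declarative specification an AI program's output must satisfy."""
    must_contain: List[str] = field(default_factory=list)      # intended behavior
    must_not_contain: List[str] = field(default_factory=list)  # unwanted side effects

    def satisfied_by(self, output: str) -> bool:
        ok_required = all(term in output for term in self.must_contain)
        ok_forbidden = all(term not in output for term in self.must_not_contain)
        return ok_required and ok_forbidden

spec = Spec(must_contain=["citation"], must_not_contain=["<secret>"])
print(spec.satisfied_by("answer with citation"))           # True
print(spec.satisfied_by("answer with citation <secret>"))  # False
```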

Finally, the control section of the paper covers both how to extend existing computer security measures and how to develop new techniques to rein in runaway AI. For example, conventional computer controls, such as off-switches and override protocols, need to be extended to handle AI programs. Scientists also need to design "new techniques to control very powerful AI systems that may actively undermine attempts to control them."
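A minimal sketch of what an override protocol wrapped around an agent loop might look like; the class and function names are invented for illustration, and a real control mechanism would of course sit outside the agent's own process.

```python
from typing import Callable, List

class OverrideController:
    """Vetoes forbidden actions and provides an off-switch for the agent."""
    def __init__(self, forbidden: List[str]):
        self.forbidden = set(forbidden)
        self.halted = False

    def halt(self) -> None:
        """The off-switch: once set, no further actions are approved."""
        self.halted = True

    def approve(self, action: str) -> bool:
        return not self.halted and action not in self.forbidden

def run_agent(propose: Callable[[], str], controller: OverrideController,
              max_steps: int = 10) -> List[str]:
    """Execute proposed actions only while the controller approves them."""
    executed = []
    for _ in range(max_steps):
        action = propose()
        if not controller.approve(action):
            break  # vetoed or halted: stop rather than proceed unchecked
        executed.append(action)
    return executed
```

For example, a controller built with `OverrideController(["delete_backups"])` would let an agent execute read and write actions but stop the loop the moment it proposes deleting backups.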

The paper is ambitious, which is appropriate given the growing concern about risk from AI as it is connected to more and more computer systems, such as agentic AI.

Also: Multimodal AI creates new safety risks, including CSEM and weapons information

As the scientists acknowledge in the introduction, safety research will not be able to keep pace with AI unless more investment is made.

"Given that the state of the science today for building trustworthy AI does not fully cover all risks, accelerated investment in research is required to keep pace with commercially driven growth in system capabilities," the authors write.

Writing in Time magazine, Bengio echoes the concerns about runaway AI systems. "Recent scientific evidence also shows that, as highly capable systems become increasingly autonomous AI agents, they tend to display goals that were not explicitly programmed and are not necessarily aligned with human interests," Bengio writes.

"I am genuinely unsettled by the behavior unrestrained AI is already demonstrating, in particular self-preservation and deception."

Want more stories about AI? Sign up for Innovation, our weekly newsletter.
