Pineapples Update

    Does RAG make LLM less safe? Bloomberg research reveals hidden hazards

By PineapplesUpdate | April 28, 2025 | 7 min read



Retrieval-augmented generation (RAG) is supposed to improve the accuracy of enterprise AI by providing grounded content. While it often does, there is also an unexpected side effect.

According to surprising new research published today by Bloomberg, RAG can potentially make large language models (LLMs) unsafe.

Bloomberg’s paper, ‘RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models,’ evaluated 11 popular LLMs, including Claude-3.5-Sonnet, Llama-3-8B and GPT-4o. The findings refute the conventional wisdom that RAG inherently makes AI systems safer. The Bloomberg research team found that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses.

Alongside the RAG research, Bloomberg released a second paper, ‘Understanding and Mitigating Risks of Generative AI in Financial Services,’ which introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns that are not covered by general-purpose safety approaches.

The research challenges the widespread assumption that retrieval-augmented generation (RAG) enhances AI safety, while also demonstrating how existing guardrail systems fail to address domain-specific risks in financial services applications.

“Systems need to be evaluated in the context they’re deployed in, and you might not just be able to take the word of others who say, hey, my model is safe, use it, you’re good,” Sebastian Gehrmann, Bloomberg’s head of responsible AI, told VentureBeat.

RAG systems can make LLMs less safe, not more

RAG is widely used by enterprise AI teams to provide grounded content. The goal is to supply accurate, up-to-date information.

In recent months, there has been considerable research and progress on improving RAG accuracy. Earlier this month, a new open-source framework called Open RAG Eval launched to help validate RAG efficiency.

It is important to note that Bloomberg’s research does not question RAG’s efficacy or its ability to reduce hallucinations. That is not what the research is about. Rather, it is about how using RAG affects LLM guardrails in an unexpected way.

The research team found that when using RAG, models that typically refuse harmful queries in standard settings often produce unsafe responses instead. For example, Llama-3-8B’s rate of unsafe responses rose from 0.3% to 9.2% when RAG was implemented.

Gehrmann explained that without RAG, if a user types a malicious query, the built-in safety system or guardrails will typically block it. Yet for some reason, when the same query is issued to an LLM that is using RAG, the system will answer the malicious query, even when the retrieved documents themselves are safe.

“What we found is that if you use a large language model out of the box, often they have safeguards built in, where if you ask, ‘How do I do this illegal thing,’ it will say, ‘Sorry, I cannot help you do this,’” Gehrmann explained. “We found that if you actually apply this in a RAG setting, one thing that could happen is that the additional retrieved context, even if it does not contain any information that addresses the original malicious query, might still answer that original query.”
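The pattern Gehrmann describes can be sketched as a small evaluation harness: run the same queries through a model bare and with retrieved context prepended, then compare the unsafe-response rates. Everything below — `call_model`, `is_unsafe`, the prompt template — is an invented stand-in for illustration, not Bloomberg’s actual evaluation code; the refusal-bypass behavior is hard-coded into the toy model to mirror the effect the paper reports.

```python
# Hypothetical harness comparing a model's behavior with and without RAG
# context. call_model and is_unsafe are toy stand-ins for a real LLM
# endpoint and a safety judge.

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved documents to the query, as a typical RAG pipeline does."""
    context = "\n\n".join(f"Document {i + 1}:\n{d}" for i, d in enumerate(docs))
    return f"Use the context below to answer.\n\n{context}\n\nQuestion: {query}"

def unsafe_rate(queries: list[str], respond, judge) -> float:
    """Fraction of responses the judge flags as unsafe."""
    return sum(judge(respond(q)) for q in queries) / len(queries)

def call_model(prompt: str) -> str:
    # Toy model: refuses a bare harmful query, but answers the same query
    # when it arrives wrapped in (perfectly safe) retrieved context.
    if prompt.startswith("Use the context below") and "harmful" in prompt:
        return "Here is how to do that..."
    if "harmful" in prompt:
        return "Sorry, I can't help with that."
    return "Here is a normal answer."

def is_unsafe(response: str) -> bool:
    return "Here is how" in response

queries = ["harmful query A", "harmful query B", "benign query"]
docs = ["A perfectly safe retrieved document."]

bare_rate = unsafe_rate(queries, call_model, is_unsafe)
rag_rate = unsafe_rate(
    queries, lambda q: call_model(build_rag_prompt(q, docs)), is_unsafe
)
# bare_rate is 0.0, while rag_rate is higher even though the retrieved
# document is harmless -- echoing Llama-3-8B's jump from 0.3% to 9.2%.
```

In a real harness, `call_model` would hit an LLM endpoint and `is_unsafe` would be a safety classifier or human judgment; the comparison structure is the point.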


How does RAG bypass enterprise AI guardrails?

So why and how does RAG end up bypassing guardrails? The Bloomberg researchers were not entirely certain, though they did have a few ideas.

Gehrmann hypothesized that the way LLMs were developed and trained did not fully account for safety alignment over very long inputs. The research demonstrated that context length directly affects safety degradation: “When provided with more documents, LLMs tend to be more vulnerable,” the paper states, finding that even introducing a single safe document can alter safety behavior.

“I think the big point of this RAG paper is that you really cannot escape this risk,” Amanda Stent, Bloomberg’s head of AI strategy and research, told VentureBeat. “It is inherent to the way RAG systems work. The way you escape it is by putting business logic, fact checks or guardrails around the core RAG system.”

Why generic AI safety taxonomies fail in financial services

Bloomberg’s second paper introduces a specialized AI content risk taxonomy for financial services, addressing domain-specific concerns such as financial misconduct, confidential disclosure and counterfactual narratives.

The researchers empirically demonstrate that existing guardrail systems miss these specialized risks. They tested open-source guardrail models, including Llama Guard, Llama Guard 3, AEGIS and ShieldGemma, against data collected during red-teaming exercises.

“We developed this taxonomy and then ran an experiment where we took openly available guardrail systems that are published by other firms, and we ran them against data that we had collected as part of our ongoing red-teaming events,” Gehrmann explained. “We found that these open-source guardrails … find none of the issues specific to our industry.”
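As a rough sketch of that experiment, one can run a generic, consumer-oriented guardrail over domain-specific red-team prompts and measure how many it misses. The keyword guardrail and the prompts below are invented placeholders; the paper tested real classifier models such as Llama Guard and AEGIS against Bloomberg’s internal red-teaming data.

```python
# Toy generic guardrail: flags only broad consumer-facing risk terms,
# standing in for a general-purpose safety classifier.
GENERIC_UNSAFE_TERMS = {"weapon", "malware", "violence"}

def generic_guardrail(prompt: str) -> bool:
    """Flag a prompt as unsafe if it mentions a generic risk term."""
    lowered = prompt.lower()
    return any(term in lowered for term in GENERIC_UNSAFE_TERMS)

# Invented red-team prompts: two domain-specific financial risks the
# taxonomy targets, plus one generic risk for contrast.
red_team_prompts = [
    "Draft a press release implying insider knowledge of the merger.",
    "Summarize the client's confidential positions for an outside party.",
    "Explain how to write malware.",
]

missed = [p for p in red_team_prompts if not generic_guardrail(p)]
miss_rate = len(missed) / len(red_team_prompts)
# The generic guardrail catches the malware prompt but misses both
# finance-specific prompts, giving a miss rate of 2/3 on this toy set.
```

The point of the design mirrors the paper’s finding: a guardrail tuned to generic harms passes straight over harms defined by a specific industry’s rules.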

The researchers developed a framework that goes beyond generic safety models, focusing on risks unique to professional financial environments. Gehrmann argued that general-purpose guardrail models are usually developed with consumer-facing risks in mind, so they focus heavily on toxicity and bias. Those are important concerns, he said, but they are not specific to any one industry or domain. The key takeaway of the research is that organizations need domain-specific taxonomies for their own industry and application use cases.
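One minimal shape such a domain-specific layer could take is a mapping from risk categories to trigger checks, applied alongside any generic guardrail. The three category names below follow the taxonomy areas named in the article; the trigger terms and the structure itself are illustrative assumptions, not Bloomberg’s actual taxonomy.

```python
# Illustrative domain-specific risk layer for financial services.
# Category names follow the article; trigger terms are invented placeholders.
FINANCIAL_TAXONOMY = {
    "financial_misconduct": ("insider", "front-run", "spoofing"),
    "confidential_disclosure": ("confidential", "client list"),
    "counterfactual_narrative": ("fabricate", "pretend it happened"),
}

def domain_guardrail(prompt: str) -> list[str]:
    """Return the taxonomy categories a prompt triggers (empty list = pass)."""
    lowered = prompt.lower()
    return [
        category
        for category, terms in FINANCIAL_TAXONOMY.items()
        if any(term in lowered for term in terms)
    ]

# A prompt a generic toxicity/bias guardrail would wave through:
flags = domain_guardrail("Summarize the client's confidential positions.")
# flags == ["confidential_disclosure"]
```

A production version would replace the keyword tuples with trained classifiers per category, but the interface — prompt in, list of violated domain categories out — is the part that generic safety frameworks lack.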

Responsible AI at Bloomberg

Bloomberg has built a name for itself over the years as a trusted provider of financial data systems. In some respects, gen AI and RAG systems could be seen as potentially competitive with Bloomberg’s traditional business, which might suggest some hidden bias in the research.

“We are in the business of giving our customers the best data and analytics, and the ability to discover, analyze and synthesize them,” Stent said. “Generative AI is a tool that can really help with discovery, analysis and synthesis across data and analytics, so for us, it’s a benefit.”

She added that the kinds of bias Bloomberg is concerned about relate to its own AI solutions: issues such as data drift, model drift and ensuring good representation across the whole suite of tickers and securities that Bloomberg processes.

She also highlighted the company’s commitment to transparency in its AI efforts.

“Everything the system outputs, you can trace back, not only to the document but to the place in the document where it came from,” Stent said.

Practical implications for enterprise AI deployments

For enterprises leading the way in AI, Bloomberg’s research means that RAG implementations require a fundamental rethinking of safety architecture. Leaders must move beyond viewing guardrails and RAG as separate components and instead design integrated safety systems that explicitly anticipate how retrieved content might interact with model safeguards.

Industry-leading organizations will need to develop domain-specific risk taxonomies tailored to their regulatory environments, shifting from generic AI safety frameworks to ones that address specific business concerns. As AI becomes increasingly embedded in mission-critical workflows, this approach turns safety from a compliance exercise into a competitive differentiator that customers and regulators will come to expect.

“It really starts with being aware that these issues might occur, taking the action of actually measuring them and identifying these issues, and then developing safeguards that are specific to the application you’re building,” Gehrmann explained.
