Generative AI and privacy are best frenemies – a new study ranks the best and worst offenders

By PineapplesUpdate | June 25, 2025 | 5 min read

Most generative AI companies rely on user data to train their chatbots. For that, they can turn to public or private data. Some services are less aggressive than others about scooping up data from their users; others, not so much. A new report from a data-removal service looks at which AIs are the best and worst at respecting your personal data and privacy.

For its report “Gen AI and LLM Data Privacy Ranking 2025,” Incogni examined nine popular generative AI services and applied 11 separate criteria to measure their data-privacy practices. The criteria cover the following questions:

1. What data is used to train the models?
2. Can user conversations be used to train the models?
3. Can prompts be shared with non-service providers or other reasonable entities?
4. Can users' personal information be removed from the training dataset?
5. How clear is it whether prompts are used for training?
6. How easy is it to find information about how the models were trained?
7. Is there a clear privacy policy for data collection?
8. How readable is the privacy policy?
9. Which sources are used to collect user data?
10. Is data shared with third parties?
11. What data do the AI apps collect?
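The article does not disclose how Incogni weights these 11 criteria into a final ranking, but the basic aggregation can be sketched in a few lines. Everything here is hypothetical for illustration: the criterion scores are invented, and a simple unweighted mean stands in for whatever scoring Incogni actually uses.

```python
# Hypothetical sketch of aggregating per-criterion privacy scores into a
# ranking. Scores and the equal-weight mean are assumptions, not Incogni's
# actual methodology.

def rank_services(scores: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank services by mean criterion score; higher means more privacy-friendly."""
    averaged = {name: sum(vals) / len(vals) for name, vals in scores.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

# Invented example: three services scored 0..1 on three of the eleven criteria.
example = {
    "Le Chat": [0.90, 0.80, 0.85],
    "ChatGPT": [0.80, 0.90, 0.70],
    "Meta AI": [0.20, 0.30, 0.25],
}

ranking = rank_services(example)  # most privacy-friendly first
```

With these invented inputs, the ordering matches the report's broad finding (Le Chat first, Meta AI last), but only because the inputs were chosen that way.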

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI did well on some questions and not so well on others.

Also: Want AI to work for your business? Then privacy needs to come first

As one example, Grok earned a good grade for how clearly it explains that prompts are used for training, but did not do so well on the readability of its privacy policy. As another example, the grades given to ChatGPT's and Gemini's mobile apps for data collection differed considerably between the iOS and Android versions.

Across the group, however, Le Chat took the top prize as the most privacy-friendly AI service. Although it lost some points for transparency, it still performed well in that area. In addition, its data collection is limited, and it scored high on other AI-specific privacy issues.

ChatGPT came in second place. Incogni's researchers had some concerns about how OpenAI's models are trained and how user data interacts with the service. But ChatGPT clearly presents the company's privacy policies, lets you understand what happens with your data, and provides clear ways to limit how your data is used.

(Disclosure: ZDNET's parent company, Ziff Davis, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok came in third place, followed by Claude and Pi. Each had trouble in some areas, but overall was quite good at respecting user privacy.

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni said in its report. "These platforms ranked highest when it comes to how transparent they are about how they use and collect data, and how easy it is to opt out of having personal data used to train the underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training, and it had a clear privacy policy."

In the lower half of the list, DeepSeek finished sixth, followed by Copilot and then Gemini. Meta AI landed in last place, rated the least privacy-friendly AI service of the bunch.

Also: How Apple plans to train its AI on your data without sacrificing your privacy

Copilot scored the worst of the nine services on the AI-specific criteria, such as what data is used to train the models and whether user interactions can be used in training. Meta AI took the worst grade for its overall data collection and sharing practices.

"Platforms developed by the biggest tech companies turned out to be the most privacy-invasive, with Meta AI (Meta) being the worst, followed by Gemini and Copilot," the report said. "Gemini, DeepSeek, Pi AI, and Meta AI don't seem to allow users to opt out of having prompts used to train the models."

Incogni's AI chatbot privacy ranking for 2025 (Image: Incogni)

In its research, Incogni found that AI companies share data with various parties, including service providers, law enforcement, member companies of the same corporate group, research partners, affiliates, and third parties.

"Microsoft's privacy policy implies that user prompts may be shared with third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies," the report said. "The privacy policies of DeepSeek and Meta indicate that prompts can be shared with companies within their corporate groups. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators."

With some services, you can prevent your prompts from being used to train the models. That is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection doesn't seem possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this question, Anthropic said that it never uses user prompts to train its models.
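The per-service opt-out findings above can be summarized as a small lookup table. This is just a restatement of the article's claims as of the report's publication, not an authoritative or current source on any vendor's policy; the helper function is hypothetical.

```python
# Opt-out availability for prompt-based training, as summarized in the article.
# Anthropic is a special case: it says it never trains on user prompts.
PROMPT_TRAINING_OPT_OUT = {
    "ChatGPT": True,
    "Copilot": True,
    "Mistral AI": True,
    "Grok": True,
    "Gemini": False,
    "DeepSeek": False,
    "Pi AI": False,
    "Meta AI": False,
}

def can_opt_out(service: str) -> bool:
    """Return True if the article says the service lets users opt out."""
    return PROMPT_TRAINING_OPT_OUT.get(service, False)
```

Unknown services default to False here, which is a conservative assumption rather than a claim about them.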

Also: Your data is probably not ready for AI – here's how to make it trustworthy

Finally, a transparent and readable privacy policy goes a long way toward helping you find out what data is being collected and how to opt out.

"Having an easy-to-use, plainly written support section that enables users to find answers to privacy-related questions proved to greatly improve transparency and clarity, as long as it's kept up to date," Incogni said. "Many platforms have similar data-handling practices; however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all their products, and a long privacy policy doesn't mean it's easy for users to find answers to their questions."

Get the morning's top stories in your inbox each day with our Tech Today newsletter.
