The regular ChatGPT user (this article's author among them) may or may not know that the hit chatbot lets them enter a “temporary chat,” which is designed to wipe all the information exchanged between the user and the underlying AI model as soon as the user closes the session.
In addition, OpenAI allows users to manually delete prior ChatGPT sessions from the left sidebar on the web, by right- or control-clicking them in the desktop apps, or by long-pressing them in the mobile apps.

However, this week OpenAI found itself criticized by some of those very users after they learned that the company had not, in fact, been deleting these chat logs as indicated.
“You're telling me that my deleted ChatGPT chats are not actually deleted and are being saved for a federal judge?” posted X user @ns123abc in a comment viewed more than one million times.
Another user, @kepano, asked: “You can ‘delete’ a ChatGPT chat, but all chats must be retained due to legal obligations?”
AI influencer and software engineer Simon Willison wrote on his personal blog: “Paying customers of [OpenAI's] APIs may well decide to switch to other providers who can offer retention policies that aren't subverted by this court order!”
Indeed, OpenAI confirmed that since mid-May 2025 it has, in response to a federal court order, been preserving deleted and temporary user chat logs, though it did not disclose this to users until June 5.
The order, embedded below and issued on May 13, 2025 by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis,” including chats deleted at a user's request or because of privacy obligations.
The directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a copyright case now roughly 1.5 years old and still being argued, in which the NYT's lawyers allege that OpenAI's language models regurgitate copyrighted news material verbatim. The plaintiffs argue that the logs, including ones users may have deleted, could contain infringing outputs relevant to the trial.
While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, until OpenAI released a blog post and an accompanying FAQ describing the legal mandate and spelling out what is affected.
However, OpenAI is placing the blame squarely on the NYT and the judge's order, calling the preservation demand “baseless.”
OpenAI clarifies what is happening with the court order to preserve ChatGPT user logs, and which chats are affected
In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company's position, saying it is advocating for users' privacy and security against an overly broad judicial order, writing:
“The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users.”
The post clarified that ChatGPT Free, Plus, Pro, and Team users, along with API customers without a zero data retention (ZDR) agreement, are affected by the preservation order. Even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu users, as well as API clients using ZDR endpoints, are not affected by the order, and their chats will be deleted as directed.
The preserved data is held under legal hold, meaning it is stored in a secure, segregated system and accessible only to a small number of legal and security personnel.
“This data is not automatically shared with The New York Times or anyone else,” Lightcap emphasized in OpenAI's blog post.
Sam Altman floats the new concept of ‘AI privilege’ allowing for confidential interactions between models and users, as with a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also addressed the issue in a post from his account on the social network X last night, writing:
“Recently the NYT asked a court to force us to not delete any user chats. We think this was an inappropriate request that sets a bad precedent. We are appealing the decision. We will fight any demand that compromises our users' privacy; this is a core principle.”
He also suggested that a broader legal and ethical framework may be needed for AI privacy:
“We have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation.”
“Talking to an AI should be like talking to a lawyer or a doctor.”
“I hope society will figure this out soon.”
The notion of AI privilege, as a potential legal standard, echoes attorney-client and doctor-patient confidentiality.
Whether such a framework will gain traction in courts or policy circles remains to be seen, but Altman's comments indicate that OpenAI may increasingly push for such a shift.
What comes next for OpenAI and your temporary/deleted chats?
OpenAI has lodged a formal objection to the court order, requesting that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate.
At a May 27 hearing, Judge Wang noted that the order is temporary. She directed the parties to develop a sampling plan to test whether deleted user data differs materially from the retained logs, and ordered OpenAI to submit that proposal by today, June 6, though I have not yet seen the filing.
What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments
While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply to the professionals responsible for deploying and scaling AI solutions within organizations.
Those who oversee the full lifecycle of large language models, from data ingestion to fine-tuning and integration, will need to reassess their assumptions about data governance. If an LLM's user-facing components are subject to legal preservation orders, it immediately raises questions about where data goes after it leaves a secure endpoint, and about how to isolate, log, or anonymize high-risk interactions.
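On the anonymization point, a minimal sketch of the kind of redaction step a team might apply before a user interaction is ever written to logs (the patterns and placeholder tokens below are illustrative only; a production system would rely on a vetted PII-detection library rather than hand-rolled regexes):

```python
import re

# Hypothetical patterns; real deployments should use a vetted PII-detection library.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask obvious identifiers before a prompt or response hits the logs."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(redact("reach jane.doe@example.com, SSN 123-45-6789"))
```

Redacting at the logging boundary means that even if retained logs are later swept into legal discovery, the highest-risk identifiers never entered them in the first place.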
Any platform touching the OpenAI API should validate which endpoints (e.g., ZDR vs. non-ZDR) are in use, and ensure that data-handling policies are reflected in user agreements, audit logs, and internal documentation.
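As an illustration of that validation step, the sketch below audits a hypothetical internal inventory of API integrations. Note that ZDR coverage is a contractual arrangement with OpenAI, not an API parameter, so the `zdr_covered` flag, integration names, and schema here are all assumptions for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApiIntegration:
    name: str
    endpoint: str
    zdr_covered: bool  # whether a zero data retention agreement applies to this traffic

def flag_retention_exposed(integrations):
    """Return names of integrations whose traffic may fall under the preservation order."""
    return [i.name for i in integrations if not i.zdr_covered]

inventory = [
    ApiIntegration("support-bot", "https://api.openai.com/v1/chat/completions", zdr_covered=False),
    ApiIntegration("batch-summarizer", "https://api.openai.com/v1/chat/completions", zdr_covered=True),
]
print(flag_retention_exposed(inventory))
```

Running an audit like this against a real integration inventory gives legal and compliance teams a concrete list of systems whose data-handling disclosures may need updating.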
Even where ZDR endpoints are used, data-lifecycle policies may need review to confirm that downstream systems (e.g., analytics, logging, backups) do not unknowingly retain transient interactions that were assumed to be short-lived.
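One common control for those downstream stores is a time-to-live purge. A minimal sketch, assuming a hypothetical 30-day retention window and a stand-in record schema with Unix timestamps:

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day window for transient chat records

def purge_expired(records, now=None):
    """Keep only records younger than the retention window.

    `records` is a list of dicts with a `created_at` Unix timestamp; the
    schema is a stand-in for an analytics, logging, or backup store.
    """
    now = time.time() if now is None else now
    return [r for r in records if now - r["created_at"] < RETENTION_SECONDS]

now = 1_700_000_000
records = [
    {"id": "fresh", "created_at": now - 3600},            # 1 hour old: kept
    {"id": "stale", "created_at": now - 90 * 24 * 3600},  # 90 days old: purged
]
print([r["id"] for r in purge_expired(records, now=now)])
```

Scheduling a purge like this across every downstream copy is what turns a "transient" interaction into one that is actually short-lived in practice.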
Security leaders responsible for risk management should now expand their threat modeling to include legal discovery as a potential vector. Teams should verify whether OpenAI's backend retention practices align with their internal controls and third-party risk assessments, and whether users are relying on features, such as “temporary chats,” that no longer function as expected under a legal preservation order.
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it is a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as one of “AI privilege,” OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act between legal compliance, enterprise assurance, and user trust, and facing louder questions about who controls your data when you talk to a machine.