June 7, 2025
Intangible Assets

OpenAI Appeals ‘Sweeping, Unprecedented Order’ Requiring It Maintain All ChatGPT Logs


Last month, a federal judge ordered OpenAI to preserve all ChatGPT user data indefinitely as part of an ongoing copyright lawsuit. In response, OpenAI has filed an appeal to overturn the decision, arguing that the “sweeping, unprecedented order” violates its users’ privacy.

The New York Times sued both OpenAI and Microsoft in 2023, claiming that the companies violated its copyrights by using its articles to train their language models. However, OpenAI has said the Times’ case is “without merit” and argued that the training falls under “fair use.”

Previously, OpenAI kept chat logs only for users of ChatGPT Free, Plus, and Pro who didn’t opt out. In May, however, the Times and other news organizations claimed that OpenAI was engaging in a “substantial, ongoing” destruction of chat logs that could contain evidence of copyright violations. Judge Ona Wang responded by ordering OpenAI to preserve and segregate all ChatGPT logs that would otherwise be deleted.

In its appeal, OpenAI argued that Wang’s order “prevent[s] OpenAI from respecting its users’ privacy decisions.” According to Ars Technica, the company also called the Times’ accusations “unfounded,” writing, “OpenAI did not ‘destroy’ any data, and certainly did not delete any data in response to litigation events. The order appears to have incorrectly assumed the contrary.”

“The [Times] and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us,” COO Brad Lightcap said in a statement. He added that the demand for OpenAI to retain all data “abandons long-standing privacy norms and weakens privacy protections.”

On X, CEO Sam Altman wrote that the “inappropriate request…sets a bad precedent.” He added that the case highlights the need for “AI privilege,” where “talking to an AI should be like talking to a lawyer or a doctor.”

The court order triggered an initial wave of panic. Per Ars Technica, OpenAI’s court filing cited social media posts on LinkedIn and X in which users expressed concern about their privacy. On LinkedIn, one person warned their clients to be “extra careful” about what information they shared with ChatGPT. In another example, someone tweeted, “Wang apparently thinks the NY Times’ boomer copyright concerns trump the privacy of EVERY @OPENAI USER – insane!!!”

On one hand, I couldn’t imagine a ChatGPT log of mine containing data sensitive enough that I’d care if someone else read it. However, people do use ChatGPT as a therapist, turn to it for life advice, and even treat it as a romantic partner. Regardless of whether I’d personally do the same, those users deserve the right to keep that content private.

At the same time, the Times’ case isn’t as baseless as OpenAI claims. It is absolutely worth discussing how artificial intelligence is trained. Remember when Clearview AI scraped 30 billion images from Facebook to train its facial recognition system? Or the reports that the federal government uses images of vulnerable people to test facial recognition software? Yes, those examples sit outside journalism and copyright law, but they highlight the need for conversations about whether companies like OpenAI should obtain explicit consent to use content rather than scraping whatever they want from the internet.




