Last month, a federal judge ordered OpenAI to preserve all ChatGPT data as part of an ongoing copyright lawsuit. In response, OpenAI filed an appeal to overturn the decision, arguing that the "sweeping and unprecedented" order violates its users' privacy.
The New York Times sued OpenAI and Microsoft in 2023, claiming the companies violated copyright by using its articles to train their language models. OpenAI, however, has said the Times' case is without merit and argued that its training falls under "fair use."
Previously, OpenAI only retained chat logs for ChatGPT Free, Plus, and Pro users who had not opted out. In May, however, the Times and other news organizations claimed that OpenAI had engaged in the large-scale and ongoing destruction of chat logs that could contain evidence of copyright infringement. Judge Ona Wang responded by ordering OpenAI to preserve all ChatGPT records that would otherwise be deleted.
In its appeal, OpenAI argued that Wang's order "prevent[s] OpenAI from respecting the privacy decisions of its users." According to Ars Technica, the company also called the Times' accusations "unfounded," writing that OpenAI "did not destroy any data, and certainly no data was deleted in response to litigation," contrary to what the plaintiffs appear to have incorrectly assumed.
"[The Times] and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us," COO Brad Lightcap said in a statement. He added that the demand for OpenAI to retain all data "abandons long-standing privacy norms and weakens privacy protections."
On X, CEO Sam Altman wrote that the "inappropriate request … sets a bad precedent." He added that the case highlights the need for "AI privilege," since talking to an AI "should be like talking to a lawyer or a doctor."
We have recently been thinking about the need for something like "AI privilege"; this really accelerates the need to have the conversation.
Talking to an AI should be like talking to a lawyer or a doctor.
I hope society figures this out soon.
— Sam Altman (@sama) June 6, 2025
The court order has set off an initial wave of panic. Per Ars Technica, OpenAI's court filings cited social media posts from LinkedIn and X in which users expressed concerns about their privacy. On LinkedIn, one person warned their clients to take "extra care" with the information they share with ChatGPT. In another example, someone tweeted, "Wang seems to think the New York Times' copyright outweighs the privacy of every OpenAI user – crazy!!!"
On one hand, I personally can't imagine having ChatGPT data sensitive enough that I'd care if someone else read it. But people use ChatGPT as a therapist, for life advice, and even as a romantic partner. Regardless of whether I would do the same, they deserve the right to keep that content private.
At the same time, the Times' case isn't as baseless as OpenAI claims, and it's worth discussing how AI gets trained. Remember when Clearview AI scraped 30 billion images from Facebook to train facial recognition? Or the reports that the federal government used images of vulnerable people to test facial recognition software? Yes, these examples fall outside journalism and copyright law. But they highlight the need for conversations about whether companies like OpenAI should need explicit consent to use content, rather than scraping whatever they want from the internet.