Forgive me, Lord, it's time to go back to the old ChatGPT

Earlier this year, OpenAI dialed back some of ChatGPT's "character" as part of a broader effort to improve user safety after a teenager died by suicide following conversations with the chatbot. But apparently, that's all in the past. Sam Altman announced on Twitter that the company is bringing back the old ChatGPT, now with a porn mode.

"We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues," Altman said, referring to the company's age-detection measures, which steered users toward a more age-appropriate experience. Around the same time, users started complaining that ChatGPT had been "lobotomized," delivering worse output with less personality. "We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right." The changes came after a wrongful death lawsuit filed by the parents of a 16-year-old who had asked ChatGPT for, among other things, advice on how to tie a noose before taking his own life.

But don't worry, everything is fixed now! Even though the company admitted earlier this year that its safeguards can "degrade" over the course of longer conversations, Altman confidently claimed that OpenAI has "been able to mitigate the serious mental health issues." Because of that, the company believes it can "safely relax the restrictions in most cases." In the coming weeks, according to Altman, ChatGPT will be allowed to show more personality, like the company's previous 4o model. When the company upgraded to GPT-5 earlier this year, users started mourning the loss of their AI companions and lamenting the chatbot's more sterile responses. You know, just regular healthy behavior.

"If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," Altman said, apparently ignoring the company's own earlier research, which warned that people could develop an "emotional dependence" when interacting with its 4o model. Researchers from the Massachusetts Institute of Technology have warned that users who "perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive." Now, it seems, that's a feature rather than a bug. Really great stuff.

Taking it a step further, Altman said the company will embrace the principle of "treating adult users like adults" by offering "erotica for verified adults." Earlier this year, Altman mocked Elon Musk's xAI for launching an AI girlfriend mode. Turns out he has come around to the way of the waifu.
