Sam Altman’s goal for ChatGPT to remember “your whole life” is both exciting and worrying



OpenAI CEO Sam Altman laid out a grand vision for the future of ChatGPT at an AI event hosted by the VC firm Sequoia earlier this month.

When one attendee asked how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document everything in a person’s life.

The ideal, he said, is “a very tiny reasoning model with a trillion tokens of context that you put your whole life into.”

“This model can reason across your whole context and do it efficiently. Every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus it’s connected to all your data from other sources. And your life just keeps appending to the context.”

“Your company does the same thing for all your company’s data,” he added.

Altman may have some data-driven reasons to believe this is ChatGPT’s natural future. In that same discussion, when asked about cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.

Additionally, with ChatGPT’s memory options, which can use previous chats and memorized facts as context, one trend he’s noticed is that young people “don’t really make life decisions without asking ChatGPT.”

“A gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”

It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents Silicon Valley is currently trying to build, that’s an exciting future to think about.

Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.

But the frightening part? How much should we trust a big for-profit tech company to know everything about our lives? These are companies that don’t always behave in exemplary ways.

Google, which began life with the motto “Don’t be evil,” lost a lawsuit in the US that accused it of engaging in anticompetitive, monopolistic behavior.

Chatbots can also be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was randomly discussing “white genocide” in South Africa when people asked it completely unrelated questions. The behavior, many noted, suggested deliberate manipulation of its response engine at the behest of its South African-born founder, Elon Musk.

Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising that the team had fixed the tweak that caused the problem.

Even the best, most reliable models still outright make things up from time to time.

So having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, it’s also a situation ripe for misuse.


