Remember when the hosts of the All-In podcast all nodded along as Uber founder Travis Kalanick talked about "vibe physics"? Kalanick told viewers he was on the verge of discovering new kinds of science by pushing AI chatbots into previously unexplored territory.
It was ridiculous, of course, because that's not how chatbots or science work, and Kalanick's ideas were roundly mocked on social media. But now the All-In crew seems to be gently walking back Kalanick's ideas, even suggesting they could be tied to the emergence of "AI psychosis," despite the fact that the hosts were more than happy to entertain the Uber founder's nonsense when he was on the show.
Kalanick appeared as a guest on the July 11 episode of All-In, earnestly explaining how he was on the verge of discovering exciting new things about quantum physics, previously unknown to science.
"I'll go down this thread using GPT or Grok and I'll start to get to the edge of what's known in quantum physics, and then I'm doing the equivalent of vibe coding, except it's vibe physics," Kalanick explained. "We're approaching what's known. And I'm trying to poke and see if there are breakthroughs to be had. And I've gotten pretty close to some interesting breakthroughs just doing that."
The fact is that AI chatbots like Grok and ChatGPT are incapable of delivering new discoveries in quantum physics because that's beyond their capabilities. They spit out sentences by remixing and rephrasing their training data, not by testing hypotheses. But co-host Chamath Palihapitiya seemed to think Kalanick was onto something, taking it a step further by insisting that AI chatbots would eventually be able to figure out the answer to just about any problem posed to them.
"When these models are fully divorced from having to learn on the known world, and instead can learn synthetically, then everything gets flipped upside down to: what is the best hypothesis you have, or what is the best question?"
This kind of insistence that AI chatbots can solve any problem makes for good marketing, but it also sets users up for failure. Tools like Grok and ChatGPT still struggle with basic tasks like counting the number of U.S. state names that contain the letter P, because that's simply not what large language models are good at. But that hasn't stopped people like OpenAI's Sam Altman from making grandiose promises.
Podcast host Jason Calacanis was the only one who suggested, during the July 11 episode, that perhaps Kalanick was reading too much into his own experience. Calacanis asked Kalanick whether he was "kind of reading into it" while just trying random things at the margins. The Uber founder admitted that he hadn't actually arrived at a new idea, but said that was only because "these things are so attached to what's known." Kalanick compared it to dragging a stubborn donkey, suggesting he really could have reached new discoveries if he'd just pushed hard enough.
You'd expect that to be the last word on the topic, given how much the All-In guys like to avoid controversy. They notably failed to produce a podcast episode the week Elon Musk and President Trump had their public falling out. (The podcast's hosts are all friends with Musk, and co-host David Sacks is Trump's AI czar.) So it was somewhat surprising to hear Kalanick's strange ideas come up again on the newest episode, especially in a way that poked fun at them.
The latest episode of All-In, released on August 15, opened with a discussion of so-called "AI psychosis," a term that isn't defined in the medical literature but has emerged in the popular press to describe how people struggling with their mental health can see their symptoms worsen by engaging with AI. Gizmodo reported last week on complaints filed with the FTC about users experiencing delusions fueled by ChatGPT. One complaint even recounted how a user stopped taking his medication because ChatGPT told him he wasn't sick, even as he was suffering a delusional breakdown.
AI psychosis isn't a clinical term, and it's hard to pin down exactly how many people have had their mental health severely strained by using AI chatbots. But OpenAI, the maker of ChatGPT, has acknowledged it's a problem. Calacanis opened the show by talking about how people can get "one-shotted," a new colloquialism borrowed from video games and applied to people who fall deep down the AI rabbit hole. They get pulled in by the AI, fail to understand that it's just a computer program, and send themselves into a delusional spiral.
"Maybe you witnessed a little bit of this when Travis [Kalanick] was on the program two weeks ago and said he was, like, spending his time on the fringes or edges … physics," Calacanis said. "It can really take you down the rabbit hole."
"Are you saying that Travis is suffering from AI psychosis?" co-host David Friedberg asked.
"I'm saying we might need a wellness check. We might need a wellness check, because smart people can get pulled into this AI. So we might have to do a little wellness check on our boy TK."
Palihapitiya seemed to believe that the main problem behind AI psychosis was simply loneliness, while ignoring his own role in feeding Kalanick's narrative that AI chatbots really could make new discoveries in science. David Sacks, for his part, wasn't having any of it, insisting that AI psychosis was just a moral panic similar to the fears about social media 20 years ago.
"This whole idea of AI psychosis, I think I have to call nonsense on the entire concept. I mean, what are we even talking about here? People doing too much research?" Sacks said, in an attempt to downplay the news reports. "This seems like the moral panic that was created over social media, but updated for AI."
Sacks acknowledged that there's a mental health crisis in the United States, but he didn't believe it was AI's fault. And perhaps there's some truth to what Sacks says. Every new technology brings a degree of social upheaval and anxieties about what a given invention might mean for the future. But there's also no denying that people have become lonelier and more isolated since the rise of social media. That may not all be social media's fault. But revolutionary technologies will inevitably have both positive and negative effects on society.
The question is always whether the pros outweigh the cons. Arguably, the jury is still out on both social media and AI chatbots.