Chatbots Have Fools Convinced They Cracked the Code on the Statue in the CIA's Backyard


By [email protected]


Near the CIA's headquarters in Langley, Virginia, there is a statue known as Kryptos. It has stood there since 1990 and contains four encrypted messages – three of them have been solved. The fourth has gone 35 years without being deciphered. According to a WIRED report, the sculptor responsible wants everyone to know that you cannot solve the darn thing with a chatbot.

Jim Sanborn, who has created sculptures for the Massachusetts Institute of Technology and the National Oceanic and Atmospheric Administration in addition to his work outside the CIA, has been inundated by supremely confident people claiming to have solved K4, the final panel of unbroken code. But these submitters are not actual cryptanalysts, professional or otherwise, who have been chipping away at the message since its debut. No, they are just people who ran the code through a chatbot and took its word for the answer.

In conversation with WIRED, Sanborn said he has seen a significant rise in submissions – already annoying for someone who has fielded plenty of them over the past three and a half decades, so many that he was forced to start imposing a $50 fee just to review proposed solutions. But worse than the sheer frequency of submissions, according to Sanborn, is the attitude of the submitters.

“The character of the emails is different – the people cracking the code with artificial intelligence are fully convinced that they broke Kryptos during breakfast,” he told WIRED. “They are all very convinced that by the time they reach me, they have cracked it.”

Here are some samples of the highly arrogant, formulaic messages Sanborn has received in recent years:

“I am just a vet … cracked it in days with Grok 3.”

“What took the National Security Agency 35 years with all its resources, I did in only 3 hours, before my morning coffee.”

“Rewriting history … 100% cracked, no errors.”

If you have spent any time on social media, especially on X, you have seen these people. Maybe not these exact guys, but the same type of guy. You know, the ones who reply to someone’s post with “here’s what Grok says,” or share a screenshot of a ChatGPT response as if it added anything to the conversation.

The smugness, frankly, is inexplicable. Even if they had successfully broken Sanborn’s cipher using artificial intelligence (which, for the record, Sanborn says they have not come close to doing), why would having a machine do the work for you generate such self-satisfaction? It would be one thing if they had trained a large language model on a vast corpus of cryptography knowledge and used it to break Sanborn’s code. But they are literally just asking a chatbot to look at the thing and solve it. It is the least clever approach imaginable. It is flipping to the back of the textbook to find the answer to a math problem, except in this case the textbook hallucinated the answer with total confidence.

This behavior is not uncommon, really. A study published last year in the journal Computers in Human Behavior found that when people learn that advice was generated by artificial intelligence, they tend to over-rely on it, even letting it persuade them against conflicting contextual information and their own interests. The same study found that when someone over-relies on AI advice, it negatively affects their interactions with other humans. Perhaps because they are so very pleased with themselves.


