Chatbots Are Amplifying Los Angeles Protest Disinformation



Zoë Schiffer: Oh, wow.

Leah Feiger: Yes, exactly. Whoever has Trump's ear already. This has become really widespread. We were talking about people who went to Grok on X and asked, "Grok, what is this?" And what did Grok tell them? No, no, these weren't actually pictures of the protests in Los Angeles. It said they were from Afghanistan.

Zoë Schiffer: Oh. Grok, no.

Leah Feiger: Grok was like, "There's no credible support for this. This is a misattribution." It was really bad. It was really bad. Then there was another situation where a couple of other people were sharing these pictures with ChatGPT, and ChatGPT did the same thing: yes, this is Afghanistan, this isn't accurate, etc., etc. Not great.

Zoë Schiffer: I mean, don't even get me started. This moment comes after many of these platforms systematically dismantled their fact-checking programs and decided to allow more content. Then you add chatbots to the mix, which, for all their uses (and I think they can be genuinely useful), are incredibly confident. When they hallucinate, when they spew nonsense, they do it in a very convincing way. You won't catch me defending Google Search results. Absolute garbage, a nightmare. But it's at least a little clearer when you've landed on some random, unreliable blog than when Grok tells you with complete confidence that you're looking at a picture of Afghanistan when you're not.

Leah Feiger: It's really worrying. I mean, the hallucinations. It's all delivered totally cheerfully, like the drunk guy at a party who has unfortunately cornered you.

Zoë Schiffer: Nightmare. Nightmare. Yes.

Leah Feiger: They're like, "No, no, no. I'm sure. I have never been more sure in my life."

Zoë Schiffer: Definitely. I mean, OK, why do chatbots give these incorrect answers with such confidence? Why don't we ever see them just say, "Well, I don't know, so you might want to check elsewhere. Here are some reliable places to look for that answer and that information"?

Leah Feiger: Because they don't. They don't admit that they don't know, which is really wild to me. There has actually been a lot of research on this topic, and a recent study of AI search tools from the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately." You're like, well, you're carrying a lot of weight right now.

Zoë Schiffer: Well, I think we should pause there on that very horrifying note, and we'll be right back.

Welcome back to Uncanny Valley. I'm joined today by Leah Feiger, senior politics editor at WIRED. OK, so beyond people trying to verify information and footage, there was also a wave of reports about AI-generated videos. A TikTok account started uploading videos of a supposed National Guard soldier named Bob, deployed at the Los Angeles protests, and you can see him saying false and inflammatory things, like the claim that protesters were throwing "balloons full of oil," and one of the videos got close to a million views. So, I don't know, it seems like people should be getting better at spotting this kind of fake footage, but that's hard in an environment stripped of context, like a post on X or a video on TikTok.




