It seems that even though the internet is increasingly drowning in fake photos, we can at least take some comfort in humanity's ability to smell BS when it matters. A slew of recent research suggests that AI-generated misinformation has had no material impact on this year's elections around the world because it is not very good yet.
There has been concern for years that increasingly realistic but synthetic content could manipulate audiences in harmful ways. The emergence of generative AI has raised those fears again, as the technology makes it much easier for anyone to produce fake visual and audio media that appear real. Last August, a political consultant used AI to do just that, spoofing President Biden's voice in a robocall that told voters in New Hampshire to stay home during the state's Democratic primary.
Tools like ElevenLabs make it possible to upload a brief clip of a person speaking and then clone their voice to say whatever the user wants. Although many commercial AI tools include guardrails to prevent this use, open-source models are freely available.
Despite these developments, the Financial Times, in a new story, looked back at the past year and found that very little synthetic political content actually spread around the world.
It cited a report from the Alan Turing Institute which found that just 27 pieces of AI-generated content went viral during this summer's European elections. The report concluded there was no evidence the elections were affected by AI-driven misinformation because "most exposure was concentrated among a minority of users with political beliefs already aligned to the ideological narratives embedded within such content." In other words, among the few who saw the content (presumably before it was flagged) and were inclined to believe it, it merely reinforced existing beliefs about a candidate, even when those exposed to it knew the content was AI-generated. The report cited the example of AI-generated images showing Kamala Harris addressing a crowd in front of Soviet flags.
In the United States, the News Literacy Project identified more than a thousand examples of misinformation about the presidential election, but only 6% of them were made using AI.
Interestingly, users on social media were more likely to mistake real images for AI-generated ones than the other way around, but overall, users showed a healthy dose of skepticism. Fake media can still be debunked through official communication channels, or through other means like a reverse image search on Google.
If the findings are accurate, they make a lot of sense. AI imagery is everywhere these days, but AI-generated images still have an off-putting quality and show telltale signs of being fake. An arm may be unusually long, or a face may not reflect properly on a mirrored surface; there are many small cues that give an image away as artificial. Photoshop can be used to create far more convincing forgeries, but doing so takes skill.
AI advocates should not necessarily cheer this news. It means generated imagery still has a long way to go. Anyone who has checked out OpenAI's Sora model knows the video it produces is not very good; it often looks like something rendered by a video-game graphics engine (speculation is that it was trained on video games), and it clearly does not understand properties like physics.
All that said, there are still concerns to be had. The Alan Turing Institute report did, after all, conclude that beliefs can be reinforced by realistic deepfakes containing misinformation even when audiences know the media is not real; that confusion over whether a piece of media is authentic erodes trust in online sources; and that AI imagery has already been used to target female politicians with deepfake pornography, which can be psychologically damaging and harmful to their professional reputations because it promotes sexist beliefs.
Technology will certainly continue to improve, so it’s something to keep an eye on.