Several wealthy Italian businessmen received a sudden phone call earlier this year. The caller, who sounded exactly like Defence Minister Guido Crosetto, had an unusual request: please send money to help free Italian journalists kidnapped in the Middle East.
But Crosetto was not on the other end of the line. He only learned of the calls when several of the businessmen who had been targeted contacted him. It eventually emerged that the fraudsters had used artificial intelligence (AI) to clone Crosetto's voice.
Advances in artificial intelligence mean it is now possible to create highly realistic voice clones and deepfake audio. In fact, new research has found that AI-generated voices are now indistinguishable from real human voices. In this explainer, we break down what the effects of this might be.
What happened in the Crosetto case?
Many Italian entrepreneurs and businessmen received the calls at the beginning of February, one month after Prime Minister Giorgia Meloni secured the release of Italian journalist Cecilia Sala, who had been imprisoned in Iran.
In the calls, the deepfake voice of Crosetto asked businessmen to transfer about one million euros ($1.17m) to a bank account abroad, the details of which were provided during the call or in follow-up calls from people claiming to be members of Crosetto's staff.
On February 6, Crosetto posted on X, saying that he had received a call on February 4 from "a friend, a prominent businessman". The friend asked Crosetto whether his office had called to request his mobile phone number. Crosetto said it had not. "I told him it was ridiculous, as I already had it, and that it was impossible," he wrote in his post on X.
Crosetto added that he was subsequently contacted by another businessman who had made a large bank transfer after a call from a "general" who provided a bank account.

"He called me and told me that he had been contacted by a general, and that he had transferred a very large sum to an account provided by the 'general'," he wrote.
Other entrepreneurs received similar calls from fake ministry officials asking for personal information and money.
While he reported all of this to the police, Crosetto added: "I prefer to make the facts public so that no one risks falling into the trap."
Prominent business figures in Italy, such as fashion designer Giorgio Armani and Prada co-founder Patrizio Bertelli, were targeted in the scam. However, according to the authorities, only Massimo Moratti, the former owner of the Inter Milan football club, actually sent the requested money. Police were able to trace and freeze the money from the bank transfers.
Moratti has since filed a legal complaint with the city's public prosecutor's office. He told Italian media: "I filed the complaint, of course, but I prefer not to talk about it and to see how the investigation goes. Everything seemed real. It was well done. It could happen to anyone."
How does AI voice generation work?
AI voice generators typically use "deep learning" algorithms, through which the AI studies large datasets of real human voices and "learns" pitch, diction, intonation and other elements of speech.
An AI program can be trained on many audio clips of the same person and "taught" to imitate that specific person's voice, accent and speaking style. The resulting voice is called an AI-generated voice clone.
Using natural language processing (NLP) programs, which enable it to understand, interpret and generate human language, the AI can even learn to pick up features of speech such as sarcasm or curiosity.
These programs can convert text into speech components and then create a synthetic voice that sounds like a real human. This process is known as a "deepfake", a term coined in 2014 by Ian Goodfellow, director of machine learning at Apple's Special Projects Group. It combines "deep learning" and "fake", and refers to highly realistic images, videos or audio created through deep learning.
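To give a sense of what "studying" a voice means in practice, here is a minimal, illustrative sketch, not a real cloning system: voice models typically learn from spectrograms, time-frequency representations of speech. This toy example computes one from a synthetic waveform using only NumPy; the frame sizes and the 440 Hz test tone are arbitrary choices for illustration.

```python
# Illustrative only: voice-cloning models learn from time-frequency
# features (spectrograms) of speech, not raw text or raw audio alone.
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Split a waveform into overlapping windowed frames and take the
    magnitude of each frame's Fourier transform."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# Stand-in for "speech": a 440 Hz tone sampled at 16 kHz for half a second
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
wave = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(wave)
print(spec.shape)  # rows are time frames, columns are frequency bins
```

A real system feeds features like these into a neural network that learns to reproduce the speaker's characteristic patterns, then a second stage (a vocoder) turns generated features back into audio.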
How convincing is an AI impersonation of a person?
Research conducted by a team at Queen Mary University of London, published in the journal PLOS One on September 24, concluded that AI-generated voices now sound like real human voices to the people listening to them.
To conduct the research, the team created 40 AI voice samples – some cloned from the voices of real people and some generated as entirely new voices – using a tool called ElevenLabs. The researchers also collected 40 recordings of real people's voices. All 80 clips were edited and cleaned up for quality.
The research team used male and female voices with British, American, Australian and Indian accents in the samples. ElevenLabs also offers an "African" accent option, but the researchers found that label "too generic for our purposes".
The team recruited 50 participants aged between 18 and 65 in the United Kingdom for listening tests. They were asked to listen to the recordings and try to distinguish the AI-generated voices from the real human ones. They were also asked which voices sounded more trustworthy.
The study found that although the "new" voices generated entirely by AI were less convincing to participants, the deepfakes, or voice clones, were rated as realistic about as often as genuine human voices.
Forty-one percent of the fully AI-generated voices and 58 percent of the voice clones were mistaken for real human voices.
In addition, participants were more likely to rate British-accented voices as real or human than American-accented ones, suggesting that the AI's rendering of some accents is especially convincing.
More worryingly, participants tended to rate the AI-generated voices as more trustworthy than the real human voices. This contrasts with earlier research, which typically found AI voices less trustworthy, again suggesting that AI has become particularly sophisticated at generating fake voices.
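The percentages above come from tallying listener judgements per voice type. As a rough illustration of that kind of measurement (with invented response data, not the study's actual results), the tally can be sketched as:

```python
# Illustrative only: counting how often listeners label each voice type
# "human". The responses below are made up for the example.
from collections import Counter

# (voice_type, listener_judgement) pairs -- hypothetical data
responses = [
    ("clone", "human"), ("clone", "human"), ("clone", "ai"),
    ("clone", "human"), ("novel", "ai"), ("novel", "human"),
    ("novel", "ai"), ("human", "human"), ("human", "ai"),
]

judged_human = Counter()
totals = Counter()
for voice_type, judgement in responses:
    totals[voice_type] += 1
    if judgement == "human":
        judged_human[voice_type] += 1

for voice_type in sorted(totals):
    rate = judged_human[voice_type] / totals[voice_type]
    print(f"{voice_type}: {rate:.0%} judged human")
```

In the study's terms, a "clone" rate near the "human" rate is exactly the indistinguishability result the researchers reported.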
Should we all be very concerned about this?
Although AI-generated voices that sound convincingly "human" can be useful for industries such as advertising and film editing, they can also be misused for fraud and fake news.
Scams similar to the one that targeted the Italian businessmen are already on the rise. In the United States, there have been reports of people receiving calls featuring deepfake voices of their relatives claiming to be in trouble and asking for money.
Between January and June this year, people around the world lost more than $547.2m to deepfake fraud, according to data from Resemble AI, a California-based AI company. The trend appears to be upwards: the figure rose from just over $200m in the first quarter to $347m in the second.
Can video be "deepfaked" too?
Worryingly, yes. AI programs can be used to create deepfake videos of real people. This, combined with AI-generated voices, means videos can be faked to show people doing and saying things they never did.
Moreover, it is becoming increasingly difficult to distinguish between real and fake videos online.
DeepMedia, a company that develops tools to detect synthetic media, estimates that about eight million deepfakes will have been created and shared online by the end of 2025. This is a significant increase from the 500,000 shared online in 2023.
What else are deepfakes used for?
Besides phone-call fraud and fake news, AI deepfakes have been used to create sexual content about real people. More alarmingly, a report on AI published in July found that advances in AI have led to the industrial-scale production of AI-generated child sexual abuse material, which is overwhelming law enforcement worldwide.
In May this year, US President Donald Trump signed a bill into law making it a federal crime to post intimate images of a person without their consent. This includes AI-generated deepfakes. Last month, the Australian government also announced that it would ban an app used to create nude images.