75 million deepfakes: Persona leads companies' fight against hiring fraud

By [email protected]




With remote work now the norm, a stealthy threat has emerged in companies' hiring pipelines: sophisticated AI-powered fake candidates who can pass video interviews, submit convincing résumés, and even fool human resources professionals into hiring them.

Now, companies are racing to deploy advanced identity-verification technology to combat what security experts describe as an escalating candidate-fraud crisis, driven largely by generative AI tools and by coordinated efforts from foreign actors, including state-sponsored groups in North Korea seeking to infiltrate American companies.

San Francisco-based Persona, a leading identity verification platform, announced on Tuesday a major expansion of its workforce screening product, introducing new tools designed specifically to detect AI-generated personas and deepfake attacks during the hiring process. The enhanced solution integrates directly with major enterprise platforms, including Okta's Workforce Identity Cloud and Cisco Duo, allowing organizations to verify candidates' identities in real time.

"With state-sponsored actors infiltrating enterprises and generative AI making impersonation easier than ever, the enhanced Workforce solution gives organizations confidence that every interaction is tied to a real, verified individual," Rick Song, CEO and co-founder of Persona, told VentureBeat in an exclusive interview.

The timing of Persona's announcement reflects the increasingly urgent need to address what cybersecurity professionals call an "identity crisis" in remote hiring. According to an April 2025 Gartner report, by 2028 one in four candidate profiles will be fake, a startling prediction that underscores how AI tools have lowered the barriers to creating convincing false identities.

75 million blocked deepfake attempts reveal the massive scale of AI-powered fraud

The threat extends beyond individual bad actors. In 2024 alone, Persona blocked more than 75 million AI-based fraud attempts across its platform, which serves major technology companies including OpenAI, Coursera, Instacart, and Twilio. The company has observed a 50-fold increase in deepfake activity in recent years as attackers increasingly deploy sophisticated techniques.

"The North Korean IT worker threat is real," Song said. "But it is not just North Korea. Many foreign actors are doing things like this now, finding ways to infiltrate organizations. The insider threat to companies is higher than ever."

Recent high-profile cases underscore the severity of the problem. In 2024, cybersecurity company KnowBe4 unknowingly hired a North Korean IT worker who attempted to load malware onto the company's systems. Other Fortune 500 companies have reportedly fallen victim to similar schemes, with foreign actors using fake identities to gain access to sensitive company systems and intellectual property.

The Department of Homeland Security has warned that deepfake identities represent a growing national security threat, with malicious actors using AI-generated personas to create realistic video, imagery, audio, and text of events that never occurred.

How three-layer detection technology fights sophisticated fake-candidate schemes

Song's approach to fighting AI-generated fraud relies on what he calls a "multimodal" strategy that verifies identity across three distinct layers: the inputs themselves (photos, videos, documents), environmental context (device properties, network signals, capture methods), and population-level patterns that may indicate coordinated attacks.

"There is no silver bullet for solving identity," Song said. "You cannot look at it from a single methodology. AI can generate very convincing content if you look only at the input level, but all the other parts of creating a convincing fake identity are still hard."

For example, while an AI system might generate a convincing fake headshot, it is far harder to simultaneously spoof the device fingerprint, network properties, and behavioral patterns that Persona's systems monitor. "If your geographic location is off, your time zone is off, your environmental signals are off," he explained. "All of these things have to come together in one frame."
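To make the multi-layer idea concrete, here is a minimal sketch of how signals from the three layers could be combined into a single risk score, where mismatches across layers compound. The signal names, weights, and thresholds are invented for illustration and are not Persona's actual model.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    # Layer 1: the inputs themselves (hypothetical scores, 0.0 = likely fake, 1.0 = likely genuine)
    selfie_liveness_score: float
    document_authenticity: float
    # Layer 2: environmental context
    timezone_matches_claimed_location: bool
    device_seen_before: bool
    # Layer 3: population-level patterns
    device_shared_with_other_applicants: bool

def risk_score(s: VerificationSignals) -> float:
    """Combine signals from all three layers into a 0..1 risk score."""
    risk = 0.0
    risk += (1.0 - s.selfie_liveness_score) * 0.35
    risk += (1.0 - s.document_authenticity) * 0.25
    if not s.timezone_matches_claimed_location:
        risk += 0.15
    if not s.device_seen_before:
        risk += 0.05
    # A device reused across many "different" candidates suggests a
    # coordinated attack, so it carries heavy weight on its own.
    if s.device_shared_with_other_applicants:
        risk += 0.20
    return min(risk, 1.0)

if __name__ == "__main__":
    legit = VerificationSignals(0.95, 0.9, True, True, False)
    suspect = VerificationSignals(0.4, 0.5, False, False, True)
    print(f"legit risk:   {risk_score(legit):.2f}")    # low
    print(f"suspect risk: {risk_score(suspect):.2f}")  # high
```

The point of the design, per Song's description, is that no single signal decides: a flawless AI-generated selfie still scores as risky when the environmental and population-level layers disagree with it.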

The company's algorithms currently outperform humans at identifying deepfakes, although Song acknowledges the arms race. "AI is getting better, and it is improving faster than our ability to detect at the input level," he said. "But we are seeing progress and adapting our models accordingly."

Enterprise customers deploy workforce identity verification in under an hour

The enhanced workforce solution can be deployed remarkably quickly, according to Song. Organizations already using Okta or Cisco identity platforms can integrate Persona's screening tools in 30 minutes to an hour. "Integration is incredibly fast," Song said.

On the user-experience front, Song emphasized that legitimate candidates typically complete verification in seconds. The system is designed to create friction for bad actors to keep them out while maintaining a smooth experience for genuine applicants.

Major technology companies are already seeing results. OpenAI, which processes millions of user verifications per month through Persona, achieves a 99% automated-screening rate with just 18 milliseconds of latency. The AI company uses Persona's sanctions-screening capabilities to block bad actors from accessing its powerful language models while maintaining a frictionless signup experience for legitimate users.

Identity verification market pivots from background checks to candidate authentication

The rapid rise of AI-powered hiring fraud has created a new market category: identity verification designed specifically for workforce management. Traditional background-check companies, which verify information about candidates after assuming their identity is genuine, are not equipped to address the more fundamental question of whether a candidate is who they claim to be.

"Background checks assume you are who you say you are, and then verify the information you provide," Song explained. "The new problem is: are you who you say you are? That is fundamentally different from what background-check companies have traditionally solved."

The shift to remote work has eliminated many traditional identity-verification mechanisms. "You never used to have this problem, because if someone shows up in person, you know with relatively high confidence they are who they say they are," Song noted. "But if you are interviewing over Zoom, all of that could be a deepfake."

Industry analysts expect the workforce verification market to expand rapidly as more organizations recognize the scope of the threat. According to MarketsandMarkets, the global identity verification market is projected to reach $21.8 billion by 2028, up from $10.9 billion in 2023, a compound annual growth rate of 14.9%, with workforce applications representing one of the fastest-growing segments.

Beyond deepfake detection: the future of digital identity lies in behavioral history

As the technological arms race between AI-generated fraud and detection systems intensifies, Song believes the ultimate solution may require a fundamental shift in how we think about identity verification. Rather than focusing solely on detecting whether content was artificially generated, he envisions a future in which digital identity is established through accumulated behavioral history.

"Ultimately, the question may not be whether the content is real or fake, but whether a real, accountable person is behind this interaction," Song said. The company is exploring systems in which identity would be established through a person's digital footprint: a history of legitimate transactions, completed training courses, purchases, and verified interactions across multiple platforms over time.

"All the past actions I have taken, ordering from DoorDash, finishing a course on Coursera, buying shoes on StockX, may be the long-term interactions that really define who I am," he explained. This approach would make it far harder for bad actors to create convincing false identities, because they would need to fabricate years of authentic digital history rather than just one convincing video or document.
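The behavioral-history idea can be sketched in a few lines: identity confidence accrues from both the time span of verified activity and its diversity across independent platforms, which is exactly what a freshly fabricated identity lacks. The event format, platform names, and scoring formula below are invented for illustration, not any real system's logic.

```python
from datetime import date

def history_trust(events: list[tuple[date, str]], today: date) -> float:
    """Score 0..1 from (date, platform) pairs of verified activity.

    Rewards both how far back the history reaches and how many
    independent platforms it spans; neither volume nor age alone
    is enough to max out the score.
    """
    if not events:
        return 0.0
    span_years = (today - min(d for d, _ in events)).days / 365.0
    platforms = len({p for _, p in events})
    span_component = min(span_years / 5.0, 1.0)      # saturates at 5 years
    diversity_component = min(platforms / 4.0, 1.0)  # saturates at 4 platforms
    return 0.6 * span_component + 0.4 * diversity_component

today = date(2025, 6, 24)
# Years of activity across several platforms: high trust.
long_history = [
    (date(2019, 3, 1), "doordash"),
    (date(2021, 7, 9), "coursera"),
    (date(2023, 11, 2), "stockx"),
    (date(2025, 1, 15), "doordash"),
]
# A weeks-old account on a single platform: low trust.
fresh_fake = [(date(2025, 6, 1), "doordash")]

print(f"long history: {history_trust(long_history, today):.2f}")
print(f"fresh fake:   {history_trust(fresh_fake, today):.2f}")
```

The design choice worth noting is the saturation caps: an attacker cannot compensate for a short history by bulk-generating events, because only time span and platform diversity move the score.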

Persona's enhanced Workforce IDV solution is available immediately, with support for government ID verification in more than 200 countries and territories and integration capabilities with leading identity and access management platforms. As the remote-work revolution continues to reshape how companies operate, businesses find themselves in an unexpected position: having to prove that job candidates are real people before they can start checking their qualifications.

In the digital age, it seems the first qualification for any job may simply be existing.


