Back End News

iProov tops 1M daily biometric checks as deepfake attacks surge


iProov, a provider of science-based biometric identity verification solutions, said its platform handled more than one million identity checks per day in 2025, a sign that more organizations are adopting tools that can detect deepfakes and stop online impersonation.

The company said the increase comes as generative artificial intelligence (AI) makes it easier for cybercriminals to create highly realistic fake videos, photos, and voices that can trick login systems, banking platforms, and employee access controls.

“As deepfakes and AI agents transform the enterprise attack surface, identity becomes the foundation of digital trust,” said Andrew Bud, founder and CEO of iProov. “With well over one million daily verifications, iProov delivers genuine human presence assurance so organizations can secure customer and workforce identities by anchoring every critical digital interaction to a real, verified human.”

Security experts warn that identity systems are now a major target for attackers. A study by research firm Gartner found that 62% of organizations experienced a deepfake attack in the past year, as criminals increasingly try to impersonate people instead of breaking into systems through traditional hacking.


Rather than forcing their way into systems, attackers now try to log in using fake identities, deepfake videos, or AI-generated images to fool identity verification tools, the company said.

This shift is pushing organizations to adopt stronger biometric authentication and “liveness detection.” These technologies verify that a live person, rather than a recording, photo, or AI-generated image, is present during identity verification.
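To illustrate the general idea behind liveness detection, a minimal sketch of one common approach, challenge-response verification, is shown below. This is a generic illustration, not iProov's proprietary method: the verifier issues a fresh random challenge for each session, so a pre-recorded video or injected virtual-camera feed cannot contain the correct response.

```python
# Generic challenge-response liveness sketch (illustrative only; not
# iProov's actual technique). The verifier issues a one-time random
# challenge; only a live capture can reflect it, so replayed media fails.
import secrets

def issue_challenge() -> str:
    """Verifier picks a fresh random challenge, e.g. a color-flash sequence."""
    colors = ["red", "green", "blue", "yellow"]
    return "-".join(secrets.choice(colors) for _ in range(4))

def capture_response(challenge: str, live: bool) -> str:
    """A live capture reflects the current challenge; a replayed recording
    can only reproduce the stale answer it was made with."""
    return challenge if live else "stale-recorded-response"

def verify(challenge: str, response: str) -> bool:
    """Accept only responses matching this session's challenge."""
    return response == challenge

challenge = issue_challenge()
print(verify(challenge, capture_response(challenge, live=True)))   # live capture passes
print(verify(challenge, capture_response(challenge, live=False)))  # replayed media fails
```

Real systems use physical signals (screen-light reflection, depth, motion) rather than string matching, but the principle is the same: the response must be bound to an unpredictable, per-session challenge.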

Data from iProov’s Security Operations Center shows an increase in AI-driven fraud attempts. Its Threat Intelligence Report 2025 recorded a 2,665% jump in virtual camera attacks and a 300% increase in face-swap attempts compared with the previous year.

In a separate study involving 2,000 consumers, only 0.1% were able to correctly identify deepfake content, highlighting how difficult it has become for people to spot fake media without automated detection tools.

The company said independent testing and clear industry standards are becoming essential as organizations strengthen their defenses against deepfake scams and identity fraud.
