
MITRE ATLAS flags identity verification flaw uncovered by iProov


MITRE ATLAS has published a case study describing a high-risk weakness in remote identity verification, also known as Know Your Customer (KYC), based on research conducted by iProov’s internal red team. The entry joins the ATLAS knowledge base, a global resource used to document threats and defenses related to artificial intelligence (AI) systems.

The case study explains how attackers can use widely available tools to bypass mobile-based KYC checks that rely on facial verification. According to the research, face-swapped video feeds generated using consumer software can be injected into mobile applications, allowing fake identities to pass verification steps that are meant to confirm a real person is present.

“Contributions from across industry, academia, and government, ranging from red-team findings to operational threat insights, are essential to advancing the accuracy and completeness of the MITRE ATLAS knowledge base,” said Doug Robbins, vice president, MITRE Labs. “When organizations openly share data and expertise, we collectively enhance the security and resilience of AI-enabled systems and the nation.”

iProov said the findings reflect a significant increase in attacks targeting identity systems, driven by advances in generative AI (GenAI) and the availability of low-cost tools.

“We’ve seen an explosion in attack vectors relating to identity verification over the last 12 months, largely driven by advances in generative AI and the wide availability of low cost tools,” said Andrew Newell, chief scientific officer, iProov. “The publication of this latest MITRE ATLAS case study is part of the vital process of identifying and documenting such methodologies. The pace of evolution is only ever likely to increase, making it essential that all organisations examine their own defences against these new tactics without delay.”

The research focused on mobile KYC systems commonly used by banks, financial services firms, and cryptocurrency platforms. These systems often depend on active liveness checks, which analyze facial movements and visual details to confirm that a real user is in front of the camera. The case study shows that advanced deepfake videos can now replicate these signals closely enough to defeat such checks.

iProov’s red team demonstrated the attack using a combination of face-swapping software, video streaming tools, and an Android virtual camera application that runs on standard, non-rooted devices. By replacing the phone’s camera feed with a manipulated video, the team was able to complete identity verification under a false identity.

The case study also highlights the need for stronger testing standards. It points to the European standard CEN 18099, which sets more demanding requirements for testing liveness detection systems against injection attacks, as a step toward improving the security of remote identity verification.
