
In October 2025, Yurii Nazarenko — the operator of OnlyFake, an AI-powered counterfeit identity platform — pleaded guilty to charges in the United States after extradition from Romania. His platform had generated over 10,000 fake government-issued IDs, sold at around $15 each and used by buyers to bypass Know Your Customer (KYC) checks at banks and cryptocurrency exchanges across the globe. The prosecution was the first of its kind. The technology it exposed was anything but unique.
The scale of deepfake and synthetic identity fraud in 2025 and 2026 demands a frank reassessment of traditional identity verification. According to iProov’s Threat Intelligence Report, injection attacks — in which pre-generated deepfake video is fed directly into a verification system’s camera feed to defeat liveness checks entirely — rose 783% in 2024. Native virtual camera attacks surged 2,665% in 2025. Deepfake files grew from roughly 500,000 in 2023 to eight million by 2025. And Gartner warned in early 2024 that 30% of enterprises would consider identity verification solutions unreliable in isolation by 2026. That threshold has now arrived.
The financial toll is significant. Estimated annual losses from synthetic identity fraud in the US alone stand at $30–35 billion. A Veriff industry survey in March 2026 found that 74% of respondents reported increased online fraud in the past twelve months, with 75% attributing the growth to AI. Synthetic identity document fraud surged 311% between Q1 2024 and Q1 2025, according to Sumsub. Fraudsters are no longer forging documents. They are engineering complete digital personas — cultivated over months before activation — designed to pass every static checkpoint a traditional onboarding flow can mount.
What changed most fundamentally in 2025 is not the sophistication of the threat — it is the regulatory expectation around it. In September 2025, the UK’s Economic Crime and Corporate Transparency Act (ECCTA) brought its “failure to prevent fraud” offence into force for large organisations. The question courts and regulators now ask is not whether fraud occurred, but whether the institution had reasonable fraud prevention procedures in place — assessed against 2025 and 2026 standards. Those standards include demonstrable controls against deepfake and synthetic identity attacks.
The EU AI Act compounds this. From August 2026, biometric identity verification systems classified as high-risk face penalties of up to €15 million, or 3% of global turnover, for transparency and data governance failures. Operational evidence of compliance is required; policy declarations no longer suffice. The FCA’s enforcement record in 2025, with fines exceeding £124 million for AML failures across the sector, reinforced the message. Regulators are not accepting that growth or technology complexity excuses inadequate controls.
What institutions often underestimate is how accessible the attack toolkit has become. The ProKYC tool — available as a $629 annual subscription — demonstrated a successful bypass of a major crypto exchange using a synthetic passport and deepfake video. Deepfake-as-a-service packages on criminal marketplaces start at $5. At the most serious end, North Korean operatives used AI-generated identities and real-time deepfake video to pass employment background checks and video interviews at technology firms, generating an estimated $800 million for the regime in 2024, according to OFAC.
An emerging threat dimension connects directly to the rise of agentic AI. Sumsub’s 2025 Annual Report documented the shift from isolated fraud attempts to agentic fraud operations — autonomous AI systems using generative AI, behavioural mimicry, and automated sequencing to execute coordinated, multi-step attack chains at scale. In March 2026, risk intelligence platform Sardine detected a fraud ring involving 150,000 accounts opened in eleven minutes using precisely this approach. Fraudsters are deploying agents. Compliance defences built for individual human actors are increasingly mismatched to the threat.
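A burst of that kind — tens of thousands of accounts sharing a signal in minutes — is detectable with even a simple sliding-window velocity check over shared attributes such as a device fingerprint or IP subnet. The sketch below is illustrative only; the window size, threshold, and signal fields are invented for the example, not drawn from any vendor's implementation.

```python
from collections import deque

def make_burst_detector(window_seconds=600, threshold=50):
    """Flag signup bursts that share a signal (e.g. a device fingerprint
    or IP subnet) within a sliding time window. The threshold and window
    are illustrative; real systems tune them per signal and segment."""
    events = {}  # signal value -> deque of event timestamps

    def observe(signal, ts):
        q = events.setdefault(signal, deque())
        q.append(ts)
        # Drop timestamps that have fallen out of the window.
        while q and ts - q[0] > window_seconds:
            q.popleft()
        return len(q) >= threshold  # True = possible coordinated burst

    return observe

# Usage: 60 signups from one device fingerprint within ten minutes.
observe = make_burst_detector(window_seconds=600, threshold=50)
alerts = [observe("device:ab12", t) for t in range(60)]
```

A rule this simple catches only the crudest bursts, but it illustrates why continuous, signal-sharing-aware monitoring outperforms per-application review against agentic attacks.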
Effective defence requires a shift from static, point-in-time verification to continuous, layered assurance. Document authentication must go beyond visual inspection to metadata forensics and cross-document consistency checks. Liveness and deepfake detection must be supplemented by video injection defences specifically — confirming that the camera feed itself is authentic rather than a pre-generated stream bypassing the physical sensor. Device integrity checks must identify virtual cameras, emulators, and rooted devices before a single document is submitted. And behavioural monitoring must extend through the customer lifecycle, because synthetic identities are cultivated to pass onboarding, then activated months later.
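The layering described above can be sketched as an ordered, fail-fast pipeline in which each stage records why it passed or failed, so a virtual camera is caught before a document is ever read. Stage names and the `session` fields below are hypothetical placeholders for whatever signals a real verification stack exposes.

```python
def run_layered_checks(session):
    """Run verification layers in order, earliest/cheapest first.
    Each check returns (ok, reason); the pipeline stops at the first
    failure and keeps a per-layer trail for audit purposes."""
    checks = [
        ("device_integrity", lambda s: (
            not s.get("virtual_camera") and not s.get("rooted"),
            "no virtual camera, emulator, or rooted device detected")),
        ("injection_defence", lambda s: (
            s.get("camera_feed_attested", False),
            "camera feed attested as a live physical sensor")),
        ("document_forensics", lambda s: (
            s.get("metadata_consistent", False),
            "document metadata and cross-document fields consistent")),
        ("liveness", lambda s: (
            s.get("liveness_score", 0.0) >= 0.9,
            "liveness score above threshold")),
    ]
    trail = []
    for name, check in checks:
        ok, reason = check(session)
        trail.append({"layer": name, "passed": ok, "reason": reason})
        if not ok:
            return {"decision": "reject", "failed_layer": name, "trail": trail}
    return {"decision": "approve", "failed_layer": None, "trail": trail}

# A pre-generated stream injected past the physical sensor fails layer two,
# before document checks ever run.
result = run_layered_checks({
    "virtual_camera": False, "rooted": False,
    "camera_feed_attested": False,
})
```

The ordering is the point: injection and device checks run before document and liveness checks, because a deepfake that never reaches the biometric layer never gets the chance to defeat it.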
Human oversight of critical decisions is both a regulatory expectation and a practical necessity. NIST’s updated digital identity guidelines (SP 800-63-4, 2025) frame the goal explicitly: not compliance with a checklist, but demonstrable management of actual risk. Every approval or rejection must generate a human-readable rationale; every override must be documented with the reviewer’s identity and reasoning. These are not implementation niceties — they are what regulators examine when something goes wrong.
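As a minimal sketch of what "human-readable rationale plus documented override" can mean in practice — the field names are illustrative, not taken from SP 800-63-4 — a decision record might preserve the original outcome, the reviewer's identity, and the reviewer's reasoning rather than overwriting any of them:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit-grade record of a verification decision: every approval,
    rejection, and override carries a who, a when, and a why."""
    subject_id: str
    decision: str      # "approve" | "reject"
    rationale: str     # human-readable reason for the decision
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    overrides: list = field(default_factory=list)

    def override(self, reviewer_id: str, new_decision: str, reasoning: str):
        """Record a human override without erasing the original decision."""
        self.overrides.append({
            "reviewer": reviewer_id,
            "previous": self.decision,
            "new": new_decision,
            "reasoning": reasoning,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.decision = new_decision

# Usage: an automated rejection, then a documented human reversal.
record = DecisionRecord("cust-0042", "reject",
                        "liveness check failed: suspected injected feed")
record.override("analyst-7", "approve",
                "manual video call confirmed the applicant is live")
```

The design choice that matters is append-only history: an override adds a record rather than replacing one, which is exactly what an examiner will look for after an incident.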
The same agentic AI capabilities that fraudsters are weaponising are also the most effective defensive tools available. McKinsey’s February 2026 analysis of agentic AI in financial crime prevention found that autonomous monitoring systems reduce AML false positives by up to 70% while improving detection of high-risk events by approximately 30% — precisely because they operate continuously rather than as periodic filters. The key differentiator in high-performing deployments is not automation alone, but the combination of continuous monitoring, explainable recommendations, and human oversight of critical decisions. Institutions that have built this governance layer are detecting coordinated fraud rings and synthetic identity clusters that rule-based systems would miss entirely.
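The combination of continuous monitoring, explainable recommendations, and human oversight reduces to a triage rule: the system scores, explains every contribution to its score, and routes the grey zone to a reviewer instead of deciding alone. The weights and thresholds below are invented for illustration, not a calibrated model.

```python
def triage(signals, auto_threshold=0.3, review_threshold=0.7):
    """Score a set of risk signals, keep a per-signal explanation,
    and escalate ambiguous cases to human review rather than
    auto-deciding."""
    weights = {  # illustrative weights only
        "shared_device_cluster": 0.4,
        "dormant_then_burst": 0.3,
        "synthetic_doc_flag": 0.5,
    }
    score = 0.0
    reasons = []
    for name, present in signals.items():
        if present and name in weights:
            score += weights[name]
            reasons.append(f"{name} contributed +{weights[name]}")
    score = min(score, 1.0)
    if score < auto_threshold:
        action = "auto_clear"
    elif score < review_threshold:
        action = "human_review"       # recommendation only; human decides
    else:
        action = "block_pending_review"
    return {"score": score, "action": action, "reasons": reasons}

# A dormant account joining a shared-device cluster crosses the line.
verdict = triage({"shared_device_cluster": True, "dormant_then_burst": True})
```

Even blocked cases route to review rather than a silent denial, which is what keeps the explainability and oversight requirements of the preceding paragraphs intact.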
The immediate priority is an honest audit of where the current KYC framework stops. Specifically: whether deepfake detection, video injection attack defence, and post-onboarding behavioural monitoring are genuinely present — not merely contracted. Vendors should be asked, directly, how they detect virtual camera injection, how frequently their deepfake detection models are updated, and what their audit trail output looks like under regulatory examination. The ECCTA’s failure-to-prevent offence places the evidentiary burden squarely on the institution, not the attacker. Regulators will ask not whether deepfake fraud was theoretically preventable, but whether the institution took reasonable, documented, and up-to-date steps to prevent it.
About WIDTH
WIDTH is an AI-native unified compliance platform that helps regulated institutions worldwide carry out compliance work efficiently, auditably, and at scale. By integrating intelligent workflows, risk automation, and audit-grade execution, WIDTH enables institutions to achieve both greater efficiency and greater trust in an evolving regulatory environment.