Liveness Detection: Preventing Identity Spoofing with Face Verification Technology
What liveness detection is, how it works, ISO 30107-3 certification, injection attacks, and UK/EU regulatory requirements: a technical and compliance guide for 2026.

Liveness detection is the technology that determines whether a face presented to a camera is a real, live person or a spoofing artefact: a printed photo, a video replay, a 3D mask, or a deepfake video injected into the data stream. As biometric identity verification becomes the default for digital onboarding, liveness detection is the security layer that makes facial comparison trustworthy.
Biometric liveness transactions are projected to exceed 50 billion annually by 2027, doubling from 2025 levels. At the same time, deepfaked selfies increased 58% in 2025, injection attacks rose 40% year-on-year, and organisations lost over $200 million to deepfake fraud in Q1 2025 alone. Getting liveness detection right is not a technical detail; it is a regulatory and financial necessity.
For broader context on automated identity verification, see our guide to automated document verification. For sector trends, see our analysis of digital identity trends 2026.
What is liveness detection?
Liveness detection is an anti-spoofing layer that confirms the presence of a live human face before any biometric comparison is performed. Without it, any facial recognition system is vulnerable to a high-quality photograph.
The technology divides into two fundamental approaches:
Active liveness detection asks the user to perform a real-time action: blink, smile, turn their head left or right, or say a word. The logic is that a static photo cannot comply with a randomised prompt. The vulnerability: modern deepfake tools can now synthesise facial movements in real time, responding to prompts with increasing accuracy. First-attempt rejection rates reach 35% in unguided flows, generating abandonment and support tickets.
Passive liveness detection requires no user action. The system silently analyses skin micro-texture, light reflection patterns (specular highlights differ markedly between skin and an LCD screen), 3D depth cues from motion parallax, and remote photoplethysmography (rPPG, detecting blood flow from subtle skin colour variations). Leading implementations complete analysis in under 300 milliseconds with no visible step for the user.
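As an illustration of the rPPG cue, the sketch below scores how much of a face-region colour signal's power falls in the human heart-rate band. It assumes the per-frame mean green-channel values have already been extracted from a face crop; the function name, thresholds, and synthetic demo are illustrative, not any vendor's API.

```python
import numpy as np

def rppg_pulse_score(green_means: np.ndarray, fps: float) -> float:
    """Fraction of spectral power in the human heart-rate band (0.7-4 Hz).

    green_means: per-frame mean green-channel intensity over the face ROI.
    A live face carries a periodic component from blood flow; a printed
    photo or a screen replay typically does not.
    """
    signal = green_means - green_means.mean()       # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # roughly 42-240 bpm
    total = spectrum[1:].sum()                      # ignore the DC bin
    return float(spectrum[band].sum() / total) if total > 0 else 0.0

# Synthetic demo: a 1.2 Hz "pulse" (72 bpm) buried in noise vs pure noise
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                           # 10 s at 30 fps
live = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.2, t.size)
spoof = rng.normal(0, 0.2, t.size)
assert rppg_pulse_score(live, 30.0) > rppg_pulse_score(spoof, 30.0)
```

Production systems combine this signal with texture and depth cues; on its own it is defeatable, which is why rPPG is one input among several.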
Passive liveness is now the industry standard for high-volume consumer-facing KYC. One documented enterprise implementation cut onboarding time by 80% and fraud by 65% compared to its active predecessor.
The emerging best practice is a hybrid approach: passive screening for all users, with an active challenge triggered only for high-risk signals such as an unusual device, a high-value transaction, or anomalous metadata.
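The hybrid policy can be expressed as a simple routing rule. The sketch below is illustrative: the field names and thresholds (`passive_threshold`, `high_value_gbp`) are assumptions for the example, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class VerificationContext:
    passive_liveness_score: float   # 0.0-1.0 from the passive check
    known_device: bool              # device previously seen for this user
    transaction_value_gbp: float
    metadata_anomaly: bool          # e.g. virtual-camera or emulator signals

def requires_active_challenge(ctx: VerificationContext,
                              passive_threshold: float = 0.85,
                              high_value_gbp: float = 10_000) -> bool:
    """Passive screening for everyone; escalate to an active challenge
    only when a high-risk signal is present (illustrative thresholds)."""
    if ctx.passive_liveness_score < passive_threshold:
        return True
    if not ctx.known_device:
        return True
    if ctx.transaction_value_gbp >= high_value_gbp:
        return True
    return ctx.metadata_anomaly

# Low-risk returning user: the passive check alone suffices
assert not requires_active_challenge(
    VerificationContext(0.97, known_device=True,
                        transaction_value_gbp=250, metadata_anomaly=False))
# A new device triggers the active step
assert requires_active_challenge(
    VerificationContext(0.97, known_device=False,
                        transaction_value_gbp=250, metadata_anomaly=False))
```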
The attack landscape: four categories to understand
Liveness detection must contend with a threat landscape that evolves faster than testing cycles. Understanding the categories determines which solution architecture is appropriate.
Presentation attacks
Presentation attacks physically present a spoofing artefact to the camera:
| Attack type | Sophistication | Key detection method |
|---|---|---|
| Printed photograph | Low | 2D texture analysis |
| Screen display (phone/tablet) | Low–moderate | Moiré pattern, LCD glare detection |
| Video replay | Moderate | Motion analysis, liveness probe |
| Rigid 3D mask | High | Depth mapping, IR analysis |
| Hyper-realistic articulated mask | Very high | ISO 30107-3 Level 3 testing |
Injection attacks: the critical blind spot
Injection attacks bypass the camera entirely. A deepfake video is fed directly into the data pipeline using virtual camera software or API manipulation, as if it were a live camera feed. A system can hold full ISO 30107-3 PAD certification and remain 100% vulnerable to injection attacks, because PAD testing only covers what happens at the sensor, not what happens downstream in the data pipeline.
ROC.ai tracked 8,065 injection attempts against a single financial institution's liveness system between January and August 2025. Companies lost over $200 million to deepfake scams in Q1 2025. A single Indonesian bank faced $138.5 million in potential losses from deepfake KYC fraud over three months (KYC Chain, 2025). Yet 42% of organisations rely solely on PAD liveness, leaving them fully exposed.
Effective protection requires combining Presentation Attack Detection (PAD) at the sensor level with Injection Attack Detection (IAD) at the pipeline level. These are distinct technical components; a vendor offering only ISO 30107-3 certification is not addressing injection attacks.
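Architecturally, the two layers compose as an AND, not an OR. The sketch below is a minimal illustration, assuming the capture pipeline reports a PAD score plus provenance signals (virtual-camera detection, signed frames); all field names are hypothetical.

```python
from typing import NamedTuple

class CaptureEvidence(NamedTuple):
    pad_score: float              # sensor-level liveness score (0-1, higher = more live)
    camera_is_virtual: bool       # e.g. a virtual-camera driver was detected
    stream_signature_valid: bool  # frames attested/signed by the trusted capture SDK

def accept_capture(ev: CaptureEvidence, pad_threshold: float = 0.9) -> bool:
    """A capture is accepted only if BOTH layers pass: PAD at the sensor
    and IAD on the pipeline. Passing one alone is not sufficient."""
    pad_ok = ev.pad_score >= pad_threshold
    iad_ok = (not ev.camera_is_virtual) and ev.stream_signature_valid
    return pad_ok and iad_ok

# A perfect PAD score is still rejected when frames arrive via a virtual camera
assert not accept_capture(CaptureEvidence(0.99, camera_is_virtual=True,
                                          stream_signature_valid=True))
assert accept_capture(CaptureEvidence(0.95, camera_is_virtual=False,
                                      stream_signature_valid=True))
```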
ISO 30107-3: the global benchmark
ISO/IEC 30107-3 is the international standard for Presentation Attack Detection (PAD). It defines methodology for testing whether a biometric system can detect and reject attempts to fool it with spoofing artefacts. The primary accredited testing body is iBeta Quality Assurance, NIST-accredited in the United States.
The standard defines three testing levels:
| Level | Attacker preparation | Material cost | Max penetration rate (APCER) | Max false rejection (BPCER) |
|---|---|---|---|---|
| L1 | 8 hours | ~$30 | 0% | ≤15% |
| L2 | 2–4 days | ~$300 | ≤1% | ≤15% |
| L3 | 7 days | Uncapped | ≤5% | ≤10% |
Key metrics to understand (ISO 30107-3 replaces traditional FAR/FRR terminology in the PAD context):
- APCER (Attack Presentation Classification Error Rate): the rate at which attacks pass as genuine. Lower is better.
- BPCER (Bona-fide Presentation Classification Error Rate): the rate at which genuine faces are rejected as attacks. A BPCER of 0.8% means 8,000 legitimate users rejected per million verifications, a real support cost and churn driver.
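Both metrics fall out directly from labelled test outcomes. The helper below is a minimal illustration of the definitions:

```python
def pad_error_rates(results):
    """Compute APCER and BPCER from labelled PAD outcomes.

    results: iterable of (is_attack, classified_as_attack) pairs.
    APCER = attacks accepted as genuine / total attacks.
    BPCER = genuine faces rejected as attacks / total genuine faces.
    """
    attacks = [r for r in results if r[0]]
    bona_fide = [r for r in results if not r[0]]
    apcer = sum(1 for is_atk, said_atk in attacks if not said_atk) / len(attacks)
    bpcer = sum(1 for is_atk, said_atk in bona_fide if said_atk) / len(bona_fide)
    return apcer, bpcer

# 100 attacks, 2 slip through; 1,000 genuine users, 8 wrongly rejected
outcomes = ([(True, True)] * 98 + [(True, False)] * 2
            + [(False, False)] * 992 + [(False, True)] * 8)
apcer, bpcer = pad_error_rates(outcomes)
assert apcer == 0.02 and bpcer == 0.008   # 2% APCER, 0.8% BPCER
```

Note that the 0.8% BPCER here is exactly the "8,000 rejections per million" figure above, scaled down.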
In January 2026, Yoti became the first company to achieve iBeta Level 3 certification, which includes hyper-realistic masks with mechanically articulated eyelids and deepfakes responding to real-time active prompts (Biometric Update, January 2026).
Always demand ISO 30107-3 confirmation letters from any vendor; the published letters are available at ibeta.com. Marketing claims of "ISO compliance" without confirmation letters are unverifiable.
UK and EU regulatory requirements
FCA, MLR 2017 and remote identity verification
The Financial Conduct Authority (FCA) requires regulated firms to conduct Customer Due Diligence (CDD) under the Money Laundering, Terrorist Financing and Transfer of Funds Regulations 2017 (MLR 2017). Remote identity verification using biometrics is an accepted method for CDD, provided it delivers equivalent assurance to in-person verification.
The FCA's 2024 guidance on digital identity confirms that a liveness-verified selfie combined with document verification and real-time sanctions screening satisfies standard CDD requirements. Enhanced Due Diligence (EDD) for high-risk clients may require additional steps, but the liveness-verified biometric is the accepted foundational layer.
NIST SP 800-63B-4 (finalised 2024) sets internationally referenced benchmarks: at IAL2, PAD is recommended with ≥90% resistance per attack species; at IAL3, active liveness is mandatory and FMR (False Match Rate) must be ≤1 in 1,000. UK and EU regulators increasingly reference NIST thresholds in technical guidance.
UK GDPR and biometric data
Facial biometrics are special category personal data under UK GDPR Article 9. Processing requires explicit consent, a legal obligation, or another Schedule 1 condition. The ICO (Information Commissioner's Office) requires data minimisation: biometric templates should not be retained beyond the verification moment unless strictly necessary and documented.
eIDAS 2.0 and ETSI TS 119 461
The EU's eIDAS 2.0 regulation requires each Member State to provide a certified EUDI Wallet by end 2026. Technical standard ETSI TS 119 461 v2 (February 2025) operationalises eIDAS 2.0 identity proofing requirements, explicitly covering liveness requirements, injection attack defences, evidence handling, and decision logging. A verification process conformant with ETSI TS 119 461 v2 simultaneously satisfies eIDAS 2.0, 6AMLD, and supervisory expectations, a significant compliance convergence.
Common failure modes โ what users actually experience
Users on compliance and fintech forums consistently report the same friction patterns that vendors rarely discuss publicly:
Poor lighting causes false rejections more than any other single factor. Back-lit environments (a window behind the user) overexpose the face and distort texture analysis. Well-designed interfaces include a real-time lighting indicator before the biometric capture step.
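A pre-capture lighting gate can be as simple as a brightness and clipping check on the face crop, run before the biometric step. The thresholds below are illustrative and would need tuning against the real camera population:

```python
import numpy as np

def lighting_ok(gray_face: np.ndarray,
                min_mean: float = 60, max_mean: float = 200,
                max_clipped: float = 0.05) -> bool:
    """Pre-capture lighting check on an 8-bit grayscale face crop.

    Rejects frames that are too dark, too bright, or heavily clipped:
    back-lit scenes blow out large regions to near-white, which also
    distorts the texture analysis used by passive liveness.
    """
    mean = float(gray_face.mean())
    clipped = float(np.mean((gray_face <= 5) | (gray_face >= 250)))
    return min_mean <= mean <= max_mean and clipped <= max_clipped

well_lit = np.full((64, 64), 128, dtype=np.uint8)
back_lit = np.full((64, 64), 255, dtype=np.uint8)   # window behind the user
assert lighting_ok(well_lit)
assert not lighting_ok(back_lit)
```

Surfacing this check as a live indicator, rather than failing the user after capture, is what turns it into a conversion win.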
Device variability disproportionately affects lower-income users. Budget Android front cameras produce low-resolution images that fail 2D texture analysis. This creates both a compliance risk (inconsistent verification standards) and an inclusivity problem: the same identity check fails more often for users who cannot afford premium devices.
Active liveness confusion is well-documented. Instructions like "slowly turn your head to the right" confuse non-native language speakers and older users at higher rates. First-attempt rejection rates up to 35% in unguided flows create support tickets, user frustration, and onboarding drop-off. Passive liveness eliminates this category of failure entirely.
The conversion cost is quantifiable: the biometric verification step alone causes 10–15% abandonment. In an unoptimised complete KYC flow, cumulative drop-off reaches 40–68% of prospects. Switching from active to passive liveness was documented to reduce abandonment by over 80% in one enterprise implementation; it is often the single highest-impact change available in a KYC flow.
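Cumulative drop-off compounds multiplicatively across steps, which is why a single high-friction step dominates the funnel. A quick illustration with made-up per-step abandonment rates:

```python
def cumulative_completion(step_abandonment_rates):
    """Completion rate of a multi-step flow: survivors compound per step."""
    completion = 1.0
    for rate in step_abandonment_rates:
        completion *= (1.0 - rate)
    return completion

# Hypothetical unoptimised KYC flow: document capture, biometric step, form
flow = [0.20, 0.15, 0.10]
completion = cumulative_completion(flow)
assert round(1 - completion, 3) == 0.388   # ~39% cumulative drop-off
```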
Integrating liveness detection into a complete KYC process
Liveness detection is a component, not a complete identity verification system. A compliant KYC flow combines three layers:
- Document verification: OCR extraction, forgery detection, cross-reference with official databases
- Liveness detection + facial matching: liveness check at sensor and pipeline level, followed by facial comparison between the document photo and the live face
- Regulatory screening: sanctions lists (EU, OFAC, UN, HM Treasury), PEP databases, adverse media
A critical architectural point: systems must verify that the liveness check and document capture belong to the same session. Without session binding, an attacker can pass liveness on one device and substitute a different identity document. This attack vector is underaddressed by most off-the-shelf solutions.
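One way to implement session binding is to have the server issue an HMAC-derived token per session and require both the document capture and the liveness capture to present it. This is a minimal sketch under that assumption, not a complete protocol; a production design would also bind timestamps and device identifiers into the signed material.

```python
import hashlib
import hmac
import os
import secrets

SERVER_KEY = os.urandom(32)   # per-deployment secret in a real system

def issue_session_token(session_id: str) -> str:
    """Server-issued token tying all capture steps to one session."""
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def captures_belong_together(session_id: str,
                             doc_capture_token: str,
                             liveness_capture_token: str) -> bool:
    """Both captures must carry the HMAC of the SAME session id."""
    expected = issue_session_token(session_id)
    return (hmac.compare_digest(doc_capture_token, expected)
            and hmac.compare_digest(liveness_capture_token, expected))

sid = secrets.token_hex(8)
tok = issue_session_token(sid)
assert captures_belong_together(sid, tok, tok)
# A liveness pass from a different session cannot be paired with this document
other = issue_session_token(secrets.token_hex(8))
assert not captures_belong_together(sid, tok, other)
```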
CheckFile integrates all three layers in a single EU-hosted platform, ISO 27001 certified, with full UK and EU GDPR compliance. Visit our security page and pricing. For the broader automation framework, see our guide to automated verification.
Selecting a liveness detection solution: what to require
| Criterion | Minimum | Recommended |
|---|---|---|
| ISO 30107-3 certification | L1 | L2 for regulated onboarding |
| Injection attack protection | Not defined by ISO | IAD layer integrated |
| BPCER (false rejection rate) | < 2% | < 0.5% |
| Latency | < 3 seconds | < 500ms (passive mode) |
| Device coverage | Recent iOS/Android | Budget Android support |
| GDPR compliance | Mandatory | Zero-retention of biometric templates |
| Data hosting | EU | UK or Germany |
FAQ
What is liveness detection?
Liveness detection is an anti-spoofing technology that verifies a live human face is present during identity verification: not a photograph, video replay, mask, or injected deepfake. It operates before facial recognition comparison and validates the authenticity of the biometric presentation.
What does "liveness detection failed" mean?
"Liveness detection failed" means the system could not confirm a live person was present. Common causes: poor lighting (back-light from a window), low-quality front camera, slow internet connection interrupting passive analysis, or a genuine spoofing attempt. For legitimate users, a second attempt in better lighting with the camera at eye level resolves most cases.
What is the difference between active and passive liveness detection?
Active liveness asks the user to perform a real-time action (blink, turn head). Passive liveness analyses the face silently, with no user action required. Passive is faster (under 300ms), causes significantly less abandonment, and is now the industry standard for consumer-facing KYC, though it requires more sophisticated AI models to resist deepfake attacks.
What is face liveness detection?
Face liveness detection and liveness detection refer to the same technology: an anti-spoofing check applied to facial biometrics, confirming the face belongs to a live person rather than an artefact or deepfake.
How do I pass liveness detection?
For legitimate users: ensure good front lighting (no back-light from windows), hold the device at eye level, and remain still. For passive liveness, no specific action is needed. For active liveness, follow the on-screen prompt exactly and complete it in one smooth motion. First-attempt failure is common on all platforms; a second attempt in better lighting conditions resolves the majority of cases.