What is Synthetic Identity Fraud?
Synthetic identity fraud differs from traditional identity theft: rather than relying on a fully stolen identity, it combines real and fabricated elements to create a new, seemingly legitimate person.
A typical synthetic identity might include:
- A real national ID number or SSN
- A fake name or date of birth
- AI-generated or manipulated facial imagery
- A controlled contact footprint such as email, phone number, or address
Fraudsters use these identities to build credibility over time. They open accounts, pass basic checks, and gradually establish a transaction history. Once trust is established, they exploit it through credit abuse, loan defaults, or account takeovers.
This long-term approach makes synthetic identity fraud particularly difficult to detect. Unlike stolen identities, there is often no direct victim reporting the fraud. The identity itself exists in a grey zone between real and fake, which allows it to pass standard verification processes.
The rise of generative AI has further complicated the landscape. Fraudsters can now create realistic face images, deepfake videos, and synthetic biometric data at scale. This introduces a new layer of risk during digital onboarding, where verifying that a real person is physically present becomes critical.
Why Document Verification Alone Can't Stop Synthetic Identities
Document verification remains a foundational component of digital onboarding. It validates the authenticity of identity documents and checks for signs of tampering or forgery. However, on its own, it is not designed to detect synthetic identities.
There are several reasons for this.
First, synthetic identities often use valid, real data. If a fraudster combines a legitimate ID number with fabricated personal details, the document may still pass authenticity checks. The system confirms the document is real, but not that the identity is genuine.
Second, document verification does not confirm live presence. While it can detect many types of forgery or tampering, fraudsters can still submit genuine documents that do not belong to them, or use high-quality reproductions and digitally manipulated images that bypass basic checks. In these cases, the legitimate document holder is not involved in the process at all, and there is no guarantee that the person presenting the document is physically present or authorized to use it.
Third, weak biometric implementations can be bypassed using deepfake selfies or pre-recorded videos. Without strong anti-spoofing measures, systems may not be able to detect synthetic or manipulated facial inputs.
Finally, static checks cannot adapt to evolving attack patterns. Fraudsters actively test onboarding flows to identify gaps, especially where identity verification is treated as a one-time checkpoint rather than a dynamic risk signal.
This is where liveness detection becomes essential. It shifts verification from static validation to real-time proof of presence.
How Liveness Detection Works
Liveness detection is designed to determine whether a biometric input, typically a face, comes from a live human being physically present during the verification process. It plays a critical role in preventing spoofing attacks, including deepfakes, masks, and replayed media.
There are two primary approaches.
Active Liveness Detection
Active liveness detection requires the user to perform specific actions during the verification process. These actions are designed to confirm responsiveness and real-time interaction.
Examples include:
- Turning the head
- Blinking or smiling on command
- Following on-screen prompts
The system evaluates whether the user responds correctly and within expected timing parameters. This makes it harder to use pre-recorded videos or static images.
Active liveness provides a clear challenge-response mechanism. It is effective against basic replay attacks and low-effort spoofing attempts.
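The challenge-response idea can be illustrated with a minimal sketch. Everything here is simplified for illustration: the challenge list, the `issue_challenge` / `verify_response` functions, and the timing window are hypothetical, and in a real system the detected action would come from a face-analysis model rather than a plain string.

```python
import random
import time

# Illustrative set of prompts; real systems draw from a larger, varied pool.
CHALLENGES = ["turn_head_left", "turn_head_right", "blink", "smile"]

def issue_challenge():
    """Pick a random prompt so a pre-recorded video cannot anticipate it."""
    return {"action": random.choice(CHALLENGES), "issued_at": time.monotonic()}

def verify_response(challenge, detected_action, responded_at, max_delay=5.0):
    """Accept only if the expected action arrives within the timing window."""
    if detected_action != challenge["action"]:
        return False  # wrong gesture: possibly a replayed, unrelated clip
    elapsed = responded_at - challenge["issued_at"]
    # Responses that are too slow (or arrive before the prompt) are rejected.
    return 0.0 < elapsed <= max_delay
```

Randomizing the prompt and bounding the response time is what defeats static images and pre-recorded videos: the attacker cannot know in advance which action will be requested, or when.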
Passive Liveness Detection
Passive liveness detection operates without explicit user actions. It analyzes biometric data in real time using machine learning models.
Techniques typically include:
- Micro-expression analysis
- Skin texture and reflectivity detection
- Depth and 3D structure estimation
Passive systems aim to detect subtle signals that differentiate a real face from a synthetic or manipulated one. This includes identifying artifacts from screens, masks, or generative models.
Passive liveness verification can happen quickly, often within a single capture. However, passive systems require robust training data and continuous updates to remain effective against evolving threats such as deepfakes.
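One way to picture how passive signals combine is a simple score-fusion sketch. The signal names, weights, and threshold below are hypothetical; production systems typically use learned fusion models rather than a weighted average, and the per-signal scores would be produced by upstream ML models.

```python
def passive_liveness_decision(signals, weights=None, threshold=0.7):
    """Fuse per-signal liveness scores (0 = spoof-like, 1 = live-like).

    `signals` maps signal names (e.g. micro-expression, skin texture,
    depth estimation) to scores in [0, 1]. Weighted averaging is one
    simple fusion strategy, used here purely for illustration.
    """
    if weights is None:
        weights = {name: 1.0 for name in signals}
    total = sum(weights[name] for name in signals)
    score = sum(signals[name] * weights[name] for name in signals) / total
    return score >= threshold, round(score, 3)
```

A face that scores well on every signal passes, while a screen replay that fools one model but fails on texture and depth is pushed below the threshold by the other signals.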
In practice, many implementations combine elements of both approaches. More advanced liveness solutions go a step further by offering adaptive liveness checks, which dynamically combine active and passive methods based on the risk level and user context. The choice ultimately depends on risk tolerance, user experience requirements, and the specific fraud vectors being addressed.
What Makes a Liveness Detection Solution “Proven” Against Synthetic Identity Fraud?
When evaluating liveness detection solutions, the term “proven” is often used but rarely defined. In the context of fraud prevention, proven results are based on measurable performance under realistic attack conditions.
Several criteria matter.
Independent testing and validation
Credible solutions are evaluated by third-party laboratories using standardized benchmarks. These tests assess performance against known spoofing techniques and provide objective metrics. Often, these independent bodies also issue certifications confirming that a product meets defined security thresholds.
In the field of biometric liveness detection, one of the most recognized benchmarks is iBeta Level 2 certification, which is typically achieved only by advanced providers capable of resisting sophisticated presentation attacks.
Attack simulation testing
Effective systems are tested against real-world attack scenarios. This includes presentation attacks such as printed photos, screen replays, and more advanced methods like deepfake injections.
Deepfake resistance evaluation
With the rise of AI-generated media, it is critical to assess how a system performs against synthetic video and face generation techniques. This goes beyond traditional spoofing and requires continuous model adaptation.
False Acceptance Rate (FAR)
FAR measures how often a system incorrectly accepts a fraudulent attempt. In synthetic identity contexts, a low FAR is essential to prevent unauthorized access.
False Rejection Rate (FRR)
FRR measures how often legitimate users are incorrectly rejected. High FRR impacts user experience and conversion rates, making it a key operational metric.
Real-world deployment metrics
This includes how the system behaves under actual traffic, across devices, lighting conditions, and user demographics.
Anti-replay protection
The system must detect and block attempts to reuse recorded biometric data. This includes video replays and screen-based attacks.
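One common building block for anti-replay protection is a one-time session token bound to each capture. The sketch below is a simplified illustration of that idea (the `ReplayGuard` class is hypothetical); real systems also bind tokens to device and session context and add server-side expiry.

```python
import secrets

class ReplayGuard:
    """Reject biometric payloads whose one-time capture token was already used."""

    def __init__(self):
        self._issued = set()

    def new_session_token(self):
        token = secrets.token_hex(16)  # unpredictable, one per capture session
        self._issued.add(token)
        return token

    def consume(self, token):
        """Return True the first time a valid token is presented, False after."""
        if token in self._issued:
            self._issued.discard(token)  # one-time use: a replay will fail
            return True
        return False
```

Because every capture must carry a fresh, server-issued token, a recorded payload replayed later presents a token that has already been consumed and is rejected, regardless of how convincing the biometric data itself looks.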
3D mask detection
Advanced fraud attempts may involve physical masks or prosthetics. Detection capabilities should extend beyond flat image analysis.
Injection attack resistance
This refers to the ability to prevent direct data injection into the verification pipeline, bypassing the camera entirely. It is a critical but often overlooked attack vector.
In short, a solution is considered proven when it consistently demonstrates strong performance across these dimensions, particularly under adversarial conditions. It is not about a single metric, but about resilience across a range of attack types.
Liveness Detection as Part of a Broader Fraud Prevention Strategy
Liveness detection is a critical control, but it is not a standalone solution. Its effectiveness increases significantly when integrated into a broader identity verification and risk framework.
In practice, this means combining:
- Liveness detection with document verification to confirm both identity and presence
- Face match (to the document provided)
- AML screening to assess regulatory risk
- Risk scoring models to evaluate behavioral and contextual signals
- Ongoing monitoring to detect changes over time
This layered approach reduces reliance on any single signal. If one control is bypassed, others can still detect anomalies. This is especially important for synthetic identity fraud, because these identities are designed to pass isolated checks. It is the correlation of multiple signals that exposes inconsistencies.
Balancing Fraud Prevention and Onboarding Experience
There is a direct relationship between security controls and user experience. Stronger verification can reduce fraud risk, but it can also increase friction and impact conversion rates.
Overly strict biometric flows may lead to:
- User drop-off during onboarding
- Increased support requests
- Delays in account activation
The process should provide clear, concise instructions and require minimal effort from the end user.
On the other hand, weak liveness detection increases exposure to synthetic identity fraud and automated attacks. The balance lies in risk-based orchestration.
Instead of applying the same level of verification to every user, platforms can:
- Trigger liveness checks based on risk signals
- Adjust verification intensity dynamically
- Use passive liveness checks for low-risk scenarios and active liveness checks for higher-risk cases
More advanced solutions also offer Adaptive Liveness Checks, where the system dynamically selects the appropriate method based on factors such as risk level, user behavior, and contextual signals—helping balance strong security with a seamless user experience.
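A risk-based orchestration rule can be sketched in a few lines. The thresholds, score adjustments, and context signals below are illustrative assumptions, not any vendor's actual policy; real orchestration layers combine many more behavioral and contextual inputs.

```python
def select_liveness_method(risk_score, context=None):
    """Choose a liveness check based on a risk score in [0, 1].

    All thresholds and context keys here are hypothetical examples.
    """
    context = context or {}
    # Contextual escalations: an unknown device or flagged geography
    # raises the effective risk and therefore the verification intensity.
    if context.get("new_device") or context.get("high_risk_geo"):
        risk_score = min(1.0, risk_score + 0.2)
    if risk_score < 0.3:
        return "passive"   # low friction for low-risk users
    if risk_score < 0.7:
        return "adaptive"  # passive first, escalate to active if inconclusive
    return "active"        # explicit challenge-response for high-risk cases
```

The same user can therefore face different checks on different attempts: a low-risk returning customer sails through a passive check, while the same score on a new device escalates to an adaptive or active flow.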
This approach aligns security with business outcomes. It protects against fraud while maintaining a smooth onboarding experience for legitimate users. Product teams need to evaluate not only detection accuracy, but also latency, completion rates, and user satisfaction. These metrics are interconnected.
What Modern Liveness Detection Should Offer Digital Platforms
Modern digital onboarding environments require liveness detection systems that are continuously tested against emerging synthetic identity tactics and integrated into a broader risk-based framework.
From a technical and operational perspective, this includes:
- Advanced liveness detection designed to identify spoofing attempts, presentation attacks, and synthetic media
- Deepfake and AI-injection mitigation capabilities built to keep pace with emerging generative threats
- Support for Passive, Active and Adaptive liveness checks
- Configurable risk settings to match different fraud, UX, and compliance requirements
- Seamless integration with biometric and document verification flows for stronger identity assurance
- Flexible deployment across web & mobile environments through API-first architecture
Within this context, Identomat delivers liveness detection as an integrated layer of a full identity verification system, purpose-built for real-world fraud conditions. It verifies user presence with advanced anti-spoofing controls, supports deepfake resistance, and is continuously evaluated against evolving attack scenarios.
Identomat offers three types of liveness checks - Passive, Active, and Adaptive - allowing businesses to tailor verification flows based on risk level and user context. This adaptive approach ensures a balance between strong fraud prevention and a seamless user experience.
The platform is fully configurable, enabling institutions to design onboarding workflows that align with their specific risk policies, regulatory requirements, and user experience goals. As a white-label solution, Identomat allows businesses to maintain full control over branding and customer interaction, delivering a consistent and trusted user journey.
Identomat’s biometric liveness detection is also certified at iBeta Level 2, placing it among a select group of solutions that have been independently validated against sophisticated presentation attacks and high-quality spoofing techniques.
If you'd like to learn more about how Identomat can empower your business onboarding processes, reach out to our team and schedule your demo.