Synthetic identities are designed to pass checks, not trigger alarms. Individual data points, such as email addresses, phone numbers and identity documents, may each appear valid, because fraudsters rely on elements that pass checks in isolation. Yet those same elements often don’t make sense when you step back and look at the full picture.
Detecting synthetic identity fraud isn’t always about finding obvious red flags. Detection today depends on applying scrutiny: asking whether an identity makes sense when its components are considered together.
Most organisations already apply basic controls: they verify email addresses and phone numbers, check for social-media presence, and use document-verification tools to match faces to identity documents. These measures are necessary and a solid starting point, but they are no longer sufficient, because synthetic identities are designed to pass precisely those checks.
The Two Entry Points Businesses Overlook
Fraudsters do not only seek to become customers. They may also attempt to become employees.
- Know Your Customer (KYC) - The first risk arises in customer onboarding. Fraudsters open accounts, access credit and build a financial footprint. Existing KYC controls tend to ask whether an identity is valid. They are less effective at asking whether it behaves like a real person.
- Employee Onboarding - The second risk lies in employment and insider access. Individuals may apply for roles to gain entry to internal systems, exfiltrate data, or secure long-term access. Though widely reported in the United States, this threat is becoming more relevant in Britain. The underlying problem is the same: passing verification does not guarantee authenticity.
Where Current Checks Fall Short
Most organisations rely on a combination of email and phone validation, social-media presence checks, and identity document verification. These confirm existence, not coherence.
An email address may be valid but newly created. A phone number may receive messages but lack history. Even document verification is becoming less reliable. Artificial intelligence can now generate convincing identity documents, pass facial-matching systems and create plausible personas.
The Missing Layer: Behavioural Coherence
A more useful question is not whether data points exist, but whether they fit together over time. This is behavioural coherence. Open-source intelligence (OSINT) is particularly effective here.
A genuine identity leaves a trace that develops naturally: it appears in public records over time, is tied to a consistent location and shows connections to people, organisations and activities. A synthetic identity struggles to replicate such depth.
Applying OSINT to Test Identity Coherence
Address checks should go beyond confirming that a location exists. The relevant question is whether an individual is genuinely linked to it. Public data can help: electoral registers, planning applications, company-director filings, and rental records may all provide evidence.
Geographic footprint is another indicator: community ties such as school-governor roles and charity trusteeships anchor an identity to a place over time and are difficult to fabricate at scale.
Risk and disclosure records, such as insolvency filings, court data or regulatory actions, should be consistent with declared information and linked to the same identity and location.
Professional claims can also be tested. Registers should confirm qualifications and expected locations. Synthetic identities often assert credibility without verifiable grounding.
Mortality checks are frequently neglected. Links to deceased individuals, or inconsistencies across reused identity elements, are common features of synthetic profiles.
This is where deeper checks start to separate real from synthetic.
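The checks above can be sketched as a simple scoring exercise. The following is a minimal illustration, assuming hypothetical signal names and arbitrary weights; it is not a real provider's schema, only a way of showing how positive evidence and red flags might be combined.

```python
from dataclasses import dataclass

# Hypothetical OSINT signals for one identity; field names and weights are
# illustrative assumptions, not any real provider's schema.
@dataclass
class OsintSignals:
    on_electoral_register: bool          # address tied to the person in public records
    director_filing_matches: bool        # company filings consistent with claimed role
    community_ties: bool                 # e.g. school-governor or trustee record
    disclosure_records_consistent: bool  # insolvency/court data matches declarations
    linked_to_deceased: bool             # identity elements traced to a deceased person

def coherence_score(s: OsintSignals) -> float:
    """Naive weighted score: positive evidence adds, red flags subtract."""
    score = 0.0
    score += 0.3 if s.on_electoral_register else 0.0
    score += 0.2 if s.director_filing_matches else 0.0
    score += 0.2 if s.community_ties else 0.0
    score += 0.3 if s.disclosure_records_consistent else 0.0
    if s.linked_to_deceased:
        score -= 0.5  # mortality links are a strong synthetic-identity marker
    return round(score, 2)

genuine = OsintSignals(True, True, True, True, False)
synthetic = OsintSignals(False, False, False, True, True)
print(coherence_score(genuine))    # 1.0
print(coherence_score(synthetic))  # -0.2
```

In practice the weights would be tuned against known fraud cases, but the shape of the logic is the point: no single field decides the outcome; the signals are read together.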
Contact Data: Going Beyond “Does It Exist?”
Email addresses and phone numbers are commonly checked by fraud teams, but often validated in isolation. More revealing is whether they can be tied to a history.
An email may appear in breach data associated with the same name. A phone number may recur across online accounts consistent with the identity. Social-media profiles linked to these details should show continuity over time.
To tackle the proliferation of false accounts, large social-media platforms invest heavily in detecting and removing accounts created under fictitious identities. Long-standing accounts with consistent naming and activity aligned to a claimed identity can therefore provide a useful, although not infallible, signal of legitimacy.
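The idea of tying contact points to a history can be sketched as follows. This is a minimal example under stated assumptions: the "first seen" dates and name associations would come from breach databases, telecom lookups or platform records, none of which are named here, and the two-year threshold is arbitrary.

```python
from datetime import date

def history_depth_years(first_seen: date, today: date) -> float:
    """Years between the earliest public sighting of a contact point and today."""
    return (today - first_seen).days / 365.25

def contact_is_coherent(email_first_seen: date,
                        phone_first_seen: date,
                        names_seen_with_email: set,
                        claimed_name: str,
                        today: date,
                        min_years: float = 2.0) -> bool:
    """Coherent if both contact points have depth of history and the email
    has only ever been associated with the claimed name."""
    deep_enough = (history_depth_years(email_first_seen, today) >= min_years
                   and history_depth_years(phone_first_seen, today) >= min_years)
    name_consistent = names_seen_with_email <= {claimed_name}
    return deep_enough and name_consistent

today = date(2025, 6, 1)
# A long-standing email tied only to the claimed name looks coherent...
print(contact_is_coherent(date(2015, 3, 1), date(2016, 7, 1),
                          {"Jane Doe"}, "Jane Doe", today))   # True
# ...a freshly created one does not.
print(contact_is_coherent(date(2025, 1, 1), date(2025, 2, 1),
                          {"Jane Doe"}, "Jane Doe", today))   # False
```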
From Checks to Narratives
The shift here is subtle but important. The task is no longer to run isolated checks, but to assemble a narrative. Does the individual appear to live where claimed? Has a plausible life developed over time? Do independent sources tell a consistent story?
Synthetic identities do not merely fail checks. They fail to form a convincing narrative.
Where This Is Heading
Organisations are placing greater emphasis on deeper validation: integrating OSINT into verification processes and analysing identities across multiple data sources rather than in isolation.
Such tools do not replace existing checks; they add context and depth. This is where platforms like Public Insights are starting to play a role:
- Investigators can run these checks manually through the platform
- APIs allow these signals to be assessed at scale
- Custom flags can be applied when identities lack coherence
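A custom flagging rule of the kind described above might look like the following. This is a sketch only, assuming hypothetical field names; it does not represent the Public Insights API, merely the idea of raising named flags when an identity lacks coherence.

```python
def flag_identity(signals: dict) -> list:
    """Return named flags for identities whose components do not cohere.
    Keys and flag names are illustrative assumptions."""
    flags = []
    if not signals.get("address_linked_to_person", False):
        flags.append("NO_ADDRESS_LINK")
    if not signals.get("contact_history", False):
        flags.append("THIN_CONTACT_HISTORY")
    if signals.get("mortality_link", False):
        flags.append("DECEASED_IDENTITY_ELEMENT")
    return flags

print(flag_identity({"address_linked_to_person": True,
                     "contact_history": True,
                     "mortality_link": False}))  # []
print(flag_identity({"address_linked_to_person": False,
                     "contact_history": False,
                     "mortality_link": True}))
# ['NO_ADDRESS_LINK', 'THIN_CONTACT_HISTORY', 'DECEASED_IDENTITY_ELEMENT']
```

Run manually, this supports an investigator's judgement; run through an API, the same rule can be applied at scale across an onboarding pipeline.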
The Limits of Validation
Synthetic identities are becoming more convincing. They still struggle, however, to resemble real lives. The opportunity lies not in confirming existence, but in testing coherence.