DEEPFAKES, SYNTHETIC REALITIES AND THE EROSION OF EVIDENTIARY TRUST
- Emerging Risks Global


We used to be able to trust our eyes. That era is ending.
The convergence of generative artificial intelligence, ubiquitous data collection and platform-driven content distribution has created something genuinely new in the history of influence operations: the capacity to fabricate reality at scale, with precision and at negligible cost. Deepfakes are the headline-grabbing manifestation—synthetic video and audio indistinguishable from authentic recordings—but they are only the most visible symptom of a deeper transformation. What is really changing is the evidentiary foundation on which democratic societies process information, make decisions and hold power accountable.
This article argues that the strategic significance of synthetic media lies not primarily in individual deceptions—though those matter—but in the systemic erosion of evidentiary trust. When any piece of media can plausibly be synthetic, the default assumption shifts from "this is probably real" to "this could be fake." That shift is more damaging than any individual deepfake, because it undermines the shared evidentiary basis on which collective decision-making depends.
The Personalisation Of Influence
To understand why synthetic media represents a qualitative shift, we need to look beyond the technology itself to the infrastructure it plugs into.
For most of human history, influence operations were broadcast affairs. A government or adversary produced a message and distributed it as widely as possible, hoping it would land with enough of the target population to achieve the desired effect. The message was generic because the distribution was generic. Radio propaganda, printed leaflets and even early internet disinformation all operated on this model.
The contemporary information environment is fundamentally different. Continuous datafication—the conversion of every online interaction, purchase, movement and preference into structured data—has created detailed behavioural profiles of billions of individuals. Platform architectures are designed to exploit these profiles, serving content calibrated to individual psychological vulnerabilities: confirmation biases, identity commitments, emotional triggers, information gaps. When you combine this targeting infrastructure with generative AI's capacity to produce unlimited quantities of contextually adapted content, you get something that has no historical precedent: personalised influence at scale.
This is not propaganda in a new wrapper. It is a fundamentally different capability. A troll farm employing hundreds of human operators can produce thousands of posts per day. A generative AI system can produce millions, each tailored to the specific psychological profile of its target. The content does not need to be sophisticated—it needs to be relevant, timely and emotionally resonant for the specific individual who encounters it. The datafication infrastructure ensures relevance. The generative AI ensures volume. The platform architecture ensures delivery.
The implications for cognitive warfare are profound. An adversary no longer needs to craft a single compelling narrative and hope it resonates broadly. It can generate thousands of micro-narratives, each designed for a specific audience segment, each exploiting a specific vulnerability, each delivered through a channel the target already trusts. The Internet Research Agency's operations during the 2016 US elections—simultaneously running Black Lives Matter pages and pro-police pages—were a crude, human-operated prototype of this capability. What is coming will be orders of magnitude more granular.
The Deepfake Problem—And The Deeper Problem
Deepfakes themselves attract most of the public attention and understandably so. The technology has progressed from obviously artificial outputs to productions that defeat both human perception and most automated detection systems. Audio deepfakes are now particularly dangerous: a convincing synthetic voice recording requires only seconds of authentic source audio and can be generated in real time. The potential for fabricating evidence—of a political leader making a damaging statement, of a military commander issuing an illegal order, of a corporate executive committing fraud—is obvious and alarming.
But the deeper strategic problem is not the deepfake itself. It is what the existence of deepfakes does to the evidentiary environment.
Consider what happens when a genuine recording of a public figure saying something damaging surfaces. Before deepfakes, the default response was to assess whether the recording was authentic. Now, the default response is for the figure's supporters to claim it is synthetic—and for that claim to be plausible regardless of the recording's authenticity. This is what researchers have called the "liar's dividend": the benefit that accrues to bad actors from the mere existence of synthetic media technology, regardless of whether any specific deepfake is involved. The technology provides a ready-made defence against any inconvenient evidence.
The liar's dividend operates asymmetrically. It benefits those who wish to deny reality more than those who wish to establish it. Proving that a recording is authentic requires sophisticated forensic analysis; claiming it is fake requires only an assertion. In a polarised information environment where institutional trust is already degraded, the assertion often suffices.
This asymmetry has structural consequences for accountability. Democratic governance depends on a shared evidentiary basis—a common set of facts that participants in the political process accept, even when they disagree about their interpretation. When the evidentiary basis itself becomes contestable, the prerequisites for democratic deliberation erode. You cannot deliberate about policy when you cannot agree on what happened.
The Speed Problem
There is a temporal dimension that compounds the challenge. The information environment operates on attention cycles measured in hours. A deepfake released during a crisis—an election, a military confrontation, a diplomatic negotiation—can achieve its strategic effect before any forensic analysis can be completed. Even when the deepfake is subsequently debunked, the correction reaches only a fraction of the audience that saw the original, and the emotional and cognitive effects of the initial exposure persist.
This creates a structural first-mover advantage for the attacker. The cost of producing a synthetic media attack is low. The cost of forensic verification is high. The time required for production is minutes. The time required for verification is hours or days. And the verification, when it arrives, competes for attention with a news cycle that has already moved on.
Cognitive warfare practitioners understand this asymmetry perfectly well. The objective is not to produce a deepfake that survives forensic scrutiny. It is to produce one that achieves its effect in the window between release and verification. In a crisis context, that window may be all that matters.
AI-Generated Text And The Death Of Provenance
While deepfake video and audio attract the most attention, AI-generated text may ultimately prove more strategically significant. Large language models can produce fluent, contextually appropriate prose in any style, register, or language. They can generate news articles, social media posts, academic papers, government communications and personal messages that are indistinguishable from human-authored content.
The strategic implications extend beyond volume. Provenance—the ability to trace content to its source—has been a foundational tool for assessing credibility. We trust a news report partly because we can identify the journalist and the outlet. We assess a social media post partly by evaluating the account that posted it. AI-generated content severs this connection. A sophisticated operation can create entire networks of synthetic personas—complete with years of posting history, consistent personality traits and authentic-seeming social connections—that produce and amplify content with no traceable human origin.
This is already happening. Investigations have identified networks of AI-generated personas operating across social media platforms, producing content designed to shift public opinion on specific policy issues. The sophistication is increasing rapidly. Early AI-generated content was detectable through stylistic tells—uniform sentence length, absence of idiosyncratic expression, certain repetitive patterns. Current models produce output that varies in style, includes deliberate imperfections and adapts to platform-specific norms in ways that defeat most detection methods.
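To see why those early tells were so brittle, consider a toy stylometric check of the kind first-generation detectors relied on. The sketch below is illustrative only: the function names and threshold logic are mine, not any production detector. It flags text whose sentence lengths are unusually uniform, and current models vary sentence length and register enough that a signal like this now carries almost no weight.

```python
# Toy illustration of a shallow stylometric "tell": unusually uniform
# sentence lengths. Early AI-generated text sometimes exhibited this;
# current models do not, which is why checks like this no longer work.
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text into rough sentences and return word counts per sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def uniformity_score(text: str) -> float:
    """Lower values mean more uniform sentence lengths (one crude signal)."""
    lengths = sentence_lengths(text)
    if len(lengths) < 3:
        return float("inf")  # Too little text to say anything at all.
    return statistics.stdev(lengths) / statistics.mean(lengths)

if __name__ == "__main__":
    sample = (
        "The committee met on Tuesday. The report was approved without debate. "
        "The findings were released to the press. The reaction was muted."
    )
    print(f"uniformity score: {uniformity_score(sample):.2f}")
    # A very low score once hinted at machine generation; today it proves nothing.
```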
For security professionals, the operational implication is that content attribution—determining who produced a piece of content and why—is becoming dramatically more difficult. The traditional intelligence model of tracing disinformation campaigns to specific state actors through digital forensics is under severe pressure. When every piece of content could plausibly have been generated by AI, the attribution problem becomes not just technically harder but epistemically different. You are no longer looking for a human author; you are looking for the prompt that generated the output—and the person who crafted the prompt may be entirely disconnected from the distribution infrastructure.
What The Detection Arms Race Misses
The instinctive policy response to synthetic media has been to invest in detection technology—tools that can distinguish synthetic from authentic content. This investment is necessary but insufficient, and it may be strategically misguided if treated as the primary solution.
The fundamental problem is that detection is structurally disadvantaged in an adversarial context. Detection tools are trained on the outputs of current generative models. When the models improve—as they continuously do—previously effective detection methods become obsolete. The arms race favours the generator, not the detector, because generation is a creative process with infinite variation while detection is a pattern-matching process that depends on known signatures.
More importantly, a detection-centric approach misunderstands the nature of the threat. The strategic damage from synthetic media is not primarily caused by individual deepfakes that fool individual viewers. It is caused by the systemic erosion of trust in all media. Detection addresses the first problem; it does not address the second. Even a perfect detection system—which does not exist—would not restore the default assumption that media is authentic. The knowledge that synthetic media exists is sufficient to sustain the liar's dividend regardless of detection capability.
This does not mean detection is valueless. It means detection must be embedded within a broader framework that addresses the trust problem directly. Content provenance systems—which cryptographically record the origin and editing history of media at the point of creation—represent a more promising structural approach. If widely adopted, they could shift the evidentiary default from "is this fake?" to "can this be verified?" That is a fundamentally different question and one that places the burden of proof where it belongs: on the content, not the viewer.
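To make the provenance idea concrete, here is a minimal sketch, assuming a simple hash-chained manifest attached to a media asset at creation and updated on each edit. It is illustrative only: real standards such as C2PA bind the manifest to the asset with cryptographic signatures and hardware-backed keys at the point of capture, which this toy version omits, and every name in the sketch is hypothetical.

```python
# Minimal sketch of the provenance idea: record a hash of the asset at
# creation, then chain a new hash over each recorded edit, so any later
# change that is not in the manifest breaks verification. Real systems
# add signatures and secure key storage; this only shows the chain.
import hashlib
import json

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def create_manifest(asset: bytes, creator: str) -> dict:
    """Start a provenance manifest at the point of creation."""
    entry = {"action": "created", "by": creator, "asset_hash": _digest(asset)}
    head = _digest(json.dumps(entry, sort_keys=True).encode())
    return {"entries": [entry], "head": head}

def record_edit(manifest: dict, new_asset: bytes, action: str, by: str) -> dict:
    """Append an edit, chaining the new entry to the previous head hash."""
    entry = {
        "action": action,
        "by": by,
        "asset_hash": _digest(new_asset),
        "prev": manifest["head"],
    }
    manifest["entries"].append(entry)
    manifest["head"] = _digest(json.dumps(entry, sort_keys=True).encode())
    return manifest

def verify(manifest: dict, asset: bytes) -> bool:
    """Check that the chain is intact and the asset matches the latest entry."""
    prev = None
    for entry in manifest["entries"]:
        if prev is not None and entry.get("prev") != prev:
            return False  # An entry was altered or inserted out of order.
        prev = _digest(json.dumps(entry, sort_keys=True).encode())
    return prev == manifest["head"] and manifest["entries"][-1]["asset_hash"] == _digest(asset)

if __name__ == "__main__":
    original = b"raw video bytes"
    m = create_manifest(original, creator="camera-001")
    edited = b"colour-corrected video bytes"
    m = record_edit(m, edited, action="colour-correction", by="editor-007")
    print(verify(m, edited))                    # True: history accounts for the bytes
    print(verify(m, b"silently altered bytes")) # False: unrecorded change detected
```

The design point is that verification asks a positive question: does the recorded history account for the bytes in front of you? Anything that cannot answer simply remains unverified, which is precisely the shift from "is this fake?" to "can this be verified?".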
Implications For Cognitive Warfare Strategy
Synthetic media transforms cognitive warfare in three ways that security professionals need to understand.
First, it democratises capability. Producing a convincing deepfake or a sophisticated AI-generated influence campaign no longer requires state-level resources. A skilled individual with access to commercially available tools can produce content that would have required a professional intelligence operation a decade ago. This means the threat is no longer confined to the three or four state actors that dominate the cognitive warfare literature. Non-state actors, commercial entities, domestic political operatives and lone actors can now deploy capabilities that were previously the preserve of great powers.
Second, it accelerates the feedback loop between polarisation and cognitive warfare that this series has examined. Synthetic media can be generated in response to real-time events, exploiting emotional peaks and cognitive vulnerabilities as they emerge. The latency between a triggering event and a tailored influence response is collapsing from days to minutes. This means the feedback loop operates faster, which means it intensifies faster, which means the window for intervention narrows.
Third, it shifts the centre of gravity from content to infrastructure. When content itself becomes unreliable, the critical variable becomes the infrastructure through which content is verified, distributed and consumed. Platform architectures, provenance systems, institutional credibility and individual media literacy become more important than any individual piece of content. Cognitive warfare strategy must follow this shift—focusing on protecting and strengthening the infrastructure of trust rather than playing whack-a-mole with individual pieces of synthetic content.
Where This Leaves Us
We are entering a period in which the evidentiary foundations of democratic decision-making are under simultaneous pressure from technological capability, adversary intent and structural vulnerability. Synthetic media is the mechanism, but the target is something more fundamental: the shared agreement that evidence matters, that reality is knowable and that public discourse can be grounded in fact.
The societies that navigate this transition successfully will not be those that build the best deepfake detectors—though detection has its place. They will be those that invest in the institutional and social infrastructure that makes evidence trustworthy independent of any specific technology: content provenance systems, institutional credibility, educational programmes that develop critical evaluation skills and platform architectures that reward verification over engagement.
The technology is not going back in the bottle. The question is whether democratic societies will adapt their evidentiary infrastructure fast enough to preserve the shared reality on which collective self-governance depends.



