Five Barriers to Disruption
- Emerging Risks Global


Reframing the Bystander Intervention Model for Cognitive Warfare
In 1968, Latané and Darley asked a deceptively simple question: why do people fail to act when they witness an emergency? Their bystander intervention model identified five sequential barriers that prevent action—not because people are indifferent, but because specific cognitive and social mechanisms inhibit response at each stage. The model has been widely applied in emergency response, workplace safety, and public health.
It has not, to date, been applied to cognitive warfare. It should be. The central problem in countering cognitive operations is not a lack of capability but a failure to act—a failure that follows a predictable pattern. Organisations, institutions, and states routinely fail to disrupt cognitive operations they have the capacity to counter, for reasons Latané and Darley’s model explains with precision.
The original model describes five steps a bystander must complete to intervene: notice the event, interpret it as an emergency, assume personal responsibility, know how to help, and act. Failure at any step means no intervention. The reframing below inverts the perspective: each step becomes a barrier to disruption that cognitive warfare practitioners exploit and that defenders must overcome. The five barriers map onto the detect–deter–disrupt framework that structures operational counter-strategy.
The mapping
Barriers 1–2 correspond to detect: recognising that an operation is underway and correctly categorising it. Barrier 3 is the deter/disrupt decision point: overcoming institutional inertia to claim ownership. Barriers 4–5 correspond to disrupt: selecting and executing the right intervention. Cognitive warfare succeeds when defenders stall at any barrier. Effective counter-strategy requires clearing all five.
The five barriers
Barrier 1: Failure to notice [DETECT]
Latané and Darley found that bystanders absorbed in their own tasks literally failed to perceive emergencies occurring around them. The cognitive warfare parallel is normalisation: the operation is invisible because it blends into the ambient noise of the information environment. Adversary operations are designed for this. Russian IRA content during the 2016 US election did not look like foreign propaganda; it looked like American political argument. Cognitive operations succeed when they are indistinguishable from organic domestic discourse. The barrier is compounded by information overload—security teams monitoring thousands of signals cannot attend to all of them—and by the cognitive warfare practitioner's deliberate exploitation of existing social fractures, which means the manipulated content resonates with genuine grievances.
Disruption action: Build ambient detection capability—automated anomaly detection layered with human analytical judgment. The goal is not to monitor all content but to identify behavioural signatures (coordinated inauthentic behaviour, artificial amplification patterns, temporal clustering) that distinguish adversary operations from organic discourse. Detection must be continuous, not event-triggered.
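One of the behavioural signatures named above—temporal clustering—can be illustrated with a minimal sketch. The data shape, window size, and account threshold here are hypothetical illustrations, not validated operational values: the point is that detection keys on behaviour (many distinct accounts pushing identical content in a tight window) rather than on content itself.

```python
from collections import defaultdict

# Hypothetical post records: (account_id, content_hash, timestamp_seconds).
# Both thresholds are illustrative assumptions.
WINDOW_SECONDS = 300   # identical content posted within 5 minutes
MIN_ACCOUNTS = 10      # distinct accounts needed to flag a cluster

def flag_temporal_clusters(posts):
    """Return content hashes showing a coordinated-amplification signature:
    many distinct accounts posting identical content in a tight window."""
    by_content = defaultdict(list)
    for account, content, ts in posts:
        by_content[content].append((ts, account))

    flagged = []
    for content, events in by_content.items():
        events.sort()
        left = 0
        # Slide a time window over the sorted events, counting accounts.
        for right in range(len(events)):
            while events[right][0] - events[left][0] > WINDOW_SECONDS:
                left += 1
            accounts = {a for _, a in events[left:right + 1]}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append(content)
                break
    return flagged

# Example: 12 accounts push one message within two minutes (flagged),
# while 12 users share another message spread over many hours (not flagged).
burst = [(f"acct_{i}", "hash_A", 1000 + i * 10) for i in range(12)]
organic = [(f"user_{i}", "hash_B", 1000 + i * 3600) for i in range(12)]
print(flag_temporal_clusters(burst + organic))  # ['hash_A']
```

A real detector would combine several such signatures, but the structure is the same: continuous, behaviour-based screening rather than content-by-content review.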
Barrier 2: Failure to interpret correctly [DETECT]
Even when bystanders notice an event, they frequently misinterpret it. Latané and Darley called this pluralistic ignorance: each individual looks to others for cues, and when no one reacts, each concludes there is no emergency. In cognitive warfare, pluralistic ignorance operates at institutional scale. Analysts detect anomalous information patterns but hesitate to classify them as adversary operations because no peer institution has raised the alarm. The result is collective under-reaction. The barrier is reinforced by the inherent difficulty of distinguishing cognitive warfare from legitimate political communication—a distinction that is analytically available but operationally demanding—and by the political costs of misclassification. Labelling domestic political speech as foreign-manipulated carries reputational and legal risk, so institutions default to inaction.
Disruption action: Establish shared interpretive frameworks—common criteria, agreed across institutions, for when anomalous information patterns cross the threshold from organic controversy to adversary operation. Joint attribution protocols between intelligence, platform, and civil society actors reduce dependence on any single institution’s judgment. The key is lowering the interpretive burden on individual analysts by making classification a collective institutional function.
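The shared-criteria idea can be made concrete as a checklist with a pre-agreed escalation threshold. The criteria names and the two-of-four threshold below are placeholders for whatever the participating institutions actually negotiate; what matters is that no individual analyst invents the bar.

```python
# Illustrative shared criteria for escalating an anomaly from "organic
# controversy" to "suspected adversary operation". Names and threshold
# are hypothetical stand-ins for an inter-institutional agreement.
CRITERIA = [
    "coordinated_inauthentic_behaviour",
    "artificial_amplification",
    "known_adversary_infrastructure",
    "narrative_match_to_prior_operations",
]
ESCALATION_THRESHOLD = 2  # criteria that must be met, agreed in advance

def classify(observed: set) -> str:
    met = sum(1 for c in CRITERIA if c in observed)
    return "suspected_operation" if met >= ESCALATION_THRESHOLD else "monitor"

print(classify({"artificial_amplification"}))          # monitor
print(classify({"artificial_amplification",
                "known_adversary_infrastructure"}))    # suspected_operation
```

Encoding the threshold in advance is what lowers the interpretive burden: the analyst reports observations, and the classification follows from the agreed rule.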
Barrier 3: Diffusion of responsibility [DETER → DISRUPT]
This is the critical pivot. Latané and Darley’s most powerful finding was that the more people witness an emergency, the less likely any individual is to act—because each assumes someone else will. In the cognitive warfare context, this manifests as institutional buck-passing. Intelligence agencies assume platform companies will act. Platforms assume governments will regulate. Governments assume civil society will educate. Media literacy advocates assume regulators will enforce. The adversary operates freely in the gap between mandates. Diffusion of responsibility is the single most exploitable barrier in democratic counter-cognitive warfare architecture, because democratic societies deliberately distribute authority across institutions—a structural feature that adversaries treat as a structural vulnerability.
Disruption action: Assign named responsibility. The bystander research is unambiguous: diffusion of responsibility is overcome by direct, personal assignment (“You, in the blue shirt—call 999”). The institutional equivalent is explicit mandate allocation: which institution leads detection, which leads attribution, which leads public communication, which leads platform enforcement. The Nordic total-defence model works because responsibility is distributed by design rather than by default—each institution knows its role before the crisis arrives.
Barrier 4: Competence deficit [DISRUPT]
Bystanders who recognise an emergency and accept responsibility still fail to act when they do not know what to do. In cognitive warfare, the competence barrier operates at two levels. Tactically, responders may lack the technical skills to counter specific operations—forensic attribution, platform-level content action, strategic communication under time pressure. Strategically, institutions may not understand which intervention matches which threat type. A prebunking campaign is the wrong response to a coordinated inauthentic network; a takedown is the wrong response to a slow-burn narrative erosion campaign. Mismatched interventions are worse than inaction because they consume resources, signal awareness to the adversary, and may generate backlash that the adversary can exploit.
Disruption action: Develop intervention repertoires matched to threat typologies. Not every cognitive operation warrants the same response. Build decision frameworks that connect threat classification (from Barrier 2) to intervention selection: exposure and attribution for covert state operations, prebunking for technique-based manipulation, platform-level action for coordinated inauthentic behaviour, strategic communication for narrative-level campaigns, and institutional credibility investment for slow-burn trust erosion. Train responders across the repertoire, not in a single modality.
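The repertoire described above is, in effect, a lookup from threat classification to intervention. A sketch of that mapping—threat-type labels are illustrative—shows why an explicit decision framework beats ad hoc selection: an unclassified threat fails loudly instead of receiving a mismatched response.

```python
# The intervention repertoire from the text, expressed as an explicit
# lookup so classification output (Barrier 2) drives selection.
# Label strings are illustrative, not a standard taxonomy.
REPERTOIRE = {
    "covert_state_operation": "exposure_and_attribution",
    "technique_based_manipulation": "prebunking",
    "coordinated_inauthentic_behaviour": "platform_level_action",
    "narrative_level_campaign": "strategic_communication",
    "slow_burn_trust_erosion": "institutional_credibility_investment",
}

def select_intervention(threat_type: str) -> str:
    # Fail loudly on unclassified threats rather than default to a
    # mismatched response, which the text argues is worse than inaction.
    if threat_type not in REPERTOIRE:
        raise ValueError(f"unclassified threat type: {threat_type}")
    return REPERTOIRE[threat_type]

print(select_intervention("narrative_level_campaign"))  # strategic_communication
```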
Barrier 5: Audience inhibition [DISRUPT]
The final barrier is the most psychologically acute. Even bystanders who have noticed the emergency, interpreted it correctly, accepted responsibility, and know what to do may still fail to act—because they fear the social consequences of being wrong. Latané and Darley called this audience inhibition: the paralysis caused by the prospect of public embarrassment. In cognitive warfare, audience inhibition takes institutional form. Officials fear being accused of overreaction, censorship, or political bias. Platform companies fear regulatory backlash or accusations of partisanship. Analysts fear career consequences if their attribution turns out to be wrong. The result is a systematic bias toward inaction at the moment when action is most needed—the early stages of a cognitive operation, when disruption is cheapest and most effective but evidence is least certain. The adversary’s information advantage is not technical; it is psychological. They act because they face no audience inhibition. Defenders hesitate because they do.
Disruption action: Create institutional cover for action under uncertainty. This means pre-authorised response protocols that permit rapid action at defined confidence thresholds without requiring crisis-level approval chains. It means after-action review cultures that treat false positives as acceptable costs of vigilance rather than career-ending errors. And it means political leadership that publicly backs rapid response even when individual attributions are later revised. The goal is shifting the institutional incentive from “don’t act unless certain” to “act proportionately, correct openly.”
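Pre-authorised response at defined confidence thresholds can be sketched as a tiered protocol. The confidence bands and actions below are illustrative assumptions; the design point is that proportionate action below certainty is authorised in advance, not improvised under crisis-level approval chains.

```python
from dataclasses import dataclass

# A sketch of pre-authorised, proportionate response tiers.
# Bands and action names are hypothetical.
@dataclass
class ResponseTier:
    min_confidence: float
    action: str

PROTOCOL = [  # ordered highest-confidence first
    ResponseTier(0.9, "public_attribution_and_takedown_request"),
    ResponseTier(0.6, "platform_notification_and_prebunking"),
    ResponseTier(0.3, "enhanced_monitoring"),
]

def authorised_action(confidence: float) -> str:
    for tier in PROTOCOL:
        if confidence >= tier.min_confidence:
            return tier.action
    return "log_and_watch"

print(authorised_action(0.72))  # platform_notification_and_prebunking
```

Each tier is reversible and proportionate to its evidence level, which is what makes the "act proportionately, correct openly" incentive workable.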
Why the sequence matters
The bystander model’s most important insight is that the five barriers are sequential. Failure at any single barrier prevents all downstream action. An institution with superb disruption capabilities (Barriers 4–5) that suffers from diffusion of responsibility (Barrier 3) will never deploy them. An analyst who notices and correctly interprets an operation (Barriers 1–2) but fears the career consequences of being wrong (Barrier 5) will sit on the intelligence. The chain breaks at its weakest link.
This has a direct operational implication: diagnose which barrier is failing before investing in capability. Most counter-cognitive warfare investment flows to Barriers 1 and 4—detection technology and response capability. But if the binding constraint is Barrier 3 (no one owns the problem) or Barrier 5 (everyone is afraid to act), then better detection tools and trained responders will not produce disruption. They will produce better-informed inaction.
The adversary, by contrast, faces none of these barriers. Authoritarian cognitive warfare practitioners operate under unified command, face no audience inhibition, and suffer no consequences for misattribution. The asymmetry is structural. Democratic societies cannot eliminate it—doing so would require abandoning the distributed authority and accountability that define democratic governance. But they can mitigate it by engineering their institutional architecture to clear each barrier systematically, rather than assuming that capability alone produces disruption.
Summary: Detect–Deter–Disrupt through the bystander lens
Barrier | Bystander mechanism | Cognitive warfare parallel | Domain
--- | --- | --- | ---
1. Failure to notice | Inattention / distraction | Normalisation; operation blends into information noise | DETECT
2. Failure to interpret | Pluralistic ignorance | Institutional under-reaction; misclassification risk | DETECT
3. Diffusion of responsibility | Someone else will act | Institutional buck-passing between agencies | DETER → DISRUPT
4. Competence deficit | Don't know how to help | Wrong intervention for threat type; skill gaps | DISRUPT
5. Audience inhibition | Fear of embarrassment | Fear of overreaction, censorship accusations, career risk | DISRUPT
The societies that disrupt cognitive warfare effectively will not be those with the most advanced detection systems or the largest counter-influence budgets. They will be those that have engineered their institutional architecture to clear all five barriers—ensuring that when an operation is detected, someone owns it, someone knows what to do, and someone is willing to act before the evidence is perfect.