COMMAND AND CONTROL IN THE COGNITIVE DOMAIN

Here is the democratic dilemma in its sharpest form: cognitive warfare requires a coordinated, rapid, whole-of-government response, and democratic governance is designed to prevent exactly that.


Separation of powers, independent judiciaries, free press, civilian oversight of the military, protection of individual rights against state action—these are not bugs in the democratic system. They are its core features. They exist because concentrated, rapid, unaccountable government action is dangerous. The entire architecture of liberal democracy is built to slow things down, distribute authority and create friction between the state and the citizen.


Cognitive warfare exploits this architecture. An adversary operating under unified command, with no institutional constraints on speed or scope, attacks a target whose response must navigate competing institutional mandates, legal restrictions, political sensitivities and democratic accountability mechanisms. The structural asymmetry is permanent: it cannot be resolved without abandoning the democratic principles that define the societies being defended.


This article examines the command and control problem in democratic cognitive defence—why it is genuinely hard, what partial solutions look like and where the limits lie.


The Mandate Problem

The first challenge is jurisdictional. In most democracies, no single institution has the mandate, authority and capability to conduct cognitive defence across the full spectrum of the threat.


Intelligence agencies can detect and attribute foreign operations, but they are legally constrained from monitoring domestic information environments—constraints that exist for essential civil liberties reasons. Military organisations have doctrinal frameworks for information operations, but their authority typically extends only to the military domain and deployed operations, not to domestic civil space. Law enforcement can investigate criminal activity, including some forms of foreign interference, but cognitive manipulation that falls short of criminal thresholds is beyond its remit. Media regulators can enforce broadcasting standards, but social media platforms operate largely outside traditional regulatory frameworks. Education ministries can implement media literacy programmes, but their timelines are generational, not operational.


Each institution holds a piece of the puzzle. None holds the picture. And the legal and institutional barriers between them are not administrative inconveniences—they are constitutional safeguards that prevent the concentration of state power over information.

The adversary faces none of these constraints. Russian cognitive warfare operations are coordinated across intelligence services, military units, state media, proxy organisations and commercial entities under unified strategic direction. Chinese cognitive warfare operates through the military-civil fusion doctrine, which integrates state, military and commercial capabilities into a single strategic apparatus. The coordination problem that democracies struggle with is, for authoritarian adversaries, a solved problem: solved by eliminating the institutional pluralism that democracies rightly regard as essential.


The Legal Gap

The second challenge is legal. Democratic legal frameworks are built around the concept of identifiable, attributable harm—typically physical or financial harm—with established chains of causation and clear victims. Cognitive warfare fits poorly into this framework.


What is the legal harm of a foreign-generated social media campaign that amplifies existing domestic political divisions? No individual can demonstrate personal injury. The causal chain between the campaign and any specific political outcome is probabilistic, not deterministic. The content may consist entirely of protected speech—opinions, arguments and characterisations that would be legal if produced by domestic actors. The foreign origin is legally relevant under some frameworks and irrelevant under others.

The European Union has made the most progress in addressing this gap. The Digital Services Act creates obligations for large platforms to assess and mitigate systemic risks, including the risk of negative effects on civic discourse—a category broad enough to encompass cognitive warfare effects. The AI Act imposes transparency requirements on AI-generated content, addressing one vector of synthetic manipulation. Together, they represent an emerging regulatory architecture that addresses the cognitive domain without requiring the identification of individual criminal acts.


But regulation operates slowly. A systemic risk assessment is a process measured in months; a cognitive warfare campaign operates in hours. Enforcement actions follow investigation, adjudication and appeal processes that can take years. The legal framework provides the structural foundations for cognitive defence, but it cannot provide the operational speed that real-time response requires.


Models That Work—Partially

Several democracies have developed institutional models for cognitive defence that offer instructive, if partial, solutions to the command and control problem.

Sweden re-established its Psychological Defence Agency in 2022—an institution with explicit responsibility for identifying foreign information influence targeting Sweden, supporting societal resilience and coordinating the government's response. The agency sits outside the intelligence services, operates transparently and has no content moderation or censorship authority. Its tools are analysis, communication and coordination. It represents a clear institutional answer to the mandate problem: someone is responsible.


Finland's comprehensive security model distributes cognitive defence responsibility across government, civil society, media and the educational system. Rather than concentrating authority, the Finnish model builds resilience into every institutional layer of society. The military handles military-domain threats. The media handles media integrity. Schools handle media literacy. Government handles crisis communication. The coordination is cultural rather than institutional—a product of Finland's small population, high trust and shared threat perception regarding Russia.


Estonia's approach emphasises speed and adaptability. Having experienced Russian information operations since the 2007 cyber attacks, Estonia has developed rapid-response capabilities that leverage a small, highly networked government and a digitally literate population. The Government Communication Office coordinates crisis messaging, while the Internal Security Service addresses foreign interference. The model works because Estonia is small enough that coordination can be personal rather than procedural.


Taiwan offers perhaps the most innovative model. Facing persistent Chinese cognitive operations, Taiwan has developed a combination of rapid government fact-checking (the "60-minute response" standard for countering false claims), platform cooperation frameworks and citizen-led fact-checking networks. The Polis platform for participatory policymaking addresses cognitive vulnerability at a structural level by building consensus and reducing the polarisation that adversary operations exploit.


Each of these models has features that are difficult to transplant to larger, more diverse, more polarised democracies. Sweden's agency model works in a high-trust society with a strong consensus tradition. Finland's distributed model works in a small, cohesive population with a shared external threat. Estonia's speed advantage depends on a small government. Taiwan's innovation benefits from acute, existential threat perception that generates political will.


What A Democratic C2 Framework Looks Like

Despite the challenges, the elements of a democratic cognitive defence framework are identifiable—even if no single democracy has implemented all of them simultaneously.

The first element is clear mandate allocation. The bystander intervention analysis in this series identified diffusion of responsibility as the single most exploitable barrier to disruption. Overcoming it requires explicit assignment: which institution leads detection, which leads attribution, which leads public communication, which leads platform engagement, which leads educational response. The assignments must be made before a crisis, documented publicly and exercised regularly. The Nordic model of pre-assigned institutional responsibility during total-defence scenarios offers a template.


The second element is strategic coordination without operational centralisation. A single institution controlling all aspects of cognitive defence is neither desirable nor achievable in a democracy. But a coordination mechanism—a standing body that maintains shared situational awareness, deconflicts institutional responses and ensures that detection in one domain triggers response in others—is essential. This body needs authority to convene but not to direct; to share intelligence but not to classify; to coordinate but not to control. It is a facilitator, not a commander.


The third element is pre-authorised response protocols. The speed asymmetry between adversary attack and democratic response is the defining operational challenge. Pre-authorised protocols—agreed in advance, tested in exercises and triggered by specific indicators rather than ad hoc political decisions—can compress the response timeline without bypassing democratic accountability. If the indicators for a coordinated inauthentic network are met, the response protocol activates automatically. The political decision is made in advance, when there is time for deliberation; the operational response is executed in real time, when speed matters.
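The logic of such a trigger can be made concrete. As a purely illustrative sketch (the indicator names and threshold values below are invented assumptions, not any government's actual activation criteria), the point is that deliberation happens when the thresholds are set, while the run-time check is mechanical:

```python
from dataclasses import dataclass

# Hypothetical indicators for a coordinated inauthentic network.
# All names and numbers are illustrative assumptions only.
@dataclass(frozen=True)
class Indicators:
    coordinated_accounts: int       # accounts posting near-identical content
    amplification_rate: float       # reposts per minute across the network
    synthetic_content_share: float  # fraction of content flagged as synthetic

# Thresholds agreed in advance, when there is time for deliberation.
PROTOCOL_THRESHOLDS = Indicators(
    coordinated_accounts=200,
    amplification_rate=50.0,
    synthetic_content_share=0.4,
)

def protocol_activated(observed: Indicators,
                       thresholds: Indicators = PROTOCOL_THRESHOLDS) -> bool:
    """Return True only when every pre-agreed indicator is met.

    No ad hoc political decision occurs here: the check is mechanical,
    so the response can move at operational speed while remaining
    accountable to the prior, documented authorisation.
    """
    return (
        observed.coordinated_accounts >= thresholds.coordinated_accounts
        and observed.amplification_rate >= thresholds.amplification_rate
        and observed.synthetic_content_share >= thresholds.synthetic_content_share
    )

# An observed campaign that crosses all three thresholds activates the protocol.
campaign = Indicators(coordinated_accounts=350,
                      amplification_rate=120.0,
                      synthetic_content_share=0.6)
print(protocol_activated(campaign))  # True -> pre-authorised response executes
```

The design choice worth noting is the conjunction of indicators: requiring all thresholds to be met simultaneously biases the trigger against false positives, which matters because an over-eager automatic response is itself a democratic accountability risk.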


The fourth element is democratic oversight that is rigorous but not paralysing. Every cognitive defence capability—detection, attribution, public communication, platform engagement, educational intervention—must be subject to democratic scrutiny. But scrutiny must be designed to work at the speed of the threat, not at the speed of parliamentary inquiry. This means standing oversight mechanisms with real-time access, not post hoc review committees that examine events months after they occurred. It means transparency as a default—publishing attribution assessments, disclosing detection methods and subjecting intervention decisions to public evaluation.

The fifth element is civilian-military integration that respects the boundary between external defence and domestic civil space. Cognitive warfare blurs the line between military and civilian domains—a blurring that adversaries exploit but that democracies must manage carefully. Military capabilities for information operations in the deployed environment are essential and should be developed robustly. Extending those capabilities into domestic civil space is a red line that must be maintained, even when the threat operates across both domains. The interface between military and civilian cognitive defence must be clearly defined, heavily overseen and operated with extreme caution.


The Limits Of Institutional Design

Even the best institutional framework for democratic cognitive defence operates under constraints that no design can overcome. Democratic cognitive defence will always be slower than authoritarian cognitive attack. The speed advantage can be reduced but not eliminated. Accepting this asymmetry—and designing strategies that account for it rather than pretending it can be solved—is a mark of strategic maturity.


Democratic cognitive defence will always be constrained by civil liberties protections that limit the state's ability to monitor, filter and respond to information in the domestic environment. These constraints are not obstacles to effective defence; they are the substance of what is being defended. A cognitive defence strategy that erodes civil liberties to achieve operational effectiveness has defeated itself.

Democratic cognitive defence will always be politically contested. Any government capability that operates in the information domain will be accused—by domestic opponents, by adversary propaganda and by genuinely concerned civil libertarians—of being a tool for controlling public discourse. These accusations cannot be prevented; they can only be answered through transparency, accountability and a track record of restraint.


The goal is not a perfect system. The goal is a system that is good enough: one that can detect operations, coordinate responses, communicate effectively and learn from experience, all while maintaining the democratic accountability that distinguishes it from the authoritarian systems it opposes. That is a modest but realistic ambition, and achieving it would represent a significant advance on where most democracies stand today.
