DESIGNING DIGITAL FRICTION: ENHANCING SENSE-MAKING OVER FACT-CHECKING
- Emerging Risks Global

- Mar 2

Fact-checking does not work. That statement requires immediate qualification: fact-checking works as a corrective for specific false claims, and the organisations that do it perform a valuable democratic function. What fact-checking does not do, and structurally cannot do, is address the system-level dynamics that make societies vulnerable to cognitive manipulation. Fact-checking treats symptoms. The disease is in the architecture.
This article argues that the more promising—and more difficult—approach is to introduce deliberate friction into digital information ecosystems: design features that slow the propagation of content, force moments of reflection and reward sense-making over reaction. The goal is not to censor or curate but to change the speed at which information moves through the system, creating space for the deliberate cognition that manipulation is designed to bypass.
Why Fact-Checking Hits A Ceiling
The limitations of fact-checking as a counter-cognitive warfare strategy are well documented and they are structural rather than operational.
The first limitation is scale. The volume of false, misleading and manipulative content circulating on major platforms at any given moment exceeds the capacity of every fact-checking organisation in the world combined by several orders of magnitude. Generative AI is making this worse: the cost of producing manipulative content is approaching zero while the cost of evaluating it remains substantial. The economics are insuperable. You cannot fact-check your way out of an information crisis when the attack surface is infinite and the defensive resources are finite.
The second limitation is timing. Fact-checks are almost always reactive—they arrive after the false claim has circulated, often long after it has achieved its intended effect. The cognitive science is clear that corrections rarely undo the effects of initial exposure: first impressions stick, and corrections that arrive after attitudes have formed face an uphill battle against anchoring effects and motivated reasoning. A fact-check that arrives twenty-four hours after a viral falsehood reaches a fraction of the original audience and changes a fraction of their minds.
The third limitation—and the most fundamental—is that fact-checking addresses the wrong level of the problem. Cognitive warfare does not operate primarily through individual false claims. It operates through the degradation of sense-making capacity: the systematic erosion of the cognitive processes through which populations evaluate information, form judgments and make collective decisions. You can fact-check a specific deepfake. You cannot fact-check the erosion of trust in all media. You can correct a specific statistical distortion. You cannot correct the retreat into identity-protective cognition that makes people reject corrections that challenge their worldview.
Fact-checking assumes a model of the problem in which the primary issue is false content and the solution is true content. But the primary issue is not false content. It is a system architecture that rewards speed over accuracy, engagement over understanding and emotional reaction over reflective judgment. In that architecture, true content and false content propagate through the same dynamics and the dynamics—not the content—are the vulnerability.
The Case For Friction
Friction, in engineering, is resistance that slows movement. In the digital context, friction is any design feature that introduces delay, effort, or reflection into the process of consuming or sharing information. The current design philosophy of digital platforms is to minimise friction—to make engagement as fast, easy and instinctive as possible. This philosophy is optimised for advertising revenue. It is catastrophic for collective sense-making.
The case for introducing deliberate friction into digital ecosystems rests on a simple observation from cognitive science: the quality of human judgment is inversely correlated with the speed at which it is made. Fast, instinctive judgment—what psychologists call System 1 processing—is efficient but error-prone. It relies on heuristics, is susceptible to emotional manipulation and defaults to identity-consistent conclusions. Slow, deliberate judgment—System 2 processing—is effortful but more accurate. It evaluates evidence, considers alternatives and is more resistant to manipulation.
The digital information environment is engineered for System 1. Infinite scroll. One-click sharing. Autoplay video. Push notifications. Engagement metrics that reward the content that produces the fastest reaction. Every design choice accelerates information processing and reduces the probability that users will engage in the reflective evaluation that manipulation struggles to survive. Friction reverses this. Not by preventing engagement—that would be censorship—but by creating moments in which deliberate cognition can occur. The evidence that friction works is already substantial.
The most studied friction intervention is the share prompt: a message that appears when a user attempts to share content, asking them to read the article before sharing or to consider whether the content is accurate. Field experiments on major platforms have shown that share prompts reduce the sharing of misleading content by significant margins—not because they change attitudes but because they create a moment of reflection in which System 2 processing can engage.
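The mechanism is simple enough to sketch in code. The fragment below is an illustrative sketch only, not any platform's actual implementation; the function name, the prompt wording and the callback interface are all invented for this example. The point it makes is structural: the share action is gated behind a moment of reflection, and the user remains free to proceed.

```python
import time

def share_with_prompt(post, user_confirms):
    """Gate the share action behind a reflection prompt.

    `user_confirms` is a callback that displays the prompt and returns
    True only if the user still wants to share after seeing it. The
    content is never blocked or removed; the user always decides.
    """
    prompt = (
        "Before you share: have you read the full article, "
        "and do you believe it is accurate?"
    )
    if user_confirms(prompt):
        return {"shared": True, "post": post, "at": time.time()}
    # Declining the prompt simply cancels this share, nothing more.
    return {"shared": False, "post": post, "at": time.time()}
```

Note that the friction is entirely procedural: no judgment about the content's truth is encoded anywhere in the flow.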
Exposure to full context is another form of productive friction. Studies demonstrate that users who read full articles rather than headlines make better accuracy judgments. Platforms that display article summaries, reading-time estimates, or full-text previews before allowing sharing introduce friction that improves the quality of information propagation through the system.
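One of those context cues, the reading-time estimate, is trivial to compute. The words-per-minute figure below is an assumed average reading speed, not a documented platform constant, and the function is a minimal sketch of the idea.

```python
def reading_time_minutes(text, wpm=230):
    """Estimate reading time for display before a share action.

    `wpm` is an assumed average adult reading speed; real systems
    may tune this or account for images and formatting.
    """
    words = len(text.split())
    return max(1, round(words / wpm))
```

Displayed next to a share button, even this small signal makes the gap between "read the headline" and "read the article" visible to the user.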
Diversity exposure—deliberately surfacing content from outside a user's ideological bubble—introduces a different kind of friction: the cognitive discomfort of encountering opposing perspectives. The evidence here is more mixed. Some studies show that exposure to opposing views can increase polarisation rather than reduce it, particularly when the opposing content is hostile or extreme. But other research suggests that exposure to moderate opposing perspectives, particularly from within-group members, can reduce attitude extremity and increase openness to alternative viewpoints. The key variable is not exposure per se but the quality and context of the exposure.
Temporal friction—simply slowing the speed at which content propagates—may be the most powerful intervention of all. Several platforms have experimented with delays on sharing: requiring a brief waiting period before shared content goes live, or reducing the algorithmic amplification of content in its first hours of circulation. These interventions target the viral cascade dynamics that cognitive warfare exploits, without making any judgment about the content's truth or value.
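A minimal sketch of temporal friction inside a ranking pipeline might look like the following. The exponential ramp and the six-hour constant are illustrative assumptions, not any platform's documented formula; what matters is that amplification depends only on content age, never on content judgment.

```python
import math

def amplification_factor(age_hours, ramp_hours=6.0):
    """Scale algorithmic amplification by content age.

    New content is deliberately under-amplified and approaches full
    amplification only after roughly `ramp_hours`, damping viral
    cascades in the first hours of circulation. The exponential
    ramp is an illustrative choice.
    """
    return 1.0 - math.exp(-age_hours / ramp_hours)

def ranked_score(engagement_score, age_hours):
    """Apply temporal friction to a raw engagement score."""
    return engagement_score * amplification_factor(age_hours)
```

Because the damping is a pure function of time, identical rules apply to true and false content alike; only the cascade dynamics change.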
Friction Is Not Censorship
The most common objection to friction-based interventions is that they restrict free expression. This objection rests on a confusion between access and amplification. Friction does not prevent anyone from saying anything. It does not remove content. It does not block access. What it does is change the speed and ease with which content moves through a system—a design parameter that platforms already control and manipulate for commercial purposes.
The current platform architecture is not a neutral baseline. It is an engineered system optimised for engagement and the optimisation produces specific effects on collective cognition: faster propagation, more emotional content, more polarisation, more vulnerability to manipulation. Changing the optimisation parameters is not censorship; it is a different design choice with different consequences.
There is a legitimate concern that friction-based interventions could be designed or captured in ways that disproportionately affect certain types of speech. This concern is real and must be addressed through transparency, accountability and democratic oversight of platform design choices. But the alternative—maintaining an architecture that is optimised for the dynamics cognitive warfare exploits—is not a neutral or rights-respecting choice. It is a choice that privileges the adversary's operational advantage over democratic publics' capacity for reflective self-governance.
Designing For Sense-Making
The deeper shift that friction interventions represent is a move from content-level to process-level defence. Instead of trying to identify and remove false content—which is reactive, resource-intensive and structurally disadvantaged—friction-based approaches improve the quality of the cognitive processes through which users engage with all content.
This is a fundamentally different defensive posture. Content-level defence plays the adversary's game: it responds to each new piece of manipulative content with a specific countermeasure. Process-level defence changes the game: it makes the entire system more resistant to manipulation by improving the quality of collective sense-making.
Designing for sense-making means asking different questions about platform architecture. Instead of asking "how do we remove false content?" the questions become: How do we create environments in which users are more likely to engage in reflective evaluation? How do we make accuracy more salient than engagement? How do we reward the production and sharing of content that contributes to understanding rather than content that provokes reaction? How do we design for the cognitive processes we want rather than the cognitive processes that maximise revenue?
Some of these design changes are straightforward. Displaying source credibility indicators, showing content age, providing context about why content is appearing in a feed and making sharing a two-step process rather than a one-click action are all implementable with current technology. Others are more ambitious: redesigning recommendation algorithms to optimise for informational diversity rather than engagement, creating platform architectures that reward long-form content over soundbites and developing community-based sense-making features that leverage collective intelligence rather than individual reaction.
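As one concrete sketch of the more ambitious direction, a recommendation pass could trade raw engagement for source diversity. Everything here, the greedy re-ranker, the `diversity_weight` knob and the penalty formula, is a hypothetical illustration rather than a known platform algorithm.

```python
def rerank_with_diversity(posts, diversity_weight=0.3):
    """Re-rank a feed, trading pure engagement for source diversity.

    Greedily pick the next post, penalising sources that have
    already appeared earlier in the feed. Each post is a dict with
    'source' and 'engagement' keys (an assumed schema for this sketch).
    """
    ranked, seen_sources = [], {}
    remaining = list(posts)
    while remaining:
        def score(p):
            repeats = seen_sources.get(p["source"], 0)
            # Repeated sources lose a fraction of their score per repeat.
            return p["engagement"] * (1 - diversity_weight * repeats)
        best = max(remaining, key=score)
        remaining.remove(best)
        seen_sources[best["source"]] = seen_sources.get(best["source"], 0) + 1
        ranked.append(best)
    return ranked
```

With a feed dominated by one outlet, the re-ranker interleaves other sources that a pure engagement sort would bury, which is the "informational diversity rather than engagement" objective in miniature.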
The most radical implication is that platform business models may need to change. The advertising-driven model monetises attention and attention is maximised by minimising friction and maximising emotional engagement—precisely the dynamics that cognitive warfare exploits. A platform that genuinely designed for sense-making would produce less engagement, generate less advertising revenue and deliver a less addictive user experience. It would also produce a healthier information ecosystem and a more cognitively resilient population. The question is whether democratic societies are willing to impose that trade-off through regulation, or whether they will continue to accept an architecture optimised for the adversary's advantage.
The Long Game
Friction is not a complete solution to cognitive warfare. It does not address adversary capability, it does not solve the attribution problem and it does not repair the structural conditions—economic insecurity, institutional erosion, social fragmentation—that generate cognitive vulnerability in the first place.
What friction does is buy time. It slows the dynamics that adversary operations exploit, creating space for reflective cognition, institutional response and collective sense-making. In a threat environment where speed is the adversary's primary advantage—where the window between attack and effect is measured in hours—slowing the system down is strategically significant.
The deeper value of the friction approach is that it respects democratic agency. It does not tell people what to think. It does not remove content they might want to see. It does not position institutions as epistemic gatekeepers. What it does is create an information environment in which people are more likely to think well—to engage the deliberate, reflective cognitive processes that manipulation is designed to bypass.
The adversary's operational model depends on a system that moves faster than people can think. Designing friction into that system is not a technical intervention. It is a democratic one: a decision to value the quality of collective judgment over the speed of collective reaction and to design information environments that serve democratic self-governance rather than undermine it.



