
The Human Factor in Cybersecurity: Influence, Impact, and Leadership


Introduction

In the modern, hyper-connected digital world, cybersecurity has evolved beyond a purely technical challenge to encompass the complexities of human interaction, organisational culture, and leadership. Despite continuous advances in defensive technologies, a persistent and growing body of evidence indicates that human factors, including cognition, behaviour, biases, and culture, represent both profound vulnerabilities and indispensable assets in cybersecurity. High-profile incidents such as the 2017 Equifax breach and the 2020 Twitter hack starkly exemplify how attackers can bypass even the most robust technical controls by exploiting human error or social engineering weaknesses.


The shift toward remote work, proliferation of social engineering tactics, and rapidly changing threat landscapes further amplify the urgency to understand and address human-centred risks. As organisations move to cloud-based systems and adopt artificial intelligence (AI)-driven security tools, the interplay between human adaptation, behaviour, and security outcomes becomes even more critical. At the organisational level, security leaders, particularly Chief Information Security Officers (CISOs), are increasingly expected to not only manage technical risks but also shape culture, influence behaviour, and lead effective change to foster security resilience.


This article draws deeply from the academic insights of foundational documents such as Framing the Human, Security in the Wild, CISO Culture, and Users Are Not the Enemy, integrating their key themes with a wide spectrum of contemporary web sources. Through a multidisciplinary lens, it examines the role of human behaviour in shaping cybersecurity outcomes and critically analyses how leadership and culture can be leveraged to mitigate vulnerabilities and enhance resilience. The discussion navigates theoretical frameworks, cognitive psychology, real-world user behaviour, organisational culture, and practical leadership approaches, offering a cohesive, evidence-based argument for putting the human factor at the core of all cybersecurity strategies.


Beyond Technology: The Centrality of Human Behaviour

A substantial body of research underscores that most modern cyber-attacks target the human layer, not technology per se. According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches involved a human element, whether error, credential misuse, or manipulation through phishing and other social engineering. This pattern holds across sectors, from financial services and healthcare to critical infrastructure and education. The Equifax breach of 2017, for example, was precipitated not only by a technical flaw (an unpatched Apache Struts vulnerability) but also by breakdowns in process, communication, and human vigilance. Similarly, the Twitter hack of 2020 bypassed technological safeguards by exploiting the trust and inattention of employees through vishing (voice phishing) and social engineering.


Social engineering, ranging from phishing and spear-phishing to business email compromise and pretexting, remains the most common and effective tactic, reportedly involved in over 90% of successful cyberattacks. Attackers skilfully exploit psychological factors such as urgency, authority, trust, and reciprocity, manipulating users into revealing credentials, transferring funds, or downloading malware. As AI and deepfake technologies advance, these attacks are becoming ever more convincing and pervasive.


The convergence of psychology and cybersecurity is pivotal in understanding why even knowledgeable users regularly make insecure choices. Cognitive biases, systematic deviations from rational judgment, underlie many security errors. These biases manifest in both end-users and decision-makers, influencing individual behaviours and even shaping organisational security policies. Overconfidence, for example, may lead network administrators to neglect critically needed updates, or leaders to assume that existing controls are sufficient; both failure modes have featured in notable breaches. Risk perception is also fundamentally psychological: individuals often estimate cyber risks not through probability and impact analysis but through affect, familiarity, and social cues, resulting in over-preparation for rare incidents and under-preparedness for more probable, but less “salient”, threats.


Studies such as Security in the Wild reveal that users’ behaviour in situ frequently diverges from prescribed best practices. Users routinely navigate trade-offs between security, convenience, productivity, and social engagement. For instance, employees may reuse passwords, circumvent complex authentication for the sake of efficiency, or develop their own creative (but unapproved) security adaptations. Importantly, such behaviours are not always irrational or reckless. Many users make pragmatic security decisions within their local context, responding to environmental constraints, organisational pressure, or social dynamics, which may conflict with universal policy but succeed locally. Empirical observations underscore that overly rigid or poorly aligned security policies, when imposed, can drive users toward workarounds that increase organisational risk. An employee who must log into multiple systems daily, for example, may use the same (weak) password everywhere despite knowing the risks, simply because of the excessive cognitive load and daily workflow pressure.


The Challenge of Compliance and the Role of Motivation

Research highlights that awareness alone does not guarantee secure behaviour. The notorious “knowing-doing gap” means that even well-trained employees may not comply with policies due to misaligned incentives, habitual behaviour, or motivational deficits. Protection Motivation Theory (PMT) and the Theory of Planned Behaviour (TPB) offer valuable frameworks by contextualising security compliance in terms of perceived threat severity, self-efficacy, attitudes, subjective norms, and behavioural control. Both intrinsic motivators (personal responsibility, pride, fear of consequences) and extrinsic motivators (rewards, penalties, recognition) influence whether employees adhere to secure practices. Leadership plays a critical role in shaping these levers.


Recent literature advocates moving from traditional technology-centric models to “human-centric” frameworks, in which the design and implementation of security controls are explicitly aligned with human behaviour and organisational realities. These models incorporate psychological resilience, adaptive training, habit formation, and ethical considerations into systemic defences. The “Human-Centric Cybersecurity Framework” integrates:


  • Psychological resilience development (stress management, emotional intelligence training)

  • Adaptive, personalised training and awareness programs — gamified, culturally tailored, and role-specific

  • Alignment of policies with day-to-day user contexts and needs

  • Socio-technical integration, blending technical controls with behavioural interventions and incentives

  • Ethical/AI considerations for human-AI collaboration, fairness, and privacy


By modelling security posture not just in terms of technical maturity but also organisational culture, user experience, and psychological preparedness, these frameworks provide a pathway to more resilient and adaptive security ecosystems. Organisational culture, the shared values, beliefs, and attitudes of a workforce, influences security behaviours as profoundly as formal rules or technical controls. From “blame the user” cultures that breed resentment and noncompliance to “empower the defender” approaches that foster positive engagement, the character of a company’s culture is a key determinant of risk. The Organisational Cybersecurity Culture Model (OCCM) and similar frameworks emphasise that effective cultures are built on:


  • Support and trust over fear and punishment

  • Leadership example and visibility

  • Transparent, open communication about risks and incidents

  • Recognition and encouragement of positive security behaviours

  • Continuous learning and feedback, not just “tick-the-box” compliance


Theories of compliance and behaviour change can be used to support the development of positive security cultures. Protection Motivation Theory (PMT) posits that security behaviours are influenced by perceived threat severity, vulnerability, response efficacy, and self-efficacy; enhanced with rewards, contextual cues, and supporting structures, PMT has proven predictive in user compliance studies. The Theory of Planned Behaviour (TPB) explores how attitudes, subjective norms (social pressures), and perceived behavioural control shape intentions to perform or bypass security practices. General Deterrence Theory focuses on the impact of sanctions (their certainty, severity, and swiftness) in deterring undesirable behaviours, underscoring the importance of clear policies and enforcement mechanisms, while cognitive and socio-technical models highlight the value of decision-support systems, habit formation, nudges, and reduced cognitive strain in stimulating secure behaviour change. Socio-technical integration connects user behaviour with adaptive technical defences to reduce errors.
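As a rough illustration of how PMT’s factors are often operationalised in compliance studies, the toy model below scores an employee’s compliance intention as a weighted sum of the four factors the theory names. The weights and example scores are hypothetical, chosen only to show the mechanics; they are not drawn from any study cited here.

```python
# Illustrative sketch only: a toy operationalisation of Protection
# Motivation Theory (PMT). Weights are hypothetical, not empirical.

PMT_WEIGHTS = {
    "threat_severity": 0.25,   # how serious a breach is perceived to be
    "vulnerability": 0.20,     # how likely the person feels targeted
    "response_efficacy": 0.30, # belief that the secure behaviour works
    "self_efficacy": 0.25,     # confidence in being able to perform it
}

def compliance_intention(scores: dict[str, float]) -> float:
    """Combine 0-1 factor scores into a 0-1 intention estimate."""
    return sum(PMT_WEIGHTS[k] * scores[k] for k in PMT_WEIGHTS)

# A user who believes the threat is real but doubts their own ability:
# low self-efficacy drags the overall intention estimate down.
print(compliance_intention({
    "threat_severity": 0.9,
    "vulnerability": 0.8,
    "response_efficacy": 0.7,
    "self_efficacy": 0.2,
}))  # 0.645
```

The point of the sketch is that no single factor dominates: training that raises threat awareness but leaves self-efficacy low still yields a weak intention score, matching the “knowing-doing gap” discussed above.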


CISOs are uniquely positioned to bridge the gap between technical defences and human realities. No longer mere custodians of security technology, modern CISOs must:


  • Foster an organisation-wide security culture

  • Drive complex behaviour change initiatives

  • Align cybersecurity with business strategy and objectives

  • Engage, influence, and partner with diverse stakeholders — from the boardroom to the frontlines


CISO and executive leadership must be visible champions of security, not only through policy but by modelling secure behaviours, sharing stories of incidents (without blame), and integrating security into business priorities. Instead of one-way edicts, CISOs should foster ongoing dialogue, acknowledge users’ expertise, and use narrative to make security relatable. Policies developed with user input, rather than imposed top-down, are more likely to reflect real workflows and be embraced, and participatory design with regular feedback loops serves to identify and resolve points of friction. Crucially, a culture that “catches people doing the right thing” (e.g., reporting a phish or championing secure behaviour) encourages positive repetition; punitive or shame-based approaches, in contrast, foster disengagement and under-reporting of errors. Training designed to encourage this should be adaptive, continuous, role-specific, and interactive, employing simulation, gamification, “nudges”, microlearning, and scenario-based exercises to build cognitive and emotional engagement, not just rote compliance.


Borrowed from behavioural economics, nudging uses subtle prompts at the moment of decision to steer users toward better security choices without coercion. Examples include dynamic warnings on suspicious emails, password strength meters, and just-in-time feedback when risky behaviours are detected. Data-driven nudge strategies are demonstrably effective in reducing errors, improving incident response, and embedding secure habits at scale. CISOs incorporating nudge theory report measurable improvements in user resilience and security culture maturity.
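As a concrete sketch of one such nudge, the snippet below implements a minimal password strength meter that offers gentle, just-in-time feedback at the moment of choice rather than a hard rejection. The entropy estimate, thresholds, and messages are illustrative assumptions, not a production-grade policy.

```python
# Minimal sketch of a password-strength "nudge": non-blocking feedback
# at the moment of decision. Thresholds and wording are illustrative.
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough entropy estimate from character-class pool size and length."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def strength_nudge(password: str) -> str:
    """Return a gentle prompt steering the user, not a hard rule."""
    bits = estimate_entropy_bits(password)
    if bits < 50:
        return "Weak: a passphrase of a few random words would be much stronger."
    if bits < 75:
        return "Fair: adding length helps more than adding symbols."
    return "Strong: nice choice."

print(strength_nudge("password1"))                     # Weak
print(strength_nudge("correct horse battery staple"))  # Strong
```

The design choice matters as much as the code: because the meter advises rather than blocks, it preserves user autonomy, which is the defining property of a nudge as opposed to a mandate.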


A growing body of research points to the importance of diversity and inclusion within cybersecurity culture, for both defensive effectiveness and equity. Teams with diverse perspectives, experiences, and backgrounds are better equipped to detect new and sophisticated threats, challenge assumptions, and innovate. Cultural factors (national, linguistic, generational, or otherwise) shape attitudes towards risk, authority, and compliance, so culturally adaptive training, multilingual resources, and flexible communication are essential. Inclusion must address not just demographics but also cognitive diversity and psychological safety. As digital globalisation accelerates, security cultures must remain inclusive and adaptable across cultures, generations, and languages; tailoring interventions to different generational attitudes and digital skills improves engagement and resilience.


Despite advances in automation and controls, “culture eats strategy for breakfast”; if the human element is neglected, even the best-laid plans will fail. Security leaders must remain vigilant against complacency, iterate on feedback, and sustain behavioural change efforts indefinitely, adapting to new threats and realities. Human error, behaviour, and organisational culture stand at the heart of cybersecurity outcomes. Social engineering, cognitive biases, and psychological stressors enable attackers to circumvent technical barriers. Yet users, when empowered, well-led, engaged, and supported, can be transformed from the “weakest link” into the strongest line of defence. Central to this transformation are effective CISO leadership, supportive and transparent communication, adaptive policies, behavioural interventions, and continuous measurement of security culture. Diversity and inclusion, ethical AI, and global adaptability provide further resilience and ensure that human-centric security is both fair and effective.


To realise this vision, security leaders must integrate human factors into every layer of cybersecurity design, policy, and practice. By combining behavioural science, organisational development, ethical technology, and robust leadership, the future of security can be both secure and profoundly human-centric.

 
 
 


EMERGING RISKS GLOBAL ®

Emerging Risks Global ® (ERG) is a trading name of Woodlands International Ltd ©

Registered in England and Wales: 11256211.


This website and its content is copyright of  Woodlands International Ltd ©. 2025  All rights reserved. 
