From Compliance to Convergence: Integrating Human Factors into Organisational Cybersecurity Culture
- Emerging Risks Global
- Oct 29
- 13 min read

Abstract
Organisations face increasingly complex cyber threats, prompting a shift from purely technical defences to holistic strategies that integrate human behaviour, organisational culture and psychological theory. This paper critically evaluates the role of human factors in cybersecurity, drawing on a systematic review of standards and frameworks, including the NIST frameworks and the ISO/IEC 27000 series, and on recent literature on autonomous systems and behavioural science. While automation offers scalable defence, research consistently positions automation as complementary to, rather than a substitute for, human expertise. The paper explores insider risk, cognitive biases and the limitations of compliance-driven training, advocating for a concordance-based approach that fosters intrinsic motivation and shared responsibility. Models such as COM-B, Protection Motivation Theory and dual-process cognition are applied to cybersecurity contexts, highlighting the need for psychologically acceptable, user-centric design. The paper concludes that effective cybersecurity requires a socio-technical paradigm, one that blends adaptive education, leadership endorsement and cultural reinforcement to embed secure behaviours across organisational ecosystems. This work contributes to the evolving discourse on cyber resilience by proposing a framework for integrating behavioural science into cybersecurity strategy, with implications for policy, training and system design.
Organisations are aware of the threat of cyber-attacks (Borrett et al., 2014; Brewer, 2015; Dillon et al., 2021; Jeyaraj, Zadeh & Sethi, 2021). This coincides with a growing body of knowledge and academic routes in the field of cybersecurity, alongside common awareness of mature cybersecurity practices (Oyelami & Kassim, 2020; Sasse & Rashid, 2019). Numerous cyber defence and information assurance measures are identified in a systematic review of cybersecurity standards and frameworks produced by Reuben-Owoh and Haig (2025). The authors identify how these measures and systems are regularly reviewed in response to the emerging threat landscape, and that a plethora of standards and frameworks, including the NIST frameworks and the ISO/IEC 27000 series, exist to provide guidance on how to identify, prevent and mitigate computer network vulnerabilities and to manage security risks. Reflecting technological developments, many cybersecurity systems can be fully automated to detect, analyse and respond to cyber incidents and events without direct human intervention. Reinforcement-learning agents, rule-based anomaly detectors and adaptive network controllers can autonomously identify malicious traffic, quarantine compromised assets and even deploy remediation measures (Oesch et al., 2024; Wang & Li, 2023). However, despite the suggestion that human intervention is not necessary for effective cybersecurity, research indicates that periodic human-driven calibration remains essential for aligning autonomous decisions with evolving organisational policies and legal requirements (Wang & Li, 2023). In fact, no research evidences that the human dimension is dispensable in cyber defence. Instead, the body of literature consistently positions autonomy as complementary to, rather than a substitute for, human expertise. This essay aims to critically evaluate why organisations need to consider the human aspects of cybersecurity when designing and implementing measures to encourage secure behaviour among employees.
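To make this division of labour concrete, the following minimal sketch (illustrative only; the feature names, thresholds and actions are hypothetical rather than drawn from the cited systems) shows a rule-based anomaly detector in which the detection policy itself is a human-calibrated artefact, reflecting the periodic human-driven calibration described by Wang and Li (2023).

```python
from dataclasses import dataclass

@dataclass
class FlowStats:
    """Aggregate traffic statistics for one host (hypothetical features)."""
    host: str
    failed_logins: int
    bytes_out_mb: float

# Human-calibrated policy: analysts periodically review and adjust these
# thresholds so that automated decisions track organisational risk appetite
# and legal requirements.
POLICY = {"max_failed_logins": 5, "max_bytes_out_mb": 500.0}

def assess(flow: FlowStats, policy: dict) -> str:
    """Classify a host's traffic as 'allow', 'alert' or 'quarantine'."""
    violations = 0
    if flow.failed_logins > policy["max_failed_logins"]:
        violations += 1
    if flow.bytes_out_mb > policy["max_bytes_out_mb"]:
        violations += 1
    if violations == 2:
        return "quarantine"  # autonomous containment of a likely compromise
    if violations == 1:
        return "alert"       # ambiguous evidence: escalate to a human analyst
    return "allow"

suspicious = FlowStats(host="10.0.0.7", failed_logins=12, bytes_out_mb=812.0)
print(assess(suspicious, POLICY))  # -> quarantine
```

Even in this toy example, autonomy and human expertise are intertwined: the system acts alone only on unambiguous evidence, and the thresholds it acts on are the product of analyst review.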
A plethora of research has considered the human factor in cybersecurity (Forsythe et al., 2013; Mittal, 2015; Whittle, 1997). Although research does not always clearly differentiate between attacker, defender and user (Jeong et al., 2019), Mittal (2015, p. 141) stresses that the human is “the weakest link in the security of computer systems”. In a review of literature considering the interaction between users and computer systems, Mittal (2015, p. 142) refers to a paper produced by Vroom and Von Solms (2004), who identify the importance of user behaviour and how it is influenced by psychological, intrinsic and extrinsic factors, including the behaviour of peers. Counterproductive behaviours can result in the violation of information security policies, which falls within the National Protective Security Authority’s definition of Insider Risk, being the sum of “the likelihood of harm or loss to an organisation and its subsequent impact, because of the action or inaction of an insider” (National Protective Security Authority, 2023). Given the importance of people to organisations, research has considered human activities and designed mitigations to reduce the potential negative impact resulting from them (Georgiadou et al., 2022). With the context and impact of addressing human factors established, it is necessary to explore how security approaches are adapting to better integrate these considerations.
Recognising that systems may not be designed to sufficiently consider human behaviour, the CyBOK Human Factors Knowledge Area suggests that security must be psychologically acceptable, usable and aligned with the realities of human interaction (Sasse & Rashid, 2019). To achieve this, it proposes a human-centric design philosophy that accounts for cognitive load, physical context, device limitations and social dynamics. However, it is also important to remain aware that users within organisational settings may prioritise productivity over security, viewing security as a blocker rather than an enabler. The findings of Ashenden (2008) suggest that a further challenge to encouraging secure behaviour amongst employees may be caused by the reliance organisations place upon cybersecurity technology and processes, rather than on people. The author suggests that cybersecurity controls and systems should be supported by the development and maintenance of human qualities and soft skills such as authority, leadership, vision and good management practice. Gaps in considering the human aspects of cybersecurity have also been identified by the UK National Cyber Security Centre (NCSC), highlighting the ongoing debate over whether current efforts are sufficient and what new approaches experts recommend.
In the guidance article, ‘Putting people at the heart of an organisation's approach to cyber security’, the NCSC (2025) suggests that the human dimension in cyber security lacks maturity and does not always receive an appropriate level of organisational focus. It argues that efforts to improve in this area have historically failed to adopt a systemic approach, resulting in ineffectiveness, and that by adopting a whole-system approach, which considers both people and security together, more effective and resilient systems can be developed. Such an approach considers three key areas in concert: the people-centred design of a secure system, cybersecurity education and cybersecurity culture. This ensures that security tools and processes are developed with a variety of users in mind, that up-to-date best practices in cyber security are communicated effectively and that the development of a people-centred cyber security culture is encouraged. The people-centred design approach is supported by Nguyen Ngoc, Lasa and Iriarte (2022), who produced a paper reviewing human-centred design (HCD) in the context of Industry 4.0. Although the authors recognise the limitations of their study, having reviewed only a limited body of research, their work demonstrates the potential benefits of adopting an HCD approach to improve the usability of systems. This is consistent with the findings of Liaropoulos (2015), who proposes that humans are both the strongest and weakest link in cyber defence. Rather than treating users as passive endpoints, the author advocates empowering individuals with education and awareness training. This involves developing an understanding of how users interact with systems to identify and mitigate vulnerabilities, and embedding cyber hygiene practices into organisational culture. Kioskli et al. (2023) support the suggestion that cyber education should be human-centric and multidisciplinary, but argue that current approaches are not effective in preparing individuals to consider human factors associated with social engineering attacks, such as phishing. This is supported by a report from the Information Commissioner's Office (ICO) (2024), which states that 56% of businesses and 62% of charities that reported breaches or attacks identified phishing as the most disruptive type of attack they face.
Jeong et al. (2019) present the view that half of cyber security breaches are due to human error. This is consistent with the theory presented by Reason (1990), which proposes a taxonomy of error comprising skill-based ‘slips’, ‘lapses’, where people forget an intended action, and ‘mistakes’, where an incorrect course of action is adopted. The effective mitigation of this threat vector requires a converged approach to security, which blends technical controls with improving the awareness and capabilities of users (Choo, 2011; Dlamini & Modise, 2012; Furnell, Tsaganidi & Phippen, 2008). Rahim et al. (2015) acknowledge the call by researchers to improve the levels of cyber security awareness demonstrated across Internet communities (Furnell, Tsaganidi & Phippen, 2008; Rezgui & Marks, 2008). The authors suggest that this will improve the sensitivity of users to the myriad cyber threats and vulnerabilities which exist and will grow as organisations increasingly adopt technologies such as artificial intelligence, augmented reality, the Internet of Things (IoT) and autonomous vehicles (Ross & Maynard, 2021; Siponen, 2000). Research has emphasised that education tailored to develop human factors, including awareness, decision-making and psychological resilience, is essential to reducing risk and enhancing organisational readiness. Khadka and Ullah (2025) suggest that cybersecurity should be reframed as a socio-technical challenge, where adaptive training, emotional intelligence and gamified learning foster behavioural alignment with security protocols. Evans et al. (2016) argue that cybersecurity assurance must incorporate human reliability assessment and behavioural validation to address non-malicious insider threats and cognitive biases. This is further supported by Maalem Lahcen et al. (2020), who highlight that human errors, social engineering susceptibility and blind faith in technology are exacerbated by weak organisational cultures and poorly designed educational interventions. Huang et al. (2025) extend this by demonstrating that organisational culture, shaped by leadership, norms and psychological safety, directly influences cyber hygiene and threat response. Without a culture that reinforces vigilance and accountability, even well-trained individuals may default to risky behaviours, and if training remains superficial or compliance-driven, employees may revert to insecure shortcuts under pressure. Thus, education must not only transfer knowledge but also embed values and practices that align human behaviour with resilient security ecosystems.
The importance of culture in reinforcing shared values, norms and expectations around digital behaviour and cyber security lies at the core of the human dimension in cyber security, influencing everything from adherence to security policies to proactive threat reporting (Uchendu et al., 2021). Integrated with cyber security education, models that combine behavioural science, ethics and systems thinking can bridge the gap between technical defences and everyday user behaviour. A robust security culture is predicated on visible top-management support, comprehensive policies and continuous awareness and training programmes. Research by Zimmermann and Renaud (2021) further supports these recommendations, finding that ‘hybrid’ nudges, which pair a nudge with information helping people understand its objective, may improve behaviours. Although nudges have received some attention, the inconsistent results generated by digital nudges may support the need for organisations to consider alternative approaches when designing and implementing measures to foster cyber-secure practices (Zimmermann & Renaud, 2021); motivational theories such as Protection Motivation Theory (PMT) may provide a more robust foundation. This view is supported by Puhakainen and Siponen (2010), who demonstrate that even well-designed security training campaigns may falter if they overlook the motivational dimensions of human behaviour. By integrating PMT (Rogers, 1975) with organisational commitment constructs, they reveal that fear appeals alone can backfire, potentially undermining intrinsic motivation and fostering disengagement. This critique underscores the importance of cultivating intrinsic security values, embedding security as a shared personal responsibility rather than enforcing it through a purely top-down compliance model. It aligns with Michie, Van Stralen and West’s (2011) COM-B model, which identifies the importance of motivation, alongside the development of capability and the provision of opportunity, to behaviour change. To understand why users may not adhere to security protocols, we can further explore dual-process theories of cognition, particularly System 1 and System 2 thinking (Kahneman, 2011). System 1 is fast, automatic and intuitive, while System 2 is slow, deliberate and analytical. In cybersecurity contexts, many decisions are made under System 1, where users act quickly and habitually, often bypassing security measures due to cognitive overload or convenience. This aligns with the concept of cognitive miserliness, where individuals seek to conserve mental effort (Green & Dozier, 2023). Moreover, Adams and Sasse (1999) argue that security mechanisms must support business processes rather than obstruct them.
This essay has demonstrated that organisations must consider the human aspects of cybersecurity, and that adopting tools and psychological theories can support the design and implementation of measures to encourage secure behaviour among employees. Organisations can encourage secure behaviour amongst employees through leadership endorsement and by providing clear procedural guidance to foster buy-in and sustained behavioural change rather than one-off compliance (Uchendu et al., 2021). Ultimately, effective cybersecurity requires a shared understanding of risk and its implications, where users are not merely passive recipients of instructions but active participants in shaping secure environments. This shift from compliance to concordance, from doing as told to understanding and agreeing, marks a critical evolution in cybersecurity culture. To foster concordance and adherence and to reduce non-compliance, organisations must move beyond tick-box compliance training (Haney & Lutters, 2020) and instead promote care-based security cultures (Wood, 2025). The COM-B model (Michie, Van Stralen & West, 2011) provides a useful framework here, suggesting that behaviour is a function of Capability, Opportunity and Motivation. Security interventions should therefore enhance users’ psychological capability, create enabling environments and foster intrinsic motivation.
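As a purely illustrative sketch (the 0-5 scoring scale, threshold and survey values below are hypothetical and not part of the COM-B literature), the model's core logic can be expressed as a simple gap analysis: behaviour change is likely only when capability, opportunity and motivation are all sufficient, so an intervention should target the weakest component first.

```python
# Illustrative COM-B gap analysis; scores are hypothetical 0-5 survey ratings.
COMPONENTS = ("capability", "opportunity", "motivation")
SUFFICIENT = 3.0  # hypothetical threshold for "adequate" on the 0-5 scale

def weakest_component(scores: dict[str, float]) -> str | None:
    """Return the COM-B component most in need of intervention,
    or None if every component meets the threshold."""
    deficits = {c: SUFFICIENT - scores[c]
                for c in COMPONENTS if scores[c] < SUFFICIENT}
    if not deficits:
        return None  # behaviour change is likely; no gap to close
    return max(deficits, key=deficits.get)  # largest shortfall first

# Example: staff know the policy (capability) and have usable tools
# (opportunity) but see security as a blocker (motivation).
team_scores = {"capability": 4.2, "opportunity": 3.8, "motivation": 1.9}
print(weakest_component(team_scores))  # -> motivation
```

In this hypothetical reading, a team that knows the policy and has the tools but regards security as a blocker would be a candidate for motivation-focused interventions, such as the intrinsic-value approaches discussed above, rather than further training.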
References
Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12), 40-46.Ashenden, D. (2008). Information Security management: A human challenge?. Information security technical report, 13(4), 195-201.
Ajzen, I. (1991). The theory of planned behaviour. Organisational behaviour and human decision processes, 50(2), 179-211.
Ashenden, D. (2008). Information Security management: A human challenge?. Information security technical report, 13(4), 195-201.
Bandura, A. (1977). Self-efficacy: toward a unifying theory of behavioural change. Psychological Review, 84(2), 191.
Borrett, M., Carter, R. & Wespi, A. (2014). How is cyber threat evolving and what do organisations need to consider?. Journal of Business Continuity & Emergency Planning, 7(2), 163-171.
Brewer, R. (2015). Cyber threats: reducing the time to detection and response. Network Security, 2015(5), 5-8.
Choo, K. K. R. (2011). The cyber threat landscape: Challenges and future research directions. Computers & Security, 30(8), 719-731.
Corman, A. (2023). The Human Element in Cybersecurity–Bridging the Gap Between Technology and Human Behaviour. Unpublished Manuscript. Available at: https://www.researchgate.net/profile/Md-Mahabub-Alam-2/publication/380270220_The_Human_Element_in_Cybersecurity_-_Bridging_the_Gap_Between_Technology_and_Human_Behaviour/links/66336f4d3524304153582cba/The-Human-Element-in-Cybersecurity-Bridging-the-Gap-Between-Technology-and-Human-Behaviour.pdf
Dillon, R., Lothian, P., Grewal, S. & Pereira, D. (2021). Cyber security: evolving threats in an ever-changing world. In Digital Transformation in a Post-Covid World (pp. 129-154).
Dlamini, Z. & Modise, M. (2012). Cyber security awareness initiatives in South Africa: a synergy approach. In 7th International Conference on Information Warfare and Security (Vol. 1, pp. 98-107).
Evans, M., Maglaras, L. A., He, Y. & Janicke, H. (2016). Human behaviour as an aspect of cybersecurity assurance. Security and Communication Networks, 9(17), 4667-4679.
Forsythe, C., Silva, A., Stevens-Adams, S. & Bradshaw, J. (2013, July). Human dimension in cyber operations research and development priorities. In International Conference on Augmented Cognition (pp. 418-422).
Furnell, S., Tsaganidi, V. & Phippen, A. (2008). Security beliefs and barriers for novice Internet users. Computers & Security, 27(7-8), 235-240.
Georgiadou, A., Mouzakitis, S., Bounas, K. & Askounis, D. (2022). A cyber-security culture framework for assessing organization readiness. Journal of Computer Information Systems, 62(3), 452-462.
Green, M. L. & Dozier, P. (2023). Understanding human factors of cybersecurity: Drivers of insider threats. In 2023 IEEE International Conference on Cyber Security and Resilience (CSR) (pp. 111-116). IEEE.
Haney, J. & Lutters, W. (2020). Security awareness training for the workforce: moving beyond “check-the-box” compliance. Computer, 53(10), 10-1109.
Huang, W., Romanosky, S. & Uchill, J. (2025). Beyond technicalities: Assessing cyber risk by incorporating human factors (Research Report No. RRA3841-1). https://www.rand.org/pubs/research_reports/RRA3841-1.html
Information Commissioner's Office. (2024). Learning from the mistakes of others: A retrospective review (phishing). https://ico.org.uk/about-the-ico/research-reports-impact-and-evaluation/research-and-reports/learning-from-the-mistakes-of-others-a-retrospective-review/phishing/
Jeong, J., Mihelcic, J., Oliver, G. & Rudolph, C. (2019). Towards an improved understanding of human factors in cybersecurity. In 2019 IEEE 5th international conference on collaboration and internet computing (CIC) (pp. 338-345). IEEE.
Jeyaraj, A., Zadeh, A., & Sethi, V. (2021). Cybersecurity threats and organisational response: textual analysis and panel regression. Journal of Business Analytics, 4(1), 26-39.
Kahneman, D. (2011). Thinking, fast and slow. Allen Lane and Penguin Books.
Khadka, K. & Ullah, A. B. (2025). Human factors in cybersecurity: an interdisciplinary review and framework proposal. International Journal of Information Security, 24(3), 1-13.
Kioskli, K., Fotis, T., Nifakos, S. & Mouratidis, H. (2023). The importance of conceptualising the human-centric approach in maintaining and promoting cybersecurity-hygiene in healthcare 4.0. Applied Sciences, 13(6), 3410.
Liaropoulos, A. (2015). Cyber-security: A human-centric approach. In European Conference on Cyber Warfare and Security (p. 189). Academic Conferences International Limited.
Maalem Lahcen, R. A., Caulkins, B., Mohapatra, R. & Kumar, M. (2020). Review and insight on the behavioural aspects of cybersecurity. Cybersecurity, 3(1), 10.
Michie, S., Van Stralen, M. M. & West, R. (2011). The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science, 6(1), 42.
Mittal, S. (2015). Understanding the human dimension of cyber security. Indian Journal of Criminology & Criminalistics, 34(1), 141-152.
Mwim, E. N. & Mtsweni, J. (2022). Systematic review of factors that influence the cybersecurity culture. In International Symposium on Human Aspects of Information Security and Assurance (pp. 147-172). Springer International Publishing.
National Cyber Security Centre. (2025). Putting people at the heart of an organisation’s approach to cyber security. https://www.ncsc.gov.uk/collection/cyber-security-culture-principles/putting-people-at-the-heart-of-an-organisations-approach-to-cyber-security
National Protective Security Authority. (2023). Introduction to insider risk. https://www.npsa.gov.uk/specialised-guidance/insider-risk-guidance/introduction-insider-risk.
Nguyen Ngoc, H., Lasa, G. & Iriarte, I. (2022). Human-centred design in industry 4.0: case study review and opportunities for future research. Journal of Intelligent Manufacturing, 33(1), 35-76.
Oesch, S., Austria, P., Chaulagain, A., Weber, B., Watson, C., Dixson, M. & Sadovnik, A. (2024). The path to autonomous cyber defense. IEEE Systems Journal, 18(2), 123–135.
Oyelami, J. O. & Kassim, A. M. (2020). Cyber security defence policies: A proposed guidelines for organisations cyber security practices. International Journal of Advanced Computer Science and Applications, 11(8).
Puhakainen, P. & Siponen, M. (2010). Improving employees' compliance through information systems security training: An action research study. MIS Quarterly, 757-778.
Rahim, N. H. A., Hamid, S., Mat Kiah, M. L., Shamshirband, S. & Furnell, S. (2015). A systematic review of approaches to assessing cybersecurity awareness. Kybernetes, 44(4), 606-622.
Reason, J. (1990). Human Error. Cambridge University Press.
Reuben-Owoh, B. & Haig, E. (2025). A Systematic Review of Voluntary Cybersecurity Standards and Frameworks: B. Reuben-Owoh, E. Haig. International Journal of Information Security, 24(5), 206.
Rezgui, Y. & Marks, A. (2008). Information security awareness in higher education: An exploratory study. Computers & Security, 27(7-8), 241-253.
Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91(1), 93-114.
Ross, P., & Maynard, K. (2021). Towards a 4th industrial revolution. Intelligent Buildings International, 13(3), 159-161.
Sasse, M. A. & Rashid, A. (2019). Human factors knowledge area issue 1.0. Cybok: The CyberSecurity Body of Knowledge.
Siponen, M. T. (2000). A conceptual foundation for organizational information security awareness. Information Management & Computer Security, 8(1), 31-41.
Uchendu, B., Nurse, J. R., Bada, M. & Furnell, S. (2021). Developing a cyber security culture: Current practices and future needs. Computers & Security, 109, 102387.
Vroom, C. & Von Solms, R. (2004). Towards information security behavioural compliance. Computers & Security, 23(3), 191-198.
Wang, Z. & Li, M. (2023). Performance and limitations of fully autonomous intrusion detection systems. Computers & Security, 115, 102611.
Whittle, D. B. (1997). Cyberspace: The human dimension. WH Freeman & Co.
Wood, P. (2025). Human-centric risk management: Caring about risk. City Security Magazine, (95), 33. https://content.yudu.com/web/3zs7s/0A3zs7y/CSMSpring25/html/index.html?page=33&origin=reader.
Zimmermann, V. & Renaud, K. (2021). The nudge puzzle: matching nudge interventions to cybersecurity decisions. ACM Transactions on Computer-Human Interaction (TOCHI), 28(1), 1-45.
