
Human-Centered Cybersecurity Revisited: From Enemies to Partners

Focusing on "enabling approaches" that treat humans as partners adds another layer of protection to our cybersecurity defenses.


Humans, especially in their role as end users in organizations, have long been considered the weakest link—even enemies—in cybersecurity. This image stems from the perception that it is essentially the users who behave insecurely by creating weak passwords, clicking on phishing links, or providing data over insecure networks. Thus, “enemies” here refers to insecure behaviors and policy violations attributed to seemingly thoughtless, careless, or uninformed user actions, not to malicious activities by attackers or hostile insiders.

Many previous measures to tackle the supposedly adversarial end user fall into the category of constraining approaches, which aim to limit human influence and thus the potential for error. Yet, despite technical and process controls, organizations still must rely heavily on human interaction with technical systems. This gave rise to considering approaches, which try to increase the usability of security technologies38 by reducing errors, insecure workarounds, and security-usability trade-offs. But even with these efforts, security attacks targeting humans, such as phishing attacks that exploit cognitive biases and heuristics, are at an unprecedented high17 and are becoming increasingly sophisticated. Not only is the number of reported incidents rising; the financial losses associated with them are rising even faster.17 It is therefore clear that human cognition and behavior play an important role in coping with persistent and quickly evolving security threats, demanding new pathways.

Key Insights

  • In our complex, interconnected world, we need to use all available resources to counteract cyber threats—including humans. Technology alone is not sufficient to counteract emerging challenges.

  • Therefore, to prevent cyber incidents, we need to move beyond seeing humans as a problem and constraining human interactions with technology.

  • We propose viewing humans as partners, not only focusing on errors and incidents but also holistically analyzing and supporting human contributions to cybersecurity. This shift from a problem-preventing to a solution-fostering paradigm opens new pathways to tackle current cybersecurity challenges.

In the following, we showcase examples of constraining and considering approaches and reflect on their implications in terms of related questions, applied solutions, and their outcomes. We then make the argument for adding a third category of measures, enabling approaches, that treat humans as a resource and a partner in security efforts (Figure 1). Informed by insights from the human sciences, safety science, and initial evidence from the cybersecurity domain, enabling approaches focus on fostering positive human contributions to security—for example, through intrinsic motivation—rather than on preventing human error. This switch in perspective opens pathways to new solutions for supporting secure human behavior and decision making. We therefore call for evidence-based research studying human motivation and behavior that contributes to security, and for developing and testing the resulting measures. A more holistic view achieved through the careful selection and combination of approaches can address some of the challenges we currently face in cybersecurity.

Figure 1.  Visual summary of the approaches toward humans in cybersecurity: constraining, considering, and enabling approaches.

The contribution of the article is threefold:

  • We describe exemplary cases of constraining and considering approaches in the cybersecurity domain and illustrate the resulting implications of related solutions.

  • We make the case for adding enabling approaches to the portfolio, to complement existing measures where constraining approaches are not feasible and where increasing usability, in line with considering approaches, is not enough to unlock the untapped human potential for tackling today's sophisticated cyber threats.

  • Through a comparison of practical examples, we aim to increase awareness of the implications of choosing or combining approaches. We thereby provide a decision-making aid for system designers and security professionals.

Constraining Approaches

Constraining approaches, as depicted in Figure 1a, aim to limit users’ (negative) influence on a system to prevent human error. Measures in that category primarily comprise the following elements:

  • Automation: Through the use of automation, human influence, and thus the potential for human error, is reduced. For example, an automatic filter prevents a phishing email from entering a mailbox, thereby restricting human access.

  • Constraining policies: Password composition rules, regulations prohibiting the use of compromised USB sticks, and blocking email attachments from external senders can reduce security threats and guide users. Another way to restrict people is through the principle of least privilege.30 This concept holds that a user should only have access to the information needed to complete a required task (for example, only people who work in the HR department need access to personnel files); a minimal sketch of such an access check appears after this list.

  • Top-down education/training: Efforts to train users in complex security technologies (that is, to match technological requirements) and to be aware of security risks may reduce human error.
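To make the least-privilege idea concrete, the following minimal Python sketch (our illustration, not part of the original measures; the roles, resources, and dictionary-based lookup are hypothetical simplifications of what a real access-control engine would do) shows an allow-list check in which each role can reach only the resources its tasks require:

```python
# Hypothetical illustration of the principle of least privilege:
# each role is granted an explicit allow-list of resources and nothing else.
ROLE_PERMISSIONS = {
    "hr_staff": {"personnel_files", "payroll_summaries"},
    "engineer": {"source_code", "build_logs"},
    "intern": {"build_logs"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only if the resource is explicitly allowed for the role."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("engineer", "personnel_files"))  # False: not needed for the role's tasks
print(can_access("hr_staff", "personnel_files"))  # True: required for HR work
```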

Table 1 summarizes further examples of solutions aligning with constraining approaches in comparison with those related to considering and enabling approaches. The table illustrates three prominent human-related cybersecurity challenges: password authentication, phishing detection, and incident handling.

Table 1. Illustration of the consequences of different approaches toward humans in cybersecurity.

While partially beneficial and even necessary in some cases, constraining approaches are not without their challenges. On the one hand, automation is necessary in today’s society because it increases the efficiency and scalability of tasks and frees humans from tedious or dangerous work, such as alarm validation in security operations centers or security tasks in hazardous environments. On the other hand, not all processes can be easily automated, including authentication and communication. Sometimes it is necessary to switch to manual, human-based handling, as in the case of emergencies. Further challenges of automation lie in the invisibility of the underlying processes and in unexpected outcomes16 that can make it difficult for humans to anticipate, evaluate, and act on the information provided. And finally, automated processes are largely designed by humans and thus not completely free from error or bias, as seen in artificial-intelligence-based security tools.

Likewise, the use of constraining policies can be challenging. When policies are not in line or even in conflict with users’ perceptions or primary tasks, the effects can be counterproductive. For example, strict password policies and mandatory password changes frustrate users,23 leading to the creation of passwords that follow easy-to-guess patterns.20 Blocking USB sticks or email attachments from external senders can lead to insecure workarounds, such as sending sensitive data via private email accounts without encryption.

And finally, there are certain challenges to top-down training. For example, security-awareness campaigns do not always translate into actual behavior.6 Security technologies are often designed by experts, for experts. Thus, even if users are trained, some technologies might still be difficult to use (for example, due to noninclusive designs), leading to either limited adoption or, again, insecure shortcuts. Furthermore, training all users to become security experts is not always economically justified given the ever-changing nature of new threats.

To conclude, constraining approaches can be beneficial and even necessary in some cases; for example, automation is needed to match attackers’ efforts, which are also often built on automation. Policies can provide helpful guidance for users. Yet constraining approaches often come with negative side effects, such as users creating insecure workarounds when the measures do not adequately consider their primary tasks and relevant cognitive and psychological aspects. For example, requiring people to remember numerous complex and random passwords conflicts with human memory. Research suggests that short-term memory has a limited capacity for recording such information compared with meaningful, rehearsed information (for example, birthdays or a dog’s name), which is more easily transferred to long-term memory.2 In addition, not every process can reasonably be automated, so humans remain a relevant and essential part of the sociotechnical system. This concerns all steps in the process, from determining the need for new technologies, to designing them, to interacting with them as employees or customers.

Following this line of argument, and focusing especially on the mismatch between technical security requirements and users’ primary tasks, perceptions, and cognition,1 many researchers argue for considering human factors early on in the design process.

Not every process can reasonably be automated, so humans remain a relevant and essential part of the sociotechnical system.

Considering Approaches

The aim of considering approaches (Figure 1b) is to enhance the usability of security technologies to reduce errors, security-usability trade-offs, and insecure workarounds. This shift from constraining to considering approaches is not unique to the cybersecurity domain: As described by Rasmussen,32 research on human behavior across application areas often starts with normative, prescriptive models describing rational behavior, leading to constraining approaches. Over time, the models evolve to descriptions of actual instead of rational behavior, resulting in considering approaches.

Considering approaches mainly comprise the following measures:

  • Understanding users: Especially for emerging technologies, considering approaches most often aim to understand human needs, expectations, and mental models. The findings form the basis for identifying potential challenges with existing security technologies or gaps for which solutions are needed.

  • Usable design: A second group of measures involves enhancing the usability of existing solutions or proposing new security solutions aimed at decreasing security-usability trade-offs, for example, password alternatives in the form of easier-to-memorize passphrases or images.

  • User-centered education/training: Similar to constraining approaches, education and training also play an important role. But here the focus is on aligning training efforts with users’ needs and cognition, for example, ensuring that relevant information is actually conveyed and understood or increasing users’ motivation to take training through gamification.

Further examples for solutions aligning with considering approaches are summarized in Table 1.

Considering approaches not only increase awareness of human factors in cybersecurity but also lead to the design of security solutions with enhanced usability. Yet we still face security challenges. These include the still limited consideration of usability aspects by designers and system developers in practice3 and challenges with implementing and maintaining usable security solutions.19 Another challenge is security professionals’ stance toward human interaction with technology. A review of national security strategies and industry security reports revealed that humans are still often considered a problem in cybersecurity.38 In many cases, the focus of constraining approaches, which perceive humans as a problem to be controlled, has shifted to humans as a problem to consider in the system design.

Considering approaches not only increase awareness of human factors in cybersecurity but also lead to the design of security solutions with enhanced usability.

At the same time, we are confronted with an increasing number of security threats targeting humans as an attack vector, for example, by exploiting cognitive biases and heuristics in phishing attacks.17 It appears we cannot solve today’s cybersecurity challenges by merely excluding or constraining the human and that even considering approaches are not enough to thwart attackers’ efforts.

Rather, we need to rethink the roles and labels we assign to people. We should question the goals we are setting and the conditions in which we expect people to achieve these goals. Only when we stop enforcing compliance with inept policies and technologies can we expect to permanently build people’s security capacities. Furthermore, initial research indicates that humans are motivated and able to behave securely when they are adequately enabled to do so. This has been illustrated by, for example, non-constraining password nudges that bridge the gap between users’ security perception and technical password strength and motivate the creation of secure passwords.39 Hence, while considering human aspects and usability is essential, doing so is insufficient to tap the full potential of humans to actively and willingly contribute to security.
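As a rough illustration of what such a non-constraining nudge might look like, consider the following Python sketch (our own simplified example, not the nudge design evaluated in the cited work;39 the length/character-variety heuristic and the 60-bit threshold are assumptions, and real deployments would use an estimator such as zxcvbn). Instead of rejecting a password, it returns feedback that contrasts a rough strength estimate with common user perceptions:

```python
import math
import string

def rough_strength_bits(password: str) -> float:
    # Crude estimate: length times log2 of the character-set size actually used.
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation]
    charset = sum(len(p) for p in pools if any(c in p for c in password)) or 1
    return len(password) * math.log2(charset)

def nudge(password: str) -> str:
    # Feedback instead of rejection: the user remains free to keep the password.
    if rough_strength_bits(password) < 60:          # assumed threshold
        return ("This may look strong to a person but is easy for software to guess. "
                "A longer phrase of unrelated words would help.")
    return "Good: length contributes more to strength than symbols alone."

print(nudge("P@ssw0rd!"))                    # short but symbol-heavy: nudged
print(nudge("lake otter violin sandwich"))   # long passphrase: encouraged
```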

It is time to reconsider how we deal with the human factor in cybersecurity and explore new avenues for research. Security is neither a product nor mere technology, but it requires the integration of people, processes, and technology.5 As such, we need to consider the interaction of humans and security technologies within complex sociotechnical systems. It is not sufficient to rely on findings from purely technological disciplines. If we truly want to understand and improve security behavior, it is essential that we also incorporate insights from disciplines primarily concerned with the “people” and “process” aspects, such as the human sciences and safety science.

Rethinking the Human Factor in Sociotechnical Systems

As pointed out earlier, one way of reconsidering the human factor in cybersecurity is by learning from insights in related disciplines that have experienced similar challenges (see, for example, the case of risk communication18). For cybersecurity, the relevant areas are those closely connected to humans and safety: the human sciences and safety science.15

Human sciences.  By exploring the human and social sciences, we can gain useful insights into different areas and contexts in which humans learn, decide, and act. Disciplines such as sociology examine social interactions in groups and organizations while disciplines such as psychology focus on individual perception and behavior. Psychological aspects have already been successfully applied to human-technology interaction and usable security research in considering approaches, for example, in the exploration of human mental models of security, the evaluation of users’ security perceptions, and in explaining gaps in users’ privacy-related intentions and behaviors.

Safety science.  Safety science studies safety, risk, and resilience in sociotechnical systems. The interdisciplinary field encompasses not only social sciences and psychology (that is, human sciences), but also health sciences, physical sciences, and engineering.14

While safety is not the same as security, both aim to protect sociotechnical systems from risks, harms, and failures. Safety focuses on protection from unintended errors and accidents, whereas security focuses on protection from deliberate attacks. While the cause for a failure might be different in safety and security (for example, technical problem vs. attack), measures to support humans in preventing, detecting, and coping with threats follow similar routes. Thus, several researchers have already argued that learning from safety science can be beneficial for cybersecurity.15

Safety science has a longer tradition compared with cybersecurity, yet both have experimented with similar sets of approaches, including constraining approaches (for example, proceduralization focusing on compliance14) and considering approaches (for example, cognitive systems engineering that considers human factors14). Similar to cybersecurity, previous approaches have been successful in reducing the number of safety incidents to a certain level but either also led to unintended side-effects or were not designed to cope with the complexity of today’s sociotechnical systems. Therefore, current trends in safety science suggest exploring new pathways through not only considering the human aspects but also actively supporting humans in contributing to safety as a partner.14

Relevant insights from the human sciences and safety science.  Insights from human and safety sciences that are relevant for rethinking the human factor in cybersecurity include the following:

  • Systems thinking and constructivism: Problems in today’s complex sociotechnical systems emerge from the interplay of many factors, including humans. It is thus seldom possible or helpful to trace back errors to a single human error. In addition, attributing error is a constructivist process rather than an objective one (for example, as argued by Woods et al.36).

  • Analyzing success: Human behavior can play a dual role, contributing to both error and success.22 Studying not only error but also success is thus relevant for a holistic understanding of the factors preventing negative and fostering positive outcomes. For example, in the healthcare sector, Dekker13 found that the same deviations from procedures that had led to incidents in a few cases had also almost always contributed to successful patient care.

  • Adaptive human behavior: Human behavior changes with, and can adapt to, new or changing environmental circumstances. Restricting adaptive behaviors can sometimes also hinder contributions to success and productivity, as in the healthcare example mentioned above.13 Instead, permitting adaptive behaviors, particularly in unfamiliar or unforeseen circumstances, allows humans to navigate situations that policies may not have anticipated or are unable to predict. This increases the likelihood of restabilizing a system.22

  • Learning process: Learning from not only a small number of errors but also near misses and successful operations is beneficial, because doing so provides additional quantitative data and insights into potentially insecure actions.35 It requires the systematic collection and sharing of data within and across organizations, as successfully implemented in domains like aviation. However, blaming people for errors or resorting to punishment is counterproductive for learning.12

  • Resilience: Increasing static barriers might not be sufficient to tackle emergent threats in complex and dynamic sociotechnical systems.22 Current resilience research instead focuses on an organization’s ability to flexibly adjust to, cope with, and recover from expected and unexpected incidents.22 Thus, there is an emerging shift from a resistance stance toward adaptability.

  • Expertise: People with high levels of expertise, rather than those at high levels in the hierarchy, should be included in decision making. This can also include end users, as they are experts in their primary tasks.38 For example, risk management research suggests that nonexperts should be made equal partners in risk management, as they might identify risks that security experts do not see.18

  • Ownership: Psychological ownership is an essential predictor for IT system acceptance and use7 and a key component for extra-role behavior—that is, user engagement beyond the job role that contributes positively to the workplace.31 Ownership can specifically be fostered in less-structured work environments31 and through targeting the factors leading to psychological ownership.4 To do so, systematic approaches for supporting behavior change can be useful.

  • Human strengths and capabilities: There are domains where emerging technologies such as artificial intelligence can outperform humans and have the potential to de-bias human decision making, for example, in employment decisions. There are also many domains, however, in which human intuition and simple heuristics can outperform complex algorithms and models in both accuracy and efficiency.10 Human expertise can be used to complement technology or algorithms, such as in cybersecurity,33 medical diagnostics, and deception detection. Furthermore, there are cases in which algorithms can be biased,40 and these cases benefit from complementary approaches, such as having humans account for those biases.

Enabling Approaches

Based on insights from the human sciences and safety science, we suggest supplementing the current portfolio of mainly constraining and considering approaches with enabling approaches. These acknowledge that today’s sociotechnical systems are highly complex and interconnected and that a combination of factors contributes to error but also to success. Enabling approaches therefore aim to make use of all available resources, with a particular focus on humans and the greater sociotechnical system. They aim to integrate humans as a potential security resource and treat them as an equal partner to technology. These approaches consider not only the interactions of humans and technology but also the systemic factors influencing those interactions, such as leadership and organizational security culture. Figure 1c therefore depicts multiple interactions among humans and technology, complementing each other in a system context. This stands in contrast to constraining and considering approaches, where the focus is on technology design, with the human element placed below or subordinate to the technology.

These approaches consider not only the interactions of humans and technology but also the systemic factors influencing those interactions, such as leadership and organizational security culture.

Unlike previous approaches, enabling approaches take a “solution fostering” perspective rather than a “problem preventing” one. Along with this transition comes a change in how humans are viewed, from being a problem that needs to be controlled or considered to recognizing them as a potential security asset. This switch in perspective allows for complementing existing research approaches with a more holistic consideration of the human role in cybersecurity and an analysis of the human factors contributing to success.

As an example, in the case of a phishing attack, we might not only analyze what led a few people to click on the link, but would also explore what led the majority of people to detect and perhaps even report the threat. The focus of analysis would not be on counting how many people reported the phishing compared with how many people fell for it but rather on understanding why some people detected it and why some decided to actively report the suspicious email. Exemplary questions include: What made them notice and check the fraudulent URL? For example, did they successfully apply the knowledge acquired from training? Did they know from the media or a peer? And what made them report it? For example, how did they know how to report it? What motivated them to report it? And which resources were available to them to foster reporting? Methods for identifying these factors include barrier analysis.26 To identify differences in behavioral determinants (factors that steer one’s behavior), barrier analysis divides people into two subgroups: those who do perform a desired behavior (doers) and those who do not perform a desired behavior (non-doers). Identifying the determinants that make the most significant difference helps pinpoint the most effective factors to focus on in motivating individuals to change their behavior. This approach can improve the response to phishing attacks, enabling the detection of new threats and better supporting those who fall for a phish with adequate training or resources. Therefore, we might gain new and highly relevant insights by switching our perspective—that is, if we also study what goes well instead of treating users as enemies and focusing only on preventing problems.
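To sketch how such a doers/non-doers comparison could be operationalized, the following Python snippet (our illustration; the respondents and determinant names are hypothetical, and a real barrier analysis would rely on proper sampling and statistical testing) ranks determinants by how much more prevalent they are among people who reported a suspicious email than among those who did not:

```python
# Hypothetical survey data: "reported" marks doers; the other fields are candidate determinants.
respondents = [
    {"reported": True,  "knows_report_button": True,  "had_training": True,  "peer_reminder": True},
    {"reported": True,  "knows_report_button": True,  "had_training": False, "peer_reminder": True},
    {"reported": False, "knows_report_button": False, "had_training": True,  "peer_reminder": False},
    {"reported": False, "knows_report_button": False, "had_training": True,  "peer_reminder": False},
]
determinants = ["knows_report_button", "had_training", "peer_reminder"]

def prevalence(group, key):
    return sum(r[key] for r in group) / len(group) if group else 0.0

doers = [r for r in respondents if r["reported"]]
non_doers = [r for r in respondents if not r["reported"]]

# Determinants with the largest doer/non-doer gap are candidate levers for interventions.
gaps = {d: prevalence(doers, d) - prevalence(non_doers, d) for d in determinants}
for name, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: gap = {gap:+.2f}")
```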

Building on insights from the human sciences and safety science, enabling approaches are envisioned to include the following measures:

  • Adoption of a humans-as-partners mindset: The first step toward enabling approaches would be to adopt a humans-as-partners mindset, acknowledging the human potential to actively contribute to security in complex systems when analyzing security incidents, designing security measures or policies, or formulating research questions. For example, as discussed above, human strengths and capabilities such as human intuition and creativity should be considered.

  • Understanding positive human contributions to security: In line with the safety insight on analyzing success, enabling approaches would identify factors contributing to success, identify factors contributing to both success and failure to avoid negative side-effects resulting from generally prohibiting or fostering them, and explore how human strengths can be leveraged to increase security—for example, how to foster ownership and extra-role behavior.

  • Human-technology partner design: Following the humans-as-partners mindset, measures such as training or technical security solutions should be designed and evaluated with the people who have the highest expertise for a given task, that is, the employees working with it daily rather than people higher up in the hierarchy.

Further examples of envisioned solutions aligning with enabling approaches are summarized in Table 1.

Initial research in the direction of enabling approaches has already provided us with some examples in which humans in cybersecurity can be a security partner or a wall of defense rather than an enemy. Yet, while some of the examples date back a couple of years, research on using an enabling approach or mindset is still scarce in the cybersecurity domain.

Examples of enabling approaches in cybersecurity.

Example 1.  The first set of examples comprises complementing technological developments with human strengths and capabilities.

For example, Heartfield and Loukas21 analyzed how humans can successfully contribute to the detection of social engineering attacks within a human-as-a-security-sensor (HaaS) framework. The HaaS framework is based on the idea that human senses, competencies, and knowledge can supplement technical sensor data. Successful applications of human heuristics in the areas of medical diagnostics and deception detection can serve as inspiration for their use in tackling cybersecurity challenges. Another area for the complementary use of human and technological strengths is in emergency situations, where considering computers as part of the emergency management team can allow people to “continue to do the things they do well, supported by the technology, not driven by it.”8 Furthermore, several authors have studied the positive effects of crowdsourcing approaches in which collective human information is used to detect phishing attempts and bad domains.29
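The following minimal Python sketch (our own illustration of the underlying idea, not the HaaS framework or the cited crowdsourcing systems; the threshold and reporter weights are assumptions) shows how independent human reports of a suspicious URL could be aggregated like sensor readings and escalated once enough evidence accumulates:

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 2.0                       # assumed tuning parameter
reporter_weight = defaultdict(lambda: 1.0)       # could reflect a reporter's past accuracy
accumulated_weight = defaultdict(float)          # evidence collected per reported URL

def report_suspicious(url: str, reporter: str) -> bool:
    """Record a human report; return True once the URL should be escalated to analysts."""
    accumulated_weight[url] += reporter_weight[reporter]
    return accumulated_weight[url] >= ESCALATION_THRESHOLD

print(report_suspicious("http://examp1e-login.test/reset", "alice"))  # False: single report
print(report_suspicious("http://examp1e-login.test/reset", "bob"))    # True: escalate for review
```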

Another human strength is creativity. An illustrative example of security-related creativity comes from the French elections in 2017, in which a disinformation campaign was successfully counteracted.24 Among other things, the campaign team creatively designed traps for attackers by purposefully crafting misleading information that would be easily detectable by the public but not by attackers or algorithms. The team did so by including the names of fictional characters invented by French humorists. Their cultural significance to the French public would make “news” by alleged “sources” with these names easily detectable as disinformation.24 Related creative human strategies to slow down, distract, and detect attackers include (virtual) honeypots and measures to increase distrust among cybercriminals.

And despite AI advances in the area of creativity, research shows that the complex nature of human creativity is still difficult to replicate or surpass with AI27 and that a complementary approach can still be beneficial compared with a competing approach.37

Example 2.  The second example concerns making use of the advantages of adaptive human behavior.

Constraining approaches aim to limit adaptivity to avoid human error, for example, through mandating compliance, and thus consider noncompliance a negative aspect. Yet adaptive behaviors and deviations from policies can have at least three beneficial purposes:

  • They can be seen as a warning signal that there is a mismatch between the policy and users’ primary tasks or security perceptions, one that leads users to engage in “shadow security” practices.25 These are adaptive security practices, such as compromises worked out within teams to align their tasks with perceived security requirements.

  • In some cases, a deviation from policies can lead to behavior that is more secure than originally intended. For example, a group of people might decide to use encrypted email communication even though not mandated by a policy. Others might come up with valuable feedback on security measures or report suspicious actions without being tasked to do so.

  • Deviations from policies can be an indicator that the policies are no longer up to date with the quickly evolving threat landscape and that humans have already adapted their behavior to deal with the changing environment. This particular human strength to flexibly adapt to changing circumstances can then be useful to maintain system security until updated policies are in place.

Example 3.  A third set of examples concerns human resilience, for example, people’s ability to flexibly and manually handle processes in emergency situations.

As an example, a German university fell victim to a large-scale cyber attack in 2019. Thanks to efficient emergency management, the adaptive behaviors of employees, and support from colleagues at other universities in analyzing the damage, restoring systems, and manually distributing new passwords to all 38,000 affected accounts, the damage was limited and the systems were restored.28 This example shows not only how adaptive and resilient human behavior can contribute to security, but also how important it is to be able to handle processes manually and to keep everyone in the loop even when processes are automated.

Envisioning a research agenda for enabling approaches.

Viewing the human as a potential security resource and an equally treated partner to technology in tackling security challenges opens new avenues for research and practice. Studying in which cases humans can actively contribute to security and fostering supporting factors may complement the set of available approaches to encourage secure behaviors and counteract quickly evolving cyber attacks. We therefore suggest working toward an evidence-based foundation for applying enabling approaches in cybersecurity and call for more empirical research on the human contribution not only to failure but also to success. The research should consider different security outcomes, as well as different user groups and application areas. As a first step, this means considering an enabling approaches mindset when formulating research questions and choosing the methods and outcome measures. The examples described here, including the HaaS framework, human adaptivity, and resilience, already illustrate cases in which humans can positively contribute to security beyond compliance. Furthermore, related work indicates that intrinsic motivation is a relevant factor in following security policies34 and forming security habits.9 Therefore, we suggest conducting more systematic research into analyzing the factors that enable and motivate humans to detect insecure or risky situations and to behave securely. Based on the insights of empirical research, implications should then be derived in the form of frameworks, guidelines, interventions, and technical security solutions.

The following research questions concerned with both research content and methodology can guide future research and address potential challenges related to enabling approaches.

Content-related research questions:

  • What motivates users to behave securely?

  • How can we foster cybersecurity ownership and extra-role behavior?

  • In which cases or situations can humans outperform technology?

  • How do we deal with factors contributing to success and failure at the same time?

  • How can we address malicious behaviors while applying enabling approaches?

Methodology-related research questions:

  • How can we measure cybersecurity success aside from the absence of errors?

  • How can we evaluate the assumed benefit of developed “enabling” measures?

  • Which stakeholders should be involved in the design of “enabling” measures?

Conclusion: Selecting and Combining Approaches

In the previous sections, we described the predominant approaches in cybersecurity that concern human-computer interaction, namely constraining and considering approaches. We then illustrated how—based on insights from human and safety sciences—humans could further contribute to cybersecurity in a positive way if enabling approaches were considered. Why is it relevant to actively distinguish and choose from these approaches when conducting cybersecurity research or implementing cybersecurity measures in practice?

As seen in Table 1, the approach or mindset that the researcher or designer adopts has major implications for the formulation of (research) questions, the choice of solutions to deal with the human factor in cybersecurity, and the selection of ways to evaluate measures.

The chosen approaches manifest in the published research and in the related tools or policies. For example, Davis et al.11 reviewed 49 empirical studies on cybersecurity behavior. They found that a large majority (36 articles) focused on measuring compliance, which is in line with a constraining-approaches mindset, whereas only 10 also considered extra-role volunteering behavior, an outcome better suited to evaluating enabling approaches. If the goal is to avoid human error by enforcing a new security policy, measuring compliance as an outcome appears suitable. However, potential human contributions to security, extending beyond mere compliance, are then neither considered nor measured. This does not necessarily imply that the potential does not exist, but rather reflects a set of assumptions held by the researcher as well as a lack of inquiry and analysis. In line with the saying “What you look for is what you find,” we suggest a conscious and careful selection of approaches, and hence related solutions. Table 1 directly compares examples of different security-related application areas along with their underlying questions and related evaluation measures.

As also apparent from Table 1, the main difference between the approaches might be not only in the selected solution itself but also in the stance with which it is implemented. For example, in the phishing case, training is an action listed in all approaches. However, the focus of the training in constraining approaches might be on explaining policies and practicing compliant behavior. The focus of the training in considering approaches might be on teaching users to understand and counteract phishing threats in a playful and practical way. And the focus of the training in enabling approaches might be on instilling a sense of security and enhancing intrinsic motivation to be a “security sensor” across all ranks, from employees to top management. This approach encourages considering systemic factors rather than relying solely on top-down directives at the employee level. Thus, even if the type of solution appears similar, the way the human is treated—and how they might react to the treatment—differs significantly. If one aims to have humans as partners in tackling cybersecurity challenges, one should carefully consider the selected solutions and whether they are in line with enabling approaches.

The approaches are not necessarily exclusive; in some cases, a combination might be suitable. Consider, once again, the case of phishing (Table 1). Cybersecurity professionals in an organization may decide to limit users’ interaction with known phishing threats as much as possible through automated filters and the blocking of executable files, that is, by employing a constraining approach. To account for falsely classified email and new forms of attacks, they may aim to increase user awareness of phishing through a poster campaign and add a visual warning to emails sent from outside the organization, in line with considering approaches. Suppose the organization then experiences a new phishing attack that circumvents technical protection mechanisms and only a few employees report it. The cybersecurity professionals may decide to treat that incident as a learning opportunity rather than blaming the individual employees who fell for the phish. Furthermore, they may aim to understand what made the few employees report the phish, to enhance communication with employees in similar future cases. This approach of partnering with employees and learning from negative as well as positive outcomes would be in line with enabling approaches.

While this example illustrates a suitable combination of approaches, it might be contradictory, frustrating, or confusing for users in other cases. For example, if users feel they are treated as partners in security and provide proactive ideas for security improvements in line with enabling approaches, they may get frustrated if their idea is rejected for not being compliant with current policies, which would be in line with constraining approaches. Likewise, if specific tasks were completely automated to limit the potential for user errors, in line with constraining approaches, users might not be able to take over manually and maintain resilience in an emergency, as envisioned in enabling approaches. Finally, it might not be sufficient to enhance the usability of an existing system, in line with considering approaches, to convincingly express the idea of partnering with humans in cybersecurity, as envisioned in enabling approaches. A better understanding of human needs and strengths is still needed. These examples highlight that a holistic consideration of cybersecurity measures and a related stance toward humans are highly relevant in counteracting cybersecurity threats, with humans as partners rather than enemies.

Acknowledgments

This research was funded by the Digitalization Initiative of the Zurich Higher Education Institutions (DIZH) and the Swiss National Science Foundation (Grant 207550).

More Online

To view the complete list of references for this article, please visit https://dl.acm.org/doi/10.1145/3665665 and click on Supplemental Material.

    References

    • 1. Adams, A. and Sasse, M.A. Users are not the enemy. Commun. ACM 42, 12 (1999), 40–46.
    • 2. Allen, R.J. and Baddeley, A.D. Working memory and sentence recall. In Interactions between Short-Term and Long-Term Memory in the Verbal Domain. Psychology Press (2008), 75–97.
    • 3. Alt, F. and von Zezschwitz, E. Emerging trends in usable security and privacy. I-Com 18, 3 (2019), 189–195.
    • 4. Ambuehl, B. et al. Can participation promote psychological ownership of a shared resource? An intervention study of community-based safe water infrastructure. J. of Environmental Psychology 81 (2022), 101818.
    • 5. Andress, A. Surviving Security: How to Integrate People, Process, and Technology. CRC Press, Boca Raton, FL, USA (2003).
    • 6. Bada, M., Sasse, M.A., and Nurse, J.R. Cyber security awareness campaigns: Why do they fail to change behaviour? arXiv preprint arXiv:1901.02672 (2019).
    • 7. Barki, H., Paré, G., and Sicotte, C. Linking IT implementation and acceptance via the construct of psychological ownership of information technology. J. of Information Technology 23, 4 (2008), 269–280.
    • 8. Carver, L. and Turoff, M. Human-computer interaction: The human and computer as a team in emergency management information systems. Commun. ACM 50, 3 (Mar. 2007), 33–38; 10.1145/1226736.1226761
    • 9. Collins, E.I. and Hinds, J. Exploring workers’ subjective experiences of habit formation in cybersecurity: A qualitative survey. Cyberpsychology, Behavior, and Social Networking 24, 9 (2021), 599–604.
    • 10. Czerlinski, J., Gigerenzer, G., and Goldstein, D.G. How good are simple heuristics? In Simple Heuristics That Make Us Smart. Oxford University Press, Oxford, UK (1999), 97–118.
    • 11. Davis, J., Agrawal, D., and Guo, X. Enhancing users’ security engagement through cultivating commitment: The role of psychological needs fulfilment. European J. of Information Systems 32, 2 (2023), 195–206.
    • 12. Dekker, S. Just Culture: Restoring Trust and Accountability in Your Organization. CRC Press, Boca Raton, FL, USA (2018).
    • 13. Dekker, S. Why do things go right? (Sep. 28, 2018); http://www.safetydifferently.com/why-do-things-go-right/
    • 14. Dekker, S. Foundations of Safety Science: A Century of Understanding Accidents and Disasters. Routledge, Boca Raton, FL, USA (2019).
    • 15. Ebert, N. et al. Learning from safety science: A way forward for studying cybersecurity incidents in organizations. Computers & Security 134 (2023), 103435; 10.1016/j.cose.2023.103435
    • 16. Endsley, M.R. From here to autonomy: Lessons learned from human–automation research. Human Factors 59, 1 (2017), 5–27.
    • 17. FBI Internet Crime Complaint Center (IC3). Internet Crime Report 2021 (2021).
    • 18. Fischhoff, B. Risk perception and communication unplugged: Twenty years of process. Risk Analysis 15, 2 (1995), 137–145.
    • 19. Gutfleisch, M. et al. How does usable security (not) end up in software products? Results from a qualitative interview study. In 2022 IEEE Symp. on Security and Privacy (SP). IEEE, New York, NY, USA (2022), 893–910.
    • 20. Habib, H. et al. User behaviors and attitudes under password expiration policies. In SOUPS @ USENIX Security Symp. USENIX, Berkeley, CA, USA (2018), 13–30.
    • 21. Heartfield, R. and Loukas, G. Detecting semantic social engineering attacks with the weakest link: Implementation and empirical evaluation of a human-as-a-security-sensor framework. Computers & Security 76 (2018), 101–127.
    • 22. Hollnagel, E., Woods, D., and Leveson, N. Resilience Engineering: Concepts and Precepts. Ashgate Publishing Ltd., Farnham, UK (2006).
    • 23. Inglesant, P.G. and Sasse, M.A. The true cost of unusable password policies: Password use in the wild. In Proceedings of the SIGCHI Conf. on Human Factors in Computing Systems (CHI). ACM, New York, NY, USA (2010), 383–392.
    • 24. Jeangène Vilmer, J. The “Macron Leaks” operation: A post-mortem (2019); https://www.atlanticcouncil.org/wpcontent/uploads/2019/06/The_Macron_Leaks_Operation-A_Post-Mortem.pdf
    • 25. Kirlappos, I., Parkin, S., and Sasse, M.A. Learning from “Shadow Security”: Why understanding non-compliance provides the basis for effective security. In Proceedings of the Workshop on Usable Security. Internet Society, Reston, VA, USA (2014), 1–10.
    • 26. Kittle, B. A Practical Guide to Conducting a Barrier Analysis. Helen Keller International, New York, NY, USA (2013).
    • 27. Koivisto, M. and Grassini, S. Best humans still outperform artificial intelligence in a creative divergent thinking task. Scientific Reports 13, 1 (2023), 13601.
    • 28. Kost, M., Loibl, B., Reuter, P., and Stenke, M. #JLUoffline. Der Cyber-Angriff auf die Justus-Liebig-Universität Gießen im Dezember 2019. ABI Technik 42, 1 (2022), 43–54.
    • 29. Lain, D., Kostiainen, K., and Čapkun, S. Phishing in organizations: Findings from a large-scale and long-term study. In IEEE Symp. on Security and Privacy (SP). IEEE, New York, NY, USA (2022), 842–859.
    • 30. Ma, X. et al. Specifying and enforcing the principle of least privilege in role-based access control. Concurrency and Computation: Practice and Experience 23, 12 (2011), 1313–1331.
    • 31. O’Driscoll, M.P., Pierce, J.L., and Coghlan, A. The psychology of ownership: Work environment structure, organizational commitment, and citizenship behaviors. Group & Organization Management 31, 3 (2006), 388–416.
    • 32. Rasmussen, J. Risk management in a dynamic society: A modelling problem. Safety Science 27, 2–3 (1997), 183–213.
    • 33. Schaltegger, T., Ambuehl, B., Ackermann, K.A., and Ebert, N. Re-thinking decision-making in cybersecurity: Leveraging cognitive heuristics in situations of uncertainty. In Proceedings of the 57th Hawaii Intern. Conf. on System Sciences (HICSS). Honolulu, HI, USA (2024), 4734–4743.
    • 34. Son, J. Out of fear or desire? Toward a better understanding of employees’ motivation to follow IS security policies. Information & Management 48, 7 (2011), 296–302.
    • 35. Van der Schaaf, T.W., Lucas, D.A., and Hale, A.R. Near Miss Reporting as a Safety Tool. Butterworth-Heinemann, Oxford, UK (2013).
    • 36. Woods, D.D., Johannesen, L.J., Cook, R.I., and Sarter, N.B. Behind Human Error: Cognitive Systems, Computers and Hindsight. Technical Report. University of Dayton Research Institute (UDRI), Dayton, OH.
    • 37. Wu, Z. et al. AI creativity and the human-AI co-creation model. In Human-Computer Interaction. Theory, Methods and Tools: Thematic Area, HCI 2021, Held as Part of the 23rd HCI Intern. Conf., Proceedings, Part I. Springer, Cham, Switzerland (2021), 171–190.
    • 38. Zimmermann, V. and Renaud, K. Moving from a ‘human-as-problem’ to a ‘human-as-solution’ cybersecurity mindset. Intern. J. of Human-Computer Studies 131 (2019), 169–187.
    • 39. Zimmermann, V. and Renaud, K. The nudge puzzle: Matching nudge interventions to cybersecurity decisions. ACM Transactions on Computer-Human Interaction (TOCHI) 28, 1 (2021), 1–45.
    • 40. Zou, J. and Schiebinger, L. AI can be sexist and racist—it’s time to make it fair (Comment). Nature 559 (2018), 324–326; 10.1038/d41586-018-05707-8
