This article describes several cybersecurity innovations. First, it proposes to integrate behavioral economics’ findings on biases in judgment and decision-making into cyber strategies, policies, and guidance using a new framework called Behavioral Economics of Cybersecurity, or BEC. Second, it aligns BEC with NIST’s Risk Management Framework by treating persistent human biases as a special type of vulnerability in the Risk Assessment phase and by controlling these biases in the Risk Response phase. Third, it defines the BEC structure using a Zachman-like two-dimensional framework that crosses cyberactors (Users, Defenders and Attackers) with three cybersecurity perspectives (Confidentiality, Integrity and Availability). The paper also provides examples of how common cybersecurity exploits map into the BEC framework.
1 Introduction
Cyber practitioners regard the human as the weakest link in cybersecurity, yet cyber strategies and policies do not reflect this reality. An implied assumption of cyber guidance is that, if decent people are properly trained, they will do the right thing. So why don’t they? People habitually take decision-making shortcuts, with consequences ranging from undesirable to catastrophic. Exploring these faulty judgments and implementing relevant countermeasures could enhance cybersecurity.
This paper proposes a framework for bridging the gap between theory and practice of the human role in cybersecurity through the identification and mitigation of persistent cognitive biases. The motivation for this work is the impact behavioral economics has made on standard economics by amending the rational-actor model with quantifiable irrationalities. Rational actors are assumed to know and do whatever is in their best interest. While economists have always been aware of various manifestations of irrational decisions, they previously disregarded them on the premise that individual irrationalities are random occurrences that cancel each other out without detriment to economic modeling. However, in the 1970s cognitive psychologists revolutionized the field by demonstrating that many biases are not random but rather typical and persistent, enduring even when individuals are aware of them. Their work gave rise to behavioral economics, which bridged the gap between economic theories and psychological realities.
This paper proposes a similar approach: bringing behavioral economics models into cybersecurity to identify common cyberactor biases and develop mitigating models. Figure 1 illustrates the parallels between the two realms. Just as behavioral factors (B) modify economic models (E) in the study of the marketplace, producing behavioral economics (BE) effects, so in cyberspace behavioral economics models (BE) of cognitive biases can enhance cybersecurity (C), yielding the proposed framework of Behavioral Economics of Cybersecurity, or BEC.
Figure 1. Analogy between BE and BEC studies
Section 2 describes how BEC fits into the existing cybersecurity work by extending the Risk Management Framework with persistent human biases. Section 3 defines principal cyberactors whose biases are addressed by BEC and breaks down actors’ activities into cybersecurity perspectives, thus producing the overall structure of the BEC framework. Section 4 further refines categories of BEC actors and provides examples of how common cybersecurity exploits map into the BEC. Section 5 provides some examples of applying BE findings to BEC.
2 BEC Extension of the RMF
At the intersection of economics and cybersecurity lies risk, which creates a symbiotic relationship between the two realms. In the marketplace, risk is increasingly associated with cyberspace activities; in cybersecurity, risk management is the principal economic model. In economics, there are as many definitions of risk as there are economists, but risk calculation always comes down to the product of the probability of an event and its impact. For example, Bodie and Taqqu define investment risk as follows: “…investment risk is uncertainty that matters. There are two prongs to this definition—the uncertainty and what matters about it—and both are significant” (Bodie & Taqqu, 2011, p.50). In this definition, “the uncertainty” represents the probabilistic component, and “what matters” is the impact. In both economics and cybersecurity, human biases increase the probability component of risk. This section describes proposed modifications of the current Risk Management Framework (RMF) that incorporate human biases as a new class of vulnerabilities.
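For illustration, a minimal sketch of this probability-times-impact calculation follows; the figures are hypothetical and not drawn from the cited sources:

```python
# Risk as "uncertainty that matters": probability times impact.
# The figures below are hypothetical, for illustration only.
probability = 0.05  # estimated chance of a breach this year ("the uncertainty")
impact = 2_000_000  # estimated loss in dollars if it occurs ("what matters")

risk = probability * impact
print(f"Expected annual loss: ${risk:,.0f}")  # Expected annual loss: $100,000
```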
In the United States, the RMF developed by the National Institute of Standards and Technology (NIST) is an authoritative source on risk in cyberspace. NIST does not define risk but describes Risk Management as “a comprehensive process that requires organizations to: (i) frame risk (i.e., establish the context for risk-based decisions); (ii) assess risk; (iii) respond to risk once determined; and (iv) monitor risk on an ongoing basis” (NIST SP 800-39, p.6). The calculation of risk takes place in the Risk Assessment phase of the RMF, which aims
“to identify, estimate, and prioritize risk to organizational operations … resulting from the operation and use of information systems. The purpose of risk assessments is to inform decision makers and support risk responses by identifying: (i) relevant threats to organizations or threats directed through organizations against other organizations; (ii) vulnerabilities both internal and external to organizations; (iii) impact (i.e., harm) to organizations that may occur given the potential for threats exploiting vulnerabilities; and (iv) likelihood that harm will occur. The end result is a determination of risk (i.e., typically a function of the degree of harm and likelihood of harm occurring)” (NIST SP 800-30 Rev. 1, 2012, p.1).
The notion of cyber risk as a function of threats, vulnerabilities and impact is conceptualized by Nichols, Ryan & Ryan in the formula “Level of Risk = (Threat x Vulnerability) x Impact / Countermeasures” (2000, p.70), where Threats are components of risk posed by hostile organizations or individuals, and Vulnerabilities are characteristics of friendly systems that constitute flaws exploitable by those threats. These relationships can be expressed as follows:
R = I * Pr / C, i.e., Risk (R) is Impact (I) times the Probability of incident occurrence (Pr), reduced by Countermeasures or Controls (C); and
Pr = T * V, i.e., Pr is the product of Threats (T) and the Vulnerabilities (V) they could exploit.
Thus, the cyber risk calculation is essentially the same as the risk calculation used in economics.
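A minimal sketch of these relationships is shown below; the 0–10 rating scale and the input values are assumptions for illustration, not part of the cited formula:

```python
# Illustrative sketch of Level of Risk = (Threat x Vulnerability) x Impact / Countermeasures
# (Nichols, Ryan & Ryan, 2000). All scores are hypothetical, unitless 0-10 ratings.

def probability_of_incident(threat: float, vulnerability: float) -> float:
    """Pr = T * V: likelihood rises with both the threat and the vulnerability."""
    return threat * vulnerability

def level_of_risk(impact: float, threat: float, vulnerability: float,
                  countermeasures: float) -> float:
    """R = I * Pr / C: countermeasures divide down the raw exposure."""
    return impact * probability_of_incident(threat, vulnerability) / countermeasures

# Example: a high-impact system facing a moderate threat with a known
# human-factor vulnerability and a modest set of controls.
print(level_of_risk(impact=8, threat=5, vulnerability=6, countermeasures=4))  # 60.0
```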
NIST SP 800-30 Rev. 1 enumerates vulnerabilities related to organizational, business, and technical issues (2012, Table F-1, p.F-1); however, human vulnerabilities are missing from the current NIST description. If people are as predictably irrational as behavioral economists have repeatedly demonstrated, then cognitive biases represent a persistent source of vulnerabilities and should be incorporated into the RMF.
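To make this concrete, the hypothetical sketch below shows how cognitive biases could be recorded as a vulnerability class alongside the existing categories; the bias names, actor labels, and record structure are illustrative assumptions, not drawn from NIST SP 800-30:

```python
# Hypothetical extension of a vulnerability inventory with a "cognitive-bias"
# class alongside organizational, business, and technical categories.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    category: str        # e.g., "technical", "organizational", "cognitive-bias"
    description: str
    affected_actor: str  # "User", "Defender", or "Attacker" (the BEC cyberactors)

inventory = [
    Vulnerability("technical", "Unpatched remote-code-execution flaw", "Defender"),
    # Persistent biases become first-class, assessable vulnerabilities:
    Vulnerability("cognitive-bias", "Overconfidence in spotting phishing", "User"),
    Vulnerability("cognitive-bias", "Status quo bias against new controls", "Defender"),
]

# A risk assessment can then filter and prioritize biases like any other class:
bias_vulnerabilities = [v for v in inventory if v.category == "cognitive-bias"]
```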
Significantly, the human side of cybersecurity is recognized in the Information Assurance Technical Framework (IATF), which defines the three components of Defense In Depth (DID) as people, operations and technology (2000, p.ES-1). By contrast, the current approach to risk management focuses only on operations and technology. BEC seeks to establish the human component in the cyber risk framework, as shown in Figure 2.
Figure 2. BEC alignment of DID and RMF