Jens Grossklags
iSchool/UC Berkeley
jensg@ischool.berkeley.edu
Nicolas Christin
CMU/CyLab Japan
nicolasc@cmu.edu
John Chuang
iSchool/UC Berkeley
chuang@ischool.berkeley.edu
Collective awareness of the need for information security has risen considerably in the past few years. Yet, user behavior toward security in networked information systems remains obscured by complexity, and may seem hard to rationalize [1].
This observation is perhaps not surprising, considering that users' security decisions often appear to contradict their own stated attitudes and desires. Indeed, when asked in surveys, computer users say they are interested in preventing attacks and mitigating the damages from computer and information security breaches [1]. On the other hand, studies (e.g., [3,9]) document the low levels of privacy and security precautions taken by the vast majority of users.
Academic research has isolated the misalignment of user incentives as a root cause of both the observed dichotomy between behavior and stated attitudes and the low overall level of system security. Specifically, users often fail to internalize the impact of their own decisions on 1) the motivation of other users to secure their systems; and 2) the level of security that can be achieved in an interconnected network [2,6].
This paper proposes to explore the relationship between economic and psychological-behavioral incentives for improved or declining system security. We aim to enhance the understanding of the often puzzling and frustrating security and privacy decision-making of individuals and groups, in order to advance institutional and policy responses.
As a first step, we focus here on a weakest-link security scenario. For instance, consider a single unpatched system being infected by a worm, which, as a result, allows an attacker to assail (e.g., deny service to) the rest of the network without facing additional defenses. The incentives other users have to protect their own systems against security breaches are weakened, since irrespective of their individual decisions, they still suffer the consequences induced by the existence of the infected host.
The weakest-link scenario has been formalized as an economic game [7,8] and applied to system security and reliability [10]. We generalize the model by allowing users to not only protect their resources (e.g., by installing a firewall), but also to self-insure against the consequences of security breaches (e.g., by backing up valuable data) [6].
Applying game-theoretical analysis, such as computation of Nash equilibria, requires several assumptions regarding user behavior. All users are supposed to have complete information about potential actions and associated payoffs, to infallibly select profit-maximizing actions, and to perfectly take into consideration other users' optimizing decisions [4]. In the context of security games, these assumptions can be challenged: individuals frequently deviate from perfect rationality and do not always select purely selfish actions.
In an effort to better understand the origin of these departures from optimality, we have started to systematically study the actual behavior of individuals in the weakest-link security game. We contrast the behavior predicted by game-theoretic analysis with the behavior we observe in a pilot study conducted as a controlled laboratory experiment. Participants have to select protection and recovery strategies in an environment with non-deterministic attacks and a lack of information about the parameters of the game.
Formal description. The game is played among $N$ players, denoted by $i = 1, \ldots, N$, who are all potential victims of a security threat. The attacker is exogenous to the game. The game consists of multiple rounds. In each round, security attacks occur probabilistically, according to a fixed, exogenous probability $p$ ($0 \le p \le 1$), which determines the baseline rate of security attacks. For instance, $p = 0.5$ means that the attacker attempts to compromise the network half of the time, and is idle the rest of the time. Not all attempted compromises are successful, however; the success or failure of an attempted attack depends on the defensive measures put in place by the players, as we discuss below. Successful security compromises result in all players incurring losses. Indeed, in a weakest-link game, compromising one player (the so-called weakest link) means compromising the whole network. However, the extent of each player's losses, $L_i$, depends on the choices made by that player.
More precisely, each player receives an endowment $M$ in each round. The endowment can be utilized for two security actions: self-protection ($0 \le e_i \le 1$) and self-insurance ($0 \le s_i \le 1$), with associated linear (positive) effort costs $b$ and $c$, respectively. Self-protection probabilistically blocks attacks. Self-insurance deterministically lowers the penalty incurred during an attack that is not blocked. Thus, each player can face three states in each round: 1) no attack occurs; 2) an attack takes place but is blocked thanks to self-protection (by the player and all other players); and 3) an attack happens and is not blocked. Self-protection influences the likelihood of states 2 and 3. Self-insurance lowers the harm suffered in state 3. Formally, we express the expected payoff $\pi_i$ to player $i$ as:

$$\pi_i = M - p L_i (1 - s_i)\bigl(1 - \min(e_1, \ldots, e_N)\bigr) - b e_i - c s_i. \qquad (1)$$
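To make the round structure concrete, the following sketch simulates one round of the game described by Eq. 1. This is a minimal illustration, not the software used in our experiment; the endowment, loss, and cost values in the example are arbitrary assumptions.

```python
import random

def play_round(e, s, M, p, L, b, c, rng=random):
    """Realized payoffs of all players for one round of the weakest-link game.

    e, s: protection / self-insurance levels in [0, 1], one per player.
    An attack arrives with probability p and pierces the network with
    probability 1 - min(e): the weakest link determines everyone's fate.
    Self-insurance s[i] scales down player i's own loss L[i].
    """
    attacked = rng.random() < p
    breached = attacked and rng.random() >= min(e)
    return [M - (L[i] * (1 - s[i]) if breached else 0.0)
              - b * e[i] - c * s[i]
            for i in range(len(e))]

# Example round: player 0 protects moderately, player 1 fully self-insures.
random.seed(1)
print(play_round(e=[0.4, 0.0], s=[0.0, 1.0], M=10, p=1/3, L=[6, 6], b=2, c=2))
```

Averaging the realized payoff over many such rounds recovers the expected payoff of Eq. 1.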
In practice, many security mechanisms combine features of both protection and insurance as we have defined them. For example, a spam filter might not automatically block all dubious messages but redirect some of them into a special folder. Reviewing these messages separately from the main mailbox will often result in lower cognitive cost to the individual. Hence, the spam filter lowers both the probability of spam and the magnitude of the associated cost if spam passes the filter. Even though protection and insurance are often intertwined in practice, for the purposes of this research study we chose to clearly separate both actions in order to be able to dissociate their effects.
Nash equilibrium. Considering the payoff function in Eq. 1, if we assume costs symmetric and homogeneous across users, that is, if for all $i$, $b_i = b$, $c_i = c$, and $L_i = L$, the weakest-link security game has two Nash equilibria that are not Pareto-ranked: either all individuals protect with a certain effort but neglect insurance (protection equilibrium), or everybody fully insures and decides not to protect (insurance equilibrium). Both Nash equilibria yield identical payoffs. We sketch the proof in Appendix A, where we also show that these results extend to the asymmetric case where individual players face different $b_i$, $c_i$, and $L_i$.
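For instance, under a symmetric parameterization with equal protection and insurance costs (the numeric values below are illustrative assumptions, not the experiment's actual monetary parameters), a quick check of Eq. 1 confirms that full protection and full self-insurance yield the same payoff:

```python
M, p, L = 10, 1/3, 6
b = c = 2

# Protection equilibrium: everyone protects fully (e = 1), nobody insures.
payoff_protection = M - p * L * (1 - 0.0) * (1 - 1.0) - b * 1.0 - c * 0.0

# Insurance equilibrium: nobody protects (e = 0), everyone fully insures.
payoff_insurance = M - p * L * (1 - 1.0) * (1 - 0.0) - b * 0.0 - c * 1.0

print(payoff_protection, payoff_insurance)  # both equal M - b = M - c = 8
```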
Will the game converge to a Nash equilibrium outcome? Prior experimentation centered on non-continuous and non-probabilistic versions of the weakest-link game, with or without limited information about important parameters of the game [8]. The data show that experiments usually converge, and that individuals are able to tacitly coordinate on a Nash equilibrium outcome. Disagreements between players usually disappear quickly, with all players focusing on one Nash strategy.
Does the self-insurance equilibrium dominate other outcomes? Because the protection equilibrium is sensitive to defection by even a single player, we expect this equilibrium to be observed less frequently, in particular as the group size increases [6]. From a behavioral perspective, however, we would expect individuals to at first attempt to protect their resources.
Is experimentation a prominent part of players' strategies? We expect that the limited information environment of the game stimulates players to experiment. Similar to the findings of [5], we suggest that players in security games will systematically probe the action space. In contrast to [8], participants in our experiments are unaware of the type of security situation they are facing, i.e., they do not know that it is a weakest-link game, creating a further incentive for experimentation.
Setup. We recruit participants from a student subject pool at UC Berkeley, and have them participate in the experiment in a special computer laboratory, isolated from each other by separation walls. After reading and signing consent forms, they receive instructions for the experiment that set the context, explain the two main user actions (self-protection and self-insurance) and introduce the user interface.
The user interface provides individuals with two slider-type input devices that allow them to modify their security settings. Feedback is given both numerically and in graphical panels to help users recognize patterns and trends in the data.
The experiment proceeds continuously, without pauses between payoff rounds. The length of a round is 5 seconds, and the whole experiment lasts 150 rounds. The attack probability is chosen so that attacks occur about once every 3 rounds ($p \approx 1/3$). Protection and insurance costs are symmetric ($b = c$).
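For concreteness, a session under these settings can be mocked up as follows. This is a minimal simulation sketch, not the experiment software; the endowment, loss, and cost values are assumptions.

```python
import random

rng = random.Random(42)
ROUNDS, p = 150, 1/3                  # 150 payoff rounds; attacks ~ every 3 rounds
M, L, b, c = 10.0, 6.0, 2.0, 2.0      # assumed endowment, loss, symmetric costs

e = [0.8, 0.6]                        # fixed protection levels for two players
s = [0.2, 0.5]                        # fixed self-insurance levels
totals = [0.0, 0.0]
for _ in range(ROUNDS):
    # One draw for the attack, one for whether it pierces the weakest link.
    breached = rng.random() < p and rng.random() >= min(e)
    for i in range(2):
        loss = L * (1 - s[i]) if breached else 0.0
        totals[i] += M - loss - b * e[i] - c * s[i]
print(totals)                         # cumulative session payoffs
```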
To capture the low-information feature of security decisions, participants do not receive any specific information about the structure of the model or its parametrization. We do not inform them about the number of players in their group, other players' actions, or payoffs. Participants, however, do receive feedback on the attack state of the past round.
Results. We describe the outcomes of two 2-player games and one 3-player game. Fig. 1 displays the data for the two 2-player games. All four players experiment with parameter settings throughout the session.
In the first game (Fig. 1(a)), the two players follow different approaches in their search for rewarding strategies. Player A's strategy selection resembles a heartbeat pattern, with values kept for less than 5 periods; protection and insurance levels are often modified in unison. Player B, on the other hand, usually keeps one parameter fixed while modifying the other, and keeps settings constant for longer periods on average. This first 2-player game does not converge to a Nash equilibrium.
The players in the second 2-player game (Fig. 1(b)) follow a different experimentation and convergence pattern. After an initial phase of exploration with relatively sudden, and sometimes extreme, changes in both parameters, the two players settle on a more moderate pattern of change. Both players converge to settings with high protection efforts. Surprisingly, even though few attacks beyond round 50 are successful, both players keep up a relatively high insurance effort.
We also provide data for a 3-player game (Fig. 2). Most remarkable is the strategy play of Player B, who quickly settles on a low-protection, high-insurance strategy with little experimentation. At round 65 we observe a short-lived complete reversal of this strategy for experimentation, during which the subject suffers a security compromise, which might explain the quick return to the prior strategy. Player C experiments thoroughly with parameter settings that pit protection and insurance against each other. For most of the game beyond round 50, Player C plays close to the individually rational strategy of insuring and not protecting. Player B selects a lower insurance level but approximately follows the same strategy from round 30 on. Surprisingly, Player A never adapts, even though at least one player selects low protection settings from round 30 until the end of the game.
The initial results we report here suggest that the weakest-link security game has several properties that distinguish it from the classical weakest-link game (with non-probabilistic payoffs and without self-insurance). First, individuals experiment frequently and often thoroughly. Second, convergence to a Nash equilibrium is not achieved within a few periods. In fact, in the data gathered so far, we do not observe convergence to any of the predicted equilibria at all. Given that each game lasted 150 rounds (i.e., 12.5 mins), this result is surprising.
We find initial evidence that the individual approach to experimentation has a distinct impact on whether a player finds an individually rational strategy and whether the game converges to a Nash equilibrium. Our results further indicate that some players hesitate to try strategies that require them to decouple the protection and self-insurance parameters from each other.
We contribute to a better understanding of the psychology of security decision-making by providing economic models that capture important aspects of organizational structure and add a previously overlooked aspect of decision complexity, namely the difference between protection and self-insurance [6]. Our experiments aim to uncover how well individuals can follow economic incentives, and where complexity impedes the realization of good security outcomes.
Appendix A. A Nash equilibrium is a ``best response'' equilibrium, where each player picks the pair $(e_i, s_i)$ which maximizes his/her own payoff $\pi_i$, given the set of values $\{(e_j, s_j)\}_{j \neq i}$ chosen by all other players. Nash equilibria are expected to be observed under the assumption that all players are perfectly rational, and know all strategies available to all players, as well as the associated payoffs.
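Numerically, a best response under Eq. 1 can be approximated by a grid search over the strategy space. The sketch below is our illustrative helper, not part of the experiment software, and the parameter values are assumptions:

```python
def best_response(e_others, p, L, b, c, M=10.0, grid=101):
    """Grid-search the (e_i, s_i) pair maximizing player i's expected payoff
    from Eq. 1, holding the other players' protection levels fixed."""
    steps = [k / (grid - 1) for k in range(grid)]
    def expected_payoff(e_i, s_i):
        e_min = min([e_i] + list(e_others))  # weakest link sets protection
        return M - p * L * (1 - s_i) * (1 - e_min) - b * e_i - c * s_i
    return max(((e_i, s_i) for e_i in steps for s_i in steps),
               key=lambda pair: expected_payoff(*pair))

# If the other player protects fully, protecting pays; if not, insuring does.
print(best_response([1.0], p=1/3, L=9, b=1, c=2))  # -> (1.0, 0.0): protect
print(best_response([0.0], p=1/3, L=9, b=1, c=2))  # -> (0.0, 1.0): insure
```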
Let us assume that all players have identical parameters (i.e., for all $i$, $b_i = b$, $c_i = c$, and $L_i = L$). If we denote by $e_0$ the minimum of the protection levels initially chosen by all players, a study of the variations of $\pi_i$ as a function of $e_i$ and $s_i$ yields that three types of equilibria exist [6].
First, a protection equilibrium, in which every player chooses $(e_i, s_i) = (e_0, 0)$, occurs when $b \le pL$ and either 1) $c \ge pL$, or 2) $c < pL$ and $e_0 \ge 1 - c/(pL)$. That is, everybody picks the same minimal security level, and no one has any incentive to lower it further down. This equilibrium can only exist for low protection costs ($b \le pL$), and may be inefficient, as it could be in the best interest of all parties to converge to $e_i = 1$, to have a higher chance of deflecting incoming attacks [6]. This protection equilibrium depends on the cooperation of all players, and is therefore very unstable: it requires only a remote possibility that any one of the players will not select the protection strategy for the remaining players to defect. In the second type of equilibrium, all players self-insure completely ($s_i = 1$ and $e_i = 0$), when $c \le pL$ and either 1) $b > pL$ or 2) $b \le pL$ and $e_0 < 1 - c/(pL)$. Essentially, if the system is not initially secured well enough (by having all parties above a fixed protection level), players prefer to self-insure. The effectiveness of this security measure does not depend on the cooperation of other players. A third, trivial equilibrium is a passivity equilibrium, where all players choose $(e_i, s_i) = (0, 0)$, when $b \ge pL$ and $c \ge pL$.
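The case analysis above can be condensed into a small classifier. This is a sketch of our reading of these conditions; ties at the boundaries are resolved arbitrarily and the parameter values in the examples are assumptions.

```python
def equilibrium_type(b, c, p, L, e0):
    """Classify the symmetric weakest-link equilibrium reached from a
    minimum initial protection level e0, per the case analysis above."""
    if b <= p * L and (c >= p * L or e0 >= 1 - c / (p * L)):
        return "protection: (e_i, s_i) = (e0, 0)"
    if c <= p * L:
        return "self-insurance: (e_i, s_i) = (0, 1)"
    return "passivity: (e_i, s_i) = (0, 0)"

print(equilibrium_type(b=1, c=2, p=1/3, L=9, e0=0.9))  # protection
print(equilibrium_type(b=1, c=2, p=1/3, L=9, e0=0.1))  # self-insurance
print(equilibrium_type(b=4, c=4, p=1/3, L=9, e0=0.5))  # passivity
```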
We can extend the presentation to an asymmetric case, where different players have different valuations $b_i$, $c_i$, and $L_i$. For simplicity, consider first a two-player game. By definition, Nash equilibria are characterized by the reaction functions $e_1 = r_1(e_2)$ and $e_2 = r_2(e_1)$ reaching a fixed point, that is, $e_2 = r_2(r_1(e_2))$. Indeed, the effect of self-insurance on the payoffs received is independent of the other player's actions, and is therefore not a factor here. In Fig. 3, we see that a fixed point is attained when $e_1 = e_2 = 0$ (self-insurance-only equilibria, as discussed before), and when $e_1 = e_2$, with both greater than $\max(1 - c_1/(pL_1),\, 1 - c_2/(pL_2))$.
Generalizing to $N$ players, we obtain the following distinction: if, for all $i$, $b_i \le pL_i$, and either 1) $c_i \ge pL_i$, or 2) $c_i < pL_i$ and the minimum initial protection level $e_0$ is greater than $1 - c_i/(pL_i)$, then we have a Nash equilibrium where everyone picks $(e_i, s_i) = (e_0, 0)$. Otherwise, all players select $e_i = 0$. The value of self-insurance they select depends on their respective valuations: players for whom insurance is too expensive ($c_i \ge pL_i$) do not insure, with $s_i = 0$, while others choose full self-insurance, that is, $s_i = 1$. This result extends observations made by Varian [10] in the absence of self-insurance strategies.
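A sketch of this $N$-player rule follows (our illustration of the distinction above; the parameter values are assumptions, and boundary ties are resolved arbitrarily):

```python
def asymmetric_equilibrium(b, c, L, p, e0):
    """Equilibrium strategy profile (e_i, s_i) for N heterogeneous players,
    following the N-player rule above. b, c, L are per-player lists."""
    n = len(b)
    protection_holds = all(
        b[i] <= p * L[i] and (c[i] >= p * L[i] or e0 > 1 - c[i] / (p * L[i]))
        for i in range(n))
    if protection_holds:
        return [(e0, 0.0)] * n
    # Otherwise nobody protects; each player insures iff it is worth its cost.
    return [(0.0, 1.0 if c[i] < p * L[i] else 0.0) for i in range(n)]

# Player 2's expensive protection breaks the protection equilibrium for all:
# players 0 and 1 fully insure, while player 2 (expensive insurance) is passive.
print(asymmetric_equilibrium(b=[1, 1, 4], c=[2, 2, 9], L=[9, 9, 9],
                             p=1/3, e0=0.5))
```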