An Incentive System For Reducing Malware Attacks

Providing hackers an environment other than the Net to test and exhibit their malware talents has its rewards.

Viruses, worms, and other malware assaults on the integrity of the Internet seem like everyday occurrences. The consequences range from inconvenience, to organizational shutdown, to a compromised and unreliable Internet environment. Whitman ranked deliberate software attacks from viruses, worms, macros, and denial-of-service attacks as the greatest threats to information security, comparable in magnitude to the combined threats from technical failure and human error [12]. Furthermore, the last two factors can themselves be exploited by malicious code. The CSI/FBI threat analysis review described by Power had similar findings [8]. Gordon et al. profile some of the profound economic liabilities posed by these attacks and describe how cyber-risk insurance has developed as a buffer against the resulting financial losses [5]. In the context of security threats, Davenport [3] even challenges an almost axiomatic characteristic of the Internet, arguing that Internet anonymity makes impossible the accountability a functional social order requires.

Here, we briefly summarize some of the research on what motivates hackers, and then describe a test environment and incentive structure that we believe can help channel these motivations in a more constructive manner.

The theoretical and empirical research on hacker motivations has been limited. On the basis of interviews conducted with hackers, Jordan and Taylor concluded the most common motivating factors included a compulsive attraction to hacking, intellectual curiosity, an enhanced sense of control and power, and satisfaction from identifying with a group (other hackers) [6]. Rogers discusses types of hackers, classifying them into groups ranging from novice hackers, to disgruntled ex-employees (the group that commits most computer crime), to professional criminals, to cyber-terrorists [9, 10].

Van Beveren [11] has identified external enabling factors that encourage the development of hacker behavior, including a perceived "lack of negative consequences for those who have been caught hacking," the mutual reinforcement that occurs in online hacker communities, and the impact of "community recognition from other hackers." He emphasizes the seductive nature of the psychological experience of flow that occurs in computer and Web environments and the importance of "the thrill of illicit searches in online environments." He also suggests the way individuals perceive a virtual environment (as opposed to the physical world) is an important factor affecting criminal hacking and must be part of psychological explanations of hacking.

Rogers and Van Beveren identify an evolution in hacker mentality over time, suggesting the recent generation of hackers is driven more by greed, power, and revenge than by benign motivations like curiosity. An individual in the better-studied "cyber-punk" category is typically Caucasian, 12–30 years old, from a dysfunctional middle-income family, a loner, not career-oriented, escapist, and obsessed with computers. Hacking gives these people an enhanced sense of control over their lives, prestige, an outlet for hostility, and possibly recognition from the media. The loner self-characterization tends to conflict with the need for peer recognition and the desire to affiliate with other hackers [10].

Of course, researchers are well aware there is no reliable, general profile. Van Beveren suggests making the transition from neophyte to more advanced hacker more difficult; system administrators could do this by fixing the more obvious software holes in their systems, making the beginning stages of hacking harder. This would help break the initial positive feedback loop that encourages novice hackers to progress to more advanced forms of hacking. For further discussion, see Parker [7] and Adamski [1]. Rogers’s Web site on Psychology and Computer Crime at CERIAS at Purdue (www.cerias.purdue.edu) has useful references to the behavioral sciences, cyber crime, and IT. Denning’s comprehensive study of information warfare [4] discusses types of computer crime and motivations for criminal behavior.


A Proposal

Our proposed approach to reducing the hostile deployment of malware on the Internet is to develop a small-scale, isolated version of the Internet (a "Microcosm" of the Internet) to serve as a platform for malware developers to challenge the real Internet vicariously, through a surrogate environment. The environment would be sponsored by a consortium of major universities and software companies. The incentives for malware developers to test their wares in this environment would be economic, psychological, and social. Given that the cost of a single serious virus or worm attack on the Internet may be on the order of one billion dollars in direct and indirect costs to the U.S. economy, the economic benefit from avoiding even one attack would merit a substantial monetary reward.

The sponsors of the environment would provide this reward to any challengers whose malware succeeded in seriously disrupting the Microcosm. Successful challengers would also receive broad publicity by demonstrating to the world and their peers that they had developed the malware. The prominence of the consortium would guarantee this publicity.

We recognize this proposal is provocative. It will be challenged as outrageous by some, dismissed outright by others, or considered to be promoting illegal behavior. Nonetheless, we believe it is worth discussing: to understand its implications, possible flaws, and unrecognized side effects, and to clarify its likelihood of success empirically through appropriate surveys.

Our approach is motivated by behavioral models from economics and psychology, the models most relevant to controlling malware development and deployment. While the underlying enabling causes are the (inevitable) security flaws in complex systems and the difficulty of enforcing sanctions, the activating causes are the human motivations that lead people to design and deploy malware. From this viewpoint, controlling this behavior becomes a question of what kind of incentive structure can satisfy the motivations but redirect the behavior in a benign fashion. Malware is a potent product, but the current environment provides relatively unenforceable penalties and shaky defenses against it, and no mechanism for pricing its value (or the value of deterring its deployment). If malware could be prevented, there would be no need to price it; since it cannot, the rational response is to price it in a way that deters its illicit deployment. Our proposal addresses the situation by monetizing the malware.

In the current Internet environment, the incentive structure for most malware developers is nil or primitive. They must remain cloaked in anonymity, they receive no financial gain from their products, and they garner neither public nor professional recognition for their clever but destructive work. Furthermore, the sanctions imposed if a malware developer is caught are severe, including criminal and civil prosecution and personal ignominy. The rationale for the proposed system, which combines a test-bed intranet with an incentive mechanism, rests on recognizing and accepting that behavioral forces are the driving factors behind the human phenomenon of malware development. Under this proposal, work now unrewarded in any objective sense would instead be rewarded by economic, professional, and ego incentives.

Such a system would have both preventive and therapeutic benefits. Preventively, the system would siphon off attacks from the real Internet by transforming the incentives for attackers, making it behaviorally more rational for them to publicly target the Microcosm than to covertly target the Internet. Therapeutically, the system would sharpen existing software and hardware systems through a continual process of software refinement in response to repeated extreme testing by motivated and inventive malware challengers. It could also attract new individuals, never engaged in illicit activity, to apply their creativity productively within the context of the Microcosm.



Motivations and Expected Behavioral Impacts

Is the system we propose likely to attract the interest of hackers? To analyze this issue, we present a simple (largely a priori) classification of the motivations of malware developers as psychological, therapeutic, economic, or terroristic. After briefly discussing each category, we project the anticipated effect of the proposed system on these different motivations. We emphasize that our focus is not on generic hackers who wish to gain entry to particular computer systems for whatever purpose, but on individuals who develop malware deployed to disrupt the overall Internet.

Psychological motivations include ego satisfaction because of the fame (or infamy, albeit anonymous) associated with having created a virus that the whole world, and particularly the developer’s peers, knows about. Malicious individuals may be additionally motivated by sadistic pleasure or thrill-seeking. Some may not grasp the gravity of their actions, like oblivious juvenile delinquents, or may view the malware as a prank, though even such immature individuals know how to calculate the cost of their acts.

Cohen [2] quotes G. Jelatis of Secure Computing to the effect that adolescents involved in hacking often stop after the age at which they become criminally liable.

Therapeutic objectives motivate malware developers who are, or at least claim to be, driven by a kind of idealism. They may want (or claim to want) existing systems to be improved so as to reduce security risks in the future, but they believe that, because of institutional and social inertia, there is no way to bring about the needed changes except through public demonstrations of the weaknesses of existing systems.

Economic motivations are possible where the detrimental side effects of malware can be exploited by developers for gain. While hackers of individual systems steal intellectual property or commercially valuable information, malware deployers may expect to profit indirectly from an attack.

Terroristic motivations, including engaging in information warfare, drive individuals or groups whose intention is to cause widespread real damage, possibly aimed at the U.S. industrial, commercial, social, or governmental environment. Those involved may be antisocial solo terrorists, politically motivated terrorist cells engaged in asymmetric socioeconomic warfare, or agents of hostile nations who probe the Internet for weaknesses on an ongoing basis or hone their skills for possible future use.

With these kinds of motivations in mind, what are the likely effects of the proposed incentive system on malware developer behavior? The behavior of psychologically motivated malware developers seems most amenable to redirection by the proposed incentive system, because it addresses issues of social and peer/professional recognition and provides the prospect of economic gain for any developer clever enough to seriously disrupt the Microcosm. Even psychologically immature individuals with little grasp of the consequences of their actions can understand this kind of motivation: recognition and money can appeal even to a malicious individual who derives sadistic satisfaction from causing disruption. For example, consider how such an individual would feel if, after the proposed system was operational and widely known, he or she successfully and anonymously released a virus that wreaked havoc on the real Internet. Seeing how damaging the virus was, would it not confound that person to realize how much profit and recognition could have been gained by instead submitting the virus for testing on the Microcosm? We believe there is a non-negligible chance that such people would perform this utilitarian calculus beforehand and alter their behavior.
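
The utilitarian calculus invoked above can be made concrete with a simple expected-value comparison. The sketch below is a minimal illustration under assumed numbers; the probabilities, reward size, and sanction costs are hypothetical values chosen for exposition, not figures from this article.

    # A hypothetical sketch of the utilitarian calculus described above.
    # All parameter values are illustrative assumptions, not article data.

    def covert_payoff(p_evade, thrill_value, p_caught, sanction_cost):
        # Covert release on the real Internet: anonymous notoriety and thrills
        # if the developer evades detection, severe sanctions otherwise.
        return p_evade * thrill_value - p_caught * sanction_cost

    def microcosm_payoff(p_disrupt, reward, recognition_value):
        # Submission to the Microcosm: a monetary reward plus open public and
        # professional recognition if the malware seriously disrupts the test bed.
        return p_disrupt * (reward + recognition_value)

    # Illustrative numbers: even a modest chance at a large sanctioned reward
    # can dominate the risky covert option for a rational actor.
    internet = covert_payoff(p_evade=0.9, thrill_value=10_000,
                             p_caught=0.1, sanction_cost=500_000)
    microcosm = microcosm_payoff(p_disrupt=0.3, reward=250_000,
                                 recognition_value=50_000)
    print(internet, microcosm)  # -41000.0 vs. 90000.0

Under these assumed values the Microcosm dominates, which is precisely the calculation the proposal hopes a would-be attacker performs beforehand.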

Therapeutically motivated malware developers may be attracted by the system’s incentives because the creation of such a forum would at least move the world in the direction these individuals claim to want: one where systems are more secure and more thoroughly tested, and design flaws are fixed. The impact on economically motivated malware developers is less clear. It would depend on how much they expected to profit from releasing the malware on the Internet. They, too, would make a utilitarian calculation, comparing the direct or indirect profits they might expect from deploying their malware against what they would earn under the system we propose, factoring in the lower risks they would incur. There would at least be a chance of enticing them to work within the system, more so than at present.

Terrorist motivations would be the least affected by our approach. In particular, highly skilled individuals who exhibit criminal tendencies are an especially dangerous group [11] and unlikely to be attracted to the Microcosm. Other factors that may limit its appeal to inveterate hackers include its corporate sponsorship, the lack of actual victims, and the absence of the illegal thrills of hacking.

The motivations of at least some individuals who initiate serious virus or worm attacks will lend themselves to benign redirection. If only one in 10 cases were successfully redirected and the proposed system caused no net increase in the misbehavior, it would pay for itself. Existing Internet systems would become freer from flaws over time because of the effort focused on challenging them, and so would gradually become less vulnerable to malware terrorism.
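
The pay-for-itself claim reduces to simple arithmetic, sketched below. The attack frequency and the system’s operating budget are assumptions for illustration; only the rough one-billion-dollar per-attack cost comes from the text.

    # Break-even sketch for the one-in-10 redirection claim above.
    # attacks_per_year and annual_system_cost are assumed for illustration.

    attacks_per_year = 5               # serious Internet-wide attacks (assumed)
    cost_per_attack = 1_000_000_000    # ~$1B direct and indirect costs (from the text)
    redirect_rate = 0.10               # one in 10 attackers choose the Microcosm
    annual_system_cost = 50_000_000    # Microcosm operating budget (assumed)

    expected_savings = attacks_per_year * cost_per_attack * redirect_rate
    print(expected_savings >= annual_system_cost)  # True: $500M saved vs. $50M spent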



System Requirements and Operation

The purpose of the proposed Microcosm system is to provide an environment close enough to the Internet in structure, software platforms, and end-host patterns of usage that the behavior of a virus randomly introduced into the system approximates the behavior of that virus on the real Internet. It must both reflect a broad spectrum of platforms and be designed to emulate the behavior and vulnerabilities of a broad spectrum of users. It is not immediately obvious what the detailed requirements of such a system would be. A single small network might provide functionality adequate to accomplish most of the emulation objectives, or the system might have to be fairly large, requiring many servers, hosts, routers, and so on in order to capture a representative cross-section of operating systems, versions, firewalls, browsers, traffic patterns, software usage patterns, and software packages. Assuming the system were collocated, network delays might have to be artificially introduced into router connections for verisimilitude. Initially, constructing a close simulacrum of Internet hardware and software complexity would seem the best and most obvious way to ensure that results on the Microcosm reflected real Internet behavior, but even a small environment might prove effective.
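
Whether a small network can mimic Internet-scale propagation is ultimately an empirical question, and one way to probe it is by simulation. The toy model below is our own illustrative sketch, not a design from this proposal: it seeds a virus at a random host in a scale-free topology (a common rough model of Internet structure) and tracks the infected population over time; comparing such curves across network sizes would suggest how small a representative Microcosm could be.

    # Toy susceptible-infected spread on a scale-free graph, as a rough probe
    # of how network size affects virus propagation. Illustrative only; the
    # article leaves the Microcosm's actual design open.
    import random
    import networkx as nx

    def simulate_spread(n_hosts=500, attach=3, p_transmit=0.3, steps=20, seed=42):
        rng = random.Random(seed)
        g = nx.barabasi_albert_graph(n_hosts, attach, seed=seed)  # scale-free topology
        infected = {rng.randrange(n_hosts)}   # virus introduced at a random host
        history = [len(infected)]
        for _ in range(steps):
            newly = set()
            for host in infected:
                for neighbor in g.neighbors(host):   # each link may carry the virus
                    if neighbor not in infected and rng.random() < p_transmit:
                        newly.add(neighbor)
            infected |= newly
            history.append(len(infected))
        return history  # infected-host count at each time step

    # Similar growth curves at different scales would argue that a small
    # Microcosm can approximate Internet-scale dynamics.
    print(simulate_spread(n_hosts=200))
    print(simulate_spread(n_hosts=2000))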

How would the system be run, and what possible operating problems might the system experience? We have indicated it could be sponsored by a consortium of universities and software companies. Malware could be submitted with some type of identification, needed to verify the originator in case the submission turned out to be effective. To minimize worthless submissions, a fee could be charged for submissions, or a brief technical description of the malware could be required so the system’s administrators could determine whether the malware was a plausible challenger or whether the submitter was knowledgeable.
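
As a concrete illustration of this intake process, the sketch below shows one way a submission record and triage check might look. The field names and the acceptance rule are hypothetical; the article specifies only that identification, a possible fee, and a technical description could be required.

    # Hypothetical submission-intake sketch for the Microcosm. Field names
    # and the triage rule are illustrative assumptions, not a specification.
    from dataclasses import dataclass, field

    @dataclass
    class MalwareSubmission:
        submitter_id: str    # verifies the originator if the malware succeeds
        description: str     # brief technical summary for plausibility review
        fee_paid: bool       # discourages worthless submissions
        target_platforms: list[str] = field(default_factory=list)

    def triage(s: MalwareSubmission) -> bool:
        # Admit only submissions with a paid fee and a description substantive
        # enough for administrators to judge whether the malware is a plausible
        # challenger or the submitter knowledgeable.
        return s.fee_paid and len(s.description.split()) >= 50 and bool(s.target_platforms)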


Risks and Concerns

Could the very existence of such an incentive system exacerbate the problem of malware development rather than alleviate it? Certainly, the proposed incentives would interest more individuals in malware development, thereby increasing the pool of expertise in this area and consequently the potential threat. However, the rational-behavior argument we have made contends that the Microcosm, rather than the Internet, would be the focus of this interest because of the rewards a participant could receive. Also, as we have observed, some researchers emphasize the preventive value of creating systems environments that deny neophyte hackers positive feedback in their initial efforts, so establishing a supportive environment for the behavior is arguably problematic. On the other hand, the extended knowledge base resulting from this marketplace could itself be brought to bear on the problem of malware.

Another issue is the cost of establishing and operating the system. Liability is a key concern. For example, honeynets (systems that provide the important service of monitoring visiting hackers to gather statistics on their patterns of behavior; see www.project.honeynet.org) must be careful not to inadvertently propagate malware. A basic Microcosm that accepted viruses only for testing in a locked-in environment should limit direct liability risks. Ethical acceptability is also critical. We believe the proposal is ethical since it intends to supplant malicious behavior with sanctioned testing under controlled conditions.


Summary

Viruses are not only a technical phenomenon. They do not spring up by spontaneous generation or emerge as side effects as systems age or deteriorate. They are invented by people for some reason. The originators are often difficult to identify or prosecute because of anonymity or because of their youth. It is also difficult to protect against the security flaws that viruses exploit in our increasingly complex and interconnected systems. Inevitably, an appropriate response to this human phenomenon must include understanding, altering, and redirecting the motivations that cause this activity, at least those motivations that can be addressed by feasible incentive structures.

Human behavior can be altered by training, by inculcating moral codes that make individuals appreciate the implications of their actions, by applying stronger penalties for misbehavior, or by instituting incentives and rewards, whether economic, psychological, or social, for good behavior. We propose the latter approach as having an important role to play in reducing the problem of Internet viruses and other malware. An Internet Microcosm would provide a venue within which these motivational forces could operate.


    1. Adamski, A. Crimes related to the computer network. Threats and opportunities: A criminological perspective; www.infowar.com/new, 1999.

    2. Cohen, R. Experts call hacker motivation key to prevention. Infosec Outlook 1, 2 (May 2000). Carnegie Mellon University, CERT Coordination Center, Pittsburgh, PA.

    3. Davenport, D. Anonymity on the Internet: Why the price may be too high. Commun. ACM 45, 4 (Apr. 2002), 33–35.

    4. Denning, D. Information Warfare and Security. ACM Press, Reading, MA, 1998.

    5. Gordon, L.A., Loeb, M.P., and Sohail, T. A framework for using insurance for cyber-risk management. Commun. ACM 46, 3 (Mar. 2003), 81–85.

    6. Jordan, T. and Taylor, P. A sociology of hackers. Sociol. Rev. 46, 4 (1998), 757–780.

    7. Parker, D. Fighting Computer Crime: A New Framework for Protecting Information. John Wiley & Sons, New York, 1998.

    8. Power, R. CSI/FBI computer crime and security survey. Comput. Secur. Issues Trends 8, 1 (Jan. 2002), 1–22.

    9. Rogers, M. Psychology of hackers: Steps toward a new taxonomy; www.infowar.com, 1999.

    10. Rogers, M. A social learning theory and moral disengagement analysis of criminal behavior: An exploratory study. Ph.D. Thesis, Dept. of Psychology, University of Manitoba, Winnipeg, 2001.

    11. Van Beveren, J. A conceptual model of hacker development and motivation. J. E-Business 1, 2 (Dec. 2000), 1–9.

    12. Whitman, M. Enemy at the gate: Threats to information security. Commun. ACM 46, 8 (Aug. 2003), 91–95.

    This work was supported, in part, by the "NJ I-TOWER" Grant from the New Jersey Commission on Higher Education, award #01-801020-02.
