To commemorate the 20th anniversary of the 2003 Computing Research Association (CRA) Gordon-style Conferencea on Grand Challenges in Trustworthy Computing, the original attendees were invited to a virtual retrospective. This landmark conference, held November 16–19, 2003, at Airlie House in northern Virginia, brought together 50 technology and policy experts in security, privacy, and networking to identify transformative research challenges. The resulting report became a cornerstone for researchers and funding agencies at a pivotal moment in the evolution of cybersecurity (then called information and communication security).
The 2003 attendees identified four grand challenges:
Eradicate major cyber threats such as viruses, spam, and denial-of-service attacks within a decade.
Develop trustworthy large-scale systems for critical infrastructure, ensuring resilience against targeted attacks using scientific principles and reliable development methods.
Provide user-friendly security and privacy for dynamic and ubiquitous computing systems, empowering end users with comprehensible tools.
Create quantitative risk management tools for information systems, with rigor comparable to that of financial risk management techniques, within a decade.
In 2023, participants were invited to reconvene online for a six-week period to review the progress made on these challenges, evaluating successes, failures, and lessons learned. This article distills those discussions, offering insight for a new generation of cybersecurity professionals who face many of the same threats today in an ever-evolving landscape.
The 2003 Grand Challenges Report5 was published at a historical turning point when the importance of cybersecurity was emerging as a critical enabler of the digital revolution. Today, cybersecurity is a central topic in policy discussions on election integrity, IT infrastructure, privacy, and global information sharing. As many of the original participants remain active in the field, this retrospective not only provides a historical perspective but also serves as an inspiration to tackle new challenges and secure the future of computing infrastructure.
From the Perspective of 2003
Context. When CRA asked us to organize the second in what would become a series of conferences intended to challenge the research community to address important computing problems, the structure of the Internet was, in many ways, still vague and indistinct. Less than 10% of the world’s population was connected to the Internet in 2003; now, the global population of Internet users is estimated to be nearly 70% of humanity.b It would be four years before the iPhone would be introduced to the world, Facebook did not yet exist (it was established in 2004), and it was still early in Google’s evolution from one of many search engine companies to a dominant provider of Internet services (Gmail was not launched until 2004). Sun Microsystems and HP were two of the world’s main computer companies. The global market for online advertising was slightly more than $7 billion, and would grow 30-fold to nearly $210 billion over the next 20 years. Military and intelligence applications had embraced computerization in the previous decade. However, most critical infrastructure still relied on human operators and mechanical controls.
By 2003, Peter Neumann’s “Risks to the Public” forum had been a regular feature of ACM SIGSOFT’s Software Engineering Notes and, eventually, Communications of the ACM (CACM) for nearly 20 years. Inspired by ACM President Adele Goldberg’s 1984 letter to the ACM Membership,6 citing how “increasingly, human lives depend upon the reliable operation of systems,” the Risks forum was notable for citing few of what we would today call cybersecurity incidents. However, epidemic-style attacks on network-connected devices were on the rise and accelerating at an alarming rate. When the conferees met, the damage caused by cyberattacks was on a path to tripling every year.c
In January 2003, the SQL Slammer worm took only 10 minutes to propagate, disabling half of the DNS root servers in the world and forcing critical services such as banking, 911 calls, and air traffic control offline.d Within six months, two more destructive malware attacks (the SoBig virus and Blaster worm2) degraded network services worldwide. Preventing such attacks from crippling information and communications technology seemed beyond the reach of computer scientists and engineers. In addition, there was widespread apprehension about the growing gap in the numbers and training of professionals to face the threats posed by nation-states, organized crime, and a generation of anarchists and criminals who all had access to the same technology that was used to defend vulnerable systems.
The National Information and Communications Security Strategy was heavily influenced by the events of September 11, 2001,13 but offered few concrete recommendations on how to understand and counter known threats.
The Board of the Computing Research Association (CRA),e which represents the principal academic, governmental, and industrial computing research organizations, overwhelmingly favored a national meeting designed to produce a research agenda geared to these trends. A diverse group of experts was chosen based on vision statements crafted in response to a published request for participation.f
Identifying the challenges. The group assembled at Airlie House in Northern Virginia in November 2003 to draft a list of challenges to drive research and development for the next generation. We structured the meeting to consider “out-of-the-box” approaches to make infrastructure immune from attacks by various threat actors and thus more trustworthy for all users. Trial balloons, candidate challenges, multiple working sessions, and the secluded setting of the venue presented opportunities to argue priorities and trade-offs. The steering committee sought a small set of high-level goals instead of an arbitrary “top 10” list.
Each morning, conferees self-assembled into groups for reasons they did not need to announce. As a result, some groups represented mature lines of research and were packed with experts with long publication records. Other groups were formed to consider nascent research areas where important ideas were beginning to emerge. In all cases, however, heterogeneity reigned. The groups were free to adopt their own methods for imagining and prioritizing research goals. Straw polls and reading lists were common, as were formal position statements and role-playing exercises to test the human impact of success and failure. Almost everyone started with vision statements to illustrate why some “hard problems” were more important than others. Trial challenge balloons were launched and shot down; notes became formal statements that were reviewed, reworked, and sorted. Votes were taken, but they were not always determining factors. The themes, mostly related to the idea of trust, emerged early in the week:
We have to trust at some level, but we must always test results for plausibility under that assumption. This is a fundamental design requirement.
Humans rarely trust completely; trust is earned. Identity creates a context in which experience can accumulate.
Compartmentalization is key; an attacker who compromises a single application should not be able to take down the entire network as a result (a minimal sketch of this isolation idea follows this list).
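To make the compartmentalization theme concrete for today’s readers, the following minimal sketch (ours, not the conferees’) isolates an untrusted parser in a separate operating-system process; the worker binary name and its sandboxing are assumptions made purely for illustration.

# A minimal sketch (ours, not the 2003 report's) of compartmentalization:
# run an untrusted parser in a separate OS process so that a crash, hang, or
# compromise of that component is contained. The worker binary
# "./parser_worker" and its sandboxing (dedicated user, seccomp, container)
# are hypothetical assumptions for illustration.
import subprocess
from typing import Optional

def parse_untrusted(payload: bytes, timeout_s: float = 2.0) -> Optional[bytes]:
    """Hand untrusted input to an isolated worker and return its output, or None."""
    try:
        result = subprocess.run(
            ["./parser_worker"],   # hypothetical sandboxed helper
            input=payload,
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return None                # only the compartment is lost, not the service
    if result.returncode != 0:
        return None                # failure stays inside the compartment
    return result.stdout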
As themes circulated through the groups, key elements that sound thoroughly modern found their way into overarching questions:
Can you spread risk electronically?
Should identity be open or private?
Can you make a trustworthy system from malicious components?
The decision-making process, as well as the challenges themselves, evolved from these discussions. Consensus was not a requirement, but it was achieved nonetheless. The result was a document summarizing four main challenges.
Reporting the results. The results of the workshop were announced at a briefing at the National Press Club and published as a report.5 Although we cannot say exactly how much influence the meeting had, the four challenges appear to have affected the national information security and privacy research agenda.g The workshop influenced the report Cyber Security: A Crisis of Prioritization12 to President Bush from the President’s Information Technology Advisory Committee (PITAC), the Hard Problem List,7 and the National Academies 2007 report Toward a Safer and More Secure Cyberspace.10 Conferees from the 2003 meeting were involved in the production of all of these documents.
The Grand Challenges
Assumptions. In 2003, information technology was becoming increasingly ubiquitous in our daily lives. The computing infrastructure was growing more complex and interconnected on a worldwide scale. The historical intertwining of Moore’s Law and increasingly capable software continued to push the boundaries of computing power and complexity,4 making it more difficult for developers to assure users that systems were trustworthy. Furthermore, the Internet-driven digital transformation that had swept through various industries clarified that computing security and trustworthiness were becoming critical to global infrastructure.
Participants had to make assumptions about the landscape that supported such an overarching vision. First, the Internet and an array of public and private specialized networks were already playing critical roles in computing, so progress was imaginable only if the underlying networks and end-to-end infrastructure were sound; we assumed a reliable infrastructure to avoid building systems on shifting sands. Second, the scale of assurance problems demanded a new generation of effective methods and tools for designing and building reliable systems. Finally, it seemed unreasonable to expect a single “silver bullet” solution. Each system and application domain would require understanding the human and societal factors that contribute to trustworthy systems.
These were ambitious assumptions, but the trajectory of technology was headed in the right direction. Increasingly powerful computers were already becoming smaller, cheaper, more mobile, and more easily embedded in other systems. Networking and mobile computing were becoming ubiquitous, reaching a global community and inviting more of humanity to participate in the digital revolution. E-commerce, e-government, on-demand services, telecommuting, telemedicine, and entertainment were among the sectors that would ultimately be affected. With all of this development, it was assumed that large amounts of engineering data would be available to system designers, reducing the number of “trial and error” approaches to technology and policy.h
In the years following the conference, the cybersecurity community made significant progress in strengthening the underlying infrastructure, validating many of the group’s assumptions. For example, the widespread adoption of encryption protocols such as the Secure Sockets Layer (SSL) and, later, Transport Layer Security (TLS) improved the overall trustworthiness of the network infrastructure. Secure networking protocols such as IPv6 and the Domain Name System Security Extensions (DNSSEC) added to network robustness, although the pace of widespread adoption seemed glacial to many of us. Similarly, the development of security tools and virtualization technologies made it possible to isolate different components of a computing system and reduce attack surfaces, enabling a more reliable end-to-end infrastructure. Advances in foundational areas, including cryptology, led to the widespread adoption of strong encryption and authentication mechanisms, making it more difficult for attackers to compromise systems and steal sensitive data.
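As a small illustration of how that infrastructure is exercised in everyday code, the sketch below (ours, using only the Python standard library) opens a TLS connection with certificate and host-name verification; the host name is an arbitrary example.

# Minimal sketch: a TLS connection with certificate and host-name verification,
# using only the Python standard library. The host name is an arbitrary example.
import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()   # enables certificate + host-name checks

with socket.create_connection((hostname, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("negotiated protocol:", tls_sock.version())       # e.g., "TLSv1.3"
        print("certificate subject:", tls_sock.getpeercert()["subject"])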
The challenges. The grand challenges were of a different order than the results of previous similar efforts. In particular, they were not incremental. Problems for which there was a clear pathway from existing knowledge to a solution never made it to the “grand” list. In addition, our view was that a grand challenge should define its success criteria. We also believed that each challenge statement should include a time frame and a deliverable solution to a well-posed technical problem. The conferees agreed that, otherwise, it would be impossible to say when a general statement of intent, no matter how ambitious, was satisfied. Furthermore, grand challenges must be relevant to the direction of the field. Each should be accompanied by an explanation of why existing approaches were insufficient and required different methods. Finally, grand challenges should be “grand” in the literal sense: They should excite the imagination. They should require a level of innovation or creative invention that commands the attention of the most capable and fearless scholars. They should also be worthy of investment from the research community due to their potential for a broad impact.
While whittling down scores of candidate ideas, conferees embraced an overarching vision of the general direction of computing technology. From a traditional engineering standpoint, computers would become more reliable and support various policies and personal choices. New ways of approaching security would need to anticipate a future in which computers would be more intuitive and predictable. Such a future would require assuring end-user control over information flows and code execution, resulting in systems that are easier to control and less brittle, adapting more readily to unanticipated physical conditions and use cases. Most importantly, researchers would find ways to ensure that security is an integral property of the system so that systems are secure by design. There would need to be understandable tools and methods for expressing trust. Knowing that systems evolve and change over time, futureproofing system security (perhaps by reusing complex modules) would be of great practical importance. It would also mean addressing the impact of Moore’s Law on the capabilities of bad actors. The attendees estimated that addressing these problems would cost $400–600 million over 10 years. Failure to do so might well contribute to social disruption, political chaos, and significant lawlessness.
After several rounds of discussion and revision, various subgroups produced lists of candidate challenges. The workshop attendees then discussed and amended these candidates, reaching a group consensus on four items:
GC 1: Within the decade, eradicate widespread viral, spam, and denial-of-service attacks.
GC 2: Create scientific principles, tools, and development methods to build large-scale systems to operate critical infrastructure, support democratic institutions, and promote significant social goals, ensuring their trustworthiness even though they are attractive targets.
GC 3: For the coming dynamic and ubiquitous computing systems and applications, create an overall framework to provide end users with comprehensible security and privacy that they can manage.
GC 4: Within the next 10 years, create and deploy quantitative models, methods, and tools for managing risk in information systems that are comparable to quantitative financial risk-management techniques.
Hits and misses. We asked the 2023 online panel to rate the community’s performance in addressing the four challenges. The initial responses were discouraging. Many participants said that not a single challenge was met. Others pointed out—correctly—that we were only dimly aware of the scale and trajectory of the problems we were addressing. GC1, for example, was assumed to be a five-year, $600 million problem—a woeful underestimate.
Some failures can be traced to assumptions that did not anticipate the pace and scale of technological change. One central area where the conference fell short was in anticipating the impact of inexpensive mobile technology on the overall security landscape.i The explosion in the number of connected devices introduced new attack vectors and made it more challenging to secure systems. Consumer-grade IoT devices have become ubiquitous, and their users often lack the training to appreciate the effect of connectedness on security and privacy. In a parallel enterprise trajectory, gigabit networks, powerful distributed computing capabilities, and the rise of cloud computing also created new challenges for security and privacy. Challenges such as these were not adequately anticipated in 2003, and the cybersecurity community has had to catch up as new threats and risks developed.
Other assumptions underestimated the capabilities of threat actors and their ability to exploit the global distribution of technology to penetrate vulnerable systems. Ransomware and botnets are examples of attacks that were dimly (if at all) considered in 2003, when e-commerce models for the packaging, weaponizing, and sale of malware did not exist and sophisticated malware-based attacks were rare. Cryptocurrency, a core enabler of current online extortion and crime, was not imagined by workshop attendees. Supply-chain attacks were understood as a potential issue in 2003, but the magnitude and complexity of modern software development were beyond our knowledge at the time.
It is apparent from transcripts of the conference breakout sessions that eliminating epidemic-style attacks was treated as a problem with a complete, one-time solution rather than an ongoing effort requiring the continual elimination of new threats. We now know that the goal of eliminating threats must be addressed by more than just the research enterprise, and that doing so requires resources well beyond our estimate of $600 million.
GC4 not only went unsolved within its stated 10-year window but has also seen little progress in the past 20 years. Some of us opined that this is because the notion of “risk” is vague, subjective, and situational. Risk means something different to a retailer than it does to a military agency or a social media user at home. It is not evident that we can even define a common unit for measuring risk across all environments. Perhaps the attendees did not think expansively enough about the variety of potential uses of computing and the corresponding forms of failure. Or perhaps this continues to be a grand challenge and needs more focused attention because it is such a difficult problem.
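For readers unfamiliar with the kind of quantification GC4 envisioned, the classic annualized loss expectancy (ALE) calculation is one admittedly crude example. The sketch below uses invented figures; the hard part, as noted above, is agreeing on such inputs across very different environments.

# Hypothetical illustration of the quantitative reasoning GC4 called for,
# using the classic annualized loss expectancy (ALE) model. All figures are
# invented for the example.

def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE * ARO, where SLE = asset_value * exposure_factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# A retailer's customer database: $2M asset, 40% loss per breach, 0.3 breaches/year.
print(annualized_loss_expectancy(2_000_000, 0.40, 0.3))   # 240000.0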
Others noted a lack of understanding of the subject matter of certain application areas. Here, for instance, is one version of GC2: “By November 2008, design, build, and deploy an electronic system to safely and securely tabulate the votes in a national election with 100% accuracy.” Unfortunately, the problem statement does not fit elections in the U.S., where a national election typically comprises many thousands of independent contests, each governed by its own, often mutually incompatible, legislation and rules determined by states and localities. When combined with human errors and misunderstandings, such a fragmented electoral system makes 100% accuracy impossible. Even if a federal mandate were possible, the system envisioned by GC2 would ignore other aspects of conducting a complete election (guaranteeing ballot secrecy, for example) that have nothing to do with tabulation.1 Moreover, largely due to Robert Mueller’s report9 and other intelligence investigations conducted after the 2016 presidential election, we now know of entirely new risk vectors, such as disinformation enabled by social media, that undermine trust in systems that support government and social functions.
Some progress has been made against GC3, but the goal appears farther away than before. NIST has been at the forefront of this effort with guidance such as the Cybersecurity Framework and NIST SP 800-171, but these are not well suited for end users. The diversity of end-user systems, coupled with their connectivity, has increased the complexity involved to the point where few end users can understand—let alone manage—what is necessary. Perhaps the appropriate application of AI technologies in the coming years may provide a simplified means of understanding and control, although that will add yet more complexity that will need to be addressed.
Critiques of the original grand challenges have merit. However, as became apparent in the online forum, criticism should be paired with recognition of the progress achieved over the past 20 years. Among this progress is the idea that cybersecurity is an “enabler” for designers. Just as brakes enable cars to go faster with greater safety, the purpose of security is to enable computing technology to be applied in high-stakes applications with greater confidence. Similarly, while eliminating attacks may not be achievable, cybersecurity research has reduced overall susceptibility and allowed technical and business solutions to reduce the incidence of DDoS attacks. Advances in cognitive security and design theories have created interdisciplinary approaches that move beyond usable security toward human-centered security.
Even the basic notion of trust has been reexamined in light of knowledge developed in pursuit of the grand challenges. Although the 2003 conferees speculated about whether trustworthy systems could be built from malicious components, they did not anticipate the widespread adoption of resilient and zero-trust designs. These methods enable developers to build systems that detect, contain, and recover from compromised states, creating a more secure operating environment even when individual components are insecure.
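As a minimal sketch of the zero-trust idea, assuming a shared-secret token scheme with hypothetical names and policy, every request is authenticated and authorized on its own merits, regardless of where on the network it originates.

# Minimal zero-trust-style check (a sketch under assumed names and policy):
# every request must carry a verifiable identity tag and pass an explicit
# authorization check, no matter which network segment it arrived from.
import hashlib
import hmac

SHARED_SECRET = b"demo-secret"                          # assumed to be provisioned out of band
POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # hypothetical least-privilege policy

def sign(user: str, action: str) -> str:
    """Produce an HMAC tag binding a user to a requested action."""
    return hmac.new(SHARED_SECRET, f"{user}:{action}".encode(), hashlib.sha256).hexdigest()

def authorize(user: str, action: str, tag: str) -> bool:
    # 1) Authenticate the request itself; never trust the source address.
    if not hmac.compare_digest(tag, sign(user, action)):
        return False
    # 2) Apply least-privilege policy for this specific action.
    return action in POLICY.get(user, set())

# Each request is evaluated on its own merits, even from "inside" the network.
print(authorize("alice", "write", sign("alice", "write")))   # False: authentic but not permitted
print(authorize("alice", "read", sign("alice", "read")))     # True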
New Challenges
Today’s computing environment is vastly different from the one anticipated in 2003. To illustrate the nature and speed of change in cybersecurity, consider the developments in the few weeks leading up to the virtual retrospective meeting. By early January 2023, it had already become clear that large language models (LLMs) might be unpredictable, disruptive forces shaping information technology. Hundreds of millions of users were sharing prompts, and individuals were using these new engines to suggest threat models and probe IT systems for vulnerabilities, breaking fragile assumptions about scale and capability. The public launch of ChatGPT was soon followed by a new National Cybersecurity Strategy that promised to rebalance and realign existing approaches to take into account the changing threats and economics of the cybersecurity marketplace.11
The new strategy also promised to rebalance market forces. Cybersecurity measures had been added to the already long list of tasks heaped upon users who, in essence, assumed responsibility for applying security patches, tracking threats and vulnerabilities, and understanding how to detect and contain rogue software delivered to their computers, sometimes by manufacturers themselves. Despite decades of research on usable security, the unfairness of this approach had become apparent to many, including the 2003 conferees. The recent National Cybersecurity Strategy explicitly shifted the burden (and risk) from users to hardware and software vendors, drawing high praise from the 2003 conferees who had participated in the retrospective meetings.
In addition to the scale and unanticipated capabilities of attackers, the growth of the cybersecurity workforce skills gap has become a dominant concern and has stretched thin our educational resources in ways not imagined in 2003. In a similar vein, an increasingly divisive and contentious social and political scene illustrates the role that insiders, nation-states, political actors, and domestic terrorists could play in defining the threat landscape.
Almost all of these developments create problems that seem to require interdisciplinary thinking. Technology alone cannot support cybersecurity research and development. Sophisticated policy solutions, tools for law enforcement, and empirical methods not discussed by the 2003 conferees will undoubtedly play critical roles in defining cybersecurity challenges over the coming years. The field must expand to embrace economics, psychology, law, social equity, international affairs, cyber-physical systems, and the basic philosophy of social networks.
The 2003 meeting also identified research challenges that did not make the final list but still need attention. The relationship of safety to security and a well-founded theory of privacy both remain elusive. And the lack of a mature theory of operational technology security is alarming, given the rapid digital transformation of industrial control systems. In addition to reviving and revising the original GC problems, new grand challenges to counter side-channel attacks,8 software supply chain vulnerabilities,j and domestic digital abuse deserve attention.3 Our understanding of combined hardware and software security for emerging applications and potentially significant technologies such as blockchain and decentralized finance, autonomous IoT, and quantum computing is still in its infancy. Finally, the explosive growth of generative AI has already disrupted offensive and defensive cybersecurity technologies in ways we could not have imagined.
Summary and Recommendations
As the Grand Challenge committee recognized in 2003, the speed of change and reliance on information technology are still increasing today. Now, as then, we face the risk of significant disruption on an unprecedented scale, including failures in power, transportation, and communication systems; privacy breaches; data tampering; and novel types of theft and fraud. To the 2003 threats from criminals, anarchists, extremists, cyber terrorists, and indiscriminate attackers, we add escalating efforts by nation-states, terrorist networks, and insiders to hijack the tools of democratic governance. These attacks compromise security and, ultimately, trust. A computing infrastructure must be resilient against such attacks to be considered trustworthy. However, the path to achieving such resilience is not obvious.
The time is ripe for a new Grand Challenge Symposium on Cybersecurity. A new panel to define cybersecurity’s current grand challenges would carry the same possibilities and drawbacks that existed 20 years ago. Any new effort would likely benefit from a careful examination of how the assumptions made in 2003 limited the workshop’s results, including some “blue sky” what-if speculation about the development of the field in the years to come. Even if the base assumptions prove incorrect, our experience has shown that such a gathering has a lasting influence on the research community, providing structure to debates and proposals that would otherwise occur in fragments. Over the past two decades, more than half of the principals of the 2003 GC Symposium became research leaders in cybersecurity, and almost all went on to prominence. Their experience in debating research challenges undoubtedly informed a generation of colleagues, students, and constituents and, rather than stifling debate by prematurely declaring some problems important and others not, became the seed for more robust discussions.
Acknowledgments
The 25th Anniversary Security Symposium of Purdue University’s Center for Education and Research on Information Assurance and Security (CERIAS) was held on the campus of Purdue University in West Lafayette on March 28–29, 2023. The meeting coincided with the 20th anniversary of the 2003 Computing Research Association (CRA) Gordon-Style Conference that we co-chaired to define Grand Challenges in Trustworthy Computing. The 2003 participants met online over six weeks to review and comment on the successes and failures of the Grand Challenge recommendations. We summarized those discussions in a public panel session at the CERIAS Symposium. This article distills the lessons learned from the 2003 workshop. The authors are grateful for this support.
The authors thank the following 2003 GC participants for their contributions and support in 2023 for this report: Virgilio Almeida, Annie Antón, Terry Benzel, Luis Bettencourt, David Clark, Steve Crocker, Jeremy Epstein, David Evans, Simson Garfinkel, Dan Geer, Anup Ghosh, Virgil Gligor, Seymour Goodman, Doug Jacobson, Jay Lala, Carl Landwehr, Peter Lee, Ruby Lee, Clifford Neuman, Cristina Nita-Rotaru, John Richardson, Angela Sasse, R. Sekar, Daniel Simon, Sean Smith, Anil Somayaji, David Stork, Ravi Sundaram, Bhavani Thuraisingham, Peter Wayner, and Jeannette Wing.
The 2003 workshop and the report were partially supported by NSF grant CCR-0335324 and the Computing Research Association.