The Internet has become a global, complex, layered, and increasingly indispensable ecosystem. For purposes of this column, "Internet" includes the underlying digital transport infrastructure: subsea and land-based fiber and cable, orbiting satellites, the networks of routers, the Domain Name System, datacenters and their networks, edge devices of all kinds (laptops, desktops, pads, smartphones, Internet-enabled devices, and sensors), the World Wide Web, content distribution systems and, for all I know, the kitchen sink. Added to that are all the institutions associated with implementing, operating, standardizing, and regulating various aspects of this multifaceted construct. Not to be forgotten are all the organizations and individuals utilizing the Internet in their daily activities. Such a comprehensive definition explains why governments of the world have interest in and concerns about the Internet.
The grand collaboration that allows the Internet to work has delivered countless benefits to all the interested parties. It is a generative infrastructure that invites innovation in all dimensions: applications, enabled and enabling devices, new communication technology, business models, social networking, financial services, political engagement, scientific discovery, and much more. It has put amplifying power into the hands of countries, corporations, and consumers unlike anything in the past. By implication, this system is equally available for beneficial and harmful purposes as is often the case with infrastructure open to public use. How can we preserve its beneficial uses and diminish unwanted harmful abuse?
We can start by evolving the technical infrastructure to incorporate stronger defensive measures in its implementation. Among these, I would include Domain Name System Security (DNSSEC), two-factor authentication, transport layer security, Border Gateway Protocol security, operating system security, increased redundancy for resilience, end-to-end cryptography, stronger identity protection, improved flow and congestion control, and implementation of IPv6.
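To make one item on that list concrete, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most two-factor authentication codes. This is an illustration built only on the Python standard library, not a reference to any implementation discussed in the column; the key shown in the usage note is the RFC's published test key, and real deployments would use a vetted library and per-user secrets.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant).

    A sketch for illustration: `secret` is the shared key, `for_time` is
    a Unix timestamp, and the result is the short code a user would type
    as a second authentication factor.
    """
    counter = for_time // step                    # which 30-second window we are in
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # "dynamic truncation" of the HMAC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test key `b"12345678901234567890"` and time 59, the 8-digit code is `94287082`, matching the specification's published test vector.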
By definition here, the Internet is not some ethereal "cyberspace." It has physical presence in legal jurisdictions of many and varying kinds, ranging in scope from local to global. It is subject to the imposition of regional, national, and local rules, not all of which are necessarily compatible. This results in several forms of fragmentation occurring in various layers of the Internet's architecture. Within their jurisdictions, governments can impose and enforce demands on the operators of the Internet's component parts. Competitors sometimes create "walled gardens" to keep users in and others out. Users adopt virtual private network tools to shift their apparent locations or configure their systems to use alternative DNS servers to avoid restrictions. Governments "seize" domain names by imposing demands on domain name registries within their legal jurisdiction.
From a technical perspective, some of the constraints on openness and freedom cannot be escaped within a national jurisdiction, but their effects might be limited to a national geography. For example, a country might enforce use of particular DNS resolvers but would not be able to enforce this outside its jurisdiction. For those wishing to preserve the openness and freedom of the Internet, containing such constraints would be a desirable outcome. There is, however, some tension between preserving openness and freedom while also holding harmful actors to account. One must identify bad actors reliably before they can be prosecuted. Cross-jurisdictional cooperation may be needed to achieve that end.
There are layers and players in the Internet ecosystem, and their positions and roles should be weighed when considering the safety and security of the Internet environment. In the name of "data sovereignty," some countries require that personal information be held within that country's geopolitical boundaries. A different technical response might be to require that such data be strongly encrypted and that access to it be subject to strong authentication procedures, no matter where it is physically located.

We have not dealt with cybersecurity and cyberattacks in this column, and there is precious little space left to do it. Much of the harm experienced in the Internet environment is a consequence of vulnerable software exploited by harmful actors for nefarious purposes. These range from vandalistic hackers to governments with serious intent to do harm for political, financial, or military purposes. Because software permeates and animates the entire Internet ecosystem, inventing better software development tools should be a high priority for all interested parties. Preservation of the Internet's value is as complex a task as the ecosystem it now represents, and it is a desirable focus in the decades ahead.
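The data-sovereignty alternative mentioned above, protecting data cryptographically rather than geographically, can be partially sketched with standard primitives. The following is an illustration only (function names are mine, not from the column): a key derived from a passphrase with PBKDF2 and a constant-time authentication check. A real deployment would also encrypt the record itself, for example with AES-GCM via a vetted library, which the Python standard library does not provide.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key from a passphrase (PBKDF2-HMAC-SHA256).

    Illustrative sketch: the iteration count slows brute-force guessing.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

def make_auth_tag(key: bytes, record: bytes) -> bytes:
    """Produce an HMAC tag binding a data record to the holder of `key`."""
    return hmac.new(key, record, hashlib.sha256).digest()

def verify_access(key: bytes, record: bytes, tag: bytes) -> bool:
    """Constant-time check that the presenter holds the correct key."""
    return hmac.compare_digest(make_auth_tag(key, record), tag)

# Hypothetical usage: the record can live in any jurisdiction; only the
# key holder passes the authentication check.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
tag = make_auth_tag(key, b"personal data record")
```

The point of the sketch is that the access control travels with the data: wherever the record is stored, `verify_access` fails for anyone who cannot derive the key.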
The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.
Although the basket of security technologies would, in a perfect world, resolve most problems, I think we need to look at simpler approaches. Assume that all internet traffic could be traced to its author. That kind of non-repudiation would mean that anonymizing techniques like onion routing would fail.
Imagine a ransomware attack in which you knew who the attacker was.
The question in my mind is whether this transparency can be achieved on the internet. We might look at blockchain ledgers for some ideas.
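One way to read the commenter's blockchain suggestion is as a tamper-evident, append-only log of actions. Below is a minimal hash-chain sketch, written as an illustration of the idea only: a real ledger would add digital signatures (for attribution) and distribution across parties, neither of which this toy provides.

```python
import hashlib
import json

def add_block(chain: list, payload: dict) -> list:
    """Append a block whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with any earlier block is detected."""
    prev_hash = "0" * 64
    for block in chain:
        body = json.dumps({"prev": prev_hash, "payload": block["payload"]},
                          sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True
```

Because each block's hash includes the previous block's hash, altering any recorded action invalidates every later block, which is the transparency property the comment is reaching for.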
Underlying all of this is the crude maturity of the software development practice itself, which today amounts to what we refer to as "make to order." Maturity level 3, which characterizes most of the manufactured goods surrounding our daily lives outside of software, is where we need to be; "assemble to order" is the phrase associated with that level. In summary, as long as all of our software, including the underlying security software, is written with maturity level 1 practices, the hackers will continue to win.