https://bit.ly/3nnBWrl April 26, 2021
Of all the perils he faced during World War II, Winston Churchill said German submarine wolfpacks were his greatest concern, because their attacks on merchant ship convoys threatened to choke Britain's economic lifelines. Today, it seems there is another emerging undersea threat, one that has the potential to disrupt the global economy by severing fiber-optic lines of communication that run along the world's various seabeds.
There are nearly 400 undersea cables that stretch for almost three-quarters of a million miles, the densest concentrations of them being in the North Atlantic and the North Sea, the Mediterranean, and in Southeast Asia and around Japan. They carry virtually all (97%) of international communications, and their exact locations are reasonably well known. They are also increasingly vulnerable to being tapped or even cut by advanced submarine craft of a range of types, from manned mini-subs to remotely operated undersea drones, and even fully autonomous "U-bots."
The Russian Navy seems to have taken to heart the late historian John Keegan's statement, in The Price of Admiralty (https://amzn.to/2Sg2nn5), that by the 1980s the submarine had become more important than the aircraft carrier as an instrument of sea power. Russia's undersea capabilities are exceptional and include a range of vessel types that can approach even cables located at great depths, thanks to the operating capabilities of their U-bots. Admiral Nikolai Yevmenov, the overall Russian naval commander, is himself a deeply experienced submariner. His forces reflect his expertise.
"Any country with a capability for tapping into or severing the undersea cables that drive globalization is a very serious concern."
Given that the vast majority of the world's international communications still run via wires, any country with a capability for tapping into or severing the undersea cables that drive globalization is a very serious concern. In Russia's case, inflicting "mass disruption" of this sort would be a less grave matter for Moscow, given Russia's much lower dependence upon undersea cables than that of other countries. Russia can, therefore, be viewed as having a strategic advantage in this aspect of cyberwar, which in this form is about conducting physical attacks upon, or exploitations of, critical information infrastructure.
This concern has led NATO member states, for example, to establish initiatives for the protection of this essential—and almost entirely privately owned—element of the "global commons." Indeed, by September of this year there are intended to be two naval commands up and running—one in the U.S., the other in Europe—that will have the principal responsibility for the defense of undersea cables.
Beyond such military measures, it seems to me this is a situation that calls for diplomacy as well. The world community does not hesitate to craft agreements controlling the production, and banning the use, of weapons of mass destruction. So, too, there should be no hesitation about putting limits on those things able to cause "mass disruption." Because the U-bot threat is directed at information systems, it should be seen as falling under the rubric of cyberwar. As with the other weapons that operate in this realm, in and out of cyberspace, there is very little probability of reaching an arms control agreement to prevent their further development. This still leaves open the possibility of crafting behavior-based agreements, binding on all, to refrain from interfering with global communications that flow through the world's undersea cables.
During his second term, President Barack Obama met with President Xi Jinping to discuss the possibility of reaching an agreement to refrain from attacking critical information infrastructures. Both leaders saw it as in the interest of the U.S. and China to pursue such an agreement, but momentum was lost in recent years. It is time now to rekindle that kind of creative thinking about how to secure the global commons. Along with many other nations, I believe the Russians would also join in support of such an initiative. My belief derives from my early personal experience (back in the '90s) with Russian cyber experts who introduced the possibility of cyber arms control in the week-long session we had together.
The alternative? Continue to grow a global economy increasingly dependent on ever-more-vulnerable lines of communication. Simply put, an unacceptable risk.
https://bit.ly/2RrMgCp March 15, 2021
On December 7, 1968, Douglas Engelbart presented a demonstration (https://bit.ly/3xM7ZpG) that showed how newly emerging computing technologies could help people work together. More generally, Engelbart devoted his professional life to articulating his view of the role of computing in addressing societal problems. He emphasized the potential for technology to augment (https://bit.ly/3th3ik3) human intelligence. Since that time, many others have developed the concept of intelligence augmentation (IA).
For example, the field of healthcare sees IA as a more ethical framing. One report (https://bit.ly/3b14X77) defines IA as "an alternative conceptualization that focuses on AI's assistive role, emphasizing a design approach and implementation that enhances human intelligence rather than replaces it." This report argues "health care AI should be understood as a tool to augment professional clinical judgment."
"We think it's important to always have the human in the loop to understand if things are working and, if not, to understand why and make creative plans for change."
In education, applications of artificial intelligence are now rapidly expanding. Innovators are not only developing intelligent tutoring systems (https://bit.ly/3aZMpUA) that support learning how to solve tough algebra problems; AI applications also include automatically grading essays or homework (https://bit.ly/3egrRt6), as well as early warning systems (https://eric.ed.gov/?id=ED594871) that alert administrators to potential drop-outs. We also see AI products for online science labs that give teachers and students feedback. Other products listen to classroom discussions and highlight features of classroom talk that a teacher might seek to improve, or observe the quality of teaching in videos of pre-school children. A recent expert report (https://bit.ly/3vHN6tW) about AI and education uncovered visions for AI that would support teachers to orchestrate classroom activities, extend the range of student learning outcomes that can be measured, support learners with disabilities, and more.
In colloquial use, the term AI calls forth images of quasi-human agents that act independently, often replacing the work of humans, who become less important. AI is usually faster and based on more data, but is it smarter? In addition, there are difficult problems of privacy and security—society has an obligation to protect children's data. And there are even more difficult issues of bias, fairness, transparency, and accountability. Here's our worry: a focus on AI provides the illusion that we could obtain the good (superhuman alternative intelligences) if only we find ways to tackle the bad (ethics and equity). We believe this is a mirage. People will always be intrinsic to learning, no matter how fast, smart, and data-savvy technological agents become. People are why agents exist. We think it is important to always have the human in the loop to understand if things are working and, if not, to understand why and make creative plans for change.
Today, students and teachers are overwhelmed by the challenges of teaching and learning in a pandemic. The problems we face in education are whole-child problems. Why are parents clamoring to send children back to school? It's not just so they can get some work done! Learning is fundamentally social and cultural; enabling the next generation to construct the knowledge, skills, and practices they will need to thrive is work that requires people working together in a learning community. Schools also provide needed social and emotional support. We are simultaneously at a critical juncture where the need to address ethics and equity is profound. Beyond trust and safety considerations, evaluating AI or any technology requires prioritizing its impact and understanding how it changes interactions and what those changes imply for students and teachers.
"We recommend a focus on IA in education that would put educators' professional judgment and learners' voice at the center of innovative designs and features."
Thus, we recommend a focus on IA in education that would put educators' professional judgment and learners' voices at the center of innovative designs and features. An IA system might save an educator administrative time (for example, in grading papers) and support their attention to their students' struggles and needs. An IA system might help educators notice when a student is participating less and suggest strategies for engagement, perhaps even based on what worked to engage the student in a related classroom situation. In this Zoom era, we also have seen promising speech recognition technologies that can detect inequities in which students have a voice in classroom discussions over large samples of online verbal discourse. In some forward-looking school districts, teachers have instructional coaches. In those situations, the coach and teacher could utilize an IA tool to examine patterns of speaking in their teaching and make plans to address inequities. Further, the IA tool might allow the coach and teacher to specify smart alerts to the teacher—for example, for expected patterns in future classroom discussions that would signal a good time to try a new and different instructional move. Later, the IA tool might make a "highlights reel" that the coach and teacher could review to decide whether to stay with that new instructional move, or to try another.
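To make this concrete, one simple building block of such an IA tool could be a participation summary computed from a discussion transcript. The sketch below is purely illustrative; the function name, data shape, and threshold are our assumptions, not features of any real product. It counts each speaker's words and flags students who fall well below an equal share, leaving the interpretation, and any response, to the teacher and coach.

```python
from collections import Counter

def participation_report(turns, min_share=0.5):
    """Summarize who is speaking in a class discussion.

    turns: list of (speaker, utterance) pairs, e.g., produced by a
    speech-recognition transcript. A student whose word count falls
    below min_share of an equal share is flagged for the teacher's
    attention. All names and thresholds here are hypothetical.
    """
    words = Counter()
    for speaker, utterance in turns:
        words[speaker] += len(utterance.split())
    total = sum(words.values())
    fair = total / len(words)  # equal share per speaker
    return {s: {"words": w,
                "share": w / total,
                "flag": w < min_share * fair}
            for s, w in words.items()}

# Illustrative transcript with made-up student names.
transcript = [
    ("Ana",  "I think the answer is twelve because three times four"),
    ("Ben",  "Right and also"),
    ("Ana",  "We could check it by dividing twelve by three"),
    ("Cole", "Yes"),
]
report = participation_report(transcript)
```

Crucially, in the IA framing the output is not a verdict: the tool surfaces a pattern ("Cole has said very little"), and the educator decides whether that reflects disengagement, reflection, or something else entirely.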
The important difference between AI and IA may be when an educator's professional judgment and student voice are in the loop. The AI perspective typically offers opportunities for human judgment before technologies are adopted or when they are evaluated; the IA perspective places human judgment at the forefront throughout teaching and learning and should change the way technologies are designed. We worry the AI perspective may encourage innovators to see ethics and equity as a barrier they have to jump over once, and then their product is able to make decisions for students autonomously. Alas, when things go wrong, educators may respond with backlash that takes out both the bad and the good. We see the IA perspective as acknowledging ethics and equity issues in teaching and learning as ongoing and challenging.
By beginning with the presumption that human judgment will always need to be in the loop, we hope IA for education will focus attention on how human and computational intelligence could come together for the benefit of learners. With IA, restraint is built into the design and technology is not given power to fully make decisions without a diverse pool of humans participating. We hope IA for education will ground ethics and equity not in a high-stakes disclosure/consent/adoption decision, but rather in cycles of continuous improvement where the new powers of computational intelligence are balanced by the wisdom of educators and students.
©2021 ACM 0001-0782/21/7
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from email@example.com or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.