Many ACM members concerned about the recent disclosures of massive worldwide surveillance of civilians wonder how to respond. My recommendation is to use public keys1 for all electronic communication and storage. Public key encryption, along with corresponding private key decryption, enables personal privacy and security because information cannot be snooped or altered between sender and receiver. Public keys involve no single point of failure because private keys are distributed among potentially billions of users. Moreover, technology is available to make public key encryption and decryption almost invisible to those users.
In 2010, Google increased the security of Gmail by enabling default encryption of communication between browsers and its servers. Unfortunately, Gmail's use of a single key set for all users can be a point of failure. If keys in the key set are stolen or otherwise obtained, the communication of millions of Gmail users risks compromise through dynamic "man-in-the-middle" attacks mounted in the communications infrastructure. Also, Google servers can be a point of failure for organizational and technical reasons; for example, intruders are reported to have successfully compromised Gmail servers and obtained the Gmail contact information and email contents of economic targets and political opponents. The lesson is that everyone should be using public keys to preserve privacy and prevent alteration of their email communications.
Apple uses individual user encryption keys in its iMessage service but does not offer public keys in its iCloud service. Consequently, users who back up their messages in iCloud risk losing their privacy and security. Private information should be stored in the cloud only after it is encrypted with a public key. Information to be shared can be encrypted through shared keys established for that purpose.
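The encrypt-before-upload principle can be illustrated with a toy RSA sketch: data is encrypted under the recipient's public key, so only the holder of the matching private key can read it. The numbers below are tiny for readability; real systems use 2048-bit or larger keys with padding (such as OAEP) from a vetted library, never a hand-rolled scheme.

```python
# Toy RSA sketch: encrypt with the public key, decrypt with the private key.
# Parameters are illustration-only; do not use this for real data.

p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

public_key = (n, e)
private_key = (n, d)

def encrypt(m, key):
    mod, exp = key
    return pow(m, exp, mod)    # c = m^e mod n

def decrypt(c, key):
    mod, exp = key
    return pow(c, exp, mod)    # m = c^d mod n

message = 42                   # must be smaller than n
ciphertext = encrypt(message, public_key)
assert decrypt(ciphertext, private_key) == message
```

Because only the private key can invert the encryption, ciphertext stored with a cloud provider stays opaque even if the provider's servers are compromised, which is the letter's point about avoiding a single point of failure.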
A privacy card (stored in, say, a user's pocket or purse) can maintain the security of the user's keys and encrypted biometric information, helping prevent a card from being used by someone other than its owner. Devices can use a combination of near-field and Bluetooth radio communication with a privacy card to develop temporary shared secrets for secure communication without revealing the user's keys or biometric information and to securely share public keys with the privacy cards of others. Extracting private information from a lost or stolen card is extraordinarily difficult, as the card can erase its contents if tampered with. Moreover, a card's keys can be revoked and a new card authenticated through commercial and/or community services.
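The temporary shared secrets described above can be established with a Diffie-Hellman style exchange: the phone and the privacy card each contribute a random value and derive the same session key, while neither side's long-term keys ever cross the radio link. This is a minimal sketch with toy-sized parameters; a real deployment would use standardized groups (for example, RFC 3526) or elliptic curves, and would authenticate both ends to block man-in-the-middle attacks.

```python
import hashlib
import secrets

# Illustrative Diffie-Hellman agreement between a phone and a privacy card.
p = 0xFFFFFFFB   # a 32-bit prime (toy parameter; real groups are 2048+ bits)
g = 5            # public base

# Each side picks a random private exponent and transmits only g^x mod p.
card_private = secrets.randbelow(p - 2) + 1
phone_private = secrets.randbelow(p - 2) + 1

card_public = pow(g, card_private, p)
phone_public = pow(g, phone_private, p)

# Both sides derive the same shared value from the other's public number.
card_shared = pow(phone_public, card_private, p)
phone_shared = pow(card_public, phone_private, p)
assert card_shared == phone_shared

# Hash the shared value into a symmetric session key for the radio link.
session_key = hashlib.sha256(card_shared.to_bytes(4, "big")).digest()
```

An eavesdropper on the near-field or Bluetooth channel sees only the two public values, from which recovering the shared secret requires solving the discrete logarithm problem.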
Fortunately, solutions involving public keys are available, including for Gmail (https://www.penango.com/), Windows (http://www.gpg4win.org/), iOS (https://gpgtools.org/gpgmail/), and Linux (http://gnupg.org/), as well as for telephoning, videoconferencing, and instant messaging (https://jitsi.org/).
An important starting point is medical systems using privacy cards to establish confidentiality, as they increasingly rely on digital communications. But this is just one starting point. Universal use of public keys promises to be both a hallmark and a foundation of free and open societies.
Carl Hewitt, Palo Alto, CA
Beware BYOD
A new independence movement known as Bring Your Own Device, or BYOD, has taken hold in computing, even in computer science classrooms. BYOD was coined by Ballagas et al.1 in 2004 in the context of interacting with public displays but became popular after 2009, when Intel began supporting employees who wanted to use their personal cellphones and tablets at work,2 which required permission to connect their private devices to workplace networks.
Companies allowing BYOD could view their acquiescence as being responsive to the needs of employees and as a way to cut the costs of corporate computing. BYOD also found a home in education, from K–12 to college. However, as more and more educational software became available and e-books replaced heavy textbooks, students also required more endpoint devices.
BYOD requires a major change in attitude no matter how it is applied. In many schools, teachers routinely confiscate student cellphones to be returned to parents at a later time. However, BYOD also potentially allows school districts to cut their IT budgets, which may help the movement make inroads there. Yet BYOD in computer science education represents a particularly dangerous trend, for multiple reasons:
Distraction. Students in classes other than computer science can be told to use their devices selectively, with a teacher defining "listening time" and "device time" and keeping the two apart. In computer science education, students are likely to need their devices continuously, putting computer games and movie Web sites at their fingertips, with some students finding multitasking between, say, lecture and movie perfectly reasonable;
Cheating. Students working on programming problems with their own devices have access to communication tools that enable unauthorized collaboration; that is, they can make full use of social media and "messenger" programs to cheat;
More cheating. Students working on computing problems with their own devices may likewise have access to the Web where solutions to many problems are readily (and temptingly) available;
Still more cheating. Websites (such as those belonging to rent-a-coder brokers) make it easy to find programmers willing to solve homework problems for a fee; and
Incompatibility. Experience at the New Jersey Institute of Technology shows programs running successfully on one device (such as a student's) might not even compile on another device (such as a teacher's), even in nominally standardized environments, risking undeserved poor grades.
BYOD results in an educational environment beyond the control of instructors in a way that can make it impossible to apply consistent grading standards. BYOD in computer science education is thus harmful to achieving students' professional goals and instructors' educational objectives.
James Geller, Newark, NJ
What a License Really Certifies
Vinton G. Cerf's "From the President" editorial "'But Officer, I was Only Programming at 100 Lines Per Hour!'" (July 2013) raised the issue of how to license "software designers and implementers." The state of Texas first licensed software engineers in 1998. ACM pulled out of the Texas software engineering licensing effort in 1999, while the IEEE continued on by itself. Texas and many other states now offer licensing through the "Principles and Practices Exam of Software Engineering" (http://ncees.org/exams). Cerf implied licensing consists solely of an exam. However, typical state licensing requirements include a degree from an accredited program, the fundamentals of engineering exam, four years of documented engineering practice, the professional engineer exam in the area of practice, and three letters of recommendation from licensed professional engineers familiar with the candidate's engineering practice. If such credentials are not sufficient to formally determine competency, what is?
Duncan M. (Hank) Walker, College Station, TX
Author's Response:
I did not intend to imply that licensing consists solely of an exam but rather assumed it would likely be in addition to any other evidence of training and competence. It would be of interest to know what has been the outcome of the Texas licensing program. In particular, under what circumstances might work opportunities be restricted to persons holding such a license? Can Walker cite other states with similar programs? Are there conditions under which the license is a "must have" as opposed to a "nice to have" insignia in the state of Texas?
Vinton G. Cerf, ACM President
Who's Right?
Looking to rebut Vinton G. Cerf's New York Times op-ed essay "Internet Access Is Not a Human Right" (Jan. 4, 2012), Stephen Wicker and Stephanie M. Santoso's Viewpoint "Access to the Internet Is a Human Right" (June 2013) missed a few things; they said, for example, Internet access is intertwined with "human capabilities that are considered fundamental to a life worth living." Hmm. Does that mean, say, a monk who chooses to live in a community without Internet access is "diminished or denied"? The ordinary dictionary definition of rights (as in http://www.m-w.com "something to which one has a just claim, as the power or privilege to which one is justly entitled" or "something that one may properly claim as due") requires no action by anyone else. Could it be that a person who chooses to live in a remote place with no access to an ISP is likewise "denied his or her rights"? Moreover, who might be sued or arrested for violating those rights? We in the Western world live at the historical pinnacle of human luxury and comfort, where many constantly try to expand "rights." Such attempts are misguided for many reasons, including the related diminishment of the fundamental rights inherent in just being a human. When everything is a right, nothing is.
Alexander Simonelis, Montréal, Canada