Recent editorial policy seems to have let ACM morph into what I would call the left-leaning ACM. One example is Moshe Y. Vardi’s editorial “ACM’s Open-Conference Principle and Political Reality” (Mar. 2017), in which he addressed bathroom laws in several U.S. states with respect to men who might want to use the “ladies room” and vice versa. Vardi said, “In January 2017, the ACM SIGMOD Executive Committee decided to move the SIGMOD/PODS 2017 conference out of North Carolina” due to its HB2 Public Facilities Privacy & Security Act, passed in March 2016, which prohibited flexibility for the transgendered, though Vardi, for the record, disagreed with the move. None of this is relevant to computers or programming.
Another example is Thomas Haigh’s “Historical Reflections” column “Defining American Greatness: IBM from Watson to Trump” (Jan. 2018). In its “Watson and Trump” section, Haigh said, “Trump promised to make America great again by building walls, stepping back from its commitment to the defense of NATO allies, and tearing up trade deals.” I see zero relevance of such a statement to computers, nor was it completely accurate. What then-candidate, now-President Donald J. Trump has said about NATO is that the U.S. will require its “allies” to “pay their fair share of the cost” of the common defense, defined as a percentage of GDP, rather than continue to mooch off the American taxpayer. The U.S. president is obliged to keep illegal aliens out of the U.S. and “tear up trade deals” that are bad for America, as spelled out in the Constitution. I see no relevance to computer technology in mentioning Trump. I should think a headline saying, for example, “Defining American Greatness: IBM” without mentioning Trump would have been sufficient.
Haigh also neglected the darker side of IBM’s history (such as selling tabulating machines to Nazi Germany to help identify citizens with even a fraction of Jewish lineage so they could be rounded up for genocide, or efforts to crush its competitors, resulting in a 1956 U.S. Department of Justice consent decree and a 1973 federal-court award of $352.5 million to Telex Corp. for antitrust violations). As published, the column could have come from IBM’s PR department.
Having had the honor of being published in Communications (“The NSA and Snowden: Securing the All-Seeing Eye,” May 2014), I can attest to the rigor of the editorial review when I fell even a little short of the highest standards. I am not complaining, as it made for a better, more credible article. I urge ACM’s leadership and Communications’ editors to reconsider their editorial policy and ask the two authors I mention here to explain their motivations, or else revise the organization’s name to reflect its left-leaning inclinations.
Bob Toxen, Peachtree Corners, GA, USA
I fail to see how my editorial can be called “left leaning.” I also fail to see how policies on locations for ACM conferences are outside the scope for Communications. And, as Toxen’s own Communications article shows, Communications is definitely not only about computers and programming.
Moshe Y. Vardi, Houston, TX, USA
To understand computing’s history we need to understand IBM, and to understand IBM we need to understand IBM’s evolving political context. IBM’s old slogan was “World Peace Through World Trade.” As a charter member of Eisenhower’s “military industrial complex,” IBM helped America build unrivaled military, scientific, and economic might while safeguarding democracy in Europe. I argued that IBM’s later shift to gaming short-term financial metrics, which shrank the company and shifted jobs from the U.S. to India, illustrated the appeal of Trump’s diatribes against “globalists.” By accusing me of leftist bias for glorifying free trade and old-school corporate capitalism (just think about that for a second), Toxen unwittingly captured the rise of open vs. closed political alignments over traditional left vs. right ones.
Thomas Haigh, Shorewood, WI, USA
For Old(er) Users, Talking Still Beats Texting
Bran Knowles and Vicki L. Hanson’s contributed article “The Wisdom of Old(er) Technology (Non)Users” (Mar. 2018) took a condescending attitude, saying old(er) users must learn to be more “fully participating, independent citizens in our increasingly digital society.” As a 70-something software engineer who still teaches computer science and cybersecurity at a U.S. university, I was put off by such arrogance.
Consider the following touchstones of today’s mass digital culture:
Perceptions of risk and responsibility. We old(er) users do not fear technology but rather the carelessness of the people administering it. After receiving letters describing how our data had been exposed and stolen from the U.S. Office of Personnel Management, Target, Equifax, and others, why should we entrust it to yet another organization’s data sieve? Moreover, it is not that we fear making decisions we previously left to others but that we have finally realized it is pointless to even try to keep up with the everyday tweaks to the system;
Values. When I was originally in college (1965–1969), before a professor would arrive in the classroom, the room would naturally be abuzz with conversation over very human concerns, say, test results, dating, or an upcoming basketball game. When the professor entered, a hush would replace the conversation. Today, when I (now as the prof) enter a classroom, I see students hunched over cellphones, with the only sound that of thumbs striking glass;
Cultural expectations. Those were Knowles and Hanson’s biases, not mine; they apparently see us old(er) people as obsolete. We in turn choose to see the younger generation as impulsive narcissists who could use some advice. But that would require today’s generation to put down their phones and talk to us;
Listening. The fact that younger people choose to ignore us has not changed in 6,000 years. Moreover, I admit we did not do it either when it was our turn; and
Changing interface. Since 1987, I have used Mail, Lotus Notes, Roundcube, multiple versions of Outlook, and other systems I can no longer name. What I do need is a reliable way to send and receive information. Do I need HTML? Not really. And even when I use it, it gets stripped out anyway when I email something to colleagues in government agencies, something I do often. Same with fancy colors, fonts, backgrounds, and cute pictures. Please also do not insist on constantly changing the features just to sell a new version.
We old(er) humans are simply not all that enamored of the latest and greatest tech (recall that, in many cases, we created it), nor are we impressed by the ability to add emojis to our digital correspondence. We have learned that talking is more satisfying than texting, and visits from grandchildren are better than Facebook. Do not pity us—though, if you like, you may envy us.
Joseph M. Saur, Virginia Beach, VA, USA
Saur reflects many of the frustrations we reported in our article, but we must not forget that young people use many digital tools not by choice but out of social and economic necessity. Our concern is that society is becoming less accommodating to people who lack the resources or desire to develop digital skills, and that chipping away at the freedom to reject technologies will silence important debates about their effects on our lives. Instead of forcing older adults to digitize, we should give their objections greater attention.
Bran Knowles, Lancaster, U.K., and Vicki L. Hanson, Rochester, NY, USA
Don’t Trust the Deadly Dilemma
Imagine the year is 2028, and self-driving cars have the run of U.S. roads, with more than 25 million on them at any given moment. Imagine further a well-designed cyberattack or self-motivated artificial intelligence bot causing simultaneous malfunctions in the braking systems of, say, 70% of them, while also directing others into crowds to maximize some evil intent. Tens of millions are injured or killed in possibly the greatest one-day tragedy ever.
Now imagine a peaceful alternative, with self-driving technology revolutionizing road transportation. Not only does the technology allow drivers to use their time more efficiently, it also significantly reduces the number of car crashes, potentially to zero. Major causes of crashes, most notably driver error, are practically eliminated. Compare this against the current reality of human-operated vehicles, whereby motor-vehicle collisions in the U.S. alone are associated with approximately 37,000 deaths per year.
In the context of designing trustworthy self-driving cars, Benjamin Kuipers, in his review article “How Can We Trust a Robot?” (Mar. 2018), addressed the “deadly dilemma,” in which an AI must choose between two bad options, both very likely harmful to humans. It illustrates a rare but plausible situation in which the computational intelligence controlling a self-driving car must choose between two alternatives: one that could result in the driver’s injury or death and another that would save the driver but is certain to cause harm to others.
The fundamental assumption of the deadly dilemma is that self-driving cars will indeed be in broad use someday. It does not assume that other technologies that might help eliminate crashes will significantly advance by the time such cars fill the road. Ignoring those advances could undermine the dilemma’s supposed inverse correlation with trust. Technologies expected to obviate both possibilities outlined in the deadly dilemma include GPS navigation; car- and ground-located sensors designed to identify other cars, as well as pedestrians and bicyclists; braking systems that can stop a car just in time; and new types of construction materials in cars and roads more likely to protect humans than their current counterparts.
History has recorded many technologies that were not at first viewed, even by their users, as trustworthy. But transportation-related technologies have ultimately won our trust, though all have resulted in some number of human deaths and injuries. Despite the occasional destructive results, most remain in use because, overall, they improve human quality of life while saving money and time. Anticipating the level of trust in a new, potentially harmful technology, particularly autonomous AI-directed machines, should thus account for all other related technologies that, combined, could yield some generally acceptable risk that will not prevent broad adoption of the new technology. Determining the trustworthy option in the deadly dilemma must account for all associated technologies. Otherwise, it is not just potentially misleading but really no dilemma at all.
Uri Kartoun, Cambridge, MA, USA