The Cerf’s Up column "Social and Ethical Behavior in the Internet of Things" (Feb. 2017) by Francine Berman and Vinton G. Cerf was a welcome reminder of the importance of ethical issues involving sociotechnical systems in general and the Internet of Things in particular. Berman and Cerf did a great service by giving these issues a high profile and a thoughtful exposition. Here, we focus on their claim "Technologies have no ethics." Many computing professionals express this opinion, and we are confident many more believe it. But we think it is, as stated, a mistake, indeed a perilous mistake.
It is true that technologies do not "have ethics" in exactly the same way human beings have ethics. A human being is a carbon-based, biological entity, and any computer artifact (or other technological device) is fundamentally silicon-based and mechanical. Despite their differences, humans and technologies are interrelated and co-dependent. Society shapes technology, and technology shapes society. Technologies are the creations of humans; without humans, they would not exist. Humans imbue their creations with moral significance, meaning their creations embody ethical decisions. Those ethics may be noble or they may be sketchy, but human ethics live inside every technology.
The 2009 book Technology and Society: Building Our Sociotechnical Future by Deborah G. Johnson and Jameson M. Wetmore, as well as the work of many other scholars of science and technology studies, addressed sociotechnical systems, including the Internet of Things. In that context, we explore some aspects of how ethics, technology, and computing professionals are related. Sociotechnical systems include people, devices, policies, and the connections among them. To understand any technology properly, one must understand the entire sociotechnical system of which it is a part. As a technology is developed, it is already part of a sociotechnical system that develops concurrently with it.
At each stage of development of a sociotechnical system, people make decisions. Any system, including its technological components, embodies those decisions. Most decisions, regardless of how "technical" they may appear at face value, have an ethical component, because technical decisions and human values are intertwined. Sociotechnical systems matter to people, and those people are important stakeholders in the systems. Decisions that shape the systems and artifacts matter to people and thus have ethical significance.
Because any technology is best understood as part of a sociotechnical system, and because both the technological artifacts and the systems of which they are a part have ethically significant human decisions embedded in them, these artifacts and systems "have ethics." Such ethics are not identical to the ethics of a person, but they exist. Technologies have ethics; people put them there. And as technologies are developed, the people who put those ethics there need to consider them.
ACM Committee on Professional Ethics
Authors Respond:
Any blanket generalization is risky, so we accept the argument that one may find an ethical element built into some technologies. Many technologies are, however, sufficiently neutral that they can be used and abused in accordance with human choices, regardless of the intent of the technology developer. The Internet is merely one of many examples. Perhaps the way we can end up in the same place as the Committee is to observe that programmers (and, more generally, technologists) should feel ethical responsibilities in the course of developing new technology to assure it resists accidental or deliberately induced malfunction.
Francine Berman, Troy, NY, and Vinton G. Cerf, Mountain View, CA
Enough Already with Patent Profusion
I was seriously dismayed by the descriptions of three U.S. design patents in Pamela Samuelson’s Legally Speaking column "Supreme Court on Design Patent Damages in Samsung v. Apple" (Mar. 2017): "a black rectangular round-cornered front face for the device"; "a rectangular round-cornered front face with a surrounding rim or bezel"; and "a colorful grid of 16 icons to be displayed on a screen." The profusion of such patents is intended primarily to stifle competition rather than to protect truly innovative work and has a detrimental effect on computational scientists and the general public alike. These examples are clear evidence that the U.S. patent system is in serious need of reform.
Nicholas Horton, Amherst, MA
Reengineer Peer Review to Eliminate Reviewer Bias
Elizabeth Varki’s Viewpoint "Where Review Goes Wrong" (Mar. 2017) served computer science with its courageous and honest disclosure of struggles with the flawed scholarly peer-review system. As an ACM Fellow with more than 200 publications, I can attest to the problems she identified. I, too, have had papers rejected from more than one venue on the basis of rants from the same reviewer. I was once able to make my case and solicit fresh reviews because carbon copies proved the same typewriter had been used. Digital documents and submission portals now make such bias or abuse all but impossible to prove. Reviewing the same paper for more than one publication or conference is unethical, and reviewers should be required to recuse themselves on these grounds.
As someone who has seen the publication process from all sides (author, referee, conference organizer, and editor of multiple journals in multiple disciplines), I can say blind peer review, the putative gold standard in science, is seriously flawed. Double-blind review is a sham. A reviewer who is current and competent in the subject matter will almost always know who the authors of a submitted paper are, whether from content, style, reputation, or cited references. Social science research has repeatedly shown that double-blind reviewing is a myth. Identical papers submitted under female or ethnic names are more likely to be reviewed unfavorably and rejected. Single-blind review merely converts the probability that authors are disadvantaged into a certainty.
One issue Varki did not raise is the competence of referees to review a particular paper. As an editor and conference organizer, I know how difficult it is to secure enough capable referees. The more innovative and advanced the paper and its author, the more likely the reviewers are to be less experienced and underqualified. History attests to cases of work that ultimately proved groundbreaking but was repeatedly rejected due to poor reviewing. My own work on coupling and cohesion, which spawned a rich research literature and eventually entered the canon of software engineering, was repeatedly rejected until a fluke opportunity brought it to the world in a journal then at the academic margins. I have seen solid papers by others rejected by reviewers who were self-evidently unqualified to evaluate them, sometimes even by their own admission.
It is time to consider reengineering the entire peer-review process to reflect research evidence from the social sciences and the realities of contemporary academic publishing. Radical though it may seem, a fair process might be an open one, without anonymity. So-called anonymous review that is only selectively anonymous leads to abuses and complications. In the deeply incestuous communities of scientific specialties and subspecialties, anonymous reviewing as now practiced is a hidebound fiction that fails the ultimate purpose of peer review: to ensure the quality of the cumulative literature and to guarantee fair and open access to all qualified contributors.
Larry Constantine, Rowley, MA
To Inspire Future Engineers, Start at Home
Several fallacies stood out in Gregory Mone’s news article "Bias in Technology" (Jan. 2017), which made the tacit but arguable assumption that working in tech has enough social value that getting more women and African-Americans into tech jobs is a laudable goal. Mone said African-Americans represent 1% of the work force at Google and Facebook and 4.6% of students awarded a bachelor’s degree in computer science, but wondered why only 1% to 2% of the work force at "some major companies" is African-American. Consider that 1% to 2% is actually an extremely high percentage. In his book Work Rules! Insights from Inside Google That Will Transform How You Live and Lead, Laszlo Bock, Google’s Senior VP of People Operations (what other companies call "human resources"), said, "We receive more than two million applications every year. […] Of these, Google hires only several thousand per year, making Google 25 times more selective than Harvard, Yale, or Princeton." Several thousand hires from more than two million applications means approximately 0.1% to 0.3% of applicants get jobs at Google; I imagine the numbers are about the same for Facebook, Apple, and Microsoft.
Mone quoted Kaya Thomas, a second-year computer science student at Dartmouth, saying, "If you want to sell to everybody, you have to hire everybody." This statement is wrong in both theory and practice. The design principles that guide Facebook, Apple, and Microsoft to create great products used by billions of people around the world are universal and have nothing to do with affirmative action. Consider the deeper reason for diversity. Diversity is important in an engineering organization not because of a social agenda or a desire to sell to everybody but because of the value of hiring smart people who think differently from you, the opportunity to learn from them, and the resilience diversity brings to a great team.
As an engineer who has worked at Intel and as an entrepreneur, I am proud that my startup, Clear Clinica (cloud monitoring of clinical trial data), has equal representation of men and women and includes native-born Israeli, American-born, Russian-born, religious, non-religious, and ultra-Orthodox employees. We do this not because of a social agenda but because diversity makes business sense, helping our team ship great products and survive challenges.
We can also learn an important lesson that goes beyond business sense. American tech companies employ diverse work forces, including virtually every ethnicity you can think of. Whether or not we agree with the learning ethic of any of their families, we can agree that the drive to succeed in science and technology starts at home at a young age. I do not pretend to know how to change home values, but if I were looking to encourage people to go into engineering, I would start by encouraging parents to inspire their children to achieve in science, math, and computing.
Danny Lieberman, Modiin, Israel
In Constricting an Art Form, Digitization Can Also Open It Up
Esther Shein’s news story "Computing the Arts" (Apr. 2017) explored the relation between digitization and the arts. The history of European written music illustrates that relation. The historic act of fixing Gregorian chant in a notation that used a seven-note octave during the European Middle Ages could be seen as a constriction of expression, as it eliminated the vitality of diverse vocal pitches in favor of just seven notes. But this particular form of digitization of music also opened the way for polyphony and the intense harmonies of later European music. Meanwhile, mathematics of a different type was behind the practice of perspective in Renaissance art. Those digitizing the arts today can derive insights by looking back at such historic precedents.
Andy Oram, Boston, MA