Letters to the Editor

Keep the ACM Code of Ethics As It Is

The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action"1 (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. The changes attempt to, for example, create real obligations on members to enforce hiring quotas/priorities with debatable efficacy while ACM members are neither HR specialists nor psychologists; create "safe spaces for all people," a counterproductive concept causing problems in a number of universities; counter harassment while not being lawyers or police officers; enforce privacy while not being lawyers; ensure "the public good" while not being elected leaders; encourage acceptance of "social responsibilities" while not defining them or being elected leaders or those charged with implementing government policy; and monitor computer systems integrated into society for "fair access" while not being lawyers or part of the C-suite.

ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today—security and hacking.

If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well.

Alexander Simonelis, Montreal, Canada

Authors Respond:

ACM promotes ethical and social responsibility as key components of professionalism. Computing professionals should engage thoughtfully and responsibly with the systems they create, maintain, and use. Lawyers, politicians, and other members of society do not always fully understand the complexity of modern sociotechnical systems; computing professionals can help build that understanding. Humans understand such concepts as harm, dignity, safety, and well-being; computing professionals can apply them in their technical decisions. We invite Simonelis to read the Code and accompanying materials in more detail, as many of his claims, in our opinion, misread the Code. We invite everyone else to read it, too: https://ethics.acm.org/2018-code-draft-3/

Catherine Flick, Leicester, U.K., and Keith Miller, St. Louis, MO, USA

‘Law-Governed Interaction’ for Decentralized Marketplaces

Given today’s sometimes gratuitous efforts toward centralized control over the Internet, I found it refreshing to read Hemang Subramanian’s article "Decentralized Blockchain-Based Electronic Marketplaces" (Jan. 2018), which argues that applications like electronic marketplaces and social networks would benefit from a decentralized implementation and describes a mechanism, based on Bitcoin’s concept of the blockchain, for imposing distributed protocols, or what are called "smart contracts" in this context.

Subramanian did not, however, mention the existence of a different, older technique for implementing decentralized applications called "law-governed interaction," or LGI, introduced in 1991 (under a different name) by Minsky.2 It was implemented at Rutgers University some 10 years later and is still under development. LGI can be used to implement a range of decentralized applications, including decentralized marketplaces3 and (in 2015) decentralized social networks, the very applications that attracted Subramanian’s interest.

It would have been instructive if Subramanian had, say, compared and contrasted LGI with blockchain-based mechanisms for enforcing distributed protocols, as they are two radically different mechanisms for achieving essentially the same objective.

Naftaly Minsky, Edison, NJ, USA

Author Responds:

Comparing LGI and blockchain-based smart contracts would, as Minsky says, be a great idea, since they are radically different approaches to decentralization. From an adoption standpoint, however, what matters most is mass adoption at scale. For that to happen, the value created by decentralization would have to be shared among all users in some tangible way. Blockchain-based decentralization, in addition to ensuring secure, low-cost distributed transactions, could make network effects fungible through the issuance of cryptocoins that can be exchanged for fiat currency; for example, Steem is a popular social network that issues virtual currency powered by the blockchain.

Hemang Subramanian, Miami, FL, USA

Scant Evidence for Spirits

Arthur Gardner’s letter to the editor "A Leap from Artificial to Intelligence" (Jan. 2018) on Carissa Schoenick et al.’s article "Moving Beyond the Turing Test with the Allen AI Science Challenge" (Sept. 2017) asked us to accept certain beliefs about artificial intelligence. Was he writing that all rational beings are necessarily spiritual? "That which actually knows, cares, and chooses is the spirit, something every human being has," he said. And that all humans are rational? Why and how would someone (anyone) be convinced of such a hypothesis?

Not every human, to quote Gardner, "knows, cares, and chooses." One might suspect that no human infant does, though infants may, in fact, learn and develop these capacities over time.

What evidence is there for spirits? Would Gardner accept an argument that there are no spirits? If not, would this not be a rejection of the scientific method and evidence-based reasoning? Scientific hypotheses are based on experimental design. Valid experimental designs always allow for "falsifiability," as argued by philosopher of science Karl Popper (1902–1994).

Falsifiability (sometimes called testability) is the capacity for some proposition, statement, theory, or hypothesis to be proven wrong. That capacity is an essential component of the scientific method and hypothesis testing. Through it, we say what we know because we test our beliefs using observation, not faith.

Humans are not rational by definition. They can think and behave rationally or not. Rational beings apply, explicitly or implicitly, the strategy of theoretical and practical rationality to the thoughts they accept and the actions they perform. A person who is not rational has beliefs that do not fully use the information he or she has.

"Man is a rational animal—so at least I have been told. Throughout a long life I have looked diligently for evidence in favour of this statement, but so far I have not had the good fortune to come across it," said British philosopher Bertrand Russell (1872–1970), tongue firmly planted in cheek.

One might believe, without evidence, that "The leap from artificial to intelligence could indeed be infinite," as Gardner claimed. However, newspapers in 22 countries are designed every day by my company’s AI-based software for classified pagination and display-ad dummying. What was once done by rational, thinking human designers is now done by even more expert computer programs. And I started on this journey in 1973 by writing chess algorithms.

To replace humans, these programs have no need to know what a human is or to care.

Our "clever code" may just be our DNA that through long biological evolution has developed into what we today call consciousness and rationality. Perhaps these are just emergent properties of a murmuration of neurons.

Richard J. Cichelli, Nazareth, PA, USA

Still Looking for Direction in Software Development

I have been in IT for 30 years, working on every kind of platform, and thus feel qualified to address several points about systems development raised by Stephen J. Andriole in his Viewpoint "The Death of Big Software" (Dec. 2017). For example, I see in many current "agile" cloud-based projects a fundamental lack of direction. In projects that fail to perform as promised, the lack of a more in-depth requirements process can mean critical integrations with other systems are missed. I have personally seen at least a dozen projects spiral out of control and never reach a real live human user. For example, in 2016, I worked with a U.S. government agency on a very large project it had promised to deliver by 2020 but that failed a system test in the cloud because it could not meet its own integration and scalability goals. Even as the development team managed to occasionally pick off relatively minor user requests, it ignored the user-story requirements with deeper technical complexity, such as how to integrate with other systems. That lack of integration led to missed system test dates for delivering the key integrations by the deadline mandated by Article I, Section 2 of the United States Constitution.4

As for how an organization can get its data back if it moves from one cloud provider to another, the container "solution" might sound nice to users but can actually be worse than having table dumps from legacy systems. The lack of documentation around containers, both architecturally and in terms of how they function within the workflow and how the system will actually process data, makes portability exceptionally difficult to design for and nearly impossible for IT managers to maintain through changes over the system's life cycle. Another challenge of working with containers is enabling security analysts to perform proper system assessments. In fact, it is some of the same microservices Andriole explored that can lead to security flaws, which are then available for exploitation by aspiring hackers with a library of scripts to run against the containers and the host operating system.

Though I have great regard for cloud projects and the technology that allows faster and more-flexible solutions to address business needs, IT managers must make sure they do not lose the major benefits of enterprise resource planning products. I spent the 1990s moving from piecemeal systems to a system in which a business user can track raw materials all the way through to the finished product being bought and sold, with just a few clicks. I would hate to see IT managers lose that by going back to disparate processes lacking the transparent integration I know is possible.

Dan Lewis, Washington, D.C., USA

Author Responds:

The death of big software is attributable to failure, control, governance, cloud, and monolithic software, and I thank Lewis for addressing failure, cloud, and monolithic software. Cloud "containers" represent a first step toward hostage prevention. I agree that cloud security due diligence should always be aggressive. I also agree that integration is always important, but monolithic architectures do not guarantee integration (at the expense of flexibility), and microservices-based architectures can integrate and provide functional flexibility, with the right tools.

Stephen J. Andriole, Villanova, PA, USA
