
Misnomer and Malgorithm

Robin K. Hill, University of Wyoming

I've received some comments in response to a previous piece [Hill2018] on the articulation of decision responsibility, by which I mean the egregious practice of casually attributing judgment and volition to programs. My view is that the attribution, in our locutions, of decision-making power to certain applications of programs and algorithms is wrong in both senses of "wrong"—both false and harmful.

The most obvious, and most misleading, instance of malarticulation is the trending use of "algorithm". One or two comments mentioned the common modern use of that word to mean an agent that makes (bad) judgments, giving rise to claims that such technology is not value-neutral. The concern is valid, but the connotation hangs on context, and the implications of the literal assertion are dangerous. "Oh, well, sure," educated people will say, "we agree that tech is technically neutral." Yes, it's technically neutral. In fact, technically, it's nothing more than technical, and therefore nothing more than neutral.

This needs to be cleared up. Computer science knows the algorithm as an objective computational object, breathtaking and beautiful, an abstract imperative structure (so I claim [Hill2016]), deterministic and independent of context. I will call this objective procedure, a mechanism that performs calculations under a decision structure, the i-algorithm; maybe we can think of the i as "imperative structure". But the public knows the algorithm as a mysterious agent making dubious decisions, a source of judgments, supposed to be reasonable, on complex issues in real life. I will call this subjective procedure the j-algorithm; we can think of the j as "judge". These are homonyms but not synonyms, and we understand that. Computer scientists, told that an i-algorithm is political, simply code-switch to the homonym j-algorithm, the thing that assesses parole requests and loan applications (poorly), in order to continue the communication. This communication infelicity is not new—scientists have to put up with "bug", "exponential", "schizo", and other abuses of terminology. The problem with "algorithm" is that the two senses of the word are, in a way, contradictory, and in exactly the way that matters.
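For concreteness, here is a minimal sketch of an i-algorithm, written in Python purely for illustration: Euclid's procedure for the greatest common divisor. It is a pure imperative structure, deterministic and independent of context; it neither knows nor cares who runs it, or why.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, an i-algorithm in the sense above:
    a deterministic imperative structure. The same inputs always
    yield the same output, in any context whatsoever."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(48, 18) == 6  # true for everyone, everywhere, always
```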

Current events show that many j-algorithms are value-laden, biased, and insidious. Crudely, the j-algorithm in the news these days is a compound of the i-algorithm and data or a decision tree. It's the product, the whole package, and the devil is in the material outside the (i-)algorithm, whether the decision structure comes from symbolic variables or from input data. For the current purpose, we can treat those the same, because both multi-layered neural networks and old-fashioned expert systems extract features from past cases that feed into the ranking of choices. The deepest of data mining can find only correlations that are there, sometimes spurious, sometimes stereotypical. As long as we rely on features already present, the fact that they remain diffuse and unidentifiable in deep learning does not confer a capacity for judgment qualitatively greater than that of a symbolic system in which features are surfaced and explicit as symbols. So in neither case is the (i-)algorithm making what we would call a decision.
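To see the compound concretely, consider a toy sketch in Python, with feature names and weights invented for illustration. The scoring loop below is the i-algorithm: fixed, deterministic, and indifferent to what it is summing. The j-algorithm that "decides" a loan application is that loop plus the weights distilled from past cases, and any bias resides in those weights, outside the (i-)algorithm.

```python
def score(applicant: dict, weights: dict) -> float:
    """The i-algorithm: a fixed, deterministic weighted sum.
    Nothing in this loop is fair or unfair; it is arithmetic."""
    return sum(w * applicant.get(feature, 0.0)
               for feature, w in weights.items())

# The j-algorithm is the procedure above *plus* material from outside it.
# Hypothetical weights, as might be distilled from past lending decisions:
weights_from_past_cases = {
    "income": 0.5,
    "years_at_address": 0.2,
    "zip_code_risk": -0.8,  # a proxy for neighborhood, and so, perhaps,
                            # for race: the devil is here, not in the loop
}

applicant = {"income": 0.6, "years_at_address": 0.3, "zip_code_risk": 0.9}
print(round(score(applicant, weights_from_past_cases), 2))  # -0.36: arithmetic, not judgment
```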

Even students and concerned programmers, who know the problems of bias in data mining, will exploit the shorthand ambiguity to say, "We'll just make the algorithm more fair." Stop! You're talking about the i-algorithm, which is as fair as anything can be, absolutely, as objective as objectivity offers, as sterile of influence, as free of opinion, as innocent of prejudice as your can opener, and maybe more so. When you attribute unfairness, you mean the elicitation of features, the selection of data and the programmed ranking and the deployment protocol and the construction and labeling of the result delivered and its interpretation and application—the aspects that are up to people. These practices have victims, first and foremost those who suffer from bad recommendations implemented as decisions, in leases, applications, permits, prison sentences, and other orders, major and minor.

So a statement like "This algorithm ignores contributions from experience" is reasonable as long as it is interpreted j-style, understood as "The design(er) of this j-algorithm ignores contributions from experience"—whereas to attribute that ignorance to the i-algorithm implies that it, the imperative structure, can become smarter, or wiser, or smart or wise at all. Certainly people should be challenging the use of j-algorithms, at least the harmful ones, malgorithms, if you will. Academic programs and tech watchers already do, interrogating the implications of science and technology in all facets, and not a minute too soon. This is not news, and neither, perhaps, is the conflation of the j-algorithm with the i-algorithm. Enumeration of the deleterious effects, however, may be—as follows.

Why We Shouldn't Confuse I-Algorithms with J-Algorithms

  1. We lose the critical contrast between programs and judgments.
    • People start to think that there is no such thing as a straightforward "technical" or "mechanical" facility that can be trusted to operate correctly, in its proper milieu.
    • We expect algorithms to escape the bonds of mechanism, to implement mindful practices. We start to think that algorithms just need more data, or more rules, or more feedback layers, to take the right steps.
  2. We endorse reliance on algorithms generally in situations where experts must be consulted for good decisions. In this way, we can make the problem worse: We can inflict misplaced authority on larger swathes of society.
  3. We deflect responsibility from the people who develop, market, select, deploy, and accept these systems; in fact, we hand them excuses. Not only do recommender systems deliver dubious assessments, they deliver a justification (albeit equally dubious) for acting on those assessments, namely, technology.
  4. Finally, we promulgate a false distinction. With or without technology, organizations, public and private, strive for efficiency, which means sometimes settling for perfunctory assessment. To wit—firms of middle-class white men have long populated their employee ranks with middle-class white men, no matter whether a personnel clerk or a recommender system reviews the applications. Because we learn by induction, and induction is fallible, we as individuals, right along with agencies and companies, will continue to act on quick superficial assessments in haste to get things done. While that works well enough, often enough, the danger lies in giving the same short shrift to situations that require thoughtful and painstaking analysis.

Efforts to merge the two terms, to make j-algorithms into i-algorithms by, for example, developing better recommender systems, are a straw man that diverts attention from efforts to develop better people. Work to imbue algorithms with judgment sidetracks what I see as the obvious best policy in many cases of malgorithm application: Don't use them. But that's a different topic. On this topic, what to call the objects of computing so that confusion abates, we're still seeking consensus.


References


[Hill2018] Hill, R. 2018. Articulation of Decision Responsibility. Blog@CACM, May 21, 2018. Also in the print edition, CACM 61:5.

[Hill2016] Hill, R. 2016. What An Algorithm Is. Philosophy & Technology 29:1 (March 2016), 35-59. DOI: 10.1007/s13347-014-0184-5. Free online from Springer at http://rdcu.be/m1SZ


Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.
