Some of us are skeptical that recommender systems can detect their own biases and overcome them. Some of us are skeptical that either generative grammars or phrase substitution systems will ever speak any natural language fluently. Both claims challenge techno-optimism by asking why computers can't do what we do. But those challenges are not the subject of argument here. The subject is the alternative space available to such skeptics.
Claims of the power of artificial intelligence, of the success of language translation, or of the inevitable emergence of machine consciousness or volition are the premises driving much artificial general intelligence (AGI) research. Some weaknesses of those premises stand out pretty well: a program can't overcome bias unless it's programmed to look for bias in a particular attribute; a computer can't power itself up, and can't process an interruption unless it's already checking for it. When someone (myself, for instance) raises these mundane objections, the reactions from AI boosters are often directed not against the objections per se, but against some perceived anti-intellectualism. Skeptics are seen as propounding a religious, mystical, or magical stance. No. Far from it.
The space of alternative views is vast, not simply mud puddles where notions of soul and spirit taint the discipline of logic, but strong currents flowing every which way. Can't we allow, invite, and cultivate other paradigms, without putting up obstacles of dogma? Recall non-numeric reasoning, such as the geometric proof that an angle can be bisected with a straightedge and compass. Those methods, which do not depend on symbolic logic, preceded our systems of arithmetic and algebra, but their standing has eroded [Kline]. Of course, earnest attempts to transcend logic, math, and other rigorous systems encounter many pitfalls. Gödel's proofs, forced into awkward, debased, or metaphorical applications to philosophical questions, have been abused by many [Franzen].
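For concreteness, the construction behind that geometric proof is the classical one (Euclid's Elements, Book I, Proposition 9), and its reasoning proceeds without a single number. A standard sketch:

```latex
% Classical angle bisection with straightedge and compass (Euclid I.9).
\begin{enumerate}
  \item Given an angle with vertex $O$, draw an arc centered at $O$
        meeting the two rays at points $A$ and $B$.
  \item With equal radii, draw arcs centered at $A$ and at $B$;
        let them meet at a point $C$ interior to the angle.
  \item Draw the line $OC$.
\end{enumerate}
% Why it works: $OA = OB$ and $AC = BC$ by construction, and $OC$ is
% common, so triangles $OAC$ and $OBC$ are congruent (side-side-side);
% hence $\angle AOC = \angle BOC$, and the angle is bisected.
```

The justification rests on congruence of figures, not on arithmetic or symbolic logic, which is precisely the point about non-numeric reasoning.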
To be clear, in protecting alternative views, we do not seek a particular theory, such as the Penrose-Hameroff theory of Orchestrated Objective Reduction [Paulson]. The well-developed and quite particular theories of prominent philosophers of mind have spun off into the weeds, if it's fair to apply that figure of speech to the level of detail reflected in the discussion between, say, Jerry Fodor and his critics [Rescorla]. We want a place to refresh, a refuge for explanations of human cognitive phenomena that are novel or familiar, commonsensical or radical. Refuge from what?—The Turing-computable? The digital? The discrete? The formalizable? Hard to say; hence, we avoid particularities.
All of this is meant not to close off lines of inquiry, but to illuminate the many that are open. (1) There may be an alternative other than magic. (2) There may be no alternative other than magic. (3) The alternative we now call "magic" may turn out to be something rigorous and respectable in forms that we cannot yet conceive. Or even (4) No alternative is needed; the current paradigm will work when it matures. Techno-optimism may be correct. It could turn out that there is a way to augment Good Old-Fashioned Artificial Intelligence, or data science, or deep learning, or the neural model, so that computers can do what we do. That way may be chemistry, or it may be quantum physics, or it may be geometry. That way may favor one of the weedy theories of philosophy of mind. Or both standard and alternative views (and more?) could play their parts in some harmonious whole. All welcome! We wish only to forestall the reaction of the Pythagoreans to the prospect of irrational numbers, that is, condemning the idea and its proponents. Let's react as did later mathematicians: They accepted the existence of numbers that could not be expressed as rationals, and dubbed them, in a stroke of brilliant unoriginality, "irrational numbers."
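The discovery that so alarmed the Pythagoreans fits in a few lines; the standard argument that the square root of two is no ratio of integers runs as follows:

```latex
% Standard proof that \sqrt{2} is irrational, by contradiction.
\text{Assume } \sqrt{2} = \tfrac{p}{q} \text{ with } p, q
  \text{ integers in lowest terms.} \\
p^2 = 2q^2 \implies p^2 \text{ even} \implies p \text{ even};
  \text{ write } p = 2k. \\
4k^2 = 2q^2 \implies q^2 = 2k^2 \implies q \text{ even}. \\
\text{Both } p \text{ and } q \text{ even contradicts lowest terms;
  hence } \sqrt{2} \notin \mathbb{Q}.
```

A short, rigorous demonstration that the existing paradigm of ratios could not accommodate, which is just the kind of result the essay asks us to welcome rather than condemn.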
That suggests that the alternative space could be circumscribed by giving it a name… ubereason (pompous), extracomp (unattractive). Or words from Latin such as "humilis," lowly, humble, literally "on the ground," from humus "earth," from Proto-Indo-European root *dhghem- "earth", which is also the root of "human." Or "crete," as opposed to "discrete," that is, solid as opposed to divisible. Well… this is good fun, but none of these notions are compelling. Either we are not as brilliantly prosaic as the post-Pythagoreans, or the naming exercise is premature because we cannot articulate the circumscription of "alternative" until we answer "alternative to what?" But the very idea, the very possibility, the very question, points toward a safe space for alternatives.
The trend in computing is to subsume the humanistic in the technical. The focus and confidence of Tech shed a glow of affirmation, casting outer levels of interpretation into shadow. But those of us who believe in the power of AGI to triumph and make the world a better place need not treat those of us who question that belief as eccentrics. It's an inquiry, not a heresy. Let's get ready, when the time comes, to name the alternative space, declare victory, and move on.
[Franzen] Franzén, Torkel. 2005. Gödel's Theorem: An Incomplete Guide to Its Use and Abuse (1st ed.). A K Peters/CRC Press. DOI 10.1201/9781568815008.
[Kline] Kline, Morris. 1980. Mathematics: The Loss of Certainty. Barnes and Noble, New York, 2009 Edition by arrangement with Oxford University Press. ISBN 9781435108479.
[Paulson] Paulson, Steve. 2017. Roger Penrose On Why Consciousness Does Not Compute. Nautilus, Issue 47 (May 4, 2017). Nautilus Think, Inc.
[Rescorla] Rescorla, Michael. 2020. The Computational Theory of Mind. The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), Edward N. Zalta (ed.).
Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.