BLOG@CACM
Artificial Intelligence and Machine Learning

Chat Generative Pre-trained… Testimony

University of Wyoming lecturer Robin K. Hill

In epistemology, the philosophical study of knowledge, the traditionally honored methods of knowledge acquisition are perception and deduction, and often, induction. They are solid; they are authoritative. We can trust what we see or hear or smell, and we can trust what we deduce, validly, from previous knowledge, and we can trust what we observe over and over again.

But notice that much, perhaps most, of what we know comes from what other people tell us. What is the status of that… that chatter, or quasi-information, or statements of unknown veracity, or whatever it is? We call this mechanism "testimony" (not limited to its legal sense of what is said in court, but any act of telling from one person to another, spoken or written). Insofar as we commonly treat the verb "to know" as factive, meaning that if you know P, then P is true, we can't state flatly that testimony is a source of knowledge, because people tell each other falsehoods all the time. (This piece sets aside the problems of information disorder, which could be viewed as false testimony running at large.)

And now, suddenly we are getting falsehoods and truths delivered by programs, AI chatbots, also in the form of telling somebody something. Can the research on testimony, which grapples with these complications, inform our understanding of chat applications based on Large Language Models? Barest background: The epistemological investigation is generally grounded on a definition of knowledge that amounts to justified true belief. In action, testimony involves a speech act of assertion, A, from a speaker or testifier T to a listener or receiver R. A great deal of material awaits the reader who wants more [Lackey, Green].
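For readers who want the schema compactly stated, here is a sketch in the Lean proof language. It is my own rendering, not anything drawn from the sources cited; Believes and Justified are placeholder predicates, and the schema says nothing about how they might be satisfied.

    -- A sketch of the justified-true-belief (JTB) schema of knowledge.
    -- Believes and Justified are supplied as abstract predicates over
    -- agents and propositions; JTB itself does not say how to meet them.
    def Knows (Agent : Type) (Believes Justified : Agent → Prop → Prop)
        (S : Agent) (P : Prop) : Prop :=
      P ∧ Believes S P ∧ Justified S P
    -- Read: S knows P exactly when P is true, S believes P, and S's
    -- belief in P is justified.

The testimony question is then whether an assertion A from T can put R in a position to satisfy all three conjuncts.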

Because T can be wrong, confused, or deliberately misleading with regard to the statement P, it would be safe to adhere to the theory that testimony results only in R's belief (rather than knowledge) that P. However, that theory is constantly belied by our standard acceptance of what people tell us. We say, "I know my birthday because my parents told me." We say, "Thanks to that guy on the corner over there, I know how to get to the restaurant." We treat testimony as knowledge without a second thought (except when we do have second or third thoughts, that is, doubts). And note that teaching is testimony, and we would be loath to disavow it as a source of knowledge.

The lively commentary on AI chatbots reflects wild enthusiasm for their apparent discourse skills, along with measured enthusiasm [Hoffman], along with caution, along with perplexity, along with dread. What we really want to know is, simply, when and where and how AI chatbots can help us, a question with which this author struggles. Here, I assume that the raw input comes from gigantic text corpora, and I ignore processing methods, commercial arrangements, copyright issues, and so forth. I ask what we need to understand about AI chatbots in terms of epistemology or its artificial parallel, formulating questions to ask by starting with inquiries from the study of testimony.

The basic question is, "Testimony—We use it all the time, but what is its role in knowledge?" Researchers come at this big question from many angles:

  1. Does T know P? Is that required for successful testimony? Is it good enough if T believes P?
  2. Is T an agent with some intention with regard to P? Or to R?
  3. Does this acquisition of knowledge reduce to deduction or induction, rather than constituting a separate category?
  4. Does R have to perform some epistemic act, such as integration into a mental model, to acquire the knowledge that P? [For example, Floridi, p. 283]
  5. Does testimony carry content in addition to the proposition P?

So we can now ask, "Is an AI chatbot a source of knowledge? What is its role? Is that role a new one?" We can come at this from several angles, some analogous to those above:

  1. Who is T? Does T have to be human, or a single human?
  2. Does the AI chatbot know that P?
    a. Does the consensus of contributions constitute justification? It would seem so; if that's not justification (outside of perception or logic), then what is?
    b. Does the AI chatbot believe that P, under a suitable definition of "believe"?
    c. Can we formulate a concept of veracity that captures "true according to lots of people who know," similar to what we use for dictionaries or encyclopedias (the notion of truth that features most strongly in daily life)?
  3. Is the AI chatbot an agent in its assembly of input, even if T is actually the collective of contributors?
  4. Does this version of inquiry reduce to deduction or (more likely) induction?
  5. Does the user have to process the AI chatbot's response in some way in order to gain benefit?
  6. Does an AI chatbot response carry content in addition to the proposition P?

Let's consider question 2.b. Sometimes we use "belief" in knowledge representation to apply to a proposition, maintained in some predicate form, in a database. That won't work here. The AI chatbot does not "keep track" of information about the world. Is there some other way for a computational device to express propositional commitment?
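To make the contrast concrete, here is a toy sketch in Python of belief as classical knowledge representation would have it: propositions maintained in predicate form and checked on demand. The names here (KnowledgeBase, assert_fact, believes) are invented for illustration and belong to no real system.

    # Toy belief store: propositions kept in predicate form, as in
    # classical knowledge representation. Purely illustrative.
    class KnowledgeBase:
        def __init__(self):
            # Each fact is a (predicate, arguments) pair the system is
            # explicitly committed to.
            self.facts = set()

        def assert_fact(self, predicate, *args):
            self.facts.add((predicate, args))

        def believes(self, predicate, *args):
            # Propositional commitment is just membership in the store.
            return (predicate, args) in self.facts

    kb = KnowledgeBase()
    kb.assert_fact("capital", "Wyoming", "Cheyenne")
    print(kb.believes("capital", "Wyoming", "Cheyenne"))  # True
    print(kb.believes("capital", "Wyoming", "Laramie"))   # False

An LLM-based chatbot maintains no such ledger; whatever commitment it has to P is diffused through weights that make one continuation more probable than another, which is exactly why the question of belief is hard even to pose for it.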

Let's take the last question, about additional content. In the case of the AI chatbot, we can answer this, "Yes!" The response tells us what a lot of people say on this topic. That's useful. Or, at least, it's useful, subject to the coherence and diversity of those people. Note that this feature, and others that depend on the training of the LLM, might be diluted by hand-coding or restricted input.

Other questions are left to the reader to contemplate and, this author hopes, to apply to open questions regarding the proper place of AI chatbots. These issues are just a juicy sample of the compelling linguistic and philosophical work on testimony, in which all of the elements in the definitions undergo energetic and disputatious articulation and analysis. And so it should be. This work addresses one of the great questions of human communication: How can we learn so much (or anything at all) from other people, fallible as that channel is? And how do AI chatbots expand the scope of that question?

References

[Floridi] Floridi, Luciano. 2011. The Philosophy of Information. Oxford University Press.

[Green] Green, Christopher R. 2023. Epistemology of Testimony. The Internet Encyclopedia of Philosophy, ISSN 2161-0002.

[Hoffman] Hoffman, Reid. 2023. Amplifying Our Humanity Through AI. John Templeton Foundation (from Greylock).

[Lackey] Lackey, Jennifer, and Ernest Sosa (eds.). 2006. The Epistemology of Testimony. Oxford University Press.

 

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.
