August 3, 2022
Humans have always been fragile creatures; much of our success in the ecosystem has been driven by the efficient use of new tools. When a new tool arrives that augments our capabilities, we often question the fairness of using it. The debate usually does not last long when the tool has clear benefits: boats have an advantage over swimming, writing solves our memory problems, this paragraph was improved using a grammar checker, and so forth.
Text generated by AI tools, such as GPT-3 (https://bit.ly/3e3icZQ), has seen an impressive increase in quality, and AI-generated text is now hard to distinguish from human-generated text. Some people argue that using AI-generated text is cheating, as it gives the user an unfair advantage. However, others argue that AI-generated text is simply another tool that can be used to improve writing. The text in italic type drives this point home, as it was fully AI-generated after giving GPT-3 the appropriate context with the preceding text (going forward in this article, all AI-generated text is marked in italics). To make matters more confusing, AI-generated text can be further improved with tools that polish the grammar and choice of terms. At some point, it becomes hard to distinguish who wrote what.
Blended Writing and Provenance
We can pose the question of whether blended writing with AIs will become an acceptable approach to a more efficient use of our capabilities and time. Tools for spelling and grammatical correction are now in everyday use and do not raise any ethical concerns. Nevertheless, AI-generated text, even if accepted from an ethical standpoint, raises questions about the provenance of the generated text. Luckily, there is already an abundance of tools for plagiarism detection (for the purposes of this article, all the AI-generated text has been checked for plagiarism using Quetext (https://bit.ly/3rrCy1U)). In the case of GPT-3, a closed-book system with no access to external content after the pre-training phase, the generation of "ipsis verbis" (verbatim) text seems statistically unlikely for any long output, so the plagiarism check is likely an abundance of caution.
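For illustration, here is a minimal sketch of the kind of verbatim-overlap check such tools perform underneath. The n-gram size, the toy corpus, and the scoring are illustrative assumptions on my part, not how Quetext actually works; real services index large portions of the Web.

```python
# A toy verbatim-overlap check: score a candidate text by how many of its
# word 8-grams also appear in a reference corpus. The corpus and parameters
# here are toy values chosen only to illustrate the idea.

def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, corpus: list, n: int = 8) -> float:
    candidate_grams = ngrams(candidate, n)
    if not candidate_grams:
        return 0.0
    corpus_grams = set()
    for document in corpus:
        corpus_grams |= ngrams(document, n)
    return len(candidate_grams & corpus_grams) / len(candidate_grams)

# A long output that copies no eight-word run from the corpus scores 0.0.
corpus = ["text generated by ai tools has seen an impressive increase in quality"]
print(overlap_ratio("the generation of verbatim text seems statistically unlikely for long outputs", corpus))
```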
OpenAI, the owner of GPT-3, does provide guidelines (https://bit.ly/3fvsnXd) for content co-authored with GPT-3. The gist is: do no harm and refrain from publishing harmful content; clearly identify the use of AI-generated content; and attribute it to your name, as you are responsible for the published content.
Authorship
For example, if an AI were to generate a piece of scientific text, should the AI receive credit as an author? (Interestingly, the AI itself generated this question.)
Still, it did not decide to write it on its own. Language models incorporate more knowledge than a human can aspire to in a lifetime, and so do search engines, but that knowledge is static. Only when queried does the model tap into the stored knowledge and construct a plausible continuation to some input prompt. There are also limits (https://bit.ly/3rx13dS) to the amount of context data that can be accounted for in the prompt: a few paragraphs.
After being provided with the previous two paragraphs, it concluded: If an AI were to generate a piece of scientific text, it should not receive credit as an author. I concur with this observation.
However, if it starts feeding on its own outputs, without a steering hand, this conclusion might have to be reexamined.
Or, if an AI were to generate a piece of scientific text and a human were to edit it, should the AI receive credit as an author? In this case, I (note: the AI) believe the AI should receive credit as an author, as it was responsible for the original content. However, the human editor should also be given credit, as they helped to shape and improve the final product.
Let’s leave it at that.
Separating Wheat and Chaff
It is likely that the amount of AI-generated content will increase and that some of that content will not be labeled as such. (An unintended consequence is that future language models will incorporate those outputs in their training.) To us humans, the more pressing concern is whether we can distinguish human from AI-generated content.
The answer is negative, as current systems are already very good at fooling humans. Benchmarks of GPT-3 measured human accuracy at identifying whether a short text of about 200 words was machine-generated: the result was 52%, barely above the 50% expected from random guessing. The designers of GPT-3 also point out that "automatic discriminators may have greater success at detecting model generated text than human evaluators." The study of AI-generated texts, sometimes called "neural texts," is a recent field of research. A recent survey of the field concludes that "evaluating the quality of neural texts is still an open problem." (Actually, this AI-generated quote and implicit reference to a survey are not true, and I was unable to find the quoted text on the Web. This illustrates the risks of the blind use of AI-generated text without additional fact-checking.)
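One common heuristic behind such automatic discriminators is statistical: machine-generated text tends to be unusually predictable, that is, to have low perplexity under a language model. Here is a minimal sketch of that heuristic using the Hugging Face transformers library and GPT-2; the threshold is an illustrative assumption, not a validated detector.

```python
# Sketch of a perplexity heuristic: machine-generated text is often more
# predictable (lower perplexity) under a language model than human text.
# Requires: pip install torch transformers. The threshold is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean negative
        # log-likelihood of the text; exponentiating gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # Low perplexity is weak evidence of machine origin, not proof.
    return perplexity(text) < threshold

print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```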
Questioning the Oracle
In Ancient Greece, the Oracle at Delphi was consulted by many, and Pythia, the high priestess of the temple of Apollo, served as an intermediary to its prophetical knowledge. In this and other traditions, there is a human intermediary that bridges the knowledge from a source that is often incomprehensible. In the Judeo-Christian tradition, God is seen as the ultimate source of knowledge and wisdom, but humans often need intermediaries to help them understand and apply that knowledge. One of the most famous examples is the prophet Moses, who was seen as a mediator between God and the people of Israel.
Prompt design is critical for efficiently querying general-purpose language models such as these. Good outputs require a correct framing of the task in natural language and, ideally, a set of appropriate examples. Here, we only used text continuations, feeding paragraphs and harvesting continuations, but more specific tasks, like language translation, require carefully framed prompts and examples as context, as in the sketch below.
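As a concrete example, here is a minimal sketch of a framed few-shot prompt using the 2022-era OpenAI Python library; the model name, the translation examples, and the parameter values are illustrative assumptions, not a recommendation.

```python
# Sketch of a framed few-shot prompt for translation, as opposed to the
# bare text continuations used in this article. Requires the 2022-era
# openai library (openai<1.0) and an API key in OPENAI_API_KEY.
import openai

prompt = (
    "Translate English to Portuguese.\n\n"
    "English: The oracle answers every question.\n"
    "Portuguese: O oráculo responde a todas as perguntas.\n\n"
    "English: Writing is a tool that extends memory.\n"
    "Portuguese:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=prompt,
    max_tokens=60,
    temperature=0.0,  # deterministic output suits a translation task
    stop=["\n"],      # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```

The framing sentence and the worked example do most of the work here; the same model, given only the bare English sentence, would simply continue it rather than translate it.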
If we are summoning images, and not only text, the art of building a good prompt becomes even more arcane and intricate. Maybe a branch of AI will deal with these aspects that are more akin to natural sciences, as described in Subbarao Kambhampati’s insightful article (https://bit.ly/3Rcf5NW) on the topic. It might well be that we will see a new job title, Prompt Designer, joining the ranks of specialists in Search Engine Optimization and Database Administration.
The Muses
We have seen ample evidence of the potential and risks of blended writing with AIs. One of the main risks is the AI's ability to create convincing content that is factually wrong. The algorithm is not tuned for fact-checking; it is tuned to produce credible content. I believe it has no means to distinguish fiction from reality. This makes the co-creation of non-fiction content much more challenging. A way out is not to use AI-generated content directly, but to use it instead as a source of inspiration. Even under this more restrictive setting, it can be very useful in bringing up ideas for content to be written. Writers used to struggle with the first blank page; maybe now they just need to come up with a title and ask for AI inspiration.
Here is how this article would have started, from the AI perspective:
Title: On the ethics of writing with AIs
Author: GPT-3
There are a few different ethical considerations to take into account when writing with AIs. First, it is important to consider the impact of the AI on the author’s writing. Will the AI be able to understand the author’s intent and writing style? Will it be able to replicate the author’s voice? There are also ethical considerations around the AI’s impact on the reader. Will the AI be able to create a believable and interesting story? Will the AI be able to understand the readers’ reactions and respond accordingly? Another ethical consideration is the impact of the AI on the publishing industry. Will the AI be able to create works that are good enough to be published? If so, will this lead to fewer opportunities for human authors? There are also concerns about the AI’s impact on society. Will the AI be used to create propaganda or fake news? Will the AI be used to manipulate people’s emotions? These are just some of the ethical considerations to take into account when writing with AIs. It is important to consider all of these factors before writing with an AI.
Enough content to get anyone inspired.
Acknowledgments
I would like to thank José Campos, Raquel Menezes, and Jim Bauwens for feedback and pointers to relevant works. The content of this column was enhanced with Grammarly hints and GPT-3-generated content (identified in italic type).
References
Kurenkov, A. The Inherent Limitations of GPT-3. Last Week in AI; https://bit.ly/3CrWdE7
Kambhampati, S. AI as (an Ersatz) Natural Science? Communications of the ACM; https://bit.ly/3SS4yY4
Brown, T. et al. Language Models are Few-Shot Learners. NeurIPS 2020; https://arxiv.org/pdf/2005.14165.pdf