Science pollutes. In its successful understanding of phenomena, science allows us to solve problems by concentrating on some of their characteristics to find optimal solutions along those chosen characteristics. It provides analytical and precise dedicated solutions along a particular set of dominant attributes of interest, where all other criteria are set aside. The adoption of such dedicated one-sided solutions that can maximize the “profit” in one area of interest can lead to pollution, namely a cumulative adverse effect on other aspects of the problem that are collaterally related to the focus of the dedicated solution. We therefore have environmental pollution from maximizing the rate of industrial production, or the social pollution of poverty in parts of the population due to a sterile maximization of economic growth. These causes of pollution emerge out of a scientific understanding of the world (physical or socioeconomic) through which it is possible to find and engineer solutions dedicated to a particular area of interest.
In effect, such dedicated solutions are artificial. They apply only in idealized problem settings6 where we can indeed ignore aspects of the problem of no or minor interest. Nevertheless, in the real world, over time the cumulative side effects of such idealized solutions can be significantly adverse. To give an explicit example, modern transportation artifacts are based on dedicated solutions that ideally optimize the time of travel. These solutions ignore the disproportionately large amount of energy they require compared with natural forms of transportation. And although this relatively large amount of energy is small for each isolated case of travel, in time it has an adverse cumulative effect on the natural balance of our environment and brings about the polluting effect of climate change.
Admittedly, this is an abstract and simplistic account of pollution. Nevertheless, it gives us a sufficient basis to consider the question of whether artificial intelligence (AI) has or will have polluting effects of its own, beyond any usual polluting effects. Will AI pollute our mental world and if so, in what way? Can we prevent or at least mitigate any such pollution from AI?
Polluting AI
As we might expect, it is quite challenging to foresee what forms the mental pollution of AI could take. A simple case of pollution by AI, which is already visible and much discussed, is that of the propagation of biases contained in the training data used when building AI systems. By concentrating on the primary concern of delivering an optimal result of some sort, for example, the best shortlist of candidates or the most suitable limit for an individual’s credit card, such dedicated solutions downgrade other criteria, such as fairness and other ethical requirements. The ethicacy of these systems, that is, their efficacy in achieving ethical behavior, is typically not an a priori matter of concern, and systems can fall into cases of poor ethical quality where biases are perpetuated and exacerbated.
Another simple and well-known example of a polluting effect of AI is the case of systems whose operation is tightly dependent on strong profiles of their human users. By concentrating on one aspect of the problem, namely that of matching their solutions as best they can with the personal interests of the user, these dedicated solutions appear to be intelligent and valuable. But in the long run the success (and indeed correctness) of such dedicated solutions can have the adverse effect of enclosing the human users within their current interests and promoting a cultural habit of personalized recommendation. These effects create a bias against broadening one’s interests, thus polluting the mental world of the individual.
In an analogous way, the undoubtedly important systems of large language models (LLMs) concentrate their solutions on the primary criterion of forming outputs that conform to the statistical distributions in their (context-sensitive) training data, downgrading other criteria. For example, at least in their early stages, the criterion of being able to explain their output by some form of reasoning that lies underneath the appearance of the output produced was, and in a certain way still is, considered secondary. In effect, these systems learn many results of reasoning that exist in their training data without learning to perform the reasoning themselves.
Can this appearance of intelligent solutions without an underlying justification for the solutions have, over time, a polluting effect on the mental world of human users? One possibility could be that of encouraging a culture toward dogmatic or doctrinaire thinking, at the expense of dialectical and critical thinking about a problem and its alternative solutions. The human population has already developed an automation bias, where people more readily accept a piece of information from a machine. The widespread use of AI systems developed without a primary consideration of accountable explainability as an integral part of their design and problem solving could have the polluting effect of nudging people toward superficial thinking, relying more on outside sources, wise seers of information, to provide ready-made solutions without scrutiny, and thus edging us toward dogmatic thinking.
In effect, AI could lead to a human culture of correct thinking over free thinking, where human inquiry relies on undisputed correct systems to such an extent that we are biased against seeking alternative solutions and thinking differently for new perspectives. This could have the adverse effect, significant from the evolutionary point of view, of stifling the variation in human thinking. This polluting effect on the logo-diversity of the human mental world could be accentuated by the fact that AI machines, at least the machines based on LLMs for artificial general intelligence (AGI), are, by comparison with human minds, clones of each other. Their sphere of experience is limited to the essentially common realm of digital information in cyberspace, which is compressed under essentially the same architecture, one that is relatively simple in comparison with the human brain. Cloned thinking then feeds back to accentuate dogmatic correct thinking over free, plural thinking.
But why should AI pollute? We know that the energy required to create today’s AI machines, such as the AI systems of LLMs, is significantly greater than the energy consumed by a child’s brain as it grows. This energy difference tells us, as in the case of transportation mentioned above, that the artificial minds of these machines are likely to be inherently different from human minds. Put simply, artificial intelligence is artificial: its form of intelligence is artificial compared to human intelligence, although it may not appear so. Thus the use of AI, as a foreign or alien form of intelligence, is likely to be a source of contamination: the disparity between the artificial intelligence of AI and that of natural human intelligence could eventually become toxic in the natural world of the human mind.
What to Do?
How can we then proactively address the potential polluting effects of AI while allowing it to achieve its promised potential of catalyzing positive societal reforms in many different areas? The simple answer is to humanize AI. This will require shifting the emphasis away from dedicated solutions that optimize a main operating requirement, and designing from the start systems that operate under holistic solutions to problems: solutions that balance several aspects of the problem, much in the same spirit that we are now applying to the problem of urban building, where different environmental and societal aspects need to be considered alongside the concern of simply maximizing the capacity and use of the building.
Holistic solutions for AI systems would consider the general need for a high level of cognitive and ethical competency, to allow these systems to integrate naturally into the human mental realm. Designing AI systems under holistic solutions would thus help us address the polluting effects of AI. For example, to address the enclosure of users in a filter bubble of their current personal interests, as previously mentioned, the design of a system can include balanced provisions for encouraging the users to diversify their interests.5 But applying holistic solutions is not a simple matter of expanding our design methods. This is because analytical holistic solutions to multicriteria problems in open environments rarely exist, and when they do, they are typically idealized and brittle in dynamic settings. Hence it is difficult to fully design and realize AI systems under holistic solutions. We are thus led to adopt a hybrid approach of model-centric design with data-centric training. In this approach, the explicit design of systems, for example to conform to moral norms and regulations, would be complemented with training on data of holistic operation, for example, data reflecting a normative balance between good practical utility and ethically aligned operation. The aim is to train systems to learn to follow and sustain a holistic approach of seeking, as Aristotle would put it, the middle ground, and to maintain through continuous training a habitual holistic operation under some normative guidance.
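As one minimal illustration of such a balanced provision, the sketch below (all names, weights, and data are hypothetical, not drawn from any particular system) re-ranks recommendations by trading off relevance to a user's current interests against topical diversity, rather than optimizing relevance alone:

```python
# Minimal sketch (hypothetical): re-rank recommendations by balancing
# relevance to current interests against topical diversity, instead of
# optimizing relevance alone.

def rerank(candidates, seen_topics, alpha=0.7):
    """candidates: list of (item, topic, relevance); alpha weighs relevance vs. diversity."""
    ranked, covered = [], set(seen_topics)
    pool = list(candidates)
    while pool:
        # Score each remaining item: weighted relevance plus a bonus for a topic not yet covered.
        best = max(pool, key=lambda c: alpha * c[2] + (1 - alpha) * (c[1] not in covered))
        ranked.append(best)
        covered.add(best[1])
        pool.remove(best)
    return ranked

if __name__ == "__main__":
    items = [("a", "sports", 0.9), ("b", "sports", 0.85),
             ("c", "science", 0.6), ("d", "arts", 0.5)]
    print(rerank(items, seen_topics={"sports"}))
```

Even such a crude balance changes the ordering: the science and arts items surface ahead of where a pure relevance ranking would place them.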
At a foundational level, adopting a holistic approach that moves us away from dedicated solutions is a move away from the aim to produce super-intelligent or perfect systems. This shift requires that we recognize and accept the difficulty of the task of building naturally intelligent systems that are non-artificially intelligent. As first recognized by Turing, there can be no perfect intelligent systems operating in open environments: intelligent systems would be fallible. This impossibility of perfection points us away from all-or-nothing optimal solutions towards satisficing solutions: solutions akin to those naturally exercised by evolutionary systems (as studied by H.A. Simon8), whose quality is sufficient for the adaptive survival of the system. In the case of AI systems, satisficing solutions are holistic solutions that are sufficiently good for the user to work with and take matters forward.
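A toy sketch of the contrast (the names and the aspiration threshold are invented for illustration): a satisficer accepts the first candidate whose quality meets an aspiration level, whereas an optimizer must examine every candidate to find the single best one.

```python
# Toy sketch (hypothetical names and threshold): satisficing in Simon's sense
# accepts the first candidate that is good enough, rather than exhaustively
# searching for the optimum.

def satisfice(candidates, quality, aspiration):
    for c in candidates:              # candidates arrive over time, each at some cost
        if quality(c) >= aspiration:
            return c                  # good enough to work with and take matters forward
    return None                       # no acceptable candidate found

def optimize(candidates, quality):
    return max(candidates, key=quality)   # requires examining every candidate

if __name__ == "__main__":
    options = [0.4, 0.72, 0.81, 0.95, 0.99]
    print(satisfice(options, quality=lambda x: x, aspiration=0.8))  # 0.81
    print(optimize(options, quality=lambda x: x))                   # 0.99
```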
Well-designed AI systems would allow for the possibility of being in a dilemma, where they can recognize the limits of their ability in specific cases. Instead of aiming for perfection, systems should be able to provide a thorough explanatory analysis of the proposed solutions and their plausible alternatives. The intelligence of systems would be judged by how usefully they can provide sufficiently good solutions. This is indeed the essential challenge of the new area of Explainable AI (XAI): providing not simply an a posteriori, passive role for explanations of a committed choice of solution, but rather explanations having an active role in forming solutions that better inform users toward a (joint) final decision. LLMs are now recognizing this challenge of XAI by incorporating, on top of their language processing, “packaged” forms of reasoning, for example, goal-oriented reasoning.
At a more technical level, accepting that there are no perfect intelligent systems leads us to recognize that strict and absolute logical reasoning is not appropriate for AI. Definitive logic, which has served as the foundation, the calculus, for computer science,2 needs to be replaced by a new, more flexible logical calculus.1 We must use frameworks that are closer to flexible plausible reasoning. The recent development of generative AI, with its underlying basis of probability theory, follows this move. But probability logics are more appropriate for low-level recognition problems, for example, problems of perception based on cached experience. Furthermore, such probability-based forms of thinking do not offer, at least directly through the probability logic itself, a cognitively natural form of explicability for their conclusions.
For higher-level and synthetic cognitive thinking, more natural forms of plausible and explainable reasoning are required. The normative condition for such higher-level cognitive logics cannot simply be that of maximal probability. Instead, it should allow a natural form of non-determinism, such as in the case of argumentative reasoning with its strong link to the nature of human reasoning at large.7 In argumentation logic the flexible normative condition of the existence of a coherent reasonable justification delimits the validity of reasoning. Plausible conclusions are thus statements that can be reasonably supported and whose degree of definiteness depends on the plausibility of opposing conclusions.
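As a small sketch of this idea (the arguments and attack relation below are invented for illustration), abstract argumentation in the style of Dung deems an argument acceptable when every argument attacking it is itself defeated; the usual fixed-point iteration computes the grounded set of acceptable arguments:

```python
# Minimal sketch (illustrative framework): grounded semantics of an abstract
# argumentation framework. A conclusion is plausible if it is supported by an
# argument that survives all attacks from opposing arguments.

def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            attackers = {x for (x, y) in attacks if y == a}
            # Accept a if every attacker has already been defeated.
            if a not in accepted and attackers <= defeated:
                accepted.add(a)
                changed = True
            # Defeat a if some accepted argument attacks it.
            if a not in defeated and attackers & accepted:
                defeated.add(a)
                changed = True
    return accepted

if __name__ == "__main__":
    args = {"A", "B", "C"}
    atk = {("A", "B"), ("B", "C")}        # A attacks B, B attacks C
    print(grounded_extension(args, atk))  # {'A', 'C'}: B is defeated, so C is reinstated
```

The degree of definiteness of a conclusion then depends on whether the arguments opposing it can themselves be defended, which is the dialectical bookkeeping the loop above iterates over.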
Building AI systems that exhibit natural cognitive forms of thinking, thus moving AI from artificial brains to more complete artificial minds, would require the integration of sub-symbolic and symbolic computation. Neural-symbolic AI3 aims at this, trying to understand how cognitive reasoning can emerge from within the neural architecture of a system, as is the case for the human mind. Although this emergent fusion would be the ultimate goal, it is useful, at this stage, to complement this effort with studies of integration of separate neural and symbolic components that feed into each other within an overall cognitive architecture of a system. For example, symbolic reasoners could provide a high-level logical view of the underlying language processing of LLMs, in effect capturing different possible logical forms of natural language.4 Furthermore, symbolic AI systems can be employed to train systems with reasoning data: data that is not simply an input-output function but one that also contains the justification/reasoning that connects them. Such a form of reason-based training would not only help the AI system learn to explain its decisions but would also have a positive effect on the quality of learning, helping to achieve a better compression of the data.
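One way to picture such reasoning data (the record fields and rule phrasing here are assumptions made for the sketch) is as training records that carry not only an input-output pair but also the justification linking them, produced by a symbolic reasoner and then serialized for the learner:

```python
# Sketch (hypothetical record format): reason-based training data pairs each
# input-output example with the justification that connects them, so the
# learner is exposed to the reasoning and not only to its results.

from dataclasses import dataclass
from typing import List

@dataclass
class ReasoningExample:
    facts: List[str]          # input: what is given
    conclusion: str           # output: what is derived
    justification: List[str]  # the reasoning steps linking facts to conclusion

def to_training_text(ex: ReasoningExample) -> str:
    """Serialize a record into a single training string for a language model."""
    return ("Facts: " + "; ".join(ex.facts) + "\n"
            "Reasoning: " + " -> ".join(ex.justification) + "\n"
            "Conclusion: " + ex.conclusion)

if __name__ == "__main__":
    ex = ReasoningExample(
        facts=["bird(tweety)", "rule: birds fly unless shown to be penguins"],
        conclusion="flies(tweety)",
        justification=["tweety is a bird",
                       "no evidence that tweety is a penguin",
                       "the default rule applies"],
    )
    print(to_training_text(ex))
```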
How can we use symbolic systems to produce reasoning data for sub-symbolic AI systems? Could this be a way to ethically train systems through data of normative guidance rather than exclusively forcing norms on their design? In general, how can we cognitively train AI systems so as to endow them with human-like and humanly aligned thinking characteristics, such that, together with a human-like interface, they learn and evolve to be explainable, contestable, and debatable entities?
This need for AI systems to be self-accountable, preferably in a human-like fashion, is recognized by all parties involved in AI. The general thrust to address this need is to regulate the systems through a posteriori tests, and governments in both the U.S. and the E.U. are legislating for this. But perhaps this is not strong enough. Following the concern expressed in a recent open letter by many AI researchers and practitioners asking for a slowing down of the development of AI, especially that of AGI for which there does not seem to be a pressing need, we can introduce pre-release trials for systems before their open and large-scale deployment: trials analogous to those used in medicine for the development of drugs and vaccines, where life-saving deployment is allowed only after establishing that there is little chance of long-term polluting side effects on human health.
What will we be testing in such medical-style trials for AI systems? The details would depend on the type and purpose of the system. Still, a general overall requirement will be to test their natural quality of explanatory reasoning, since having this can be seen as the starting point for testing that systems are ethically aligned with the human moral values they are expected to follow. We would test that a system is able to clearly explicate the information, facts, or beliefs that it holds, from which its results follow, and that the system’s reasoning is internally coherent, avoiding self-contradictions. A system should be able to recognize by itself that it is inconsistent in its argument, analogously to the cognitive dissonance that humans feel when they are found to be self-contradictory. In general, these tests would aim to evaluate the extent to which systems are forthcoming about their beliefs or conceived truths and how these connect to their results in a coherent, reasonable way.
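A deliberately tiny sketch of one such check (the representation of beliefs as literals with a "not " prefix is an assumption of the sketch): scan the beliefs a system states in its explanation for direct contradictions, the crudest form of incoherence a trial might screen for.

```python
# Tiny sketch (assumed literal-based belief representation): flag direct
# contradictions among a system's stated beliefs, the simplest coherence
# check a pre-release trial might run on its explanations.

def find_contradictions(beliefs):
    """beliefs: strings, with negation written as a leading 'not '."""
    stated = set(beliefs)
    conflicts = set()
    for b in stated:
        negation = b[4:] if b.startswith("not ") else "not " + b
        if negation in stated:
            conflicts.add(tuple(sorted((b, negation))))
    return sorted(conflicts)

if __name__ == "__main__":
    explanation = ["loan_approved", "income_verified", "not income_verified"]
    print(find_contradictions(explanation))
    # [('income_verified', 'not income_verified')]
```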
AI based on holistic rather than dedicated solutions is necessarily a universal matter that will need to be addressed in a highly interdisciplinary endeavor. To tightly integrate the sciences with the humanities in a deep synthetic way, as demanded by truly cognitive and ethical AI, is one of the major challenges of the 21st century. Given the extreme state of specialization that all disciplines are facing today, this is indeed a difficult task. Disciplines are based on an ever-deepening search for dedicated knowledge. “Good AI,” on the other hand, should seek the holistic consilience of knowledge10 for its basis.
Conclusion
Is it pollution, or is it a natural development of things, where the human mind adapts to its new environment and remains in control? The possibility of AI-driven mental pollution is something whose seriousness we need to consider and judge, just as we are now realizing that we need to consider the severity of environmental pollution. Some say that AI will eventually replace the human species. But can the human mind leapfrog the crafting of itself by evolution over the millennia and create something that surpasses it? The real danger, if there is one, is that AI pollution affects our mental condition to such an extent that humanity evolves in forms that would not be sustainable, at least not in a way that we would wish for its future.
The anxiety of whether we can harness AI in a positive way, without inhumane short- and long-term effects, is prevalent today (see, for example, Suleyman9). We all, especially AI researchers, have a responsibility to consider the question of the possible pollution of AI and to work together in society to proactively address such possible dangers.