Despite major successes in artificial intelligence (AI) and deep learning, critical assessments have been made of current deep learning methods.8 Deep learning is data hungry, has limited knowledge-transfer capabilities, does not adapt quickly to changing tasks or distributions, and insufficiently incorporates world or prior knowledge.1,3,8,14 While deep learning excels on natural language processing and vision benchmarks, it often underperforms in real-world applications. Deep learning models have been shown to fail on new data, in new applications, in deployments in the wild, and under stress tests.4,5,7,13,15 Therefore, practitioners harbor doubts about these models and hesitate to employ them in real-world applications.
Current AI research has tried to overcome the criticisms and limitations of deep learning. AI research, and machine learning in particular, aims at a new level of AI, a "broad AI," with considerably enhanced and broader capabilities for skill acquisition and problem solving.3 We contrast "broad AI" with "narrow AI," the kind of AI system currently deployed. A broad AI considerably surpasses a narrow AI in the following essential properties: knowledge transfer and interaction, adaptability and robustness, abstraction and advanced reasoning, and efficiency (as illustrated in the accompanying figure). A broad AI is a sophisticated and adaptive system, which successfully performs any cognitive task by virtue of its sensory perception, previous experience, and learned skills.
Figure. Hierarchical model of cognitive abilities of AI systems.3
To improve adaptability and robustness, a broad AI utilizes few-shot learning and self-supervised learning with contrastive learning, and processes sensory inputs using context and memory. Few-shot learning trains models from only a small amount of data by exploiting prior knowledge or previous experience. It has a plethora of real-world applications, for example, when learned models must quickly adapt to new situations: new customers, new products, new processes, new workflows, or new sensory inputs.
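To make the mechanism concrete, the following is a minimal sketch in the style of prototypical networks, one common few-shot approach (not necessarily the specific method referenced here); the encoder and toy data are placeholder assumptions.

```python
# Minimal few-shot classification sketch in the style of prototypical
# networks; the encoder and the 2-D toy data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a pretrained encoder supplying prior knowledge;
    # here it is simply the identity on toy 2-D features.
    return x

# 2-way, 3-shot episode: three labeled examples per new class.
support = {
    "class_a": rng.normal(loc=0.0, scale=0.3, size=(3, 2)),
    "class_b": rng.normal(loc=2.0, scale=0.3, size=(3, 2)),
}

# One prototype per class: the mean of its embedded support examples.
prototypes = {c: embed(xs).mean(axis=0) for c, xs in support.items()}

def classify(query):
    # Assign the query to the class with the nearest prototype.
    dists = {c: np.linalg.norm(embed(query) - p) for c, p in prototypes.items()}
    return min(dists, key=dists.get)

print(classify(np.array([0.1, -0.2])))  # -> class_a
print(classify(np.array([1.9, 2.1])))   # -> class_b
```

The point of the sketch is that only the class prototypes change for a new task; the prior knowledge lives in the (here trivial) encoder, so adaptation needs only a handful of labeled examples.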
With the advent of large corpora of unlabeled data in vision and language, self-supervised learning based on contrastive learning became very popular. Either views of images are contrasted with views of other images, or text descriptions of images are contrasted with text descriptions of other images. Contrastive Language-Image Pre-training (CLIP)10 yielded very impressive results in zero-shot transfer learning. The CLIP model has the potential to become one of the most important foundation models.2 A model with high zero-shot transfer performance is highly adaptive and very robust; it is thus expected to perform well when deployed in real-world applications and to be trusted by practitioners.
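The following is a minimal sketch of a symmetric contrastive (InfoNCE-style) objective of the kind CLIP is trained with; the embeddings are random placeholders standing in for real image and text encoders, and the batch size, dimension, and temperature are illustrative assumptions.

```python
# Sketch of a CLIP-style symmetric contrastive objective; the encoders
# are random placeholders, not real image/text networks.
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8

# Placeholder embeddings for a batch of matched image-text pairs.
img = rng.normal(size=(batch, dim))
txt = img + 0.1 * rng.normal(size=(batch, dim))  # paired text ~ its image

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

img, txt = normalize(img), normalize(txt)
temperature = 0.07
logits = img @ txt.T / temperature  # pairwise cosine similarities

def cross_entropy(logits, targets):
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

targets = np.arange(batch)  # the i-th image matches the i-th text
# Symmetric loss: match images to texts and texts to images.
loss = 0.5 * (cross_entropy(logits, targets) + cross_entropy(logits.T, targets))
print(f"contrastive loss: {loss:.3f}")
```

Minimizing this loss pulls matched image-text pairs together and pushes mismatched pairs apart, which is what later enables zero-shot classification by comparing an image embedding against text embeddings of candidate labels.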
A broad AI should process its input using context and previous experiences. Conceptual short-term memory9 is a notion in cognitive science, which states that humans, when perceiving a stimulus, immediately associate it with information stored in long-term memory. Like humans, machine learning and AI methods should "activate a large amount of potentially pertinent information,"9 which is stored in episodic or long-term memories. Very promising are modern Hopfield networks,11,12,16 which reveal the covariance structures in the data, thereby making deep learning more robust. If features co-occur in the data, modern Hopfield networks amplify this co-occurrence in the retrieved samples. Modern Hopfield networks are a remedy for learning methods that suffer from the "explaining away" problem: confirming one cause of an observed event prevents the method from considering alternative causes. Explaining away is one reason for shortcut learning5 and the Clever Hans phenomenon.7 Modern Hopfield networks avoid explaining away via the enriched covariance structure.
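As a concrete illustration, the following sketches the retrieval update of modern Hopfield networks, new_state = X softmax(beta * X^T state), with the stored patterns as columns of X; the toy patterns, beta, and step count are illustrative assumptions.

```python
# Minimal sketch of modern Hopfield network retrieval: each update
# pulls the state toward a soft combination of stored patterns.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Store four random patterns as the columns of X.
dim, n_patterns = 16, 4
X = rng.normal(size=(dim, n_patterns))

def retrieve(state, beta=8.0, steps=3):
    # Retrieval update: new_state = X softmax(beta * X^T state).
    # Co-occurring features in the stored patterns reinforce each other,
    # which enriches the covariance structure of what is retrieved.
    for _ in range(steps):
        state = X @ softmax(beta * X.T @ state)
    return state

# Query with a noisy version of the first stored pattern.
query = X[:, 0] + 0.5 * rng.normal(size=dim)
retrieved = retrieve(query)
print(np.corrcoef(retrieved, X[:, 0])[0, 1])  # should be close to 1.0
```

With a large beta, the softmax concentrates on the single best-matching pattern; with a smaller beta, the retrieved state blends several stored patterns, which is how co-occurrence gets amplified.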
Graph neural networks (GNNs) are a very promising research direction, as they operate on graph structures in which nodes and edges are associated with labels and characteristics. GNNs are the predominant models of neural-symbolic computing.6 They describe the properties of molecules, simulate social networks, and predict future states in physical and engineering applications with particle-particle interactions.
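To illustrate the basic operation, here is a minimal sketch of one message-passing layer with mean aggregation over neighbors; the graph, features, and weights are toy assumptions, and real GNN variants differ in their aggregation and update functions.

```python
# Minimal sketch of one GNN message-passing step with mean aggregation.
import numpy as np

rng = np.random.default_rng(0)

# A small undirected graph on 4 nodes, given as an adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(4)              # add self-loops
deg = A_hat.sum(axis=1)
A_norm = A_hat / deg[:, None]      # mean aggregation over neighbors

H = rng.normal(size=(4, 3))        # node features (e.g., atom attributes)
W = rng.normal(size=(3, 3))        # learnable weights (random here)

# One layer: aggregate neighbor features, transform, apply nonlinearity.
H_next = np.maximum(0.0, A_norm @ H @ W)   # ReLU(A_norm H W)
print(H_next.shape)  # (4, 3): one updated embedding per node
```

Stacking such layers lets information propagate over longer paths in the graph, which is what allows a GNN to capture, for example, how distant atoms in a molecule influence each other.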
Europe’s Opportunity for a Broad AI
The most promising approach to a broad AI is a neuro-symbolic AI, that is, a bilateral AI that combines methods from symbolic and sub-symbolic AI. In contrast to other regions, Europe has strong research groups in both symbolic and sub-symbolic AI, and therefore has an unprecedented opportunity to make a fundamental contribution to the next level of AI: a broad AI.
AI researchers should strive for a broad AI with considerably enhanced and broader capabilities for skill acquisition and problem solving by means of bilateral AI approaches that combine symbolic and sub-symbolic AI.