A survey on neural-symbolic learning systems
More advanced knowledge-based systems, such as Soar, can also perform meta-level reasoning, that is, reasoning about their own reasoning in terms of deciding how to solve problems and monitoring the success of problem-solving strategies. As the limitations of weak, domain-independent methods became increasingly apparent,[41] researchers from all three traditions began to build knowledge into AI applications.[42][6] The knowledge revolution was driven by the realization that knowledge underlies high-performance, domain-specific AI applications. Note the similarity to the propositional and relational machine learning we discussed in the last article. Interestingly, the simple logical XOR function is still challenging to learn properly even in modern-day deep learning, which we will discuss in the follow-up article. Thus, while the hierarchical levels of abstraction are typically represented by the hidden layers of neural networks, they may also be thought of as “complicated propositional formulae re-using many sub-formulae” (quotation from the abstract of “Learning Deep Architectures for AI” by Y. Bengio [15]).
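To make the XOR point concrete, here is a minimal sketch (not from any cited work) of a tiny NumPy network: a linear model cannot separate XOR, but one hidden layer of sigmoid units fits it after a few thousand gradient steps. The architecture, learning rate, and seed are illustrative choices only, and with a different seed the fit may occasionally get stuck.

```python
import numpy as np

# XOR truth table: not linearly separable, so a single-layer
# perceptron cannot fit it; one hidden layer suffices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))  # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (mean binary cross-entropy; the sigmoid
    # derivative cancels neatly in the output-layer gradient)
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 3))  # close to [[0], [1], [1], [0]]
```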
Historically, the two encompassing streams of symbolic and sub-symbolic approaches to AI evolved in a largely separate manner, with each camp focusing on selected narrow problems of its own. Originally, researchers favored the discrete, symbolic approaches to AI, targeting problems ranging from knowledge representation, reasoning, and planning to automated theorem proving. The ultimate goal, though, is to create intelligent machines able to solve a wide range of problems by reusing knowledge and generalizing in predictable and systematic ways. Such machine intelligence would be far superior to current machine learning algorithms, which are typically aimed at specific narrow domains. But neither the original symbolic AI that dominated machine learning research until the late 1980s nor its younger cousin, deep learning, has been able to fully deliver intelligence of that kind. Semantic networks, conceptual graphs, frames, and logic are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language.
As in so many other respects, deep learning has had a major impact on neuro-symbolic AI in recent years. This manifests, on the one hand, in an almost exclusive emphasis on deep learning approaches as the neural substrate, whereas earlier neuro-symbolic AI research often deviated from standard artificial neural network architectures [2]. On the other hand, the deep learning context appears to have led to a renewed realization of the importance of neuro-symbolic AI research, and consequently a significant increase in research papers, meetings, and prominent public appearances of the topic [2], as well as discussion of the topic in public media [4]. This increase in activity is probably primarily due to the fact that advances in deep learning now make it possible to address challenge problems in neuro-symbolic AI that were quite out of reach before, adding to the field's attractiveness for research and applications. However, we may also be seeing indications of a realization that pure deep-learning-based methods are likely to be insufficient for certain types of problems that are now being investigated from a neuro-symbolic perspective.
For each taxonomy, we provide detailed descriptions of the representative methods, summarize the corresponding characteristics, and give a new understanding of neural-symbolic learning systems. Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI but since improved by deep learning approaches. In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, the vector components are interpretable as concepts named by Wikipedia articles.
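As an illustration of the document vectors mentioned above, the following sketch derives LSA-style representations with scikit-learn (assumed installed); the toy corpus and the choice of two latent components are invented for the example.

```python
# Minimal LSA sketch: a TF-IDF term-document matrix factorized by
# truncated SVD, yielding one dense vector per document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "symbolic reasoning with first-order logic",
    "logic programs represent sentence meanings",
    "neural networks learn vector representations",
    "deep learning learns document vectors",
]

tfidf = TfidfVectorizer().fit_transform(docs)       # sparse term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # latent semantic space
doc_vectors = lsa.fit_transform(tfidf)              # dense document vectors

# Documents about logic land near each other, as do the neural ones.
print(cosine_similarity(doc_vectors))
```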
Python, a language now ubiquitous in AI work, includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming that includes metaclasses. NSI has traditionally focused on emulating logic reasoning within neural networks, providing various perspectives on the correspondence between symbolic and sub-symbolic representations and computing. Historically, the community targeted mostly analysis of this correspondence and of theoretical model expressiveness, rather than practical learning applications (which is probably why it has remained marginal to mainstream research).
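For readers unfamiliar with those language features, here is a small self-contained illustration (unrelated to any specific NSI system) of a higher-order function and a toy metaclass; all names are made up for the example.

```python
# A higher-order function: compose returns a new function f∘g.
from functools import reduce

def compose(f, g):
    return lambda x: f(g(x))

inc = lambda x: x + 1
double = lambda x: x * 2
print(compose(inc, double)(10))                # inc(double(10)) = 21
print(reduce(compose, [inc, inc, double])(3))  # inc(inc(double(3))) = 8

class Registry(type):
    """Toy metaclass: classes created with it register themselves."""
    classes = []
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        Registry.classes.append(cls)
        return cls

class Rule(metaclass=Registry): ...
class Fact(metaclass=Registry): ...
print([c.__name__ for c in Registry.classes])  # ['Rule', 'Fact']
```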
While the interest in the symbolic aspects of AI from the mainstream (deep learning) community is quite new, there has actually been a long stream of research on this very topic within a rather small community called Neural-Symbolic Integration (NSI) for learning and reasoning [12]. Among the main advantages of this logic-based approach to ML have been its transparency to humans, deductive reasoning, the inclusion of expert knowledge, and structured generalization from small data. Research in neuro-symbolic AI has a very long tradition, and we refer the interested reader to overview works such as Refs [1,3] that were written before the most recent developments.
Note the similarity to the use of background knowledge in the Inductive Logic Programming approach to Relational ML here. Perhaps surprisingly, the correspondence between the neural and logical calculus has been well established throughout history, owing to the discussed dominance of symbolic AI in the early days. The General Problem Solver (GPS) cast planning as problem-solving and used means-ends analysis to create plans. The soft reads and writes of differentiable memory-augmented networks form a bottleneck when implemented on conventional von Neumann architectures (e.g., CPUs and GPUs), especially for AI models demanding millions of memory entries. Thanks to the high-dimensional geometry of the resulting vectors, their real-valued components can be approximated by binary, or bipolar, components, taking up less storage. More importantly, this opens the door to efficient realization using analog in-memory computing.
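A rough sketch of that bipolar approximation, using made-up symbol names and the classic bind/bundle operations from hyperdimensional computing (assumed here to be the intended scheme): random high-dimensional ±1 vectors are nearly orthogonal, so symbols can be associated and superposed with cheap elementwise operations.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def hypervector():
    # Random bipolar vector; pairs are nearly orthogonal in high dimensions.
    return rng.choice([-1, 1], size=D)

def bind(a, b):       # elementwise product: associates role and filler
    return a * b

def bundle(*vs):      # majority vote: superposes several bound pairs
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):        # normalized dot product in [-1, 1]
    return float(a @ b) / D

color, shape, red, square = (hypervector() for _ in range(4))
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with the role recovers a noisy copy of the stored filler.
print(sim(bind(record, color), red))     # high (about 0.5 after bundling)
print(sim(bind(record, color), square))  # near 0
```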
Other ways of handling more open-ended domains included probabilistic reasoning systems and machine learning to learn new concepts and rules. McCarthy’s Advice Taker can be viewed as an inspiration here, as it could incorporate new knowledge provided by a human in the form of assertions or rules. For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and to interpret it into domain-specific actionable rules.
For instance, the neural language models which are popular in Natural Language Processing increasingly play the role of knowledge bases, while neural network learning strategies are being used to learn symbolic knowledge and to develop strategies for reasoning more flexibly with such knowledge. This blurring of the boundary between symbolic and neural methods offers significant opportunities for developing systems that combine the flexibility and inductive capabilities of neural networks with the transparency and systematic reasoning abilities of symbolic frameworks. At the same time, there are still many open questions around how such a combination can best be achieved. This paper presents an overview of recent work on the relationship between symbolic knowledge and neural representations, with a focus on the use of neural networks, and vector representations more generally, for encoding knowledge. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems. A second reason for pursuing the combination is tied to the field of AI itself: neural and symbolic approaches to AI complement each other with respect to their strengths and weaknesses.
Henry Kautz,[18] Francesca Rossi,[80] and Bart Selman[81] have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman’s book, Thinking, Fast and Slow. Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is the kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking, symbolic reasoning best models the second kind, and both are needed. While the aforementioned correspondence between propositional logic formulae and neural networks has been very direct, transferring the same principle to the relational setting has been a major challenge that NSI researchers traditionally struggled with.
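That “very direct” propositional correspondence can be illustrated with McCulloch-Pitts-style threshold units; the formula below is an arbitrary toy example, not one from the literature cited here.

```python
import numpy as np

# Logical connectives as threshold units, so a propositional formula
# like (a AND b) OR (NOT c) becomes a tiny two-layer network.
def unit(weights, bias):
    return lambda x: float(np.dot(weights, x) + bias >= 0)

AND = unit([1, 1], -2)   # fires only when both inputs are 1
OR  = unit([1, 1], -1)   # fires when at least one input is 1
NOT = unit([-1], 0.5)    # fires when its input is 0

def formula(a, b, c):    # (a AND b) OR (NOT c)
    return OR([AND([a, b]), NOT([c])])

# The network agrees with the formula on every truth assignment.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert formula(a, b, c) == float((a and b) or (not c))
```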
From a more practical perspective, a number of successful NSI works then utilized various forms of propositionalisation (and “tensorization”) to turn relational problems into convenient numeric representations to begin with [24]. However, there is a principled issue with such approaches based on fixed-size numeric vector (or tensor) representations: these are inherently insufficient to capture the unbound structures of relational logic reasoning. The propositional setting is easy to think of as a boolean circuit (neural network) sitting on top of a propositional interpretation (feature vector). Relational program input interpretations, however, can no longer be thought of as independent values over a fixed (finite) number of propositions, but as an unbound set of related facts that are true in the given world (a “least Herbrand model”). Consequently, all these methods are merely approximations of the true underlying relational semantics.
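A schematic example of what propositionalisation looks like in practice (all predicate and constant names are invented): a finite vocabulary of ground atoms is enumerated up front, and each relational interpretation is flattened into a binary vector over that vocabulary, which is exactly why unbounded Herbrand models cannot be captured.

```python
from itertools import product

constants = ["alice", "bob"]
predicates = ["parent", "friend"]  # all binary, for simplicity

# Fix a finite vocabulary of ground atoms up front.
atoms = [(p, s, o) for p in predicates
         for s, o in product(constants, repeat=2)]

def propositionalise(facts):
    """One binary feature per ground atom in the fixed vocabulary."""
    return [1 if atom in facts else 0 for atom in atoms]

world = {("parent", "alice", "bob"), ("friend", "bob", "alice")}
print(propositionalise(world))

# The catch: a least Herbrand model over fresh constants or function
# symbols yields unboundedly many ground atoms, so no fixed-size
# vector can represent every relational interpretation exactly.
```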
We see neuro-symbolic AI as a pathway to achieve artificial general intelligence. By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of human-like symbolic knowledge and reasoning, we’re aiming to create a revolution in AI, rather than an evolution. However, as envisioned by Bengio, such a direct neural-symbolic correspondence was insurmountably limited to the aforementioned propositional logic setting. Lacking the ability to model complex real-life problems involving abstract knowledge with relational logic representations (explained in our previous article), research in propositional neural-symbolic integration remained a small niche.