Why Neuro-Symbolic Must Be Integrated
INSA: Integrated Neuro-Symbolic Architecture
As pure scaling of Generative AI is yielding diminishing returns (ref)(ref)(ref), we're increasingly seeing efforts to bolt symbolic and other systems onto LLMs (ref)(ref)(ref). These add-ons may take the form of reasoning engines, knowledge graphs, or world models (ref)(ref)(ref). Such additions are obvious incremental improvements that go beyond plain RAG.
While these efforts can clearly help overcome some of the limitations of LLMs for certain applications, they require application-specific setup and engineering, and they also suffer from several inherent limitations. They therefore cannot be the right path toward truly fluid, adaptive human-level AI or AGI.
Hybrid neuro-symbolic systems do not have a shared representation of knowledge and skills. This disconnect causes translation and interpretation errors when switching between systems, and there is no robust way to reconcile contradictions between the two representations. Furthermore, the communication bottleneck between the subsystems imposes a large performance penalty.
Human intelligence relies on an integrated structure that can seamlessly switch between subconscious, pattern-based ('neuro') operation and deliberate, symbolic thought. The lack of deep integration in these AI systems prevents them from supporting this crucial requirement.
In practice, this 'impedance mismatch' (sorry, my electronics background) between the systems means a great deal of expensive custom engineering, plus ongoing tweaking and support, to keep the sub-systems coordinated.
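To make that mismatch concrete, here is a minimal, purely illustrative Python sketch of a hybrid pipeline. The `llm_generate` and `parse_to_facts` functions are hypothetical stand-ins, not any real framework's API; the point is where the translation layer drops nuance and never checks the symbolic store for contradictions.

```python
# Illustrative sketch only: a toy hybrid pipeline showing where translation
# losses occur. All names here are hypothetical, not from any real product.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str

def llm_generate(prompt: str) -> str:
    """Stand-in for the neural module: returns free-form text."""
    return "Paris is the capital of France, probably founded around 250 BC."

def parse_to_facts(text: str) -> list[Fact]:
    """Translation layer: brittle extraction from text to symbols.
    Hedges ('probably') and nuance are silently dropped here."""
    facts = []
    if "capital of" in text:
        facts.append(Fact("Paris", "capital_of", "France"))
    # The uncertain founding date never reaches the symbolic store, and
    # nothing reconciles it if the store already asserts a different date.
    return facts

symbolic_store: list[Fact] = []

# Every neural<->symbolic hop pays this serialize/parse cost, and each hop
# is a place where meaning is lost or contradictions go undetected.
symbolic_store.extend(parse_to_facts(llm_generate("Tell me about Paris")))
print(symbolic_store)
```

Every round trip across that boundary repeats the same lossy conversion, which is exactly the coordination overhead described above.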
In a way, these hybrid systems offer the worst of both worlds: the brittleness of symbolic AI, the hallucinations and massive data and compute requirements of Generative AI, plus the inability of these back-prop systems to learn incrementally in real time, i.e. to update their core model.
It doesn't have to be this way: INSA, a non-hybrid, fully integrated system based on Cognitive AI, does not suffer these limitations. In addition, it requires many orders of magnitude less training data and compute.
A commercially proven INSA implementation uses a very high-performance vector graph database (1,000 times faster than other external systems) for all of its knowledge and skill representation. This fully integrates short- and long-term memory as well as dynamic context. The various learning and cognitive algorithms are also deeply integrated into this graph substrate, and with each other.
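As a rough sketch of the idea (these are not INSA's actual data structures or API, just a toy illustration), the snippet below keeps a vector and typed symbolic links on the same graph node, so pattern matching and symbolic traversal operate over one shared substrate with no serialization boundary between them.

```python
# Minimal sketch of a unified substrate: each node carries both a vector
# (for 'neuro' similarity) and symbolic links (for deliberate reasoning).
# Hypothetical toy code, not INSA's implementation.

import math

class Node:
    def __init__(self, label: str, vector: list[float]):
        self.label = label
        self.vector = vector          # sub-symbolic: used for similarity
        self.links = {}               # symbolic: relation -> set of Nodes

    def link(self, relation: str, other: "Node") -> None:
        self.links.setdefault(relation, set()).add(other)

def similarity(a: Node, b: Node) -> float:
    """Pattern-based operation over the very same nodes (cosine similarity)."""
    dot = sum(x * y for x, y in zip(a.vector, b.vector))
    norm = math.sqrt(sum(x * x for x in a.vector)) * math.sqrt(sum(x * x for x in b.vector))
    return dot / norm if norm else 0.0

# Both modes operate on the same objects: a similarity query can seed a
# symbolic traversal, and a newly learned link is immediately visible to
# pattern matching, with no translation layer in between.
paris = Node("Paris", [0.9, 0.1, 0.3])
france = Node("France", [0.8, 0.2, 0.4])
paris.link("capital_of", france)

print(similarity(paris, france))                      # pattern-based lookup
print([n.label for n in paris.links["capital_of"]])   # symbolic traversal
```

Because learning simply adds or adjusts nodes and links in place, incremental real-time updates fall out of the design rather than requiring retraining.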
We expect the AI industry to move soon to such a first-principles cognitive approach - something DARPA calls 'The Third (and final) Wave of AI'.

In your view, how long will (or would) it take to develop such Cognitive AI?