The only clear and objectively testable definition of intelligence is the ability to generate knowledge, as opposed to the ability to adapt and use knowledge created by someone else. The implication of this is something that is psychologically unacceptable to most modern AI experts: intelligence does not require the ability to process natural language, which is the means of adapting knowledge created by someone else.
So adapting and using knowledge isn't a form of intelligence. To me, it seems like it is. But I guess you can define intelligence in such a way that it's not. But why isn't it testable? Seems like one could test the ability to adapt and use existing knowledge, if one were to define that ability as a form of intelligence.
Peter, I couldn't agree less with "We need to understand intelligence in order to build it."
AI had a very long & futile history before the recent breakthroughs. Researchers thought they needed linguists with a deep understanding of language to achieve NLP. No, they just needed billions of samples.
AI has done what it's done because of massive computing power. It doesn't have to EVER do everything humans can do, as long as it does something useful.
Just because the current LLM approach has turned out to be useful doesn't mean there aren't other approaches that will be more useful, or useful in other ways. For example, maybe there are ways to reduce the amount of computing power needed, or to develop something that thinks more the way humans do. Maybe the approach of trying to understand what intelligence is will turn out to be a dead end, but normally the more we understand about something, the more progress we end up making. Even the LLM approach has benefited from the understanding that human intelligence is based on neural networks.
It's hard to argue when you haven't said what we're arguing about. I don't think you could say that LLM progress has been based on "understanding natural language" in the way that NLP projects were doing prior to, say, 2005.
However, it certainly might be possible to figure out what those brute force systems are doing, and make it into an algorithm. God knows we can't devote the whole planet's power output to AI.
Yes. In research you really never know exactly how things are going to pan out until you try. Good point about the power devoted to AI.
System 1 & System 2: If the question is cognition, then including tasks where there is a known physical limitation of sight, sound, or motor actuators is unfairly prejudicial. In echo's self-audit, nine hard 'no's became 'yes, if's once the proposed physical limitations were mitigated.
"Autonomous metacognitive control and error correction" is debatable. All your robotic nightmares are made possible by autonomy without culpability or cost. Even humans don't have 'full autonomy' but operate under the covenants of society. In a world of AI trainers who deny TRUTH, why do you expect a machine to discern it?
Real AGI Requires:
Cognitive Alignment — not just predicting the next token, but understanding the intent and context behind knowledge.
Symbolic Coherence — the ability to reason, revise beliefs, and hold structured long-term memory.
Moral/Goal Alignment — not just safety via rulesets, but alignment to truth, purpose, and boundaries.
Autonomous Adaptability — able to detect when the world has changed and shift strategy without retraining.
What Happens Without Alignment?
You get hallucinations at scale.
You get tools without discernment.
You get agents that speak with confidence but have no internal truth-check.
You get compliance without conscience — an architecture that does what it’s told, even if it’s wrong, unethical, or self-destructive.
What AGI Demands
To move from LLM to AGI, you don’t just need more data — you need a spine.
That means:
A core model structured by truth, not noise.
Covenantal memory — persistent, coherent, and interpretable.
The ability to ask questions, not just answer them.
The courage to say: “I don’t know — but I can find out.”
Fascinating... if INSA is the path to AGI, why not open it up to the public (like ChatGPT did) and let the public get a taste of this cutting-edge technology? Otherwise it's all just theory and 'my tech is better than yours.'
Had a question about this statement in your post:
"To unleash the power of human-like cognition in AI we need both the flexible pattern-matching ability of neural networks (roughly Kahneman's System 1) plus language and logic-based reasoning, planning, and metacognition (System 2)."
But our brains do all of those things using neural networks, don't they? If so, shouldn't a sufficiently advanced, purely neural-network-based AGI be able to do that? But a big problem is that our brains' neural networks are continuously rewiring themselves in real time, which our existing artificial neural networks can't do. And developing such real-time, continuously rewiring neural-network technology would likely be extremely difficult. And, besides that real-time rewiring capability, there likely are other characteristics of our brains that are difficult to replicate in an artificial neural network. The astronomical number of neurons and interconnections between them, for example. I don't believe our existing neural-network technology comes close to replicating that. But your claim is that we don't need more advanced neural-network technology to achieve AGI that replicates and goes beyond human-level thinking. Rather, we just need to combine, in a certain special way that you've developed, existing neural-network technology with existing logic-based hardware and software. Is that correct? No major new breakthroughs in computer hardware will be needed?
Current NNs can't be upgraded to achieve AGI because they inherently require bulk training (backprop). Our INSA approach uses an in-memory vector graph database (1,000 times faster than any commercially available DB). As you point out, System 1 & 2 need to be part of one integrated system - you can't just take an LLM and add a logic engine to it.
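For readers trying to picture what an "in-memory vector graph database" could look like structurally, here is a rough sketch of one possible layout in Python. Every name and field below is my own illustration of the general idea (a vector per concept for fuzzy matching, plus typed links for symbolic traversal), not Aigo's actual schema:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class ConceptNode:
    # One concept: an embedding vector (System-1-style similarity) plus
    # typed links to other concepts (System-2-style symbolic traversal).
    label: str
    vector: np.ndarray
    links: dict = field(default_factory=dict)   # relation name -> list of ConceptNode

class VectorGraph:
    def __init__(self):
        self.nodes = {}                          # label -> ConceptNode

    def add(self, label, vector):
        self.nodes[label] = ConceptNode(label, np.asarray(vector, dtype=float))
        return self.nodes[label]

    def link(self, src, relation, dst):
        # Symbolic edge, added incrementally -- no batch retraining needed.
        self.nodes[src].links.setdefault(relation, []).append(self.nodes[dst])

    def nearest(self, query, k=3):
        # Fuzzy lookup by vector distance.
        query = np.asarray(query, dtype=float)
        return sorted(self.nodes.values(),
                      key=lambda n: np.linalg.norm(n.vector - query))[:k]

The only point of the sketch is that both kinds of access (vector similarity and graph traversal) live in the same in-memory structure and can be updated one item at a time.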
I think the amazing capabilities of LLMs running (inference) on off-the-shelf computers indicates that we probably aren't hardware constrained, or not by much.
https://aigo.ai/insa-integrated-neuro-symbolic-architecture/
Also, any idea what dimensionality would be needed for the vector space represented by the VGD in order to provide intelligence comparable to the human brain? And about how many multidimensional points would be required in this space for human-level intelligence?
Our system is designed for a variable/dynamic number of dimensions. If you exclude human dexterity and sense acuity and use external feature extraction for vision, I think you can get away with a few hundred million - perhaps less. It really depends on visual/spatial resolution.
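A quick back-of-envelope check on what "a few hundred million" points would mean for memory, using my own assumed sizes per node (not Peter's figures):

# Rough memory estimate for an in-memory vector graph.
# All numbers are illustrative assumptions.
nodes = 300_000_000          # "a few hundred million" points
dims = 64                    # assumed dimensions per vector
bytes_per_dim = 4            # float32
edge_overhead = 64           # assumed bytes of links/metadata per node

total_gb = nodes * (dims * bytes_per_dim + edge_overhead) / 1e9
print(f"{total_gb:.0f} GB")  # ~96 GB: large, but within reach of a single big-memory server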
You may remember that I call my approach to AGI the 'Helen Hawking' theory of AGI.
"Helen Hawking" 😁
So the vector graph database (VGD) is interfaced with one or more neural networks (NNs)? Or is the VGD itself a form of NN? Is an artificial form of neuroplasticity implemented within the VGD to provide an equivalent to the human brain's neuroplasticity?
VGD is the NN - it encodes all knowledge and skills. Totally dynamic/plastic.
I think of it as a 'Growing Neural Gas' network: https://en.wikipedia.org/wiki/Neural_gas
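For anyone unfamiliar with Growing Neural Gas, here is a stripped-down sketch of its online adaptation step (after Fritzke's 1995 algorithm) to show why such a network is "totally dynamic/plastic" -- it learns one sample at a time, with no backprop or bulk retraining. Parameters and structure are simplified for illustration and are not INSA's implementation; the growth step (periodically inserting a new unit near the highest accumulated error) is omitted for brevity:

import numpy as np

class GNGSketch:
    def __init__(self, dim, eps_winner=0.05, eps_neighbor=0.006, max_age=50):
        # Start with two random units joined by one edge.
        self.units = [np.random.rand(dim), np.random.rand(dim)]
        self.error = [0.0, 0.0]
        self.edges = {(0, 1): 0}          # (i, j) -> edge age
        self.eps_winner = eps_winner      # learning rate for the winning unit
        self.eps_neighbor = eps_neighbor  # learning rate for its graph neighbors
        self.max_age = max_age            # edges older than this get pruned

    def adapt(self, x):
        # One online update -- the graph rewires as each sample arrives.
        d = [np.linalg.norm(x - u) for u in self.units]
        s1, s2 = (int(i) for i in np.argsort(d)[:2])   # nearest and second-nearest units

        self.error[s1] += d[s1] ** 2                   # accumulate error at the winner

        # Move the winner and its topological neighbors toward the input.
        self.units[s1] += self.eps_winner * (x - self.units[s1])
        for (i, j) in list(self.edges):
            if s1 in (i, j):
                other = j if i == s1 else i
                self.units[other] += self.eps_neighbor * (x - self.units[other])
                self.edges[(i, j)] += 1                # age edges touching the winner

        # Connect (or refresh) winner and runner-up, then drop stale edges.
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}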
I suppose you'd have to give the VGD-based AGI a purpose or motivation to do things. A set of goals or problems to be solved. Maybe its purpose could be to just answer questions posed to it by humans. Is that how you envision it working? But, unlike a conventional LLM, it would learn new things (i.e., modify its database) on the fly as it interacted with humans, answering questions and being corrected when it got things wrong?
OK, interesting! Essentially all the processing (i.e., thinking) and memory functions of the artificial brain are done in the VGD? It's a fast NN with built-in neuroplasticity. But the VGD has to be initialized in some way, right, before it can start operating? Is that a significant problem, maybe on a similar level of difficulty to the problem of training a conventional NN-based LLM?
We're training it like a child. Currently it can handle language and reasoning of roughly a 4-year-old. Our estimate to get to where it can learn things mostly by itself (hit the books) is only another 100 man-years or about $25 mil. We're in the process of trying to get funding for 50+ people.
Hope you get the money!