I have a kind of private fantasy that we could (if so inclined) build a machine with a fundamental set of axioms, configurable to enable various experiments, and that the machine could be left to "learn" (a lot of activity, theory, and funding behind that word, granted). Everything it learned subsequent to its "awakening" would be assigned a truth-probability metric in (-1, +1) based on its degree of congruence or contradiction with the axioms. It could then not only tell you why it came to a particular conclusion (the logic chain), but also show you how that chain terminates at specific axioms. I don't know that this is really a useful idea in the context of the wider discussion here, but as a user of various LLMs I am constantly frustrated by their opacity, and I feel a lot like those early French Airbus A320 pilots trying to land the plane with ice on the wings while fighting with the guy who programmed the autopilot ...
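To make the shape of the idea concrete, here is a minimal sketch of what such an axiom-grounded belief store might look like. Everything here is hypothetical: the names are made up, and the weakest-link scoring rule is just one arbitrary choice among many.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Belief:
    """A proposition plus the chain of support leading back toward axioms."""
    statement: str
    supports: List["Belief"] = field(default_factory=list)
    truth: float = 1.0  # truth-probability in (-1, +1); axioms pinned at +1

class AxiomaticLearner:
    """Hypothetical sketch, not any real system's API."""
    def __init__(self, axioms: List[str]):
        # Configurable axioms: each is a Belief with a fixed score of +1.
        self.axioms = {a: Belief(a) for a in axioms}

    def learn(self, statement: str, supports: List[Belief], congruence: float) -> Belief:
        # congruence in [-1, +1]: measured agreement with the supporting beliefs.
        # One simple rule: inherit the weakest supporting score, signed by congruence.
        inherited = min(b.truth for b in supports)
        return Belief(statement, supports=supports, truth=inherited * congruence)

    def explain(self, belief: Belief, depth: int = 0) -> None:
        # Print the logic chain until it terminates at specific axioms.
        tag = "axiom" if belief in self.axioms.values() else "learned"
        print("  " * depth + f"[{tag}] {belief.statement} (truth={belief.truth:+.2f})")
        for parent in belief.supports:
            self.explain(parent, depth + 1)

# Usage: learn a proposition, then audit its chain back to the axioms.
m = AxiomaticLearner(["A thing cannot both be and not be"])
axiom = m.axioms["A thing cannot both be and not be"]
claim = m.learn("Report X is internally consistent", [axiom], congruence=0.8)
m.explain(claim)
```

The point is only that every score would be traceable: walk the supports and you always land on an axiom, which is exactly the auditability that opaque LLMs lack.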
That's pretty much how we're developing our AGI, except that the 'axioms' are really built-in objectivity: to use real-world data and to integrate it in a non-contradictory way, asking additional questions as needed. The system could be set up with different 'axioms', but I don't really see the point. https://petervoss.substack.com/p/insa-integrated-neuro-symbolic-architecture
One of the neat applications of the system will be to get a comprehensive 'argument tree' for any question. Did Covid originate in a lab? It would give you all the pro and con arguments, complete with references and any logical fallacies involved, and the leaves would carry a true/false/certainty value.
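As a rough sketch of what one such leaf-scored argument tree could look like (purely hypothetical structure with placeholder content, not the actual INSA design):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArgumentNode:
    """One pro or con argument; leaves carry a certainty verdict."""
    claim: str
    stance: str                                          # "pro", "con", or "question" at the root
    references: List[str] = field(default_factory=list)  # citations backing this argument
    fallacies: List[str] = field(default_factory=list)   # named logical fallacies detected
    children: List["ArgumentNode"] = field(default_factory=list)
    certainty: Optional[float] = None                    # leaves only: 0.0 (false) to 1.0 (true)

def render(node: ArgumentNode, depth: int = 0) -> None:
    # Walk the tree, printing each argument with its references, fallacies, and verdict.
    pad = "  " * depth
    line = f"{pad}[{node.stance}] {node.claim}"
    if node.fallacies:
        line += f" <fallacies: {', '.join(node.fallacies)}>"
    if node.certainty is not None:
        line += f" => certainty {node.certainty:.2f}"
    print(line)
    for ref in node.references:
        print(f"{pad}  ref: {ref}")
    for child in node.children:
        render(child, depth + 1)

# Usage with illustrative placeholder arguments:
root = ArgumentNode("Did Covid originate in a lab?", "question", children=[
    ArgumentNode("Early cases clustered near a virology lab", "pro",
                 references=["<citation>"], certainty=0.6),
    ArgumentNode("Anyone asking is a conspiracy theorist", "con",
                 fallacies=["ad hominem"], certainty=0.0),
])
render(root)
```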
The Covid example is a good one, precisely because of the controversies around it. A lot of "real world" data is disputed by various ... "factions", shall we say? ... for various reasons and on various bases.
The point I was alluding to is that, like the Airbus pilots, a lot of the time we end up in conflict with the politics of the programmers (or, more often, the people who pay the programmers), and that is where explicit, auditable, and configurable axioms might provide more value to more people. (By which I mean paying customers.)

The commercial failure of many modern films can be placed fairly accurately on the studios' choice to directly oppose their most loyal ticket-buying customer demographics on political grounds having little to do with the business model, using their films as the vehicles for these attacks. I contend that the AI industry would be better served by avoiding such missteps. Are the people who inject their politics into everything likely to provide us with such AI systems? I don't think so, and hence my concept remains a private fantasy.

I don't think this has anything to do with any particular political strain, except reflexive authoritarianism, to which all of us are prone in varying degrees. My idea of a solution to this eternal problem of human nature is not to harangue or accuse anyone whose politics differ from mine, but to call a technological truce: make the political parameters open, auditable, and configurable. Perhaps I am overly optimistic about my species.
There are certain beliefs and policies that are inherently (objectively) harmful to individual human flourishing -- those should be exposed. AGI, which is inherently more rational, will help with that.
https://medium.com/@petervoss/improved-intelligence-yields-improved-morality-775950db696f
EXposed and OPposed.
Agreed.
Hello Peter,
I hope you are doing well. I just caught the Bill Dally interview of Yann LeCun at GTC 2025. Yann started off early by saying "I'm not so interested in LLMs anymore. They're kind of the last thing..."
As you have said all along, he clearly stated that LLMs will not get us to AGI, or what he likes to call Advanced Machine Intelligence.
Well, I certainly thought of you when I heard the interview.
https://www.youtube.com/watch?v=eyrDM3A_YFc
Best Regards,
Michael
Yes, indeed - the consensus is shifting. Slowly.
Best Regards,
Peter
" ... and we need just a few millions words to master language and reasoning. Not tens of trillions."
I'm pretty sure I don't know millions of words, or even go looking for them in the language/reasoning toolbox, but then I'm pretty sure I haven't mastered these things either.
Not sure I've consumed that many words either. And combinations of the same words in daily use vary little, unless someone is trying to sound clever. When we have to look up a word or remind ourselves of its obscure meaning, we rarely use it again unless a simpler alternative is unavailable.
In general, I have found that deeper aspects of reality where thoughts and words cease to exist allow for greater understanding of our experience than can be expressed through language alone.
Why are we pursuing AGI as a species when we already possess the most sophisticated machinery for this purpose?
Why don't we solve problems that we have to solve ourselves and develop future technology in support of human endeavors?
It seems to me that the drive behind AGI development is to replace humans altogether.
Your last line is correct and, as a Catholic believer, leads me to think that this entire endeavor is straight out of the bowels of Hell and will lead us nowhere good or nice.