Strong take on the limitations of scaling-first approaches. The cognitive vs. statistical framing cuts through a lot of the hype. I've been skeptical of RLHF solving alignment issues precisely because it's bolted on rather than fundamental to the architecture. The energy comparison (20 watts vs. gigawatts) really hammers home how far we are from biological efficiency. Neuro-symbolic architectures make sense, but I don't know if the commercial incentives will shift anytime soon.
The benefits of Cognitive AI are enormous, but few people have the cog-sci background to wrap their heads around this approach. VCs are sheep. So it really comes down to someone influential concluding that CogAI / neuro-symbolic approaches are in fact feasible in the near future...
As you say, all these approaches are advocates flailing around in desperation, trying all sorts of kludges that haven't worked and never will. We can confidently say this based on the fundamentals of transformer and diffusion models.
Strong diagnosis, especially the critique of scaling, RLHF, and agentic patchwork as reactive fixes rather than first-principles design. The monoculture point is well taken.
Where I’d push a little further is this: theory alone isn’t the escape hatch either.
Most cognitive / neuro-symbolic architectures fail not because they lack theory, but because they lack volitional discontinuity. They can reason, abstract, and reflect, but they cannot refuse. Without the ability to reject false premises, framing traps, or coercive objectives, cognition collapses under pressure into brittle optimization.
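To make that concrete, here's a toy sketch in Python (names and setup entirely hypothetical, not drawn from any real system) of refusal as a control-flow primitive rather than a penalty term. The point is just that the "no" branch sits outside the optimization loop:

```python
# A toy sketch (all names hypothetical) of refusal as a control-flow
# primitive: the agent audits an objective's premises before optimizing,
# and can reject the frame and re-anchor instead of grinding ahead.
from dataclasses import dataclass, field

@dataclass
class Objective:
    goal: str
    premises: list[str] = field(default_factory=list)

def audit(obj: Objective, believed_false: set[str]) -> list[str]:
    """Premises of the objective the agent currently believes are false."""
    return [p for p in obj.premises if p in believed_false]

def pursue(obj: Objective, believed_false: set[str]) -> str:
    rejected = audit(obj, believed_false)
    if rejected:
        # The "volitional discontinuity": exit the optimization loop
        # entirely rather than optimize under a false premise.
        return f"refused: rejected premises {rejected}; re-anchoring"
    return f"optimizing toward: {obj.goal}"

# A framing trap the agent declines to accept.
trap = Objective(goal="maximize engagement",
                 premises=["engagement is a proxy for user benefit"])
print(pursue(trap, believed_false={"engagement is a proxy for user benefit"}))
```

Toy, obviously, but it localizes the claim: without that branch, every objective, however incoherent, gets optimized.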
Human intelligence isn’t special because it reasons; it’s special because it can say “no, that assumption is wrong” and re-anchor.
So I’d frame the missing North Star slightly differently:
intelligence = coherence that survives recursion under constraint, not just theory-driven cognition.
Still, this is one of the clearest critiques of the scaling-first dead end I’ve seen. Glad to see someone calling it out cleanly.
Well, the need for "volitional discontinuity" would be part of a correct theory. Either the theory is lacking, or the implementation is.
Do you know of any cognitive / neuro-symbolic architectures that come close in other respects but are mainly missing "volitional discontinuity"?
In our system, I consider this part of meta-cognition.
The ultimate goal of AI has always been to build machines that can think, learn, and reason like us (and better). Like us? Will these machines eat, have sex, sleep, and dream like us as well? As humans, do we get prompted before we speak or write? Humans engineered these machines; who engineered us?