16 Comments
T.Theodorus Ibrahim:

"GPT: Generative (makes up stuff)"

Haha, harsh but true :)

John Bortz:

So, why do you think it is that the cognitive AI approach isn't being implemented by any of the big AI companies? Are there major technical issues that still need to be resolved to get this approach to work?

Peter Voss:

Main reason: an accident of history. Statistical AI's success has sucked all the oxygen out of the air... we now have an AI monoculture.

https://aigo.ai/why-we-dont-have-agi-yet/

John Bortz:

Makes sense. But are there also major technical issues that still have to be resolved to implement practical cognitive AI systems?

Peter Voss:

Not all that much. We believe that with our technology we only need about 100 man-years or so to get to human-level language and reasoning ability. After that, it'll just be a matter of additional scaling and integration.

John Bortz:

In thinking about this some more, I'm not so sure it makes sense. A project requiring an investment of only 100 man-years would be a no-brainer for a large tech company to green light, given the enormous pay-off if it were to succeed. If it were widely believed to be possible, I'd think at least one of them would be enthusiastically pursuing cognitive AI approaches with the goal of achieving human-level AGI. So, I'm guessing that most people in the tech world believe it'd be a lot harder to do than you think it'd be. Are you in a small minority of AI experts who believe human-level AGI would be that easy to achieve? What arguments do the experts who disagree with you make?

Peter Voss:

Yes, I'm in the minority for sure. VCs bring in their 'experts', and they don't know anything other than big-data approaches.

The main argument is "It can't be that easy, otherwise the big companies would be doing it". Pretty limited logic...

As I mention in my articles, almost every decision maker or advisor in the AI game has a background in engineering (or statistics, or mathematics), and you simply can't talk to them in terms of cognitive psychology and epistemology.

John Bortz:

OK, that does make sense. I just read your 2023 paper "Concepts is All You Need: A More Direct Path to AGI," which also makes sense to me, although it sounds extremely challenging to implement. But I could see it possibly being doable with 100 man-years of labor.

My oldest daughter is actually working for an AI start-up now. I sent her that paper and asked her to get back to me with her thoughts on it, as well as the thoughts of her colleagues. I'll let you know what they say. Hoping it's not just something like "It can't be that easy, otherwise the big companies would be doing it."

John Bortz:

Peter, is this a step in the right direction?

https://openai.com/index/introducing-openai-o1-preview/

"We've developed a new series of AI models designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math."

Peter Voss:

Not at all. They can't learn interactively, and they need even more training and compute.

Jelena Radovanovic:

It's impressive in terms of scaling (not sure if this has been done before, but they've effectively added another exponential, right there in the process of using the thing). On the other hand, I can't even begin to think about what it means in terms of energy resources.

J.K. Lund:

Nice discussion here, Peter. Wouldn't a real-time-trained AI be limited in the speed at which it could learn? If it learns in real time, like a child, it could take decades to train. I suppose we are assuming that millions of users would be training it simultaneously?

Peter Voss:

Thanks. No, the way children and adults learn is quite inefficient. An AI can 'concentrate' 100% of the time and learn at a higher speed.

More importantly, by crafting an optimized curriculum we can significantly speed up learning. Furthermore, once we reach a certain level of language understanding and reasoning ability (say that of a 12-year-old) the system can 'hit the books' and largely learn by itself.

There will also be methods for having multiple AIs learn different topics and then copy what they've learned to each other. So I don't expect that getting to AGI-level knowledge and ability (that of a smart college graduate) will take more than a year once you have the right core technology.

Our approach (and I think the right one) isn't to try to build an all-knowing oracle, but to have millions of high-IQ AGIs with good common knowledge and skills that then specialize. Naturally, they can easily share knowledge and workload.

J.K. Lund:

Thanks for the explanation!
