Why aren’t more people working on AGI?
This question came up again at a recent debate on the merits of advanced AI.
Here’s a list of some of the most common reasons, plus an analysis of why there seems to be so little progress in AGI development:
Why are so few researchers pursuing AGI?
Some researchers believe that human-level AGI is not possible because…
Biological beings (especially humans) have something special (a soul?) that cannot be replicated in machines
Human intelligence requires consciousness, which in turn arises from weird quantum processes that cannot be implemented in computers
They tried in their youth (20–40 years ago) and failed, and now conclude that it can't be done
Others have fundamental problems with ‘general intelligence’…
Believe that it is inherently an invalid concept (‘g’ in psychology has become quite unpopular — one could even say ‘politically incorrect’)
Overall intelligence is just a collection of specialized skills, and we just need to somehow engineer or create each one individually
A very common objection is that the time is not ripe…
AGI can’t be achieved within their lifetime, so there is no point
Hardware is not nearly powerful enough
Most researchers believe that ‘nobody knows how to build AGI’ because…
“We don’t understand intelligence/ intuition/ consciousness/ etc.”
They haven’t seen or heard of any viable theories of AGI
They aren’t even looking for one (because of many of the other reasons listed here)
Others believe that we should instead copy the brain in some way to achieve human-level AI…
Reverse engineer the brain with custom chips—one area at a time
Simulate a human brain in a supercomputer
Build specialized hardware that copies the brain’s neural structure
Grow biological brains in a vat
Many don’t think that AGI is all that important because…
Narrow AI already exceeds human abilities in many areas
They don’t believe that self-improving AI (Seed AI) is viable
Don't share the vision of AGI’s benefits, or our need for it
Simply don’t have the ‘patience’ for such a long-term project
Can get quicker results (financial and otherwise) by pursuing Narrow AI
Quite a few people think that AGI is highly undesirable because…
It will lead to massive unemployment, or is otherwise not socially acceptable
We don’t know how to make it safe, and it will likely destroy us
Finally, there are those who would love to work on AGI, but…
Don’t know how to do it, and see no viable model
As researchers, would get little academic respect/ support/ funding for such work
Can’t get their AGI efforts funded
All of the above combine to create a dynamic where AGI is not fashionable, further reducing the number of people drawn to the field!
Why is there so little progress in (workable) AGI models and systems?
See above: Why are so few researchers pursuing AGI?
The field is dramatically underfunded
Most theories of general intelligence, and approaches to AGI, are quite poor:
Poor epistemology: a weak understanding of the nature of knowledge and certainty, of how knowledge is acquired and validated, of the importance of context, etc.
Poor understanding of intelligence: knowledge vs adaptive learning, static vs dynamic, offline vs interactive, big data vs instance learning, etc. (the short sketch after this list makes the last contrast concrete)
A poor understanding of other key concepts involved: grounding, understanding, concepts, emotions, volition, consciousness, etc.
A lack of principled integration of connectionist, statistical, logic-based, and other AI techniques and insights
Not appreciating the importance of a comprehensive cognitive architecture, and looking for an overly simple, ‘silver-bullet’ approach
Overly modular designs, incompatible with deep cognitive integration
Focusing on only one, or a few, aspects of intelligence
Focusing exclusively on the wrong level: either too high (at logical reasoning) or too low (perception/ action)
Too much focus on copying the brain, i.e. on biological plausibility
Using physical robots prematurely (i.e. now)
A lack of commonality/ compatibility between various AGI efforts
Performance expectations are set too high for any specific functionality: early general intelligence is not likely to be competitive with Narrow AI
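One of these distinctions is easy to make concrete. Below is a minimal Python sketch (my own illustration, with hypothetical class names; it is not taken from any AGI project) of the ‘big data vs instance learning’ contrast: an offline learner that must see the whole dataset before it can answer anything, versus an incremental learner that adapts with every new example.

```python
# Illustrative sketch only: contrasting 'big data' offline learning
# with incremental, per-instance learning.

class OfflineMeanEstimator:
    """'Big data' style: fit once on the full batch; static afterwards."""

    def fit(self, xs):
        xs = list(xs)
        self.estimate = sum(xs) / len(xs)


class IncrementalMeanEstimator:
    """'Instance learning' style: adapts its estimate one example at a time."""

    def __init__(self):
        self.n = 0
        self.estimate = 0.0

    def update(self, x):
        self.n += 1
        # Standard running-mean update; no stored dataset needed.
        self.estimate += (x - self.estimate) / self.n


batch = [2.0, 4.0, 6.0]

offline = OfflineMeanEstimator()
offline.fit(batch)          # must see all the data before it can answer

online = IncrementalMeanEstimator()
for x in batch + [8.0]:     # keeps adapting as new instances arrive
    online.update(x)

print(offline.estimate)     # 4.0 -- frozen unless refit on everything
print(online.estimate)      # 5.0 -- already reflects the newest example
```

The offline estimator is static once fitted; the incremental one stays interactive and adaptive, which is precisely the quality the list above argues is missing from most approaches.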
Of course, the (perceived) lack of progress feeds the lack of interest and the shortage of people working in the field… a vicious cycle.
(Originally published Dec 2016)