4 Comments
James Gracey

Hi Peter, I enjoy reading your articles. I agree with many of the points you make.

We need to be better informed so that we can make better decisions. Tokenized LLMs are NOT the answer: https://www.youtube.com/watch?v=Cis57hC3KcM

We need help with ethics and morality.

When it comes to AGI, maybe we should drop the “A”, and the “G” should stand for ‘Genuine’.

My thoughts here —

https://payhip.com/StandOUTStudio/blog/ai-value-vs-viral/evolution-and-baking-bread

James (the one from SA) :)

Brook Norton

The hard problem of consciousness is far from solved, so I don't think we can yet judge whether our AIs will have our rich sense of awareness or not. But I do think that, if the AI can be trusted to be honest, we can find out simply by asking it: "Are you aware in the same fundamental way that humans are?" And it will be able to tell us. It also seems that once the AI is smarter than us and escapes out into the digital wilderness, the same Darwinian forces will act on AIs, determining, at amplified speeds, which ones survive.

Peter Voss

Disagree on both counts - as per my article:

1) It can't be as rich as ours without the richness of inputs/sensations that our biological one has.

2) The *directed* 'evolutionary' forces acting on AGIs will select for being useful to us.

Brook Norton

1) In addition to sensors on robots, there could be all kinds of extra senses: seeing outside the human-visible frequency range, or, with a voltmeter, "feeling" noise in the supply voltage or supply-current frequency. On and on...

2) Once in the wild, the evolutionary forces acting on AIs will no longer be directed by us. This may or may not work out for us.