On Human and AI Ethics
Note: For current purposes I take morality and ethics to be synonymous.
Ethics is not an end in itself.
Right and Wrong, Good and Bad are not Platonic forms or absolutes to be discovered. They don’t exist independent of human nature and needs. It is up to us to properly define and understand how these concepts can help us optimize our lives.
But let’s start with another major source of confusion: Ignoring the distinction between descriptive and prescriptive ethics — how we actually behave, versus how we should ideally behave. I wrote about this some time ago.
Now what is the purpose of ethics or morality? Why do we need it, or want it?
We need it as a guide to survive and to optimize our lives. It is useful — no, crucial — for us to have generalized rules, or principles, by which to live. Life is too complex for us (or an AI) to figure out the best action for every micro-decision we face: Should I lie, or tell the truth? Should I cooperate, or not? Should I pray for a solution, or work on one?
There are objective answers to such questions. We can and should treat ethics as a science. Currently, most people don’t even attempt that.
We all automatically acquire, develop, and internalize some principles. That’s our moral compass. However, few people try to rationally explore how to discover and learn the best principles — to properly calibrate that compass. To figure out which principles best optimize life and minimize moral conflicts, both internally and externally.
Good and bad only have meaning in terms of ‘good for whom?’ and ‘good to what end?’. In ethics, this means good for the individual and, by extension, good for society. The end is human flourishing.
Advanced general-purpose AI (AGI) will clearly need to understand and deal with actual individual human morality (descriptive). It will also need to effectively respond to, and mediate between, different existing value systems. This is (just) knowledge and skill acquisition, as in any other domain. Crucially, it involves context, clarification, learning, and reasoning.
AGIs will also help us navigate and improve our morality (prescriptive). We’ll have the best personal psychologists and philosophers one could wish for. Their intelligence will help us discover the best principles to live by, and the best goals to pursue.
(Updated. Original from Oct 2016)



I think AIs will be able to look at all the data about a situation, note conflicting trends, and in that way find the trends that are internally consistent, and therefore much more likely to be true. This is all good — an objective way to determine truth, as opposed to giving equal weight to all the human inputs. And I think the most ethical lifestyles, good for the individual and society, are internally consistent. So all in all, I think AI will be a good therapist.
I find this very problematic. There is maybe a very basic set of principles (although I am sure they are, and will remain, contested) that can be codified — something like “You shall not kill” or “Do no harm” — but even these have a very complicated relationship with reality. The idea that a higher (artificial) intelligence will remove all of this ambiguity in a way that is good for us as humans is a “technology is the solution for everything” fantasy.