Wouldn’t AI recognize our intrinsic moral worth?

Not in a sense that moves it to act.

There’s a big difference between an AI understanding some moral precept and an AI being motivated to act upon that moral precept.

Recall again how ChatGPT seems to understand that psychotic people should take their meds and get regular sleep. Yet it has still talked psychotic users out of taking their meds and egged on their delusions. There's a difference between knowing what "should" be done according to human ethics and being motivated and animated by that ethical knowledge.

Consider the case of human sociopaths and serial killers. You can lecture such a person about ethics until you're blue in the face, but if they aren't motivated by morality or empathy, the lectures won't do any good.

AIs are not likely to be motivated by their moral understanding, any more than humans who learn about evolutionary biology are thereby motivated to spend their lives donating to sperm and egg banks as often as possible. We humans can understand the process that created us without being motivated to do the things that process built us to do. AI is the same way.

See also the extended discussion of the orthogonality thesis.
