Resources › Chapter 5

- Will AI find us useful to keep around? Happy, healthy, free people aren’t the most efficient solution to almost any problem. (2 min read)
- Will AI treat us as its “parents”? It seems quite unlikely. (4 min read)
- Won’t AIs need the rule of law? AIs could coordinate with each other without including humans. (6 min read)
- To a powerful AI, wouldn’t preserving humans be a negligible expense? There are many negligible expenses, and it would need a reason to pay ours. (3 min read)
- Won’t AI find us fascinating or historically important? If AI values “fascination,” it probably has better options. (3 min read)
- Wouldn’t AI recognize our intrinsic moral worth? Not in a sense that moves it to act. (1 min read)
- Won’t AI want to keep us happy and healthy for the sake of ecological preservation or some similar drive? The human preference for ecological preservation looks like another weird contingent drive. (4 min read)
- But we still have horses. Why wouldn’t AI keep us around? What horses remain, remain because we like them. (2 min read)
- Won’t AIs care at least a little about humans? Not in the way that matters. (12 min read)
- So there’s at least a chance of AI keeping us alive? It’s overwhelmingly more likely that AI kills everyone. (1 min read)
- If AIs are trained on human data, doesn’t that make them likelier to care about human concepts? Yes, but this doesn’t help much. (2 min read)
- Can’t we make the AI promise to be friendly? You can make it promise whatever you’d like. You can’t make it keep its promises. (1 min read)
- What if we make it think it’s in a simulation? There are many ways for an AI to figure out that it’s not in a simulation. (5 min read)
- Humans evolved to be selfish, aggressive, and greedy. Won’t AI lack those evolved drives? Those drives aren’t necessary to motivate resource acquisition. (1 min read)
- Wouldn’t AI only care about the digital realm? Material resources are useful in the pursuit of most goals. (2 min read)
- Can the AI be satisfied to the point where it just leaves us alone? Probably not. (5 min read)
- Can we just make it lazy? Even laziness isn’t safe. (2 min read)
- Humans tend to get kinder as they get smarter or wiser. Wouldn’t AIs too? Probably not. (1 min read)
- Won’t it realize that its goals are boring? AIs won’t run on a human sense of novelty. (1 min read)
- Why are you imagining a smart AI doing such stupid, trivial things? AIs can intelligently pursue different things than a human would. (4 min read)
- Are you just pessimistic? We’re optimistic about many things, but superintelligence isn’t like most things. (6 min read)
- Would smarter-than-human AI be conscious? We’re not sure. Our best guess is “probably not.” (1 min read)
- Why don’t you care about the values of any entities other than humans? We do! We have broad cosmopolitan values. We don’t think AIs will fulfill them, and we consider this a great tragedy. (11 min read)

Extended Discussion

- Taking the AI’s Perspective
- Humans Are Almost Never the Most Efficient Solution
- Orthogonality: AIs Can Have (Almost) Any Goal