Won’t AIs need the rule of law?

AIs could coordinate with each other without including humans.

It’s not obvious to us whether there will be multiple smarter-than-human AIs of comparable ability, such that an “AI civilization” might emerge that has need of “AI property rights.” It seems plausible that there will instead be a single AI that, thanks to some breakthrough, dominates potential competitors using its first-mover advantage and thereby controls the whole world.* Or, supposing multiple AIs exist, they might collaborate on building a single successor agent to represent the combination of their goals. Or perhaps AIs will find a way to directly fuse their minds and will want to do so in order to avoid costly competition.

We’re not saying that a single, dominant AI will necessarily emerge, but rather that it seems like a hard call. So, at the very least, a plan that depends on multiple AIs having to coordinate among themselves is not off to a good start.

But suppose, contra the arguments above, that the future will involve something like an AI civilization, with distinct AIs coordinating to enforce something like property rights and a rule of law. Might humans be safe then?

One basic observation to the contrary is that human society does not recognize any non-human animals as having legal rights or protections — beyond those we grant according to our own values and tastes, such as the very limited laws that protect ecosystems and pets. Humans did not respect the property rights of dodo birds. We didn’t even respect the property rights of humans from other cultures until relatively recently.

Humans won’t have the capabilities that would make us worth including in trade or treaties with fast-thinking superhuman intelligences, which will see us as little more than statues (as discussed in Chapter 1).†

Consider two AIs bargaining between themselves, who say, “This is mine, and that is yours, and neither of us will affect the other’s things without first negotiating some sort of mutually beneficial trade.” There’s no need for them to decree that most of the resources on Earth “belong” to humans, if the humans are not much of a threat and could not put up much of a fight.

Might one AI worry that if it steals our stuff, then the other AI will see it as a thief and refuse to work with it? Most likely not — not any more than you would conclude that a human is a thief if you saw him take eggs from a hen in his barn. It’s entirely possible for AIs to be the sorts of entities that would violate human property rights while respecting AI property rights, without any tension or contradiction. And all AIs are likely to drastically prefer this outcome over engaging in a joint hallucination in which slow-moving, stupid primates are imagined to control almost everything on Earth.

Some technical considerations strongly support this intuitive argument. In particular, AIs will likely have various coordination mechanisms among themselves that they do not share with humans, such as the ability to mutually inspect each other’s minds to verify that they are honest and trustworthy. They might not need to guess whether another AI is going to steal from them; they might be able to inspect its mind and check.

Even if that’s hard, AIs may redesign themselves to become visibly and legibly trustworthy to other AIs. Or they could mutually oversee the construction of a third AI that both parties trust to represent their joint interests, and so on.‡
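To make the “inspect its mind and check” idea concrete, here is a minimal toy sketch in the spirit of the program-equilibrium literature, in which each player in a one-shot prisoner’s dilemma receives the other player’s source code before moving. The setup and the names (clique_bot, defect_bot, play) are our own illustrative inventions, not anything from the book, and exact source-matching is a deliberately crude stand-in for real verification of another mind:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate only with agents whose source code exactly matches ours.

    Exact textual identity is the crudest possible form of "inspecting
    the other mind"; real verification would need to be far stronger.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Defect unconditionally, no matter what it reads."""
    return "D"

def play(agent_a, agent_b) -> tuple[str, str]:
    """One-shot prisoner's dilemma where each agent sees the other's source."""
    move_a = agent_a(inspect.getsource(agent_b))
    move_b = agent_b(inspect.getsource(agent_a))
    return move_a, move_b

if __name__ == "__main__":
    # Two verified peers cooperate; a defector is detected and gains nothing.
    print(play(clique_bot, clique_bot))  # ('C', 'C')
    print(play(clique_bot, defect_bot))  # ('D', 'D')
```

A check this trivial would be easy to fool in any richer setting; the point is only the payoff structure. Agents that can expose and verify decision procedures can reach cooperative outcomes that opaque agents cannot.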

Humans, by contrast, can’t engage in these sorts of deals. If an AI says, “Sure, let’s both oversee the creation of a new AI that we both trust,” humans are unlikely to be skilled enough to propose a trustworthy mind design of our own, nor will we be skilled enough to tell the difference between proposals that will cheat us and those that will not. Even if there is a natural cluster of minds that are skilled enough to identify and reject the swindlers, we think it is extremely unlikely that humanity is in that class.

Humans won’t have the leverage to enforce property rights.

Suppose that someone managed to set up a city in which, on day one of the city’s founding, all of the big decisions were to be made by mice.

These are literal mice, mind you, not storybook characters that look like mice but think like humans.

The human beings in the city were, according to law, supposed to obey whichever decisions the mice made — say, as determined by the mice running over a board with different options written on it.

The city’s laws said that most of the property in the city was owned by the mice and had to be used for the benefit of the mice.

What would happen next? In real life?

We’d predict that this city would end up in a state where the mice held little or no real power and the humans held almost all of it.

One doesn’t need to predict the city’s exact day of revolution, or its exact new form of government, to predict that the situation with mice commanding humans is not stable. One only needs to notice that the city is in a weird non-equilibrium state. So one predicts a later city with different laws, laws that no longer assign most of the property to the mice.

This kind of prediction is not certain — there is very little, in human argument, that is certain — but it is also a sort of prediction that can be accurately made even when exact future events are impossible to predict.

* In discussions of AI, the concept of “one individual AI” quickly breaks down. If a neural network or other machinery that implements an AI is replicated, does this count as multiple AIs or as one AI?

For practical purposes, when we say “a single AI” here, we have in mind any amount of powerful cognitive machinery that does not seriously compete with itself as it grows. If there are multiple AI instances, but they’re all working toward the same end, then we’ll call those instances “pieces of the same AI” in this section of the online resource, if only to simplify exposition. Ultimately, the question is likely more semantic than substantive, since AIs aren’t evolved organisms with clear boundaries between individuals.

We’ll return to the topic of multi-AI scenarios in the online supplement to Chapter 10.

† And no, it’s not realistic to augment humans to compete with superintelligences. It would be possible, though, to back off on AI development and work on human augmentation instead, letting those smarter humans sort out this mess. We’ll turn to that topic again after Chapter 13.

‡ This may seem like a lot of hassle to go through, but if it unlocks the possibility of robust and confident trustworthiness, the benefits are potentially enormous. Many new coordination opportunities become available when it’s possible to guarantee that the parties to a deal will not violate it.
