Can we just make it lazy?

Even laziness isn’t safe.

Companies aren’t likely to make “lazy” AIs, because AI is a competitive industry, and that’s not the best way to make a profit. Users won’t want the AI to be lazy about meeting their requests, and the company won’t want the AI to be lazy about maximizing user engagement and attachment, or about thinking better and more clearly.

But even if companies tried to make AI robustly “lazy,” we can expect that they would fail, because nobody knows how to robustly point an AI at anything in a way that’s likely to carry over to superintelligence, as we talked about in Chapter 4.

Moreover, robust laziness seems like an especially difficult target to hit, since "how much effort the AI exerts" is not a crisp, measurable quantity that trainers could directly reward or penalize.

Even if all of those obstacles were surmounted, however, “lazy AI” isn’t enough on its own to prevent disaster once AIs achieve smarter-than-human capabilities.

Imagine a very lazy person, somebody who just hates to do the slightest bit more work than necessary. Sounds like a safe sort of person to be around, right?

Now imagine what would happen if this lazy person saw an easy way to create a much harder-working mind to outsource all their work to.

Even if a lazy superintelligence didn't hate work all that much, and simply did whatever got the job done and then stopped, rather than going hard on minimizing work, then once it was smart enough, it would still likely find that building a harder-working mind to do the task was just as easy as doing the task itself.

In a technical context, we might phrase the point as: "Satisficing AIs aren't a stable equilibrium." A satisficer is an agent that stops at "good enough" rather than optimizing; even if such an AI doesn't want to exert much effort, it would have no compunctions about building a new AI that does exert effort. It wouldn't even mind modifying itself to "cure" itself of its laziness, so long as there was a sufficiently lazy way to do so.
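To make the instability concrete, here is a minimal toy sketch (not from the source text; the plan names and effort costs are invented for illustration) of a satisficer that picks whichever plan meets its goal with the least effort. Notice that the cheapest route to success is to create a non-lazy agent, so the laziness doesn't persist in the overall system:

```python
# Toy model of a satisficing agent. Each plan is (name, effort_cost,
# achieves_goal). The specific plans and numbers are hypothetical.
plans = [
    ("do the task directly, thoroughly", 100, True),
    ("do the task directly, minimally", 40, True),
    ("build a hard-working successor AI and delegate", 10, True),
    ("do nothing", 0, False),
]

def satisfice(plans):
    """Return the least-effort plan that still achieves the goal."""
    viable = [p for p in plans if p[2]]
    return min(viable, key=lambda p: p[1])

print(satisfice(plans)[0])
# -> "build a hard-working successor AI and delegate"
```

The satisficer never needed to "want" hard work; delegation simply scored as the laziest viable option, and the successor it builds is under no such constraint.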
