What are your incentives and conflicts of interest, as authors?

We don’t expect to make any money from the book in the average case. Separately, we would love to be wrong about the book’s thesis.

We (Soares and Yudkowsky) take our salary from the Machine Intelligence Research Institute (MIRI), which is funded by donations from people who think these issues are important. Perhaps the book will drive donations.

That said, we have other opportunities to make money, and we are not in the book-writing business for the cash. The advance we received for this book went entirely toward publicity, and the royalties will go entirely to MIRI, to pay it back for the staff time and effort invested.*

And of course, both authors would be ecstatic to conclude that our civilization is not in danger. We’d love to simply retire, or make more money elsewhere.

We don’t think we’d have difficulty changing our minds, if in fact the evidence merited a change. It’s happened before. MIRI was founded (under the name “Singularity Institute”) as a project to build superintelligence. It took a year for Yudkowsky to realize that this wouldn’t automatically go well, and a couple more years for him to realize that making it go well would be rather tricky.

We’ve pivoted once, and we’d love to pivot again. We just don’t think that’s what the evidence merits.

We don’t think the situation is hopeless, but it does seem to us that there is a genuine problem here, and that the threat is extreme if the world doesn’t rise to the occasion.

It’s also worth emphasizing that to figure out whether AI is on track to kill us all, you have to think about AI. If you only think about people, you can come up with reasons to dismiss any source: Academics are out of touch; corporations are trying to drum up hype; the non-profits want to raise money; the hobbyists don’t know what they’re talking about.

But if you take that route, then your final beliefs will be determined by who you choose to dismiss, giving arguments and evidence no space to change your mind if you’re wrong. To figure out what’s true, there’s no substitute for evaluating the arguments and seeing if they stand on their own legs, separate from the question of who raised them.

Our book doesn’t open with the easy argument that the corporate executives running AI labs have an incentive to convince the populace that AIs are safe. It begins by discussing AI. And later in the book, we spend a little time reviewing the history of human scientists being over-optimistic, but we never say you should ignore someone’s argument because they work at an AI lab. We discuss some of the developers’ actual plans, and why those plans wouldn’t work on their own merits. We are doing our best to sit down and have a conversation about the actual arguments, because it’s the actual arguments that matter.

If you think we’re wrong, we invite you to engage with our arguments and point out the specific places where you think we’ve gotten things wrong. We think that’s a more reliable way to figure out what’s true than looking mainly at people’s character and incentives. The most biased person in the world may say that it’s raining, but that doesn’t mean it’s sunny.

* If the book performs so well as to pay off all those investments, there is a clause in our contract saying that the authors eventually get to share in the profits with MIRI, after MIRI is substantially paid back for its effort. However, MIRI has been putting so much effort into helping out with the book that, unless the book dramatically exceeds our expectations, we won’t ever see a dime.
