Isn’t this all just fear-mongering by AI leaders to increase status and raise investment?
No.
Throughout the book, we’ve laid out our case for the claim that rushing ahead on AI is likely to get us all killed. In Chapter 3, we discussed how AI will have its own drives and goals. In Chapters 4 and 5, we discussed why AI is likely to pursue ends that nobody intended, and in Chapter 6, we spelled out how machine superintelligences will have not just a motive but a means to kill us all.
These are the sorts of claims we beg you to evaluate when deciding whether the race to superintelligence should be halted. A person can’t figure out whether AI research is on track to kill us all by arguing back and forth about the schemes of corporate executives.
Are the CEOs trying to drum up hype by talking about “AI risk”?
Or are they trying to pander to concerned researchers and lawmakers, and position themselves as the “good guys”?
These questions don’t bear on the facts about how smart machines would behave.
Even if the AI CEOs are eager to exploit discussions of danger to hype up their product, that doesn’t mean that the work they’re doing is therefore harmless. To figure out whether it’s dangerous, you have to look into AI itself as a technology, not at the press releases that come out of the labs.
Years before these companies existed, there were researchers and academics with zero corporate incentives — ourselves included — warning against racing to build smarter-than-human AI. We spoke to Sam Altman and Elon Musk before they co-founded OpenAI, and told them that the idea of starting OpenAI seemed foolish and likely to increase danger. We spoke with Dario Amodei before he joined OpenAI and advised against his relentless push to scale AIs up (a project that would lead to LLMs).
And if you look at the messaging today, many people without corporate incentives are expressing their concern. They range from respected academics to the late Pope to the chair of the FTC* to U.S. Congress members.
There’s something to be said for treating the utterances of tech CEOs with cynicism. There’s no shortage of examples of AI corporate executives being two-faced, saying one thing in personal blog posts and a different thing when testifying before Congress. But to leap from “the heads of these labs are liars” to “there’s no possible way AI could pose a severe threat” is very strange, when the labs themselves routinely downplay this issue. The Nobel laureate godfather of the field, the most cited living scientist, a steady trickle of whistleblowers, and hundreds of visibly nervous researchers are raising the alarm about it. Nothing about the situation looks like a business-as-usual corporate hype cycle. In a circumstance like this, dismissing the idea without even engaging with the arguments seems more like naiveté than cynicism.
Questions like “Can CEOs raise more money by talking about the dangers?” can tell us a little about how much to trust the CEOs, but they can’t tell us much about the dangers themselves. If discussing danger is profitable, that doesn’t affect whether the danger is real. If it’s unprofitable, that also doesn’t affect whether it’s real.
If you want to figure out whether the dangers are real, you have to ask questions like “Can anyone create an AI that would behave in a friendly fashion even after surpassing human intelligence?” and otherwise engage with arguments about AI, rather than arguments about the people standing nearby. So in the end, we beg you to engage with the arguments themselves. The consequences of getting this wrong are too high.
* Federal Trade Commission chair Lina Khan said in 2023: “Ah, I have to stay an optimist on this one. So I’m going to hedge on the side of lower risk there […] Maybe, like, 15 percent [that AI will kill us all].”
Notes
[1] a different thing when testifying: OpenAI CEO Sam Altman wrote in 2015:
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.
When addressing Congress in 2023, however, Altman made no mention of this threat, instead listing privacy, children’s safety, accuracy, disinformation, cybersecurity, and economic impacts as possible areas of concern.