Won’t the situation get better once governments get more involved?

It depends on how (and how soon) they get involved.

When we visit Washington, DC, we often meet policymakers who think that the AI companies have their AIs under control. At the same time, we regularly see folks in the AI industry who say that regulation will fix the problem. A particularly egregious example we observed was the CEO of Google saying that the “underlying risk [of humanity being wiped out] is actually pretty high,” while arguing that the higher the risk gets, the more likely it is that humanity will rally to prevent catastrophe.

Setting aside how insane it is for the CEO of a company to race ahead building a technology that he thinks endangers everyone on Earth, in the hope that humanity will “rally” to address the risks he himself is helping create — observe that this is a case where a person on the technical side of the issue imagines that somebody else will solve it.

Meanwhile, most folks in politics seem to think that the technical community will solve the problem. This is implicit, for example, every time they say we have to win the race — it’s not possible for this sort of race to have a winner if the technical challenges aren’t solved. Although it might not be quite as bad as all that: perhaps the policymakers aren’t actually thinking of a race to superintelligence; perhaps they’re just thinking of a race to better chatbots. As of June 2025, an AI policy advisor we know describes Congress as generally not believing the AI companies when they explicitly say they are working on superintelligence (albeit with some important exceptions).

Just about everyone in power seems to imagine that somebody else is going to solve the issue.

For more discussion of how the world at large is reacting (and how decisionmakers often fail to react appropriately before disasters), see Chapter 12. As of August 2025, governments have yet to muster anything approaching a serious response to this issue. And there’s always a risk that government officials will fail to understand the challenge entirely, and (e.g.) treat AI as a normal technology that merely shouldn’t be strangled by overregulation.

For more on what government interventions have a real hope of averting an AI catastrophe, read on to Chapter 12. The online resources for Chapter 12 also include a discussion on why an international collaboration probably wouldn’t cut it.
