Will elected officials recognize this as a real threat?

An increasing number already have.

Examples include:

“We have to be very careful with it, right? We have to watch it. [...] You know, there are people that say it takes over, it takes over the human race. It's really powerful stuff, AI.”

“Today’s discussion will likely touch on some of the pressing issues in highly capable AI systems: [...] The push to develop Artificial General Intelligence, or super-intelligent AI, that would be so powerful and capable that we would see it as a ‘digital god.’ [...] I hope we will spend our time today on the specific policy solutions necessary to avert the long-term risks of AI and the potential doomsday scenarios.”

“Artificial Intelligence is the biggest technological threat we've faced since the invention of the atomic bomb.”

“And now, artificial general intelligence or AGI, which I know our witnesses are going to address today, provides even more frightening prospects for harm. The idea that AGI might in 10 or 20 years be smarter, or at least as smart as a human being, is no longer that far out in the future. It is very far from science fiction. It is here and now. One to three years has been the latest prediction, in fact, before this Committee. And we know that artificial intelligence that is as smart as human beings is also capable of deceiving us, manipulating us, and concealing facts from us, and having a mind of its own when it comes to warfare, whether it is cyber war or nuclear war or simply war on the ground in the battlefield.”

“If we miss this opportunity the consequences will shape generations to come. What begins today as Generative AI may one day become Artificial General Intelligence. A wild, unregulated AI industry — that is accountable to no one — developing Artificial General Intelligence should scare us all into action.”

We think the main impediment to people recognizing the threat is getting them to understand it. In the few short months since the book was shipped off to print, it seems to us that the world has already made additional headway in that direction.

Here are some statements from the summer of 2025 by U.S. politicians on both sides of the political aisle:

“Some experts warn we are just a few years away from the emergence of artificial general intelligence — or the singularity. Others argue the technology has inherent limitations and we are decades away from the singularity, if it is even possible. We don’t know for certain what the future of AI will look like. But what I do know is the future is too important to be left up to chance. We need to do our best to understand what kinds of impact AI can have on our economy and society and develop potential solutions now, before it’s too late.”

“Artificial superintelligence is one of the largest existential threats that we face right now. […] Should we also be concerned that authoritarian states like China or Russia may lose control over their own advanced systems? […] And is it possible that a loss of control by any nation-state, including our own, could give rise to an independent AGI or ASI actor that globally we will need to contend with?”

“I’m not voting for the development of skynet and the rise of the machines by destroying federalism for 10 years by taking away state rights to regulate and make laws on all AI.”

“Industry leaders have publicly acknowledged the development of increasingly powerful artificial intelligence systems, with some discussing the potential for artificial general intelligence and superintelligence that could fundamentally reshape the society of the United States.”

“Last month, it was reported that Open AI's chief scientist wanted to ‘build a bunker before we release AGI.’ [...] Rather than building bunkers, however, we should be building safer AI. Whether it's American AI or Chinese AI, it should not be released until we know it's safe.”

“This is something that we have to get right, and we only get one shot at. There's a widely shared view that once AI capability crosses a certain threshold, whether that be recursive self-improvement or some other threshold, there's going to be an escape velocity. That has implications for the narrower geopolitical context of which country leads in the technology, but also for the broader idea of, ‘Is this technology going to be aligned with and beneficial to humanity?’”

“The deeper we get into it, the more we realize that it’s also possible that the race to be the first in AI is the race to be the first to lose control.” (And: “As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligence — the end of humanity — I don't think we can afford to ignore that.”)

“There are very, very knowledgeable people — and I just talked to one today — who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario — and there is some concern about that among very knowledgeable people in the industry.”

Other noteworthy statements include:

“Alarm bells over the latest form of artificial intelligence, generative AI, are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously.”

Much more progress is needed, but the world is beginning to take notice. The time is ripe to alert officials to the need for rapid action at both the federal and international levels.
