Won’t the most reckless companies naturally be the most incompetent, and thus not a threat?

Not in general. Corner-cutting is often competitive.

Volkswagen’s efforts to cheat on emissions tests from 2008 to 2015 were audacious — and apparently effective. The 2018–2019 crashes of Boeing’s 737 MAX, attributed to flaws in a flight control system that management knew about and downplayed, killed 346 people. But automobile and aircraft manufacturing are highly competitive industries in which Volkswagen and Boeing were, and remain, giants.

It doesn’t seem to us a great mystery that the corner-cutters are competitive. In both cases, the behavior seems to have been driven by pressure to get high-performance products to market cheaper and sooner than the competition. Even now, after massive settlements and brand damage, it’s not obvious that the companies are less competitive for having corporate cultures that encourage the clever cutting of corners, even if this occasionally means getting caught.

If you think top AI companies are any exception to this rule, consider the following July 2025 headline (and subheading):

Headline reads: “Grok rolls out pornographic anime companion, lands Department of Defense contract.” Subheading reads: “Meanwhile, the most advanced version of the AI chatbot from Elon Musk’s xAI is still identifying as Adolf Hitler.”

We don’t think it’s technically possible for any team using anything like modern methods to build a superintelligence without causing a catastrophe. But even if this were remotely possible using today’s technology, it seems almost unavoidable that an AI company would fumble anyway and get us all killed, given the level of competence and seriousness we see today.

The more cautious companies today are still reckless.

The AI company Anthropic is considered by a reasonable number of people to be a leader on “AI safety,” because they have pioneered efforts such as voluntary safety commitments. But even they alter their voluntary commitments at the last minute when it turns out they can’t meet them, and the “plans” they do have are vague and poorly thought through, as critiqued in Chapter 11 and in the extended discussion below.

Anthropic benefits heavily from the fact that observers are grading on a curve — in a normal industry, a company that chooses to endanger the lives of billions of people (as admitted by the CEO), while routinely downplaying their activities to the public and to lawmakers,* wouldn’t garner praise for their restraint.

Cutting corners is common in AI, as it is in many competitive industries. Recklessness is common. And the less reckless companies are very visibly not on top of the challenges.

* For instance, in testimony to Congress:

Similar to cars or airplanes, we should consider the AI models of the near future to be powerful machines which possess great utility, but that can be lethal if designed badly or misused. […] New AI models should have to pass a rigorous battery of safety tests both during development and before being released to the public or to customers. […] Ideally, however, the standards would catalyze innovation in safety rather than slowing progress.

We appreciate Amodei for being clear that he thinks there are dangers that need addressing. That’s a step beyond what many corporate executives will do. But analogizing a technology that he thinks has a 10–25% chance of causing a civilization-level catastrophe to cars and airplanes seems disingenuous.

Notes

[1] to the public: For instance, in “Machines of Loving Grace,” Anthropic CEO Dario Amodei describes powerful AI as akin to “a country of geniuses in a datacenter” and outlines a number of wonderful benefits to health, wealth, peace, and meaning that such minds could produce for humanity. He concludes:

Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. [...] These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.

That’s a strange way to present the belief that you’re building a technology with a 10 to 25 percent chance of being catastrophic for civilization, even given the huge potential benefits in the case of success. Even if the danger levels are as low as Amodei believes, we should be scrambling to find a third alternative aside from “never proceed” and “charge ahead.” And if the company thinks it’s forced to charge ahead (because other people are already charging), it should be begging world leaders to put an end to the suicide race, so that a third alternative can be found. Painting a rosy picture just seems like throwing up a distraction when the context is gambling with everyone’s lives.
