When is this worrisome sort of AI going to be developed?

Knowing that a technology is coming doesn’t grant knowledge of exactly when it’s coming.

Many of the things people ask us to predict, we in fact have no way of knowing. When Leo Szilard wrote a letter warning the United States about nuclear weaponry in 1939, he did not and could not include any note along the lines of, “The first atomic weapon will be ready to detonate for testing in six years.”

This would have been very valuable information! But even when you are the first person to correctly predict nuclear chain reactions, as Szilard was — even when you’re the very first one to see that a technology is possible and will be consequential — you cannot predict exactly when that technology will arrive.

There are easy calls and hard calls. We do not pretend to be able to make hard calls, such as exactly when the dangerous sort of AI will be produced.

Experts keep being surprised by how fast AI progress happens.

Not knowing when AI is coming is not the same as knowing that it’s a long way off.

In 2021, the forecasting community on the prediction website Metaculus estimated that the first “truly general AI” would arrive in 2049. One year later, in 2022, that aggregate community prediction had fallen by twelve years, to 2037. Another year later, in 2023, it had fallen by a further four years, to 2033. Again and again, forecasters have been surprised by the fast pace of AI progress, with their estimates shrinking dramatically year over year.

This phenomenon is not unique to Metaculus. An organization called 80,000 Hours documents various other cases of rapidly shortening timelines from many groups of expert forecasters. And even superforecasters — who consistently win forecasting tournaments and often exceed domain experts in their ability to predict the future — assigned only a 2.3% probability to AIs achieving an International Math Olympiad gold medal by the year 2025. AIs achieved exactly that in July of 2025.
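To put a rough number on the size of that miss (our illustration, using a standard scoring rule, rather than a figure from the forecasting studies themselves): the Brier score penalizes a probability forecast $p$ on a binary outcome $o \in \{0, 1\}$ by $(p - o)^2$, where lower is better. A 2.3% forecast on an event that then happens scores

$$(0.023 - 1)^2 \approx 0.955$$

out of a worst possible score of 1.0. Even a noncommittal coin-flip forecast of 50% would have scored 0.25.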

Smarter-than-human AI might intuitively look like it’s decades off, but ChatGPT-level AI looked like it was decades off in 2021, and then it suddenly arrived. Who knows when the next qualitative AI improvement will arrive? Maybe it’ll take another ten years. Or maybe a breakthrough will come tomorrow. We don’t know how long it will take, but a number of researchers have become increasingly worried that time might be running short. Without claiming special knowledge on this front, we think humanity should react soon. It’s not clear how much more warning we’re ever going to get.

See Chapter 1 for more discussion of ways that AI capabilities could cascade with very little warning. And see Chapter 2 for more discussion of modern AI paradigms, and whether they will or won’t be able to go “all the way.”

Be suspicious of media claims about what can and can’t happen soon. (It may have already happened!)

Two years after Wilbur Wright’s dejected prediction that powered flight would take a thousand years, the New York Times confidently asserted it would take a million. Two months and eight days later, the Wright brothers flew.

Today, skeptics continue to make over-the-top claims that AI could never possibly rival humans in some specific capability, even as recent progress with machine learning shows AIs matching (or exceeding) human performance on a growing list of benchmarks. It has been known since at least late 2024, for example, that modern AIs can often identify sarcasm and irony from text and even nonverbal cues. But this didn’t stop the New York Times from repeating the claim in May 2025 that “scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony.”*

All of which is to say: Many will claim to have knowledge that smarter-than-human AI is imminent, or that it’s incalculably far off in the future. But the uncomfortable reality is that nobody knows right now.

Worse, there’s a strong chance that nobody will ever know until after it’s too late for the international community to do anything about the matter.

Timing the next technological breakthrough is incredibly difficult. We know that smarter-than-human AI is lethally dangerous, but if we also need to know what day of the week it’s coming on, then we’re out of luck. We need to be able to act from a position of uncertainty, or we won’t act at all.

* Yes, AIs can even recognize the irony of the New York Times reporting that they can’t recognize irony. (To be fair to the New York Times, some of their reporters cover AI with somewhat greater clarity.)

Notes

[1] confidently asserted: Quoting the 1903 article “Flying Machines Which Do Not Fly”:

The machine does only what it must do in obedience to natural laws acting on passive matter. Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years — provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. No doubt the problem has attractions for those it interests, but to the ordinary man it would seem as if effort might be employed more profitably.
