Isn’t this AI stuff just science fiction?
We can’t learn much from a topic’s prevalence in fiction.
Smarter-than-human AI hasn’t been built yet, but it has been depicted in fiction. We recommend against anchoring on these depictions, however. Real AI probably won’t be much like fictional AI, for reasons we’ll dive into in Chapter 4.
AI isn’t the first technology that was anticipated by fiction. Heavier-than-air flight and travel to the moon were both depicted before their time. And the general idea of nuclear weapons was anticipated by H. G. Wells, one of the first science fiction writers, in a 1914 novel called The World Set Free. Wells didn’t get the details right; he wrote about a bomb that went on burning intensely for days, rather than a bomb that exploded all at once and left lingering death behind. But Wells had the general idea of a bomb that ran on nuclear rather than chemical energy.
In 1939, Albert Einstein and Leo Szilard sent a letter to President Roosevelt that called for the U.S. to try to outrace Germany in building an atomic bomb. We could imagine a world where Roosevelt had first encountered the notion of nuclear bombs in Wells’s novel, causing him to dismiss the idea as science fiction.
As it happens, in real life, Roosevelt took the idea seriously, at least enough to create the Advisory Committee on Uranium. But this case demonstrates the peril of dismissing ideas just because a fiction writer talked about a similar-sounding idea in the past.
Science fiction can mislead you because you assume it’s true, or it can mislead you because you assume it’s false. Science fiction authors aren’t prophets, but they also aren’t anti-prophets whose words are guaranteed to be wrong. In the vast majority of cases, we’re better off ignoring fiction and analyzing technologies and scenarios on their own terms.
To predict what happens in reality, there is no substitute for just thinking through the arguments and weighing the evidence.
The consequences of AI are inevitably going to be weird.
We sympathize with the reaction that AI is weird: that it would transform the world and upend the status quo. All of us have intuitions adapted, to some degree, to a world in which humans are the only species capable of feats like building a power plant. All of us have intuitions adapted to a world where machines, throughout all of human history, have always been unintelligent tools. One thing we can be very confident of is that a future with smarter-than-human AIs would look different.
Large, lasting changes to the world don’t happen every day. The heuristic “nothing ever happens” performs great most of the time, but the times when it fails are some of the most important times in history to be paying attention. Much of the point of thinking about the future at all is to anticipate those moments when something big does happen, so that preparation is possible.
One way to overcome a bias toward the status quo is to recall the historical record, as discussed in the introduction.
Sometimes, particular inventions end up upending the world. Consider the steam engine, and the many other technologies it helped enable during the Industrial Revolution, rapidly transforming human life.
Is the advent of truly general AI a similarly consequential development? It seems that artificial intelligence would be at least as consequential as the Industrial Revolution. Among other things:
- AI is likely to enable technological progress to develop much faster. As we’ll discuss in Chapter 1, machines can operate much faster than the human brain. And humans can improve AI — and AI will eventually be able to improve itself — until machines are far better than humans at making scientific discoveries, inventing new technologies, et cetera.
For all of human history, the machinery of the human brain remained fundamentally unchanged, even as humanity produced ever-more-impressive feats of engineering. When the machinery of cognition begins to improve in its own right, when it becomes capable of improving itself, we should expect many different things to start changing very quickly.
- Additionally, as we’ll discuss in Chapter 3, sufficiently capable AIs are likely to have goals of their own. If AIs were essentially just faster and smarter human beings, then that would be a huge deal in its own right. But AIs will instead be, in effect, a totally new species of intelligent life on Earth — one with its own goals, which are likely (as we’ll discuss in Chapters 4 and 5) to importantly diverge from human goals.
On the face of it, it would be surprising if these two major developments could occur without upending the existing world order. Believing in a “normal” future seems to require believing that machine intelligence will never surpass human intelligence at all. This never seemed like a truly viable option, and it’s become far harder to believe in 2025 than it was in 2015 or 2005.
The long-term future will likewise be weird.
If you look too far into the future, the result is going to be weird somehow. The 21st century looks downright bizarre from the perspective of the 19th century, which looked bizarre from the perspective of the 17th century. AI accelerates this process and adds a genuinely novel player to the game board.
One aspect of the future that seems predictable today is that advanced technological species won’t remain stuck on their own planet indefinitely. Right now, the night sky is full of stars just burning off their energy. But nothing stops life from building the technology to travel the stars and harvest that energy toward some purpose.
There are some physical limitations on how quickly that travel can be done, but it looks like there are no limitations on doing it eventually. There’s nothing stopping us from eventually developing the kinds of interstellar probes that can go out and extract resources from the universe writ large and convert these resources into flourishing civilizations, with a side order of more self-replicating probes to colonize yet more regions of space. If we displace ourselves with AIs, there’s nothing stopping those AIs from doing the same, but swapping out “flourishing civilizations” for whatever ends the AI is pursuing.
In the same way that life spread to barren rocks on Earth until the whole world was teeming with organisms, we can expect life (or machines built by life) to eventually spread to uninhabited parts of the universe, until it’s just as strange to find a lifeless solar system as it would be to find a lifeless island on Earth today, devoid even of bacteria.
At present, most of the matter in the universe, like stars, is arranged by happenstance. But the sufficiently long-term future is almost surely one in which most of the matter is arranged according to some design, i.e., according to the preferences of whichever entities manage to harvest and repurpose the stars.
Even if nothing on Earth ever spreads through the cosmos, and even if most intelligent life that arises in distant galaxies never leaves its home planet, it only takes one spacefaring intelligence anywhere in the universe to light the spark: traveling to new star systems and using the resources there to build more probes that expand outward to yet more star systems. Just as it only took one self-replicating microorganism (and a bit of exponential growth) to turn a lifeless planet into a world teeming, on every island, with life.
So the future will look different from the present. Indeed, we can expect it to look radically different. The stars themselves will predictably be transformed, in the long run, by whatever biological species or AIs are looking for more resources — even if we can’t say much today about what that species might look like, or about what ends the universe’s resources might be put toward.
Predicting the details seems difficult, verging on impossible. That’s a hard call. But predicting the transformation of the universe into a place where most matter is harvested and put toward some purpose, whatever that may be? That is an easier call, even if it’s counterintuitive and weird to a civilization that has barely begun to extract resources from stars at all.
A million years from now, we shouldn’t expect the future to look like the year 2025, with a bunch of hairless apes messing around on the surface of Earth. Long before that, either we’ll have killed ourselves, or our descendants will have gone out to explore the cosmos themselves.*
It’s definitely going to get weird for humanity. The question is when.
The future will hit us fast.
Technologies like AI mean that the future may come knocking at our door soon, and its effects may hit us hard.
The Industrial Revolution transformed the world very quickly, by the standards of pre-modern history. Homo sapiens reshaped the world very quickly, by the standards of evolutionary processes. Life reshaped the world very quickly, by the standards of cosmological and geological processes. New processes for changing the world can reshape the world very quickly, as measured by the old standard.
Humanity looks to be on the brink of another radical transformation, where machines can begin reshaping the world at machine speeds, which far outstrip biological speeds. We’ll have more to say in Chapters 1 and 6 about just how well machine intelligence would measure up against human intelligence. But minimally, we need to take seriously the possibility that the development of smarter-than-human machines would radically change the world at high speed. That sort of thing has happened over and over again throughout the course of time.
* Or they’ll have built tools or successors to do the exploring, in whatever way they find convenient with the benefits of more advanced science and technology.
Notes
[1] nothing ever happens: The phrase “nothing ever happens” appears to be common among people who participate in prediction markets. The heuristic itself is discussed by, e.g., the blogger Scott Alexander in his essay Heuristics That Almost Always Work.
[2] no limitations: See, for example, the paper Eternity in six hours, which discusses the limits on intergalactic colonization given the constraints of known physical law.