Why not use international cooperation to build AI safely, rather than to shut it all down?
Because we don’t have the technical ability to build it safely.
We touched on this in the book, where we pointed out that an international collaboration would still require a ban everywhere else (because otherwise the collaborators would not have the time they need to do the job carefully). But suppose Earth institutes such a ban; what’s the harm then in having one unified, collaborative research institute?
The harm is that an international collaboration of alchemists can’t transmute lead into gold any more than a single alchemist can. The best plan that all the alchemists agree upon still won’t do the job.
Relatedly, we’re worried that the people running an international institute like that would be the kind of bureaucrat who thinks that approving research is part of their job. Or the kind who thinks it’s their mandate to keep letting the researchers produce more and more brilliant medical advances. Or the kind who thinks it would be a bad look to say “No” to all the bright-eyed, eager AI optimists coming up with brilliant ideas for building an even more powerful machine intelligence that they guarantee will be safe.
We worry that a leader like that would direct the international center to keep building smarter and smarter AIs, and then everybody would die.
Even if the organization’s mandate nominally allows for backing off when the research looks dangerous, it might take a rare and brave soul to say “No” to thousands of different research proposals, year in and year out, with no exceptions, for what would likely be decades. All while the AI scientists keep promising untold wealth, a cure for cancer, and all manner of technological miracles, if only the organization would ease off on its concerns.
We’ve invested our lives in learning about machine intelligence, not about the culture of institutions and bureaucracies, so we’re less confident about our predictions in this domain. Still, we have read history books.
The Chernobyl operators continued with their disastrous safety test because it had been aborted three times already. Aborting it a fourth time would have been embarrassing.
Barely three months before the Chernobyl meltdown, NASA had launched the Space Shuttle Challenger on its final, fatal flight because the people in charge thought their job was to launch space shuttles. The launch had already been delayed three times. Delaying it a fourth time would have been awkward.
Judging by Chernobyl and the Challenger, three delays seem to be about the human limit. Suppose Earth sets up an international AI collaboration, and some “AI safety test” fails three times. Realistically, humans are the sort of creatures that would press “go” the fourth time despite some niggling doubts, because that feels less embarrassing than postponing the test again. Except that in the case of AI, failure wouldn’t just wipe out the city of Chernobyl or kill a crew of astronauts. It would kill everyone.
We’re fully on board with the idea that humanity should build smarter-than-human AI eventually.* But rushing to assemble an international AI research hub fails to take seriously the technical challenge before us.
Given humanity’s dismal state of knowledge and competence on this topic, it doesn’t matter who’s in charge. If anyone builds it, everyone dies.
* How, if not by an international coalition? We’d recommend investment in enhancing adult human intelligence, but people don’t need to agree on that idea to agree that shutting down ASI research is a good one.
Notes
[1] three times already: The INSAG-7 safety report (p. 51) records that rundown tests were attempted at Chernobyl in 1982, 1984, and 1985 before the disastrous 1986 test, which was itself embarrassingly delayed to the point where operators expected to be fired if they failed to run the test.
[2] delayed three times: Technically “postponed three times and scrubbed once” according to the Rogers Commission Report (p. 17). But one of the postponements occurred a month beforehand in response to delays in a different mission, whereas the other three happened in quick succession in the days leading up to the launch; it’s the latter three that we expect were putting pressure on the NASA managers who thought their job was to launch space shuttles.