Error (in the first print, U.S. and U.K. editions): On page 137, we said that @truth_terminal came out in 2023. It came out in 2024.
Error (in the first print, U.S. and U.K. editions): On page 213, we said that “[t]he entire technological revolution that led to ChatGPT and other popular LLMs was kicked off by a 2018 paper introducing a clever new arrangement of arithmetic inside a GPU, the ‘transformer’ algorithm, […]” The paper, titled “Attention Is All You Need,” was actually published in 2017; it led to the creation of the first GPT (GPT-1) in 2018.
Error (in the first print, U.S. and U.K. editions): On page 213, we said that “[p]retty much every year, scientists come out with a newer, cleverer, more efficient set of AI algorithms that lets them more cheaply train a new AI model as powerful as last year’s most powerful model — often using literally 10 percent or 1 percent as much computing power.” The word “often” should be “sometimes.” In recent years, algorithmic progress has tended to reduce the computing power needed to train a model as powerful as the previous year’s to about 33 percent. The 10 percent and 1 percent cases occur when new algorithms let AIs behave in qualitatively new ways (as at the dawn of LLMs); such advances are harder to measure and do not tend to come yearly.