Resources › Chapter 11

- **Won't we just muddle through, like always?** The world usually muddles through by trial and error. In this case, early errors wouldn't leave survivors. (1 min read)
- **Do you see alignment as all-or-nothing?** No. But "partial alignment" is still likely to be catastrophic. (2 min read)
- **Won't the situation get better once governments get more involved?** It depends on how (and how soon) they get involved. (2 min read)
- **Won't the most reckless companies naturally be the most incompetent, and thus not a threat?** The more cautious companies today are still reckless. (4 min read)
- **Isn't it important to race ahead because of "hardware overhang"?** That would be suicidal, because we're too far from an alignment solution. (3 min read)
- **Isn't it important to race ahead so we can do alignment research?** We strongly recommend against this entire AI paradigm. (3 min read)
- **What if AI companies only deploy their AIs for non-dangerous actions?** Actions that seem benign can still require dangerous capabilities. (9 min read)
- **Why not just read the AI's thoughts?** Their thoughts are hard to read. (6 min read)
- **What if we made AIs debate, compete with, or oversee each other?** If the AIs get smart enough to matter, they likely collude. (3 min read)
- **What about various other AI alignment plans?** We cover additional alignment proposals in the book. (1 min read)
- **Won't there be early warnings researchers can use to identify problems?** Warning signs don't help if you don't know what to do with them. (8 min read)

Your question not answered here? Submit a Question.

Extended Discussion

- More on Some of the Plans We Critiqued in the Book
- We Know What It Looks Like When a Problem Is Being Treated with Respect, and This Isn't It
- Shutdown Buttons and Corrigibility