Human Intelligence Enhancement | If Anyone Builds It, Everyone Dies

Enhancing Human Minds for the Alignment Problem

Research & Collaboration Repository

Are you a biotech researcher, founder, investor, or policymaker interested in contributing to human intelligence enhancement?

Tell us more here.

Why enhancing human intelligence could be a critical path toward solving the alignment problem

The challenge of aligning artificial superintelligence is likely solvable in principle, but it looks far beyond what today’s researchers could manage on the first critical try. Smarter humans, however, might have a shot.

Enhancing human intelligence is a path that could give us the tools to solve alignment ourselves. Humans come “pre-aligned” in ways machines do not: we share basic motivations and values, and our reasoning is usually comprehensible to other humans. Some very smart people are very altruistic, and humans have a reasonable chance of telling which humans are genuinely kind and which are somewhat sociopathic. Even modest improvements in human mental abilities — one or two steps beyond Einstein — might be enough to let researchers recognize their own blind spots, avoid systematic biases, and develop a theory of intelligence that works for AI alignment on the first try. We don’t claim this would be easy or guaranteed, but compared to entrusting opaque machines with humanity’s future, augmenting human intelligence stands out as one of the most hopeful strategies available.

Even if researchers with augmented intelligence can’t solve the AI alignment problem, they may be able to decrease the risk of human extinction from AI through other pathways. Creating and instituting social structures — such as government policies, treaties, professional rules, and social norms — is itself a set of difficult challenges, at which smarter people would have a better chance of succeeding. Also, if there were many more very smart people, many research areas would progress more quickly. If science, technology, and medicine were advancing rapidly, there would be much less justification for rushing to build superhuman AI: Why risk destroying humanity, when we can get much of the benefit using our own brainpower anyway?

Who should reach out?

Significantly increasing humanity’s brainpower is a big and complex task, and it will require a wide range of resources and talents. If you have expertise in the sort of biotechnology that would be useful for augmenting human intelligence, we encourage you to fill out this form. We’d like to hear from anyone who’s interested in contributing to the effort of human intelligence amplification. For example:

  • People who might want to fund research or invest in commercial ventures that will advance the field of human intelligence enhancement;
  • Scientists with the relevant biotech background who might want to put their knowledge and talent towards the relevant research challenges;
  • Regulators and other policymakers who want to make innovation-positive government regulations for the relevant emerging technologies;
  • Skilled generalists eager to investigate and chart potential approaches.

We are not experts in the relevant aspects of biotechnology, but we have connections to various figures in the field. At the very least, we can perhaps connect funders, researchers, and policymakers to one another, to help grease the gears.

Links and resources

There’s a large amount of scientific research in many areas that bear on human intelligence enhancement, but little direct discussion of the technical challenge itself. Here are some articles that address the challenge more directly: