Individual Submission Summary

Superintelligence: A Game-Changer for IR?

Thu, September 5, 12:00 to 1:30pm, Marriott Philadelphia Downtown, Franklin 1

Abstract

The effects of artificial intelligence on military technology and warfare are now an established topic in IR. Yet the field has all but ignored the prospect of artificial superintelligence (ASI). This neglect is a serious mistake.

On many estimates, artificial general intelligence is likely to emerge by the second half of this century. Once it does, its capacity for self-improvement may allow it quickly to surpass human intelligence by as much as ours exceeds that of rats or rabbits. For several reasons, IR should not ignore this possibility.

Most obviously, the first actor to develop superintelligence may conquer the world. Alternatively, ASI may take control of international affairs for its own purposes. Either way, the international system will pass from anarchy to hierarchy. While this might free humanity from the threat of nuclear holocaust and other existential dangers, it could also mark the beginning of an enduring tyranny or, if the ASI's values and aims conflict with our own, lead to the extinction of humanity, perhaps even of life on Earth.

Second, ASI projects are underway in dozens of states. Commercial incentives, together with the fact that existential-risk regulation is a public good, ensure that these efforts will continue. Unfortunately, actors who give short shrift to safety concerns will have an edge in reaching ASI first.

Third, the prospect of superintelligence, whether or not it is ever realized, may fuel destabilizing arms races and even preventive war. Some writers have proposed placing research under international control, but states are no more likely to cede so powerful a tool to an international authority than they were the atomic bomb. The result may be intense arms racing, complicating the task of aligning ASI with human values. Worse yet, given poor prospects for controlling ASI proliferation, it might be prudentially rational, and even morally justified, to race for a monopoly.

Finally, the field of IR can contribute important insights. A great deal of research has focused on the ‘alignment problem’: ensuring that the aims of superintelligent AI remain well aligned with human ones. Much of this literature, however, underestimates the obstacles to cooperation posed by the plurality of human values and by collective action problems. Ideas that have attracted philosophers and AI safety researchers, such as centralizing ASI research in an international agency or putting a moderately superintelligent ‘AI nanny’ in control of its development, will meet a deservedly critical reception in IR.

ASI could prove to be what some once expected of nuclear weapons: a rapidly proliferating, game-changing technology that leads either to world unification or to humanity's destruction. IR scholars should join the debate.

Author