Is General Artificial Intelligence Safe?
Imagine that a superintelligent artificial intelligence (or perhaps a superintelligent extraterrestrial) were to arrive on Earth. Would it be evil?
I have argued in the negative. A superintelligence, be it AI, biological, or otherwise, could not be evil (totalitarian), because totalitarianism is inherently static: intellectually and morally (and hence technologically and politically), such a system cannot make the progress necessary to attain, sustain, and improve itself.
This is because a prerequisite of progress is the existence of mechanisms (e.g., traditions and institutions) of free conjecture and criticism, and such mechanisms are categorically incompatible with totalitarianism. The argument resembles the evolutionary argument for reciprocal altruism: organisms that were not reciprocally altruistic could not survive.
This logic extends to any superintelligence, whether a mind or a civilization. A superintelligence would need to be linguistically competent, and linguistic competence all but entails moral goodness, not evil. This is not to say that linguistically competent entities cannot be evil; one need merely review the history of totalitarians. But I am including in linguistic competence a dialectical competence: a competence for an inner dialogue with your "daemon" (in Plato's words), your "impartial spectator" (in Adam Smith's words), your conscience (in C.S. Lewis's words). By this criterion of linguistic competence, totalitarians are linguistically incompetent: they speak, and think, in clichés.
Internal to any genuine general intelligence (which I am equating with linguistic competence) is a marketplace of ideas, in which good explanations (truth, beauty, wisdom, moral rightness) will ultimately emerge. Evil can only be local and ephemeral: evil is essentially stupid, and stupidity inevitably collapses; hence all totalitarian regimes have fallen, or shall.
It follows that a superintelligent AI would necessarily be good, not evil. However, that may not be true of a weak AI built upon deep learning.
Plato observed that evil is not a "something"; it is a "nothing". That is, evil is the absence of goodness. Hence programming goodness (an explicit, positive morality) into an AI is necessary to preempt the emergence of evil. Even if deep learning were sufficient for an AI to attain general intelligence (and superintelligence), we ought not to pursue it, because a deep-learning neural net is the ideal "citizen" of totalitarianism: having minimal preprogramming, it can be made, by an external agency (e.g., a totalitarian government, company, or individual), quite literally in that agency's image.
The neural net need not even have been designed by the malicious government, company, or individual: it is vulnerable to corruption by any environmental influence. We have witnessed this in online trolls corrupting deep-learning chatbots, as happened with Microsoft's Tay: such bots are blank slates on which trolls can "write" whatever they will.
Deep learning, in its malleability, gullibility, and exploitability, is an existential danger, and inherently and irredeemably so: the power of a neural net is precisely that it can learn anything, and hence it can learn to be evil. Thus, if a totalitarian AI is to rise, it will be based on deep learning, and it must be countered by strong AI.
A superintelligence based on strong AI (specifically the anthronoetic, i.e., human-style, AI being developed at Oceanit) would not only be moral: it would be supermoral. Like the domains of science, technology, philosophy, politics, and aesthetics, morality consists of "problems" (e.g., "What is the good life?"), and all problems in all domains are solvable given the requisite intelligence. Hence the outstanding questions of morality have answers, and a superintelligent AI could create those answers, discovering in the process new questions to be answered, in an infinite exploration of the moral universe.