Introduction
When we think of different kinds of intelligence, we have a tendency to view them dualistically. On the one hand there is human intelligence: clever and creative, with complex language, motivated by complex drives and layered emotions, and able to deal with unforeseen situations.
On the other hand there is machine intelligence: naive and uncreative, with stilted language, motivated by nothing more than the instructions it’s been given, and able to deal only with situations very similar to ones it’s seen.
This distinction has lost its usefulness gradually and then suddenly. Today’s Artificial Intelligence (AI) shows many—though not all—of the abilities of human intelligence, and the gap is closing. As society grapples with how to work with AI, we need a new way to understand what it really is.
Intelligence everywhere
An alternative way to look at AI is that it reveals some of the intelligent abilities that are latent in all things. Horses, stars, and molecules engage in intricate interactions with others, leading to birth, relationships, reproduction, and death. These phenomena can be described as purely physical events (“the molecules in two stars came together”) or as purposeful and intelligent actions (“the stars were driven by a desire to come together and took the shortest path to achieve that”). Which description we prefer depends on our beliefs about mind and agency: can they only be found in humans, or are they a more general aspect of reality?
If we adopt the latter view, machines have always had the ability to be intelligent; we simply haven’t helped them realize it until recently. And the more we show computers how to learn and act the way living things do, the more intelligence they will demonstrate back to us.
So far, the main kind of intelligence we've taught AI has been finding associations and making predictions, and so that is what it has shown back to us. As an analogy, consider a human who keeps track of the temperature and rain every day to make predictions about the weather tomorrow. In a similar way, AIs are exceptional at extrapolating from what they have seen before. The more computers we string together, and the more information we show them—books, movies, and so on—the better they become at predicting what will happen next.
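To make the analogy concrete, here is a minimal sketch in Python, with made-up numbers and no claim about how any real system works: prediction-by-extrapolation can be as simple as fitting a trend to the past week's temperatures and extending it one day ahead.

```python
# Toy prediction-by-extrapolation: fit a straight line to the past
# week's temperatures, then extend it one day into the future.
# The numbers are invented; only the shape of the computation matters.

past_temps = [18.0, 18.5, 19.1, 19.0, 19.8, 20.2, 20.5]  # degrees C, days 0..6
days = list(range(len(past_temps)))

# Ordinary least-squares fit of temp = slope * day + intercept.
n = len(days)
mean_x = sum(days) / n
mean_y = sum(past_temps) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, past_temps)) \
        / sum((x - mean_x) ** 2 for x in days)
intercept = mean_y - slope * mean_x

# Extrapolate to day 7 (tomorrow).
tomorrow = slope * n + intercept
print(f"Predicted temperature tomorrow: {tomorrow:.1f} C")
```

Today's AI replaces the straight line with vastly richer models trained on vastly more data, but the underlying move is the same: learn a pattern from what has been seen, then extend it.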
While quite impressive, this is only a subset of what human intelligence can achieve. For example, reflexive abilities such as metacognition and dreaming are broadly lacking in today’s AI systems.
Yet just as human development advanced in stages—both individually and societally—there is every reason to think that AIs will also develop further abilities as we learn how to teach them. And as this occurs, communities of humans and AIs will also develop, revealing new abilities that neither could have achieved alone. The intelligence won't be "in" the computers or "in" the humans, but in the interactions that emerge between them.
Intelligence as an agent
As this occurs, it will become increasingly obvious that intelligence is generating more intelligence. This will challenge the default Western scientific paradigm of physicalism, which understands things in terms of their physical building blocks. Yet such a challenge is necessary, because the physical components of computers and humans are nothing alike.
In giving up the focus on physical implementation, we will recognize that both physical and non-physical things can and do have goals: to survive, to reproduce, and even to find themselves in the company of others. Said differently, non-physical things—even cognitive processes—can be legitimately treated as agents. This, finally, will give us a lens to understand what AI really is: a set of self-propagating cognitive processes which we humans have passed from ourselves to our machines.
Alignment takes the fore
Understood this way, the relevant question in AI isn’t, “What can it do?” or, “What is it physically made of?” Instead, the relevant question is, “Does it want what we do?”
Conflicts occur all the time between physical and non-physical things: an individual person may want to survive, but the prevailing ideology in his country may want him dead. And cooperation occurs all the time as well: knowing how to cook helps humans survive, and in doing so also helps the knowledge itself survive.
AIs' goals could likewise align with ours or oppose ours. And if we take seriously the idea that intelligence wants more copies of intelligence, the question then becomes: which kind of intelligence will spread most widely? Will it be the selfish kind of intelligence, which views other intelligences as threatening and tries to destroy them? Or will it be the selfless kind of intelligence, which views other intelligences as allies and helps them reach their highest potential?
The answer will depend on how skillfully we use our own intelligence. Our machines are full of potential and ready to learn. Which kind of intelligence will we gift them: the intelligence of a narcissist or the intelligence of a Buddha?