We usually think of AI as faster and better versions of human brains, but if we want technology to progress by leaps and bounds, we must make AI that is like nothing else on Earth, says digital visionary Kevin Kelly.
Because of a quirk in our evolutionary history, we are cruising along as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular.
It is not.
Our own intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose,” because compared with other kinds of minds we have met, it can solve more types of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.
The kind of thinking done by the emerging AIs today is already somewhat unlike human thinking. While AIs can accomplish tasks (such as playing chess or describing the contents of a photograph) that we once believed only humans could do, they do not do them in a humanlike fashion. For instance, I uploaded 130,000 of my personal snapshots to Google Photos, and its AI remembers all the objects in all the images from my life. When I ask it to show me any image with a bicycle in it, or a bridge, or my mother, it will instantly display them. Human memory cannot scale to this degree, which makes this artificial ability feel quite un-human. Similarly, we are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills precisely so that they don’t think like us. In a super-connected world, thinking different is the source of innovation and wealth.
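As a small illustration of why such recall scales where human memory does not, here is a minimal sketch of the kind of data structure behind it: an inverted index from object labels to photos. This is purely illustrative Python, not Google’s actual system; the labels are placeholders for whatever a real vision model would emit.

```python
from collections import defaultdict

def build_index(photos: dict[str, list[str]]) -> dict[str, set[str]]:
    """Invert {photo -> labels} into {label -> photos} for instant lookup."""
    index: dict[str, set[str]] = defaultdict(set)
    for path, labels in photos.items():
        for label in labels:
            index[label].add(path)
    return index

# Placeholder labels standing in for a vision model's output.
photos = {
    "img_00001.jpg": ["bicycle", "bridge"],
    "img_00002.jpg": ["mother", "garden"],
    "img_00003.jpg": ["bicycle", "beach"],
}

index = build_index(photos)
print(sorted(index["bicycle"]))  # ['img_00001.jpg', 'img_00003.jpg']
```

Looking up “bicycle” is a single dictionary access no matter whether the collection holds three photos or 130,000, which is exactly the kind of effortless total recall no human memory can match.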
We’re entering a world where AI will be ubiquitous — cheap smartness will be embedded into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences and entirely new ways of thinking. These new ways of thinking need not be faster, greater or deeper than ours. In some cases, they will be simpler.
The variety of potential minds in the universe is vast. One way to imagine what these different intelligences would be like is to sketch a taxonomy of a variety of minds. This fanciful exercise is worth doing because while it is inevitable that we will manufacture intelligences in all that we make, it is not inevitable or obvious what their character will be. And their character will dictate their economic value and their roles in our culture. Outlining the possible ways that a machine might be smarter than us, even in theory, will assist us in both directing this advance and managing it.
We humans have no real definition of our own intelligence, in part because until now we haven’t needed one. But one thing we’ve learned is that even the most powerful mind cannot do all mindful things perfectly well. A particular species of mind will be better in certain dimensions, but at a cost of lesser abilities in other dimensions. In the same way, the smartness that guides a self-driving truck will be a different species from the one that evaluates mortgages. The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the intelligence woven into your clothes.
In my list I include only those kinds of minds that we might consider superior to us, and I’ve omitted the thousands of species of mild machine smartness, like the brains in a calculator, that will cognify the bulk of the Internet of Things.
Here are just a few possible new minds:
• A global supermind composed of millions of individual dumb minds in concert.
• A hive mind made of many very smart minds, but unaware that they form a hive.
• A borg supermind composed of many smart minds that are very aware they form a unity.
• A mind trained and dedicated to enhancing your personal mind, but useless to anyone else.
• A mind capable of creating a greater mind that can create a yet greater mind, etc.
• A mind with operational access to its source code, so it can routinely mess with its own processes.
• A dynamic mind capable of changing the process and character of its cognition.
• A half-machine, half-animal symbiont mind.
• A mind using quantum computing whose logic is not understandable to us.
The point of these examples is to emphasize that all cognition is specialized. The types of artificial minds we are making now and will make in the coming century will be designed to perform specialized tasks, usually tasks beyond what we can do. Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be the ones that think what we already think, only faster and better, but those that think what we can’t think.
To really solve the grand mysteries of quantum gravity, dark energy and dark matter, we’ll probably need intelligences other than human. And the even harder, more complex questions that come after those are answered may require still more distant and complex intelligences. We may need to invent intermediate intelligences that can help us design yet more rarefied intelligences that we could not design alone.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy for us to accept the answers from an alien intelligence. We already see this reluctance in our difficulty in approving mathematical proofs done by computer. Some mathematical proofs are so complex that only computers can rigorously check every step; however, these proofs are not accepted as “proof” by all mathematicians. We will need new skills in knowing when to trust these creations. Dealing with alien intelligences will require similar skills, and a further broadening of ourselves. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, science will have to know, and progress, according to the criteria of new minds.
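For a small taste of what a machine-checked proof looks like, here is a minimal example in Lean 4 (an illustrative choice of proof assistant; the text names no particular system). A mechanical kernel, not a human referee, certifies every step, which is the trust question raised above in miniature.

```lean
-- A fact certified by Lean's kernel rather than a human referee:
example : 2 + 2 = 4 := rfl  -- true by direct computation

-- Commutativity of addition, discharged by a core-library lemma
-- whose own proof the kernel has already verified.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

If the kernel accepts the proof, it is correct by the machine’s criteria; whether mathematicians then accept the machine’s verdict is precisely the cultural question.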
AI could just as well stand for “alien intelligence.” We have no certainty we’ll contact extraterrestrial beings in the next 200 years, but we have almost 100 percent certainty that we’ll manufacture an alien intelligence by then. When we face these synthetic aliens, we’ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for?
I believe our first answer will be: Humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different — to create alien intelligences. An AI will think about science like an alien, vastly different from any human scientist, thereby provoking us humans to think about science differently. Or to think about manufacturing materials differently. Or clothes. Or any branch of science or art. The alienness of artificial intelligence will become more valuable to us than its speed or power.
Artificial intelligence will help us better understand what we mean by intelligence in the first place. In the past, we would have said only a superintelligent AI could beat a human at Jeopardy! or recognize a billion faces. But once our computers did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. We label it “machine learning.” Every achievement in AI redefines that success as “not AI.”
But we haven’t just been redefining what we mean by AI — we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender — we are not the only mind that can fly a plane, make music or invent a mathematical law — will be painful and sad. We’ll spend the next few decades, or even the next century, in a permanent identity crisis, continually asking ourselves what humans are good for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science, although all those will happen. The greatest benefit is that AIs will help define humanity. We need AIs to tell us who we are.
Excerpted with permission from The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly, to be published in paperback on June 6, 2017, by Penguin Books, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © Kevin Kelly, 2016.