Machines might actually be better than humans at creativity. So … what’s left for us to do?

But some jobs — and industries — will still require a human touch. Technology researchers Andrew McAfee and Erik Brynjolfsson have a guess as to which ones.

A surprising report published this spring projects that 30 percent of current jobs in the UK might be replaced by automation in the next 15 years. That number is 21 percent in Japan, 35 percent in Germany and a whopping 42 percent in the United States. The losses will come mainly from the transportation, banking and retail industries, and largely around jobs that involve a lot of routine, repeatable tasks.

As digital tools challenge human superiority in more complex work — like information processing, pattern recognition, language, intuition, judgment, prediction and physical dexterity — we hear one question asked over and over again: “Which abilities will continue to be uniquely human as technology races ahead?” In other words, are there any areas where we humans should not expect to be outstripped?

The most common answer we hear is “creativity.” Many people argue there’s something uniquely human about the ability to come up with a new idea. But a look at the field of industrial design might convince them that machines are getting quite good at coming up with powerful new ideas on their own.

It’s probably safe to say that most people never think about heat exchangers. But people who design refrigerators, furnaces and engines think about them a lot. An exchanger lets heat move from one fluid (like a liquid or gas) to another, while preventing either fluid from coming into contact with the other. A bedroom radiator contains a heat exchanger — it passes heat from the steam flowing through it to the air around it — and so does an air conditioner, and the grid on the back of your fridge. A typical heat exchanger looks like this.

Creating a good heat exchanger is tough. It has to transfer the right amount of energy, and it has to be efficient, safe, durable and cheap. A heat-exchanger designer has to understand required performance levels, thermo- and fluid dynamics, material properties, manufacturing methods and costs, and so on. Many designers draw on the useful knowledge embedded in previous successful heat exchangers; they tweak an existing design to satisfy a new case.
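
To get a feel for the numbers involved, here’s a rough back-of-envelope sketch in Python of one standard sizing relation, Q = U × A × ΔT_lm (the log-mean temperature difference method for a counterflow exchanger). The figures are invented for illustration; a real design has to juggle far more than this.

```python
from math import log

def lmtd(hot_in, hot_out, cold_in, cold_out):
    """Log-mean temperature difference for a counterflow exchanger (degrees C)."""
    dt1 = hot_in - cold_out    # temperature gap at one end
    dt2 = hot_out - cold_in    # temperature gap at the other end
    if abs(dt1 - dt2) < 1e-9:  # equal gaps: the log-mean is just dt1
        return dt1
    return (dt1 - dt2) / log(dt1 / dt2)

def required_area(q_watts, u, hot_in, hot_out, cold_in, cold_out):
    """Surface area (m^2) needed to transfer q_watts, from Q = U * A * dT_lm."""
    return q_watts / (u * lmtd(hot_in, hot_out, cold_in, cold_out))

# Illustrative numbers only: move 50 kW with an overall heat-transfer coefficient
# of 500 W/(m^2*K), hot stream cooling 90 -> 60 C, cold stream warming 20 -> 45 C.
area = required_area(50_000, 500, hot_in=90, hot_out=60, cold_in=20, cold_out=45)
print(f"Required surface area: {area:.1f} m^2")
```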

But what would a heat exchanger designer who had all the requisite knowledge but none of the accumulated experience come up with? Let’s say the designer knew exactly what the performance specs were — the dimensions, cost, lifespan, energy transfer, etc. — and was an expert in the relevant scientific and engineering disciplines, but had never worked on a heat exchanger before or recognized that it might be valuable. Below is one example of what such a designer would come up with. As you’ve probably guessed by now, it was designed by a computer.

It’s an example of “generative design,” a process in which software is used not to help a human designer create drawings, perform calculations and explore trade-offs, but instead to do all that work, 100 percent automatically, and to come up with designs that satisfy all requirements. This part was manufactured by 3D printing. In fact, it would have been impossible to make using traditional manufacturing processes. Generative-design software is not constrained by older methods, so it is free to imagine and propose a vastly wider range of shapes. Unlike most, if not all, human designers, the software isn’t consciously or subconsciously biased toward existing processes, so it explores more freely.
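
“Generative” here essentially means search: the software proposes candidate designs, scores them against the requirements, and keeps only the best. The toy sketch below, in Python with a made-up two-parameter “design” and made-up scoring functions, shows the bare logic; real tools search over full 3D geometry using physics simulation, not two numbers.

```python
import random

# A toy "design" is just two numbers (wall thickness, number of channels).
# Real generative-design tools search over full 3D geometry instead, and
# these scoring functions are invented for illustration, not real physics.
def mass(design):
    thickness, channels = design
    return 0.8 * thickness * channels           # more material, more mass

def heat_transferred(design):
    thickness, channels = design
    return 12.0 * channels / (1.0 + thickness)  # thin walls, many channels: more transfer

def meets_spec(design):
    thickness, channels = design
    return thickness >= 0.5 and heat_transferred(design) >= 100.0

def generate(n_candidates=20_000, seed=0):
    """Propose random designs and keep the lightest one that meets the spec."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        candidate = (rng.uniform(0.5, 5.0), rng.randint(1, 60))
        if meets_spec(candidate) and (best is None or mass(candidate) < mass(best)):
            best = candidate
    return best

thickness, channels = generate()
print(f"thickness={thickness:.2f}, channels={channels}, "
      f"mass={mass((thickness, channels)):.1f}, "
      f"heat={heat_transferred((thickness, channels)):.1f}")
```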

Is generative-design software “creative”? The Oxford English Dictionary states that creativity is “the use of imagination or original ideas, especially in the production of an artistic work.” A heat exchanger made by generative-design software doesn’t quite meet this definition, since it’s not intended to be an artistic work and didn’t result from anyone’s imagination. Merriam-Webster, however, has a quite different definition of creativity: “the ability to make new things or think of new ideas.” By this definition, generative-design software is clearly creative.

Humans played no role in designing the part shown, but they were essential for telling the software what kind of part to design. To do this work well, they had to understand where the part needed to fit, the environment it had to survive and operate in, the energy it needed to transfer, and so on. These human specifiers had a great deal of domain knowledge and skill — maybe as much as human designers would need to create a design.

What if some of that knowledge could also be generated automatically? What if additional tools could be added to the combination of generative-design software and 3D printing to advance the ability of creative digital technologies? In 2013, Autodesk teamed up with a group of car designers and stunt drivers in Los Angeles to find out. Their goal was an automated system that could design a race car chassis and determine for itself how well the chassis needed to be able to perform.

To do this, the team first built a stripped-down traditional race car: a chassis, transmission, engine, seat and wheels. The team then blanketed the chassis with sensors that measured the stresses, strains, temperatures, displacements, and all the other things that it had to be able to accommodate. Since digital sensors are small, cheap and capable, the team could inexpensively obtain huge amounts of data.
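
Collecting the data is only half the job; it eventually has to be boiled down into requirements the design software can use. We don’t know the exact format the team’s data took, but the general move (reducing a mountain of raw sensor readings to a handful of worst-case figures per channel) can be sketched in a few lines of Python. The channel names and values below are invented.

```python
from collections import defaultdict

# Each reading is (channel_name, measured_value). In the real project there were
# millions of these; the channel names and numbers here are invented.
readings = [
    ("front_left_mount_strain", 412.0),
    ("front_left_mount_strain", 535.5),
    ("rear_axle_temperature", 88.2),
    ("rear_axle_temperature", 93.7),
    ("steering_column_torque", 61.4),
]

def worst_case_loads(readings):
    """Reduce raw sensor streams to the peak value seen on each channel,
    the kind of summary a design tool could treat as its requirements."""
    peaks = defaultdict(float)
    for channel, value in readings:
        peaks[channel] = max(peaks[channel], value)
    return dict(peaks)

for channel, peak in worst_case_loads(readings).items():
    print(f"{channel}: design for at least {peak}")
```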

They took this car to the Mojave Desert, where a test driver pushed it to its limits by accelerating, braking and steering as hard as they could without crashing, while the sensors collected information. By the end of this session, the team had some 20 million data points about the car’s structure and the forces acting on it, which were plugged into Project Dreamcatcher, a generative-design technology from Autodesk, and applied to a 3D model of the existing chassis. Below is what the software came up with.

To us, it’s only vaguely recognizable as a race car chassis; it looks much more like the skull of a mammoth or a whale, or the microscopic skeleton of a diatom. This is not a coincidence. Bones, exoskeletons and other structures in nature are the winning entries in evolution’s relentless competition, the outcomes of which are life and death. Evolution has resulted in designs that are simultaneously resilient, durable, energy efficient, intricate, strong and slim. Perhaps we shouldn’t be surprised that when generative-design software is given the task of designing an optimal structure to satisfy a set of performance requirements, it comes up with something that looks as if it came from nature.

This chassis is also asymmetric; its right and left sides are not mirror images. This makes sense. Because a race car turns more often in one direction than the other as it does laps, the two sides of its chassis are subject to different forces. Human designers have been aware of this fact for a long time, but their creations have rarely, if ever, been as deeply asymmetric as the ones that emerge from generative-design software.

Examples like this chassis convince us that digital creativity is more than mimicry and incrementalism. Computers can come up with more than extensions and recombinations of what humans have done. And we’re optimistic that something close to the opposite can happen, that when they’re primed with our scientific and engineering knowledge and given the requirements of a situation or enough data to figure out those requirements, computers can and will come up with novel solutions that never would have occurred to us.

So if this kind of creativity can be exhibited by machines, what qualities are uniquely human? As we know, the human condition is inherently interpersonal. We are deeply social beings who have been living in ever-larger groups — families, bands, tribes, cities — throughout modern evolutionary history. An inevitable consequence is that we are acutely attuned to each other, both as individuals and as group members. MIT researcher Deb Roy (TED Talk: The birth of a word) has pointed out that this social nature gives us a powerful way to predict what jobs and tasks will remain least affected by technological progress. Very simply, they’re the ones that tap into our social drives.

Roy’s list of drives includes compassion, pride, embarrassment, envy, justice and solidarity. To see how they apply in the world of work, take the example of a high school girls’ soccer coach. It would be great if she had a deep understanding of the sport and an ability to observe the flow of a game and shift tactics appropriately, but the ability to deliver wins isn’t what’s most important. Instead, what matters is the ability to get the girls to work well together towards a goal, to teach them to be good and supportive teammates, and to develop their characters through athletics. The coach accomplishes this in large part by tapping into her own compassion and the girls’ pride. She also makes use of the girls’ desire for approval from her.

Now try to imagine an all-digital, artificially intelligent girls’ soccer coach. Could it pick out the natural leaders and difficult personalities on the team and know what to do if some girls were both? Could it bond the team over the course of a season, navigating the highs and lows? Would it be able to motivate a girl to accomplish things she didn’t think possible? We’ve learned never to say never with technology, but here we’ll say “almost certainly not.”

While computers are getting good at tasks like determining people’s emotional states by observing their facial expressions and vocal patterns, this is a long way from doing the things we listed. We’re confident that the ability to work effectively with people’s emotional states and social drives will remain a deeply human skill for some time to come. This implies a novel way to combine minds and machines: let the computers take the lead on making decisions (or judgments, predictions and diagnoses), then let people take the lead if others need to be convinced or persuaded to go along with the decisions.

Health care provides examples of how this can be put into practice. Medical diagnosis is, in part, a pattern-matching exercise, and thanks to the digitization of healthcare information and advances in machine learning and other fields, computers are achieving superhuman levels of performance at this exercise. If the world’s best diagnostician in most specialties — including radiology, pathology and oncology — is not already digital, it soon will be.
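
“Pattern matching” here is meant literally: given enough labeled cases, a standard machine-learning classifier learns which combinations of findings tend to go with which diagnoses. The sketch below uses scikit-learn on synthetic data purely to show the shape of the exercise; it is nobody’s actual diagnostic system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each row is a "case" described by a few numeric
# findings; each label says whether a made-up condition is present.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # e.g. lab values or image-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the pattern from past cases, then check it on held-out ones.
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# For a new case, the model returns a diagnosis and a confidence.
# That part of the job is pure pattern matching.
new_case = rng.normal(size=(1, 4))
print("Predicted class:", model.predict(new_case)[0],
      "with probability", round(model.predict_proba(new_case)[0].max(), 2))
```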

Most patients, however, don’t want to get their diagnosis from a machine. They want to hear it from a compassionate person who can help them understand and accept the news. And after a diagnosis is made, medical professionals who can form interpersonal connections and tap into social drives are highly valuable, because they stand a better chance of getting patients to comply with treatment. Noncompliance is a major problem, negatively affecting the health of millions of people.

As a result, people will continue to be critically important in the health care systems of the future, but not in the same roles as today. Emotionally and socially astute care coordinators, rather than brilliant diagnosticians, might move to center stage. You may have heard the old joke about the two employees (person and dog) in the factory of the future: The human is there to feed the dog, and the dog is there to keep the human from touching the equipment. We suggest a slight tweak for health care. The medical office of the future might employ an AI, a person and a dog: the AI’s job will be to diagnose the patient, the person’s job will be to communicate the diagnosis and coach the patient through treatment — and the dog’s job will be to bite the person if they second-guess the AI.

Excerpted from the new book Machine Platform Crowd: Harnessing our Digital Future by Andrew McAfee and Erik Brynjolfsson. Copyright © 2017 Andrew McAfee and Erik Brynjolfsson. Reprinted by permission of W.W. Norton & Company. All rights reserved.