4 thought-provoking questions to spark conversation

Nov 1, 2017

We’re on the brink of a future beyond what we can fathom — a future with driverless cars, designer babies, intelligent robots, and digital doppelgangers. Who will you choose to be in that future? How will it change you?

Here are four fascinating questions to get you thinking. See what you would choose — and ask your friends what they think too.

1. If you could upload your brain to a computer, would you do it?

Imagine this: Your future self uploads your brain to a computer, creating a complete digital replica of your mind. But that version of you is smarter — learning faster than you ever could — and starts to have experiences that the “real” you has never had, in a digital world that you have never seen.

Would you be game to try it, and why? Would that digital version of you still be “you”? Should you be free to have a relationship with someone’s digital replica? Are you responsible for the choices your replica makes?

2. Should parents be able to edit their babies’ genes?

If you had a baby with a congenital heart defect and a doctor could edit out the faulty gene, would you do it to save your baby’s life? Most people probably would.

But take that another step further: Would you make your baby a little more intelligent? A little more beautiful? Should you be able to choose their sexuality? Their skin tone? What if only the rich could afford it? What if you chose not to edit your child, but other parents did?

3. Should a driverless car kill its passenger to save five strangers?

A driverless car is on a two-way road lined with trees when five kids suddenly step out into traffic. The car has three choices: to hit the kids, to hit oncoming traffic, or to hit a tree. The first risks five lives, the second risks two (the passenger and an oncoming driver), and the third risks one (the passenger alone). What should the car be programmed to choose? Should it try to save its passenger, or should it save the most lives?

Would you be willing to get in a car knowing it might choose to kill you? What if you and your child were in the car? Would you get in then? And should every car have the same rules, or should you be able to pay more for a car that would save you?
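To see how stark the choice is, here is a minimal sketch in Python. It is purely illustrative: the option names, the risk counts, and both policies are assumptions for this thought experiment, not how any real car is programmed. It shows how two different objectives give opposite answers to the exact same scenario.

```python
# A toy model of the dilemma above -- not a real autonomous-driving system.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    passenger_lives_at_risk: int  # people inside the car
    other_lives_at_risk: int      # people outside the car

    @property
    def total_lives_at_risk(self) -> int:
        return self.passenger_lives_at_risk + self.other_lives_at_risk

# The three choices from the scenario, with assumed risk counts.
options = [
    Option("hit the kids", passenger_lives_at_risk=0, other_lives_at_risk=5),
    Option("hit oncoming traffic", passenger_lives_at_risk=1, other_lives_at_risk=1),
    Option("hit the tree", passenger_lives_at_risk=1, other_lives_at_risk=0),
]

# Policy A: save the most lives -- minimize total lives at risk.
save_the_most = min(options, key=lambda o: o.total_lives_at_risk)

# Policy B: protect the passenger first, then minimize other casualties.
protect_passenger = min(
    options, key=lambda o: (o.passenger_lives_at_risk, o.other_lives_at_risk)
)

print("Minimize casualties:", save_the_most.name)        # -> hit the tree
print("Protect the passenger:", protect_passenger.name)  # -> hit the kids
```

The same three options produce opposite decisions: minimizing total casualties sacrifices the passenger, while protecting the passenger sacrifices the five kids. Whichever objective the engineers write down is the moral choice this question is really asking about.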

4. What morals should we program into intelligent machines?

Picture a world with intelligent robots — machines smarter than you’ll ever be — that have no idea how to tell the difference between right and wrong. That’s a problem, right? But giving machines moral values poses an even stickier problem: a human has to choose them.

If we’re going to program morality into intelligent machines, which values should we prioritize? Who should decide which moral beliefs are the most “right”? Should every country have to agree to a set of core values? Should the robot be able to change its own mind?