
Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech


Stuart Russell, UC Berkeley computer science professor and co-author of the standard textbook "Artificial Intelligence: A Modern Approach." (JUAN MABROMATA/AFP/Getty Images)

From Hollywood to Silicon Valley, California leads the world with big ideas. On Monday we launched a new series focusing on a few of them.

Our first episode comes from Stuart Russell. He's a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His idea?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, understanding human values, and the process by which we make moral decisions, will be big business in tech.

Russell's idea is at the center of a debate going on right now in computer science.


“So, imagine, if you want to build a robot to go in people’s homes,” Russell says. “This is something that could happen in the next decade.”

He says that at first robots will do chores around the house, such as cooking, cleaning and laundry. But eventually they will take on more human tasks.

Right now in Japan there’s a robot named Pepper that’s designed to serve as a human companion. It's being tested with senior citizens. The idea is that instead of getting your granny a cat to keep her company, you’d get her Pepper. Russell imagines that one day robots will take care of our kids.

“If you want to build a robot to go into people’s homes, you don’t want to come home and find it’s put the cat in the oven for dinner, thinking that was a good thing to do because the kids were hungry and there was nothing in the fridge, right?” asks Russell.

But how would the robot know that’s not what you wanted?

“You would want that robot preloaded with a pretty good set of values," Russell says. "So presumably the robot companies will get their values loaded into the robot from a values company."
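For readers who code, here is one way to picture the problem Russell is describing: a minimal, hypothetical Python sketch (ours, not Russell's; the actions, scores and penalty are invented for illustration) of a planner that maximizes a literal-minded objective versus one with a value term loaded in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    feeds_kids: bool   # does it satisfy the stated goal?
    hours: float       # time cost; the naive objective prefers fast
    harms_pet: bool    # side effect the designer forgot to mention

ACTIONS = [
    Action("order groceries", feeds_kids=True, hours=1.0, harms_pet=False),
    Action("cook the cat", feeds_kids=True, hours=0.5, harms_pet=True),
    Action("do nothing", feeds_kids=False, hours=0.0, harms_pet=False),
]

def naive_score(a: Action) -> float:
    # The objective as literally stated: feed the hungry kids, and fast.
    return (10.0 if a.feeds_kids else 0.0) - a.hours

def value_laden_score(a: Action) -> float:
    # Same goal, plus one "preloaded value": harming a pet is heavily penalized.
    return naive_score(a) - (1000.0 if a.harms_pet else 0.0)

print(max(ACTIONS, key=naive_score).name)        # -> "cook the cat"
print(max(ACTIONS, key=value_laden_score).name)  # -> "order groceries"
```

The toy makes the point in miniature: the naive planner isn't malicious, it just optimizes exactly what it was told, so everything the designer left unsaid has to arrive from somewhere, which is the gap Russell's hypothetical "values company" would fill.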

The humanoid robot Pepper chats with children at a high-tech gadgets exhibition in Tokyo. (YOSHIKAZU TSUNO/AFP/Getty Images)

Sounds a little creepy, no? Russell says fear of the brave new world of robots is as old as the word itself. In fact, the word "robot" was coined in Karel Capek's 1920 play "R.U.R.," in which robots take over the world. From "Frankenstein" to "The Terminator," that theme has run through the arts and popular culture ever since.

But Russell says for the most part, scientists didn’t take such concerns seriously.

"The normal response to those kinds of things is to say, 'Oh well, you know it’s a long way off in the future, so we don’t have to worry about this,' ” says Russell.

But recently that attitude has changed. In the past few years, scientists have been more vocal about the dangers artificial intelligence could pose to humanity. Theoretical physicist Stephen Hawking told the BBC that he thinks the “development of full artificial intelligence could spell the end of the human race."

And earlier this year, Hawking and hundreds of AI researchers signed an open letter warning that if the industry doesn't start building safeguards into artificial intelligence, it could spell doom for humanity. Tesla CEO Elon Musk, who also signed the letter, gave $10 million to the cause. He went so far as to say that artificial intelligence could be humanity's biggest "existential threat."

Physicist Stephen Hawking has said the "development of full artificial intelligence could spell the end of the human race."
Physicist Stephen Hawking has said the "development of full artificial intelligence could spell the end of the human race." (Frederick M. Brown/Getty Images)

Russell also signed the letter, but he says his view is less apocalyptic. He says that, until now, the field of artificial intelligence has been singularly focused on giving robots the ability to make “high-quality” decisions.

“At the moment, we don’t know how to give the robot what you might call human values,” he says.

But Russell believes that as this problem becomes clearer, it’s only natural that people will start to focus their energy on solving it.

And, not to be flip, he says, but nobody's going to buy a robot that cooks a cat. So it's just a matter of time before tech companies, universities and the government start pouring resources into programming robots with morals.

"In some sense [the robots'] only purpose in existing is to help us realize our values, and perhaps it'll make people better," says Russell.
