Episode Transcript
This is a computer-generated transcript. While our team has reviewed it, there may be errors.
Harri Weber: Do we look like influencers right now? Is this embarrassing to be walking up to a Waymo?
Morgan Sung: This is me and my friend Harri Weber booking my first ride in a Waymo. It’s a self-driving car service. Some people also call them robo-taxis. Wait, I think I have to unlock it.
Harri Weber: Oh, there we go.
Morgan Sung: For Morgan? No one’s going to respond. Waymos are, again, totally self-driving. There’s no human in the front seat.

Waymo Voice: Hello from Waymo. As we get going, just give us one minute to cover a few…
Harri Weber: Oh god.
Morgan Sung: I invited Harri because she’s also a tech journalist and she covers the future of cars. She recently wrote about the expansion of robo-taxis for Quartz. And also, we both had a craving for deli sandwiches from this one spot with terrible street parking. So why not take a Waymo?
Harri Weber: There’s no one behind the wheel.
Morgan Sung: Is there a real need for the steering wheel?
Harri Weber: Literally, I don’t see any purpose for it, but for whatever reason, there are, you know, national laws that require this steering wheel to still be there. It’s a little ghostly though, right? It’s nudging us around, it’s keeping us in the lane.
Morgan Sung: Waymos are something of a tourist attraction in California. My cousins always wanna take one whenever they come to visit me. But honestly, ever since Waymo’s launch a few months ago in Los Angeles, where Harri and I live, I’ve been pretty hesitant to take them. Part of it is, yeah, I don’t really trust a robot to drive me around, especially not through a very busy intersection in Koreatown during rush hour. But that’s where this car picked us up.
Harri Weber: These things aren’t perfect. Something could go wrong while we’re driving or while it’s moving us around. But Waymo’s claim, based on its own data, is that it is a lot safer than a human driver. I think that you’re probably more likely to find that they’re a nuisance than that they’re dangerous.
Morgan Sung: But despite their great track record when it comes to safety, I feel like Waymo mishaps are always going viral. Like a couple of weeks ago, there was a video of a Waymo in San Francisco ignoring a public works crew and driving straight through a sinkhole. And then there was a video of a guy in Arizona who called a Waymo to take him to the airport, but got stuck in a parking lot.
Video Clip 1: I’m in a Waymo car. This car is just going in circles. I got my seatbelt on, I can’t get out the car. Has this been hacked? What’s going on? I feel like I’m in the movies.
Morgan Sung: Waymos make me feel a little uneasy, but it’s really other people’s reactions to Waymos that makes me nervous about being in one. Even getting in, in front of that coffee shop, did you feel self-conscious?
Harri Weber: Yeah, yeah, it felt a little uncool.
Morgan Sung: I felt so deeply uncool getting in.
Harri Weber: I think something flipped when Google was first developing a driverless car. I think there was actually like a cool factor going, or maybe I was just that uncool that I thought it was cool. But tech criticism has evolved a lot since then. And so now we’re in this car and it does feel a little bit like we are maybe class traitors.
Morgan Sung: And it’s not just the uncool factor. Public reaction to Waymos has turned violent. Like this incident in San Francisco, where a bunch of guys started spray painting a Waymo while a group of women was inside.
Video Clip 2: Why are you attacking us? Connecting to Rider Support. Call may be recorded. Oh my God. Oh my God! Holy shit!
Morgan Sung: There is also a recent viral video of people swarming a Waymo in Los Angeles. One of the guys in the video so gracefully leaps into the air and drop kicks the passenger door right off the hinges, while another person beats the Waymo’s windshield with a ripped off piece of its own front bumper. To be clear, it doesn’t seem like any living person has been hurt in any of these incidents, unless you count getting a little car sick.
Video Clip 1: Yeah, I got a flight to catch. Why is this thing going in a circle? I’m getting dizzy.
Morgan Sung: But to a lot of people, including myself, I’ll be honest, something about seeing robots do a human task just does not feel right. And clearly, it triggers some deep-seated instinct to attack. So what is behind this recent outburst of aggression against Waymos? Is it just that destroying them makes really good content that’s bound to go viral? Or is it our collective anxiety over the state of tech right now?

This is Close All Tabs. I’m Morgan Sung, tech journalist and your chronically online friend, here to open as many browser tabs as it takes to help you understand how the digital world affects our real lives. Let’s get into it.

People are justifiably jaded against Silicon Valley, and the idea of a robot takeover would make anyone nervous. But why take it out on Waymo? All right, we’re gonna kick this off like we usually do. A new tab: “Waymo Vandalism.” We’re gonna talk to Ellen Huet. She covers technology, Silicon Valley, and startup culture for Bloomberg, or as she puts it, all things at the intersection of tech and humans. And she wrote about how Waymo’s recent expansion provoked so-called AI anxiety.
Ellen Huet: As long as these cars have been around in their various iterations, so probably over the last decade or so, people have reacted to them with suspicion and at times with violence. Even back in 2018, when early Waymo vehicles were being tested in Arizona, people threw rocks at them, people attacked the cars. Even if these cases are rare, it has been somewhat consistent that people react to robot cars at times with anger or violence, or with attempts to kind of outsmart them. You know, there are early stories of people who figured out that you could confuse autonomous cars by putting a traffic cone on the hood of the car.

Morgan Sung: Right, I’ve seen that.

Ellen Huet: Yeah, and there’s clearly some human impulse to try to almost assert agency or control over this vehicle that, you know, in some existential sense is kind of a threat.
Morgan Sung: I have to wonder how much product design plays a role in ensuring that people are more receptive to these cars. I mean, I’m thinking of, you know, that 2000s movie, Herbie Fully Loaded, how it’s like a cutesy little buggy. And if I saw that driving around, I wouldn’t want to, like, beat that up. But when I see, like, a Cybertruck, I’m like, oh, fight or flight, activated.
Ellen Huet: I guarantee lots of designers have spent a lot of time and energy thinking about this question. I remember, it must have been a decade ago, you know, I was a tech reporter, I was working at Forbes, and I was invited to take a test drive in an early autonomous car. And what I remember most is that those early cars were designed to look so cute. Like it was a little gumdrop-shaped car, and it was white, and its headlights were kind of that oval shape, to make it look like, you know, these cute little cartoon eyes. And yeah, I guarantee that designers and engineers spent a lot of time thinking about how to make this threatening-ish technology seem as cute and approachable as possible. But yeah, Waymos are obviously kind of just a normal-looking car. My guess is it’s not a coincidence that they’re all white. Maybe people associate that with, like, sort of angelic energy or something.
Morgan Sung: But let’s talk about what’s at the root of this feeling that a lot of people have when confronted with a robot doing a human job. That is a new tab. Ready? “Waymo and AI Anxiety.” Okay, so a lot of this anger against Waymos is due to the very real risk of job loss as robo-taxis begin replacing human drivers. But it’s also bigger than that. AI is moving into a lot of areas besides cars. In her story about this spike in Waymo vandalism, Ellen cited University of Wisconsin professor Jo Anne Oravec, who said that a lot of people are just not really comfortable with the realization that something else is intelligent.
Ellen Huet: I think people sometimes direct their general anxiety about, like, job displacement from AI toward Waymos just because these are objects that exist in the physical world and that you can actually smash the windshield of a Waymo car in the way that you can’t quite attack ChatGPT, right? Like, I think in some sense they’re at a disadvantage by simply being, like, a physical object. And something that you are reminded of every time, you know, one drives by.
And the cars have also done things that are genuinely annoying. Like in San Francisco, there was this cluster of Waymo cars that had gotten trapped in a honking war with each other. So they were up in the middle of the night, you know, it’s like 4:30 in the morning, and neighbors who were trying to sleep near wherever these cars had gathered were being woken up by the sound of all of these cars honking at each other. And in the most nonsensical way.
Morgan Sung: The term AI can mean a lot of different things. Autonomous vehicles like Waymo do use artificial intelligence. They use cameras to see, they’re trained to recognize objects and calculate the distance between them. But it’s pretty different from how a generative AI tool like ChatGPT works, or how other AI generators work to produce images, videos, and music. But to a lot of people, the anxiety is the same.
Ellen Huet: I think what makes people have this reaction, regardless of what type of AI it is, is that these products are being designed to do actions and take on jobs and responsibilities that used to be solely the jurisdiction of humans, right? So it used to be like only people could see and drive and manipulate a car and now we can see that a human is not needed to do that. And it used to be that a human was needed to provide half of a conversation, you couldn’t have that, you know, without another person on the other side of the line. Now, that’s no longer true. So I think what we’re seeing is, even though the technology is different, in some cases and in some ways, the human reaction to it is the same. Because I think it’s provoking the same question of, well, where do I belong in this world where this thing that I used to do can be done by a non-human entity.
Morgan Sung: There’s a Waymo ad.
Waymo Advertisement: Well, it’s finally happening. The robots, they’re coming. Maybe that’s a good thing.
Morgan Sung: And like, Waymo is clearly playing into people’s concerns about the robots, but putting this positive spin on it. What do you think?
Ellen Huet: I mean, an ad like that makes me laugh because I just imagine, I wish I could be a fly on the wall in the marketing discussions that led to that copy, because I’m sure they were like, guys, we have to address the fact that some people are scared of this, but let’s make it cheeky, let’s make it funny. There are researchers out there who study human-robot relations, and some of the things that they explained to me were that these human-like objects that can move with some intelligence in our world feel existentially unsettling to us, because we don’t know where we fall in the pecking order anymore.
Morgan Sung: And don’t forget, we’ve been consuming so much sci-fi about how super advanced robots can totally disrupt our way of life.
I, Robot: Robots don’t feel fear. They don’t feel anything. They don’t get hungry. They don’t sleep. I do.
M3gan: Megan, turn off. Are you sure?
2001: A Space Odyssey: Open the pod bay doors, Hal. I’m sorry, Dave. I’m afraid I can’t do that.
Morgan Sung: This pop culture fear of artificial intelligence and robots taking over has been around for a while, but it hasn’t been limited to the screen. Physical attacks on robots have been happening for over a decade.
Ellen Huet: A classic example is the story of Hitchbot. And the story of Hitchbot starts back in 2013 when a group of researchers decided they wanted to try to answer the question, can robots trust humans?
News Anchor: Hitchbot will be depending on the kindness and the curiosity of strangers.
Morgan Sung: So Hitchbot was this little humanoid robot that couldn’t really do much, but it was on a mission to hitchhike across Canada. It had a stocky, cylindrical body and goofy arms and legs made out of pool noodles. Its head had an LED panel with a little smiley face. Hitchbot couldn’t move on its own. It could only recognize human speech and engage in basic conversations, mostly to ask bystanders for help. It also had a GPS and took pictures of its surroundings every 20 minutes so that researchers could keep tabs on it. I mean, it was pretty adorable.
Ellen Huet: And on its first few journeys, everything went fine. Like, people really got excited about this idea and wanted to contribute to helping the robot get from point A to point B. So Hitchbot traveled across Canada and traveled around Europe. And then in 2015, Hitchbot tried to do a cross-country trip across the United States and started in Boston.
Morgan Sung: But Hitchbot only made it to Philadelphia, where it was tragically attacked and decapitated.
Philly Man 1: This is the bench Hitchbot was killed on.
Philly Man 2: Oh really?
Philly Man 1: You know the story of Hitchbot?
Philly Man 2: Yeah, yeah, it made it around the world and it died.
Philly Man 1: It didn’t make it around the world because it died in Philadelphia.
Ellen Huet: Maybe they wanted to reassert their power or dominance over a human-like object and feel secure in their place as humans. But yeah, I think the story of Hitchbot tells us a lot about the psychology of humans versus robots.
Morgan Sung: Right. And that Waymos may not be ready for Philly yet.
Ellen Huet: Seems possible.
Morgan Sung: OK, clearly, there’s something here about robots that makes us uncomfortable. People don’t always respond well to things that look a little too human or act kind of human, but aren’t quite human. So aside from making the cars cuter in hopes that people won’t attack them, is there anything else Waymo can do? When I scroll past one of these videos of someone spray painting one of the cars…
Video Clip 3: Why are you attacking us?
Morgan Sung: Or blocking its way.
Video Clip 4: Get out of the way! Move!
Morgan Sung: Or, worst case scenario, harassing someone inside, which did happen in San Francisco. With these videos, I’ve always wondered, why can’t the car just back up and drive away? Any human driver would. We’ll get into that and some other very thorny questions after this break. As this technology progresses, what are the implications of robots making their own decisions in these very high-stakes situations?
But that is a new tab. “The ethics of self-driving cars.”
To dig into this, I called up Ryan Calo. He’s a legal scholar and a professor at the University of Washington School of Law. And his specialty? Robot law. So he had his own theory about what makes Waymos so attractive to vandalize.
Ryan Calo: It’s the fact that they’re doing something transgressive, where they are of course vandalizing an entity, an agent, but it very much falls short of doing violence to a human being. Obviously these same people wouldn’t run up to a human being, a human cab driver, and spray paint them. But the point of the matter is that, you know, there is this unease about these technologies. It has to do with their novelty, has to do with their being too close to being these anthropomorphic agents.
Morgan Sung: By anthropomorphic, Ryan means having human-like characteristics, even if it isn’t actually human. Yeah, I live in L.A., which has horrendous traffic, and so I watch a Waymo try to make a U-turn and get cut off at every possible opportunity, and I feel bad for it. What’s going on with that?
Ryan Calo: It’s hard not to empathize, right? The fact is that human beings are quite bad at categorizing robots as either things or people. And the more anthropomorphic a technology is, the harder that is. So what’s really interesting about Waymo is, apparently, it’s associated as much with being a car as it is with being an entity. I always thought about the vandalism that occurs inside the car, right? So sometimes people vandalize the interior of a Waymo because they’re just there by themselves, seemingly, right? And it’s one thing to put a camera there, but imagine if you were to put an anthropomorphic robot driver in the Waymo that looked back at you when you get in to say, “Hey, where can I take you?” And that was also visible from outside. Like when you saw the Waymo trying to make a U-turn, you saw a person in there.
Morgan Sung: It’s struggling.
Ryan Calo: Whatever it was, it’s struggling. Or when somebody went to vandalize this thing, cause it to crash, you know, cause it to shut down, like vandalize it. And there’s this android in there looking at you that looks like a person. I think the outcome would be different.
Morgan Sung: Ryan’s theory is that because Waymos don’t resemble any living thing, it’s easier to justify being violent. And sure, passengers would probably be creeped out by a robot doll in the driver’s seat, but maybe people would be discouraged from attacking it because there’s another human-ish thing there.
Ryan Calo: There’s this concept called the uncanny valley. The deal, the reason it’s a valley, is that appreciation of the robot goes up and up and up, the more anthropomorphic it becomes, but then suddenly, when it gets super close to actually being like a human, it precipitously drops before coming back if it’s completely like a human, right? That’s the valley. Now, Waymo cars, they do not look like people, right? So they really are very far down on that curve. They’re more thrilling to vandalize than a mailbox. I just don’t see it happening with something that felt more like a person.
Morgan Sung: Yeah. You have written about the legal quandaries of self-driving cars, and when they first started hitting the road about five or so years ago, people were mostly worried about passenger safety and crashes. And it seems like companies like Waymo were maybe not so prepared for, like, the irate protesters?
Ryan Calo: If you want to talk specifically about vandalism, this has messed up people working in AI and robotics for a long time, and so it shouldn’t really have surprised Waymo. I don’t know how one guards against it, right? Think about, Morgan, do you recall the time when Microsoft released that chatbot called Tay?
Morgan Sung: Oh my god. Yeah.
Ryan Calo: Microsoft trained this chatbot. This is early, pre-OpenAI and so on, and they released this Twitter-based chatbot that was supposed to learn how to interact. Within an hour, there were these people, trolls on the internet, getting it to say all kinds of racist and terrible things. They ended up having to take the chatbot down. When the company was asked, “Why did this happen?” they were like, “Well, we just didn’t anticipate that folks would do this,” unfortunately. The truth about technology, especially new emerging technology, is that you just don’t know how it’s going to play out in society. In fact, this leads some law professors, like myself, to conclude that technology is a particularly difficult thing to regulate, because you don’t know how it’s going to play out in advance.
Morgan Sung: Back in 2018, Ryan wrote an essay in Slate about whether the law was ready for self-driving cars. And in it, he refutes a common critique of this technology that has to do with the old philosophical dilemma known as the trolley problem.
The trolley problem is this thought experiment about making decisions. You’ve got a train chugging along a track and it’s heading toward five people. But you, the decision maker, can flip a switch and direct the train to another track. Here’s the twist. There’s another single person stuck to that other track. So do you flip the switch? Save five people, but by making that decision, kill another?
This is the kind of thought experiment people love to refer to when puzzling over the morality of autonomous vehicles and other forms of AI. It’s the assumption that self-driving cars will have to make the same kinds of moral decisions. But Ryan says the engineers of these vehicles, and we as a public, should worry less about these philosophical hypotheticals and more about the practical real-life situations that these cars might face. Programming a moral compass is still up for debate. But programming object recognition is way more realistic.
Ryan Calo: Just imagine that a driverless car is always gonna be better than a person at avoiding a stroller in a parking lot. Always better, because it has better sensors, better response times, whatever. And the driverless car is also always gonna be better than a human being at avoiding a shopping cart, okay? But what happens if a driverless car encounters a shopping cart and a stroller at the same time? Imagine that the car, confused about what to do and not able to differentiate between these two objects, ends up making the wrong decision. Well, the headline reads, “Robot car kills baby to save groceries.” That’s the end of driverless cars in America, if that happens. So the thing that’s fascinating with driverless car liability is that it actually really matters that driverless cars are going to mess up in ways that humans wouldn’t, even if they mess up less overall.
Morgan Sung: So, Ryan, as an expert in robot law, do you think driverless cars will have rights at some point, the way that humans do when they get into an accident?
Ryan Calo: What I will say is that we are very far from a situation where AI or robots will be able to claim rights the way that people have them. And even though some doomsayers claim that AI is going to wake up and kill everybody, you know, the first time I see a presentation where PowerPoint works perfectly, that will be when I worry about, you know, AV before AI. We’re just very far away from getting to that level of sophistication. Yeah, but one day, maybe. And at that time we’re really going to have to have an overhaul, a sea change in the law, because suddenly there will be an entity that has rights and responsibilities but is not like us.
Morgan Sung: The robot takeover is not going to happen anytime soon, but the friction between humans and these non-human counterparts, that’s only going to increase as AI creeps more and more into our lives. I’m gonna be honest, I thought that ride I took with my friend Harri was actually really nice. It was a nice temperature in there, the AC was going, there were lo-fi beats playing at such a reasonable volume. It didn’t smell weird.
Harri Weber: There’s part of me that’s a little disappointed because, you know, I’m a little bit of a hater, and so I want, I want it to drive through cement. I want to be the one who gets to, like, roll their eyes at these things that are sort of just…
Morgan Sung: I know!
Harri Weber: Inexplicably irksome.
Morgan Sung: Are we getting Waymo pilled?
Harri Weber: We’re getting Waymo pilled.
Morgan Sung: Oh god.
Harri Weber: I just don’t, I don’t wanna give in.
Waymo Voice: Pull the handle twice to exit. The first pull unlocks. The second opens the door.
Morgan Sung: Okay, we’re pulling up to the curb now. There’s an older man at the crosswalk kind of just glaring this Waymo down and I feel really weird.
Harri Weber: The deep shame, the shame!
Morgan Sung: I have to admit, it would have been great content if something wild did happen during my first Waymo ride. But like the majority of rides, it was pretty uneventful. It almost felt normal. And maybe that’s the real robot takeover. It’s the gradual, boring replacement of our everyday human interactions. For now, let’s close all these tabs.
Close All Tabs is a production of KQED Studios and is reported and hosted by me, Morgan Sung. Our producer is Maya Cueva. Chris Egusa is our senior editor. Jen Chien is KQED’s director of podcasts and helps edit the show. Sound Design by Maya Cueva. Original music by Chris Egusa with additional music by APM. Mixing and mastering by Brendan Willard. Audience engagement support from Maha Sanad and Alana Walker. Katie Sprenger is our Podcast Operations Manager, and Holly Kernan is our Chief Content Officer.
Support for this program comes from Birong Hu and supporters of the KQED Studios Fund. Some members of the KQED Podcast team are represented by the Screen Actors Guild, American Federation of Television and Radio Artists, San Francisco Northern California Local. Keyboard sounds were recorded on my purple and pink Dust Silver K84 wired mechanical keyboard with Gateron Red switches. If you have feedback or a topic you think we should cover, hit us up at closealltabs@kqed.org. Follow us on Instagram at CloseAllTabsPod. And if you’re enjoying the show, give us a rating on Apple Podcasts or whatever platform you use. Thanks for listening.