Excerpt from "WE HAVE THE TECHNOLOGY: How Biohackers, Foodies, Physicians, and Scientists are Transforming Human Perception, One Sense at a Time"
One question the folks in Stanford’s Virtual Human Interaction Lab like to ask: In virtual worlds, do we have to be us? The lab started out with subtle changes to the human appearance: making your avatar prettier, taller, fatter, or older than the real you, even changing your race. (In psychology, this is usually called “perspective taking,” or envisioning yourself as another person or in an altered state.) But in the virtual world, there’s no reason to stick with your human form. In fact, learning how to occupy another body might do us some good. “We’ve known from our past research that when you occupy a human avatar, you gain empathy towards that human. So you can reduce racism, ageism, sexism,” says Jeremy Bailenson, the founding director of the lab. “Does the same thing work with a nonhuman?”
Nine months later, I head back to the lab to find out. Once again, it is dark and quiet, and Cody Karutz is cheerfully strapping devices to my appendages. He fits me with fabric kneepads, then infrared markers around my wrists. He helps me into a nylon pinnie—the kind kids wear for soccer scrimmages—and sticks two infrared markers along my spine. On goes the helmet. Then he asks me to get down on all fours, and he flips the simulation on.
I am a cow.
I am a cow in a lovely pasture, a green expanse ringed by distant barns.
In front of me is another cow—Karutz explains that this is a mirror image of my avatar, put there because, given the physics of wearing a helmet while crawling, it’s hard to look down and see my own body as a cow’s. I make an involuntary “Aww!” noise because my cow self is adorable, a pint-sized brown-and-white calf with tiny curved horns and a fat body on spindly legs. I lift my right hoof and my cow double does the same. I trot around a bit, getting the layout of the field, watching as my cow self does, too. VR folks use the term “body transfer” to describe shifting your consciousness to an external representation, and as the cow copies my movements, that’s just what we’re doing.
“Welcome to the Stanford University cow pasture,” booms a voice overhead. “You are a shorthorn breed of cattle. You are a dual-purpose breed, suitable for both dairy and beef production.” It is momentarily jarring, being told I am suitable for beef production. But I roll with it, as the voice gives me instructions: Walk over to a feed cart and eat. I do my best to position myself over some hay. My cow double does the same. The voice ticks off some mind-blowing stats about how much weight we must gain—three pounds a day—in order to bulk up to 600 pounds. I wonder whether I should mime chewing, which seems oddly natural, even though no one has requested it.
Now the voice tells me to walk to a water trough. My cow double and I loop to the left. I can see a cattle prod hovering in midair. In the real world, it’s a wooden dowel with an infrared marker attached to the end, held by a lab assistant. It’s not working today (this is a very early version of the study), but normally he would have jabbed me with it lightly. I would have seen the cattle prod coming at me while I felt it press into my sides. This is “synchronous touch,” Karutz later explains, another way to produce body transfer. Today, it’s just floating nearby, so, unprodded, I stand over the water trough as the voice tells me I’ll drink up to 30 gallons a day.
“Please turn to your left until you see the fence where you started,” says the voice. “You have been here for 200 days and reached your target weight. So it is time for you to go to the slaughterhouse.”
I was not expecting this. A wave of sadness and horror hits me with the word “slaughterhouse.” The suddenness of the announcement, the feeling of being trapped, the guilt and responsibility I feel for my cow avatar, who I somehow feel is me, but who I simultaneously feel is younger and more innocent and who is, I should point out, a vegetarian—it’s remarkably heavy for having been in this virtual life only a few minutes. The part of me that is a cow dutifully walks toward the fence. The part of me that is a person is yelling. It’s unbidden, startling even me, an anger born of nervousness. “That is brutal!” I shout at no one in particular.
The simulation presses on, telling me to face my cow avatar. She gazes back at me, as innocently adorable as ever. “Here you will await the slaughterhouse truck,” says the voice. The floor begins to vibrate. I hear the grinding of approaching tires and the beep of a truck backing up. I feel a rush of real fear as the world shakes noisily around me. I swing my head from side to side, wondering where the truck is going to appear. What will happen then? But it never does. The experiment is over. “My God, you guys,” I hear myself muttering in relief as Karutz unwinches me from the helmet.
Watching Yourself Die as a Coral Stem
Had I been in the real study, the follow-up would have gauged whether I now felt more empathy for cows, and more broadly, my feelings about animal rights.
And the cow—cuddly, familiar, a fellow mammal—is just the beginning of where the lab is headed with this idea. Moments before I came in, Karutz had been making tweaks to an experiment in which participants will become a coral stem in a coastal reef, an even more unfamiliar body configuration. The coral is immobile, a brilliant purple, with branches that just vaguely recall arms, the only nod to the human form this simulation makes. It’s situated in clear blue water, surrounded by a passing jumble of sea life. For synchronous touch, a fishing net will bump into you while a lab assistant pokes you in the chest with the dowel. This time, you will listen to facts about ocean acidification, caused by water absorbing the carbon dioxide produced by burning fossil fuels, as you watch the sea around you slowly die—first the sea urchins, then the fish that feed on the urchins, then the sea snails whose shells become corroded by the acidifying waters. As the animals disappear, the water becomes grayer and the rocks become coated in algae. If you look down, you will see your own body withering, until a chunk falls off onto the ocean floor.
“So in both of these you have to watch yourself die or almost die,” I point out flatly.
“There are some very dramatic effects,” agrees Karutz.
The lab is working with marine biologists to design the coral scenario, which subjects will experience as a VR environment, a video, or simply audio. Ultimately, says Bailenson, they’re testing how well viscerally experiencing the dying coral works as an educational method. “Do you learn better, do you care, are you more motivated to learn when you become the coral?” he asks. They’ll also track an empathy-related measure—maybe subjects’ willingness to donate money or sign petitions for ocean-related causes.
There is also a more technical cognitive question behind all of this shapeshifting: the idea of homuncular flexibility. The homunculus, or “little man,” is a way of visualizing how the cortex maps senses and movements onto the body. The areas devoted to different parts of the limbs, trunk, and head appear in this cortical strip in roughly toe-to-head order. But because the face and fingers are so sensitive and dexterous, they’re more densely represented and take up more space. If you drew the body of a “little man” based on this cortical schematic, he would have two fat lips and clown hands.
In VR, says Bailenson, “Homuncular flexibility asks, if you put someone in a body that is decidedly nonhuman, can they learn to operate it?” So imagine, he says, as Jaron Lanier—his friend and mentor—had attempted during early experiments with novel avatars, “you put someone in a lobster. A lobster has eight arms. Moving the first two arms of the lobster is very simple. You move your two physical arms, and the virtual arms do what they do. How do you move arms three through eight?”
That’s an extraordinarily magical question and also an extremely practical one. Think of using your avatar to manipulate digital objects, says Bailenson, like in the film Minority Report, based on the Philip K. Dick story, in which Tom Cruise plays a futuristic police officer. “You remember that scene where Tom Cruise is playing with all that data and using his arms?” asks Bailenson. “Why is he just using two arms? The data is all digital. If people can learn to control eight arms, then they’d be more efficient.” Or, he says, think about using virtual environments to manipulate real-world machines. You could have “many to one” control, in which a group of users operate a “team body,” or “one to many,” in which one expert controls multiple devices. Consider the military, he says: “The single best plane operator, why is she only operating one plane?”
So this is what we’re going to try on my last day in the lab. Andrea Stevenson Won, a lab doctoral student, has built a scenario that will give me a virtual third limb, and we’ll see how quickly I can learn to control it. Karutz fits me with the helmet and straps infrared markers and small plastic accelerometers to my wrists. Then the lights go out, and I’m staring into a virtual mirror at my avatar, a silver outline of a body. It’s got the two usual arms, which I can operate by moving my own normally. But there’s also this enormously long armlike protrusion sticking out of my chest. This limb has no elbow joint, and just the barest hint of fingers. Karutz gives me a few seconds with the mirror to adjust to my new limb, which is controlled by rotating my wrists. One of them—he doesn’t say which—controls its horizontal movements, and the other its vertical movements. I hold my hands out stiffly in front of me and jiggle my wrists. My third arm flips back and forth like a windshield wiper. This is all the training I get.
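For readers who want to see the plumbing, the mapping is simple enough to sketch in a few lines of code. What follows is my own illustration, not the lab’s software; the limb length, anchor point, and which wrist drives which axis are all invented for the example. It just shows one way two wrist-roll angles could steer a rigid, elbowless limb:

import math

ARM_LENGTH = 1.5          # meters; an invented reach for the rigid limb
ANCHOR = (0.0, 1.3, 0.2)  # invented x, y, z of the chest attachment point

def third_arm_tip(yaw_roll, pitch_roll):
    """Map two wrist-roll angles (in radians) to the limb's tip position.

    One wrist sweeps the limb left and right (yaw), the other up and
    down (pitch); with no elbow, the tip traces a sphere around the
    chest anchor.
    """
    ax, ay, az = ANCHOR
    x = ax + ARM_LENGTH * math.cos(pitch_roll) * math.sin(yaw_roll)
    y = ay + ARM_LENGTH * math.sin(pitch_roll)
    z = az + ARM_LENGTH * math.cos(pitch_roll) * math.cos(yaw_roll)
    return (x, y, z)

# Wrists at neutral point the limb straight ahead of the chest:
print(third_arm_tip(0.0, 0.0))  # (0.0, 1.3, 1.7)

Part of what makes a mapping like this learnable, presumably, is that it is low-dimensional and consistent: each wrist always drives the same axis, so the brain has only two knobs to figure out.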
Then the mirror blinks off, and I’m looking at cubes hovering in space, just close enough for me to touch with my own fingertips. On my left, there are nine blue cubes; to the right, nine red ones. Every now and then, a cube will turn white. When it does, I must touch it with my real hands.
A couple of feet behind these are nine green cubes. When a white cube appears here, I’ll have to touch it, too. “They are too far, so you can’t use your normal limbs,” says Karutz. “So that’s why you have a third arm.”
OK. Ready. A cube lights up in the blue set. I tap it with my hand. The cube flashes and emits a delightful shimmery zing before turning blue again. Easy.
Now one back in the green array lights up. I’m mentally prepared for this to be tough. Karutz is, too. He’s about to offer some encouragement when I just reach out with my third arm and ... touch it.
I have no idea what I did. I just did it. “Nice!” Karutz says.
I make an awestruck noise and keep going. The cubes flash, and I smack them. Real arms, fake arm—it’s weirdly natural. The mental and muscular math required to make the third arm move happens subconsciously; somehow my two real wrists direct the imaginary one. I’m hoping this will be an actual job skill in the future. Ask me about my homuncular flexibility!
In fact, Stevenson Won concludes that people can adapt within five minutes, and that subjects given a third arm did better than a control group that only used their real arms and stepped forward to tap the green cubes. In an earlier phase of the study, she also found that people readily adapt to having their arms and legs switched, or to having their legs be able to reach extra far. I’d tried this the previous fall, and found it easy to complete the task—popping virtual balloons floating in midair—while using my arms to control my virtual feet, or kicking over my head with my suddenly superflexible legs. I mean, it hadn’t been pretty. I’d lumbered around the room swinging my limbs like a deranged robot. But it got the balloons popped. The idea, she says, was to see if people would switch to using their feet, which were the better balloon-popping tools in both conditions, rather than their hands. (They did, although they performed the task better when their legs were given extended range.)
Thinking about tool use is important in homuncular flexibility research, says Stevenson Won, because there are parallels between how we learn to use tools and how we learn to control novel bodies. “People are very good at quickly learning to use new tools, and tools can be considered as extensions of the body,” she says. In the third arm study, subjects are put into one of four conditions. Some see a limb attached to their chest, as I did. Others see it floating near their body, or as a sort of metal cylinder protruding from their chest, or as a hexagonal shape floating beside them. In other words, it can look like either a tool or a body part, and it can be attached to you or not. And that might change how you learn to use it. As Bailenson puts it: “Is it a hammer or is it your arm?”
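Schematically, the design crosses what the limb looks like with where it sits. Here is a minimal sketch of the four cells, with labels paraphrased from the description above rather than taken from the lab’s materials:

# The four third-arm conditions: appearance (body part vs. tool)
# crossed with placement (attached vs. free-floating).
conditions = [
    ("arm-like limb",   "attached to the chest"),
    ("arm-like limb",   "floating near the body"),
    ("metal cylinder",  "protruding from the chest"),
    ("hexagonal shape", "floating beside the body"),
]

for number, (appearance, placement) in enumerate(conditions, start=1):
    print(f"Condition {number}: {appearance}, {placement}")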
There is a way to test this, too. After I have tapped my way through the cube task, Karutz makes the cubes vanish, and now a bull’s-eye hangs in midair. He asks me to place my third arm at its center. I do, and there is a roaring noise and a bright light. My brain registers this as: My hand is on fire.
I yell. My shoulders and neck involuntarily go into a deep cringe. And that is exactly what the lab wants to know—how I react to a threat to my imaginary limb. “If it’s a tool, if somebody sets it on fire, you shouldn’t flinch,” says Bailenson. “And if it’s your arm, you should.”
And look, by this point I have spent a lot of time in this lab. I have read a ton of their papers and interviewed them relentlessly on their methods. I can see how the trick is done. I know it’s a virtual hand and a virtual fire. But that little tiny freak-out moment?