Words direct our attention to already-established concepts, and a lack of words prevents us from discerning new ones, or at least makes trying to isolate a sixth taste more confusing. As Nicole Garneau, a geneticist who is leading the world’s biggest public search for people who can taste fat, puts it, the one thing she can tell you about naming fat taste is that you shouldn’t call it fat taste. “Because people think of bacon, right?” she said with a shrug before her team of citizen scientists plunged me into a battery of tests to see if I could, in fact, taste fat. (I’ll say this now: Pure fat is nothing like bacon.)
Next, we’ll travel to France, the country that, thanks to Marcel Proust’s In Search of Lost Time, immortalized the connection between scent and memory. But instead, we’ll learn about the link between smell and forgetting. Losing the ability to differentiate between scents is one of the first clinical symptoms of Alzheimer’s and other diseases of memory. We’ll spend time in the Atelier Olfactif with a group of cosmetics industry volunteers who are using odors to help the cognitively impaired recall memories and communicate with loved ones. And here we’ll see the soft biohacking of culture at work. Where you grew up changes what you think you smell, because everything about your life experiences—familiar foods, common household products, the kinds of plants that grow nearby—dictates your connections between word and odor. That’s why when Alienor Massenet, the atelier’s lead perfumer, hands me sample after sample, I repeatedly default to the cultural associations of my California upbringing instead of the associations her French charges would have. To me, lilac is the scent of soap, not a flower. When she gives me the lavender-scented swatch, I think of warm hillsides and it leads me instead to “pine.” (“This smell is so French,” she says forgivingly.) But there are some scent memories she and I share; we both immediately know the smell of the ocean. And when it comes to helping Alzheimer’s patients communicate, correctly identifying the scents doesn’t matter—only the memories they provoke.
In Montréal, Palo Alto, and Washington, D.C., we’ll see how soft biohacking works particularly effectively in the realm of emotion. Our home culture teaches us how to interpret the physical and mental states associated with our feelings, and even how to read the emotions of others. This makes sense, clinical psychologist Andrew Ryder said as we watched students running experiments in his Montréal lab, because in a world of infinite emotional and social data, you want to pay attention to the most culturally significant signals, the ones most meaningful to you and the people around you. “A complex and ambiguous world is forever throwing up new information,” he said. “We only want to spend energy on the information that might matter to us.”
In Los Angeles and the San Francisco Bay Area, we’ll learn how research on pain performed inside the fMRI scanner (by professionals) and inside cocktail bars and taverns (by me) is giving us surprising insights into how we perceive internal states. We tend to think of physical and emotional pain as separate entities: wounds to the flesh versus wounds to the spirit. But social psychologist Naomi Eisenberger, who studies pain as it relates to love and rejection, thinks they are in fact both wounds to the same region of the brain, the part that processes threat. Her jumping-off point is language, how we use identical words to describe social and physical pain—ache, break, and anything else that sounds at home in a country song—despite the fact that we think of them as entirely different kinds of experiences.
“I think there is some prejudice around emotional pain sometimes,” Eisenberger said one day in her lab at UC Los Angeles. “Physical pain is totally understandable. Like, of course you are going to feel hurt if you break a leg! Feeling social pain—somehow people often say, ‘Get over it’ or ‘Deal with it. It’s just in your head.’ And so I think sometimes people feel very validated to hear that some of the same neural regions respond to both. That suggests that we should be taking both physical and social pain seriously.” That idea is opening up some surprising new questions, such as, Can you soothe a broken heart with Tylenol, or assuage physical pain by holding a loved one’s hand? We have long alleviated the pain of bodily injury with ice packs and aspirin; perhaps now there’s a new way to treat the pain of social injury, one of life’s most unpleasant—but fundamental—perceptual experiences.
We’ll also meet researchers who deal in what I’m calling the “hard biohacking” of technology. I’ll be focusing here on devices that people deliberately wear, carry, or implant to alter perception, and this gear is much less passive than social forces in shaping sensory experience. If using technology to manipulate what happens inside your head seems like a futuristic proposition, just consider one of mankind’s earliest perception-shaping devices: the timepiece. In Chapter 6 on time, we’ll see that this perception is a blending of neural, social, and mechanical forces, one that seems to come from both within and outside the body. In a London museum and at a government laboratory in Colorado, we’ll meet the keepers of some very special clocks, one of which was designed to standardize our perception of time, the other to alter it.
And just as the timepiece migrated from rather large, external edifices (think of sundials and clock towers) and then to the tabletop, the wrist, or the pocket, other instruments of sensory perception are scaling down to human size, whether as wearables or, more radically, as implants. Technology is “literally moving towards us like a slow Doctor Who villain,” Rob Spence told me one day from his Toronto home. “It’s moving into our bodies.” And Spence—better known as Eyeborg—should know. He wears a camera in his right eye socket. We’ll learn more about it in Chapter 10, when we visit the explorers of augmented reality, who are trying to enhance human perception through wearable computing devices, commingling human and machine.
Perception-shaping devices are becoming profoundly embedded in human life, partly because they can be worn continuously, partly because they are becoming more deeply integrated with the body. Many of the devices on the commercial market today are wearable, meant to sit lightly on the wrist or over the eye. But other new technologies, currently reserved for those with medical needs, are implanted in the body. And their next generation is headed for the brain. So, too, are the scientists exploring the frontiers of perception. In Chapters 3, 4, and 5—on vision, hearing, and touch, all in the first part of this book—we’ll trace one very special story arc in modern neuroscience, the ongoing quest to parse the electric language of the brain. This is the study of the twin processes that neuroscientists, in a nod to the analogue world of print, call “writing in” and “reading out.” Writing in means moving information to the brain; reading out means interpreting instructions from it.
Our senses are always writing in, taking data from the world and converting it into electrical signals the brain understands. Photons hit the light receptors in your retina, starting an electric relay of signals your brain interprets as an image. Or chemicals lock into the receptors on your tongue, so that your brain registers the resulting electric memo as, say, the taste of sugar. Many of the first generation of write-in devices were built to restore sensory function for people with medical conditions. So we’ll spend time with Dean Lloyd, the man with the bionic eye. After years of blindness caused by retinitis pigmentosa, Lloyd became one of the first people to receive a retinal implant, which writes electrical impulses to his retina that his brain can interpret as visual cues. “This is not standard sight that we have,” Lloyd cautions, referring to his fellow clinical trial participants. But it is sight nonetheless, and we’ll follow along to see how the world looks to Lloyd.
Reading out is the bookend process to writing in; it means translating backward from the brain signal to the sensory experience. For example, if someone shows you a picture or plays you an audio recording, can the pattern of your brain activity be reverse engineered to re-create the original stimulus? Reading out is even tougher than writing in, and before we visit the labs of the people who are attempting it, it’s worth pointing out that coming this far has required not only an enormous leap in the ability to translate the brain’s language, but calling in the efforts of disparate branches of the sciences. In the early days of sensory science, research was limited to the periphery—the body’s outer surfaces, sensory organs and their nerve endings. For example, you might stimulate the taste buds, retinal cells, or skin to learn what the organism would do in response. This was largely the turf of psychologists and physiologists, who correlated stimulus with behavior to understand what was happening later in the nervous system chain. “It’s easy to work on the outside, you know,” Tordoff, a psychologist, said affably during our lunch. But it’s much harder to look inside, he added. “If you can’t work out what’s going on on the tongue—where it’s easy to get to it—how can you work out what’s going on in the middle of billions and billions of neurons, all connecting to each other?”
But despite its reputation, the brain is not an unknowable “black box.” It is just very complicated, and heavily protected by the body and the immune system. Because of the physical and ethical difficulties posed by experimenting on living people’s brains, until fairly recently, much of what we knew came from studying other animals. But over the past two decades, a handful of important new technologies brought the collective insights of biochemistry, neuroscience, and genetics to bear on the study of perception. The Human Genome Project unlocked the world of genes and receptors, unveiling the links between DNA and sensory function. Neuroimaging, particularly the fMRI (functional magnetic resonance imaging) scanner, allowed researchers to intricately chart the brain’s electrical activity and better correlate stimulus and response. A new generation of multielectrode brain implants has allowed the workings of a living brain to be recorded with striking fidelity.
To learn more about this, we’ll stop by UC Berkeley to observe an fMRI experiment in stimulus reconstruction, or reading out brain activity to re-create the original sensory experience. In this case, the subject in the scanner is listening to audio podcasts, and the team eavesdropping on his brain is hoping to decode what he heard. Along with colleagues at other labs, they want to build a model of human hearing so precise that it could be used to read out internal speech—the voice in your head. This ability to translate consciously verbalized words—but not yet much more abstract kinds of thoughts—might help patients who cannot communicate out loud because they are incapacitated by strokes or neurodegenerative diseases.
The final scene in this write-in/read-out triptych will be in the operating room, where surgeon Sherry Wren is at work using robotic arms. These represent a step toward the development of artificial limbs that can not only move with the agility of human hands, but, perhaps, one day experience their same delicacy of touch. The researchers trying to help Wren teleoperate hope to translate this work to prosthetics that could be worn by paralyzed people and controlled by their minds. Developing limbs that can convey inbound sensory feedback from the world—the weight of objects, the force of collisions, the temperature of bodies—while obeying outbound commands from the brain would be the ultimate convergence of writing in and reading out. It’s a ballet between organism and machine that must be synchronized perfectly in order not to break the illusion of real touch in real time. This seamless fluidity is the end goal, said Krishna Shenoy, the Stanford University neuroprosthetics expert whose lab we’ll visit. Researchers want to become so fluent in this synaptic language that they can “just have a conversation with the brain.”
Most write-in and read-out technologies are highly experimental and invasive, largely the purview of research universities and hospitals. But you don’t have to undergo surgery to hack perception. In the final stretch of the book, we’ll meet the people behind some of the homemade and consumer-grade devices that let you warp your senses for exploration—and fun.
We’ll start with the devices that are the most external to the body, and end with the ones inching closest toward it. First, we’ll spend time in virtual reality, an entirely external technology that can make use of not just helmets or goggles, but entire high-tech rooms equipped with surround sound, vibrating floors, and even pumped-in odors, to create scenarios that seem so realistic they trick the brain into altering behavior. We’ll strap on the goggles at a military base where researchers are testing whether “preliving” awful combat scenarios before deployment can help soldiers be more resistant to post-traumatic stress disorder. Then we’ll wear the helmet at a Stanford lab where subjects are asked to virtually inhabit strange bodies and perform bizarre tasks—flying, punching balloons, giving themselves a scrub-down—to see if it prompts them into better social or environmental habits. (Not to give too much away, because part of the perceptual magic behind virtual reality experiments is that we can’t see the trick coming, but let’s just say that after my time on a virtual farm I’m permanently off hamburger.)
Next, we’ll enter the world of augmented reality wearables: glasses, watches, and other gadgets you put on your body—but not inside it—to enhance sensory perception. This is a world full of designers who ask out-of-the-box questions, like Why do we not have night vision, or eyes with automatic zoom? Could we taste a flavor impossible in nature? How can you send an invisible hug? This is where we’ll find Eyeborg, as well as a slew of entrepreneurs hoping to deliver a new generation of augmented reality gear to the mass market. We’ll meet the engineers behind iOptik, an augmented reality system that you wear extremely close to the body—a contact lens goes right on your eyeball—and Adrian David Cheok, the pervasive computing professor whose lab is investigating using rings, phone apps, and even fake lips to convey touch, taste, and smell experiences at a distance. To Cheok, the key difference between virtual and augmented reality is that wearing lightweight technology, rather than gazing into helmet-mounted computer screens or being stuck in specialized rooms, means you can move and interact with others normally, making your “mixed reality” experiences more engaging and natural. “Rather than the human being put into a virtual reality system,” Cheok said, “now we do the opposite: We bring the virtual world onto our bodies.”
This book ends in a basement.
It has to.
The basement—and its Silicon Valley counterpart, the garage—is the birthplace of technical innovation, where our species prototypes its fever dreams, where we discover and hack and build our test creatures. This is where people are hoping to fast-forward evolution, to skip the millennia of random mutation it would take for nature to offer up a new sensory port. So we’ll reconnect with the Grindhouse crew and others who are trying to expand perception by modding themselves. Their quest: to write in a class of environmental information humans can’t otherwise perceive, primarily electromagnetism, since their experimentation generally begins by implanting a magnet. My quest, as a reporter, is to figure out if there’s a neuroscientific explanation for what happens when they try.
One last word about what it means to hack perception. Soft biohacking through social forces is powerful because it is pervasive, stealthy, and hard to control. We are usually unaware it is even happening and only glimpse its influence when we are jarred by a cultural shift, like traveling to a place where the language or social cues are different. But because these attention habits are learned, they can be relearned. Consider again the language problem that dogs the search for a sixth taste: We don’t already have a word or concept for what that taste might be. But people who were in elementary school before the 2000s might remember an era when there were only four tastes, not five. As you’ll read in Chapter 1, the story of how scientists found the fifth, and how the rest of us learned to perceive it, says a lot about the brain’s ability to adjust.
Hacking the brain with technology will be more powerful still. It’s not a totally new idea—you can easily consider psychoactive drug use a form of brain hacking, or storytelling, with its power to transport you to imaginary worlds. But until now, these hacks have been short-lived, temporary breaks from the “real world,” not efforts to supplant it. As people begin altering perception with devices that interact directly with the sensory system and can be worn continuously, like smart watches and glasses, we are moving into an era in which we can deliberately control perception in a more lasting manner, and perhaps enhance our ordinary selves in extraordinary ways.
For some, like the Grindhouse folks, the prospect of installing perceptual gear of our own choice and invention atop our existing mechanisms is liberating, a way to speed up evolution. Others, like members of the group Stop the Cyborgs, whom we’ll meet later, see a risk that we’ll create illusions of “reality” that are harder than ever to escape, their influence more difficult to spot, as we entangle our own perceptual apparatus with machineries built and controlled by others. As they point out, technology is never neutral; it is a way of filtering our experience through someone else’s design. By putting an artificial scrim between your senses and the world, these technologies might give you superpowers, but they might also limit or mediate your attention and experience, or change your behavior in meaningful ways. And because the influence of these devices is so constant and subtle, you might not notice.
But as we’ll discuss with these critics, influencing behavior is human—we already guide each other’s thinking with language, with culture, with social interactions. What is new is how closely you can knit sensory perception to a machine, and—should these technologies gain broad acceptance—to an electronic network of other users, making that influence widespread and perhaps even anonymous. We have in some ways been biohackers all along, shaping each other’s realities through the mere act of living together. Now, as technology users, we may be able to shape our realities again, this time voluntarily and through glossy consumer gadgets.