Touching the Virtual Frontier

If you’ve never been stung by imaginary gunfire, sent a texture sample by email, or had a sleeve teach you how to move your arm, Katherine Kuchenbecker’s Haptics Lab is a Pandora’s box of tactile trickery and strange sensations.

By Trey Popp | Illustration by Scott Bakal


Saurabh Palan GEng’10 wants you to know how it feels to get hit by a bullet. Also, slashed across your shoulder with a sword. Or maybe a zombie claw. Then there’s the sensation of blood flowing from an open wound. He wants you to feel what that’s like too, so he reaches into an electronics drawer in a Towne Building workspace for a thumbnail-sized Peltier element. Plugged into an electrical current, one side of the wafer spreads a gentle wave of warmth over your skin. It’s kind of soothing. Then he pulls out another and tapes it to your arm next to the first, flipped upside down onto its cooling side. He triggers the current and smiles brightly. The combination produces the tactile illusion of a branding-iron burn. 

These are not the typical elements of a class project in robotics. Graduate students in engineering are more accustomed to experiencing pain than inflicting it. But Palan is an aspiring roboticist whose interests run in a very human direction. He wants to tap into what are perhaps our most intense and intimate sensations, the ones engendered by our sense of touch.

In the case of his Tactile Gaming Vest, that means simulating the injuries that lie in wait for a computerized avatar wandering the alien-infested corridors of Half-Life 2. One of his ideas is to make the first-person-shooter genre a little more immersive. So when your attackers target you from behind, you feel a thwack-thwack-thwack against your kidneys. If they come at you straight on, you feel the gunfire in your ribs. 
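In software terms, that routing is simple enough to sketch. The snippet below is purely illustrative, with zone names and angle ranges invented for the example rather than taken from Palan's actual code; it only shows how a hit direction reported by a game might be translated into which group of vibration motors on the vest should fire.

```python
# Illustrative sketch only: route a hit direction reported by a game engine to
# a group of vest tactors. All names and angle ranges here are invented; the
# article does not describe the Tactile Gaming Vest's actual software.

# Tactor zones around the torso, keyed by the angle (degrees) the attack comes
# from, measured clockwise with 0 = straight ahead.
ZONES = {
    "chest":      (315, 45),    # frontal fire lands on the ribs
    "right_side": (45, 135),
    "back":       (135, 225),   # shots from behind land near the kidneys
    "left_side":  (225, 315),
}

def zone_for_hit(angle_deg: float) -> str:
    """Return which group of vibration motors should fire for a hit angle."""
    a = angle_deg % 360
    for zone, (lo, hi) in ZONES.items():
        if lo < hi and lo <= a < hi:
            return zone
        if lo > hi and (a >= lo or a < hi):   # wrap-around sector (the front)
            return zone
    return "chest"

# An enemy directly behind the player reports a 180-degree hit:
print(zone_for_hit(180))   # -> "back"
```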

The prototype he was working on in April was a somewhat stripped-down version of previous ones; the bullet simulators felt a little more like shiatsu taps than sniper rounds, and there was no burning or virtual bleeding to suffer through. “We could do that successfully, but it required a lot of current, so we had to drop it,” Palan explained. His original partner on the project was Ruoyao Wang GEng’09; Edward Li GEng’10 and junior Ned Naukam have pitched in along the way.

“But an application like this, with the blood flow, could be used for military training,” he added, conjuring a vision of soldiers waging war games with heavier, battery-packed simulation vests rather than potentially hazardous rubber bullets. Kind of like laser tag, enhanced with pain. 

“We can make this wireless,” he said. “The main purpose of giving this training to them is to make them aware of how they’re going to feel when they get shot, so that they do not go into shock or a trauma state [in actual combat], and they can handle it. So simulating that in a very realistic way—but not hurting the soldiers at the time—is very important.”

Palan, who earned his master’s in May, is one of a few dozen students to pass through Penn’s Haptics Lab. Haptics is a branch of engineering that focuses on human interaction with real and virtual objects through touch and motion. If you’ve ever swept your fingers across an iPhone screen to scroll through a photo album, or swung a Nintendo Wii remote to strike an imaginary tennis ball, you have an entry-level idea of what the field is about. 

Where it’s going next is the purview of Katherine Kuchenbecker, the Skirkanich Assistant Professor of Innovation, who founded the Haptics Group when she came to Penn in 2007 and serves as its director. A vest that attacks its owner is just the beginning. There are some far stranger sensations on offer in this unkempt room in the Towne Building basement, and their potential applications range from the physical rehabilitation of stroke survivors, to remote-control surgery, to the transmission of textures and sensations by email the way people send photo files now. 


Katherine Kuchenbecker drags a stylus across a flat-screen monitor. Computer algorithms trigger the attached motors to produce a tactile illusion that she’s scraping over one of the textured materials to her right. Photo by Candace diCarlo

“You cannot cause effects in the world without physically touching things.”

As professional credos go, that’s a pretty mundane one. But coming from Kuchenbecker it has an unusual subtext. For one thing, she works in a discipline whose sights have long been set on eliminating the need for people to physically interact with things. Roboticists by and large still hew to a Jetsons-style vision of the future. Their promised land is one where machines unload the dishwasher, cars drive themselves, and there’s no need to give soldiers a virtual preview of bullet wounds because androids will be manning the trenches.

“Here in GRASP,” as Kuchenbecker puts it, referring to Penn’s General Robotics, Automation, Sensing and Perception Lab, “there are many folks who work on autonomous robots. How do I make a robot that can do stuff on its own? And I am working on that. But personally, I think that’s a rather far-off goal in the domains that I am interested in.”

What interests her is the realm of touch and movement. If the stereotypical engineering professor is an eggheaded genius who makes Fourier transforms look easy but hopscotch look hard, Kuchenbecker doesn’t fit the type. She played volleyball at Stanford. She takes dance classes in the Pottruck gym. “I pretend to be a graduate student,” she laughs. She doesn’t have to pretend very hard. She’s not much older than a lot of them, and probably fitter than most. Her athletic pursuits also happen to line up nicely with her academic research, which focuses on the intersection of technology and the human body.

“The area that I focus on is robotic technology to help a user do a task,” she continues, “or make an interaction that they’re having with some sort of technology, like a computer, more interesting, more immersive, [to] let them be able to do what they’re trying to do better. And so most—let’s say all—projects in my lab include either a human interaction with something, or touch-based interaction on the robotic side.”

That’s the other thing that makes her statement about physically touching things a little strange. The more she talks about her research, the fuzzier the definition of touching becomes. Not to mention things. Haptic interfaces, as she describes them in the syllabus of a graduate-level class she teaches, “employ specialized robotic hardware and unique computer algorithms to enable users to explore and manipulate simulated and distant environments.”

Haptic technology has a history that goes back a few decades. The controls in modern aircraft, for example, incorporate some form of tactile feedback; nothing grabs a pilot’s attention like a shaking joystick. When flight controls were mechanically linked to wing flaps and so forth, things like that happened somewhat naturally. When computerization severed that link, engineers turned to haptic interfaces to replace the lost sensory stimuli with simulated equivalents. The idea—in airplanes, cars, and every other field haptics touches—is to improve and enrich the connection between a person and a machine, making its operation as intuitive as possible.

As more and more of our daily activities migrate to the digital domain, haptic technology is entering another phase. 

“This is a very hot area, because we live in two worlds,” says Eduardo Glandt GCh’75 Gr’77, dean of the engineering school. “We live in the real, physical world, and we also live online—we live in the virtual world of the Internet and computers. It’s surprising how much our life now is in that other world. People play and study and shop and find friends, and everything happens virtually. Haptics is the interface. It’s the way the two worlds touch.”

Increasingly, it will also be an interface that connects one realm of the physical world with another. One of the environments currently at the center of Kuchenbecker’s research is the inside of the human body. She wants to enable surgeons who slice and stitch using robot-assisted laparoscopic devices to actually feel what’s at the tips of their instruments. Moreover, she wants trainees to be able to experience those sensations without ever entering someone’s abdomen.

“The way that surgeons learn is actually barbaric,” she says. “Don’t tell the surgeons I said that. They would say, maybe, primitive. But it’s scary if you’re the patient, because a trainee watches an expert do a procedure a couple times, and has read about it in a book, and then they try it and someone watches them. And if they mess up they’re chastised, they’re corrected. But it’s a very high-stakes, high-pressure environment to practice in.”

A high-fidelity virtual reproduction of that environment, lifelike down to the textural differences between healthy tissue and tumors, would make for a safer training ground. 

“Surgeons watch movies of people doing surgery,” she goes on. “Well, what if you could watch it and also feel what the surgeon was feeling? I think there’s a benefit there. But no one has any idea. They’ve never done it before.” 

The technologies being developed in the Haptics Lab, though fragmented and very much in their infancy, are steps toward a first attempt. One that may prove foundational for the field is a project that Kuchenbecker has been working on with a PhD candidate named Joe Romano GEng’10.


Romano is a texture guy. A while ago he mounted a piece of denim to some heavy cardstock. He did the same with a swatch of vinyl, a piece of fine stationery, and a square of rough plastic. Then he set about feeling them. 

Again and again, he dragged the tip of a stylus over the surface of each sample. He pressed against the vinyl gently, then firmly. (Or as Romano puts it, he applied one Newton of force, then two, and so on.) He scraped across the rough plastic slowly, then quickly; the crinkles in its surface snagged the stylus tip to greater and lesser degrees. A shaft-mounted accelerometer measured the vibrations and digitally recorded them in fine-grained detail. Romano did this, and tinkered with the resulting data, for weeks. The man gives the word superficial a whole new spin.

He turned his data sets into mathematical models, which gave him what amounted to compressed computer files corresponding to each texture. (That, it seems safe to say, was the doctoral-level stuff.) Then he outfitted the stylus with a pair of tiny motors capable of rendering the math back into motion. (Well, that too.) 
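To give a flavor of what that modeling step can look like, here is a rough sketch in Python: it fits a simple autoregressive model to a recorded vibration signal, then drives the fitted filter with noise to resynthesize a statistically similar vibration for playback. It is a generic illustration of the capture-and-replay idea, not Romano's actual model or code.

```python
# Rough sketch of the capture-model-replay idea: fit a simple autoregressive
# (AR) model to a recorded vibration signal, then drive the fitted filter with
# noise to recreate a similar vibration. This is a generic illustration, not
# Romano's actual model or code.
import numpy as np

def fit_ar(signal: np.ndarray, order: int = 8) -> np.ndarray:
    """Least-squares fit of AR coefficients so x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([signal[order - k - 1 : len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def resynthesize(coeffs: np.ndarray, n_samples: int, noise_std: float) -> np.ndarray:
    """Drive the fitted AR filter with white noise to recreate the 'feel'."""
    order = len(coeffs)
    out = np.zeros(n_samples + order)
    for t in range(order, len(out)):
        out[t] = coeffs @ out[t - order:t][::-1] + np.random.randn() * noise_std
    return out[order:]

# Toy example: a decaying 250 Hz "texture" vibration sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
recorded = np.sin(2 * np.pi * 250 * t) * np.exp(-3 * t) + 0.05 * np.random.randn(len(t))
coeffs = fit_ar(recorded)                  # the compact "texture file"
replayed = resynthesize(coeffs, len(recorded), noise_std=recorded.std() * 0.1)
```

The appeal of a model like this is exactly the compression Romano was after: a handful of coefficients stands in for seconds of raw accelerometer data.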

Now imagine dragging a stylus—or a pencil tip, if that’s more familiar—across the smooth screen of a tablet monitor, or a handheld PDA device. It scoots across the glass practically without friction, making almost no sound. 

That’s exactly what doesn’t happen when Romano calls up one of his textures to a screen that sits next to his keyboard. This time, you drag the stylus over a picture of crinkled plastic and it jiggles around in your hand as though you were plowing across actual furrows and seams. The pixels of denim “feel” like a pair of broken-in jeans. Writing on the virtual stationery is downright eerie. The papery scritch-scratch might as well be emanating from a pen nib scrawling an old-fashioned thank-you note.

A nearby computer station is equipped with a device called a Phantom Omni, which looks like a mechanical arm with a couple of hinges that allow you to move an attached stylus through the empty airspace below. Its salient feature, though, is its ability to transform computer code into what amounts to an invisible object. 

Romano recently loaded up a couple of examples for a visitor. 

First, the computer monitor displayed a simple rendering, using perspective to convey depth, of a ball lying in a box. “Now take the stylus,” Romano said once the Phantom Omni’s hidden motors had synched up to the computer code. Prodding the same space as before, the stylus’s tip seemed to ram against something. On screen, the ball lurched. With a little practice, it quickly became possible to bat the ball against the virtual walls—like playing a game of squash, only using the shaft of a dry-erase marker instead of a racquet. When the stylus itself came in contact with one of the virtual walls, it stopped cold, as if it were being pressed into a foam pad.

“With these devices, everything kind of feels spongy and slippery,” Romano said. “They can give you information about the shape of an object, but they don’t give you the fine details. So that’s the thing we’ve been working on. Any kind of simulation people come up with, you could add in this fine-detail information.” 
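The spongy feel Romano mentions largely comes from the standard way such devices render contact: the software pushes back with a spring-like force proportional to how far the stylus tip has sunk into the virtual surface. The sketch below illustrates that generic approach, with a small texture vibration layered on top in the spirit of his fine-detail work; the stiffness value, vibration frequency, and function names are invented for the example, not drawn from the lab's software.

```python
# Minimal sketch of penalty-based rendering for a virtual wall: when the stylus
# tip penetrates the surface, push back with a spring force proportional to the
# penetration depth, plus a small texture vibration for fine detail. Numbers
# and names are illustrative, not taken from the Haptics Lab's code.
import math

WALL_Y = 0.0        # wall surface at y = 0 (meters); below it is "inside"
STIFFNESS = 800.0   # N/m; limited stiffness is why virtual walls feel spongy

def wall_force(y: float, t: float, texture_gain: float = 0.3) -> float:
    """Force (N) to command along y for a stylus at height y, at time t (s)."""
    penetration = WALL_Y - y
    if penetration <= 0.0:
        return 0.0                      # not touching the wall: no force
    force = STIFFNESS * penetration     # spring term pushes the tip back out
    # High-frequency texture component layered on top of the spring force.
    force += texture_gain * math.sin(2 * math.pi * 200.0 * t)
    return force

# At a 1 kHz servo rate, with the tip 2 mm inside the wall:
print(round(wall_force(-0.002, t=0.013), 2))
```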

If a simulation called for denim or stationery, Romano could add those textures today. With an expanded library, the possibilities are limitless. When Kuchenbecker looks into her crystal ball, she sees what she calls haptic photography. Say an online shopper wanted to get a tactile sense of a clothing fabric, or an archaeologist wanted to “handle” an artifact located in a museum thousands of miles away. 

“We have developed and are in the process of improving models to capture those sensations and distill them down into a portable, emailable form,” she says. “And then we’re also developing the hardware to really accurately recreate those sensations. So that when you drag your tool over the virtual surface, we can make it feel just the same as if you have the real artifact there in front of you.”


A second scenario on Romano’s monitor showed another direction this technology is being taken. This one, which displayed what looked like three parallel sheets with a featureless rod hovering above them, amounted to a crude simulation of the sort of incision that precedes laparoscopic surgery. Unlike open surgery, which involves cuts large enough for a surgeon’s hands to pass through, laparoscopic surgery is conducted by inserting skinny rods through one or more holes as small as the tip of your pinky finger. One rod is equipped with a video camera—sometimes a twin-lens model with 3-D capability. Others have tools designed for actions like cutting tissue or gripping suture needles. It’s a minimally invasive technique, but it starts with what can be a tricky incision. A surgeon uses what’s called a Veress needle to create the port for all these instruments to pass through.

That’s what the Phantom Omni’s stylus stood in for this time. A certain amount of pressure applied to the top virtual layer pierced it. The stylus, suddenly unopposed by that pressure, lurched forward. The second layer, representing another sort of tissue, had a different level of elasticity. The third layer had still another feel to it. 
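A toy version of that layered behavior is easy to write down: each virtual layer resists the needle with its own stiffness until the force crosses a puncture threshold, at which point the resistance suddenly drops away and the tool lurches. The sketch below is a loose illustration of that logic; all of the stiffness and threshold values are invented rather than modeled on real tissue.

```python
# A loose sketch of the layered-incision demo: each virtual tissue layer resists
# the needle tip with its own stiffness until the force crosses a puncture
# threshold, at which point resistance suddenly drops to zero (the lurch a
# trainee would feel). All values are invented for illustration.
LAYERS = [
    # (name, stiffness N/m, puncture force N, thickness m)
    ("skin",   1200.0, 2.0, 0.003),
    ("fascia", 2000.0, 6.0, 0.002),
    ("muscle",  600.0, 3.0, 0.010),
]

def needle_force(depth: float, punctured: set) -> float:
    """Resistance force (N) for a needle tip at the given depth (m)."""
    top = 0.0
    for name, stiffness, puncture_force, thickness in LAYERS:
        if top <= depth < top + thickness and name not in punctured:
            force = stiffness * (depth - top)
            if force >= puncture_force:
                punctured.add(name)   # the layer gives way: force vanishes
                return 0.0
            return force
        top += thickness
    return 0.0

punctured = set()
for depth_mm in (1, 2, 3, 4, 5):
    print(depth_mm, "mm ->", round(needle_force(depth_mm / 1000.0, punctured), 2), "N")
```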

Endowing such a simulation with the level of textural detail Romano has been modeling could be a big deal for medical training. 

“As we do more minimally invasive surgeries, one of the areas that becomes very critical is getting proper access to the abdomen,” says David Lee, chief of the urology division at Penn Presbyterian Hospital and an assistant professor of surgery at the School of Medicine. A surgeon has to puncture the skin, and the fascia underneath, but take care not to go into the next layer of tissue. “Because the bowel is sitting there, and if you injure the bowel, and you don’t see that you’ve injured it, those patients can do really poorly.”

This is a skill that comes with experience, he adds. “But what cost is it to your patients when you’re in your first few cases and you don’t do the right thing? So the more simulation tools that we have, the better—especially at a place like Penn, where we train lots of residents and medical students. To have them work in this no-risk environment and develop these proper feels of how things are supposed to feel, it’s humongous.”

Though the Phantom Omni simulation wasn’t directly modeled on the actual properties of human flesh, Kuchenbecker envisions “capturing the feel of real interactions” via haptic add-ons to the tools surgeons use already. “Then we could build mathematical models later to let a trainee practice that,” she says, “and experience: Okay, this is what it might feel like with a really healthy young person … This is what it might feel like with an obese patient … ”

Lee is an expert in robotic laparoscopy, in which a surgeon doesn’t actually hold onto the rods, but instead sits at a computerized console that basically channels his hand movements to the tiny tools at their tips, inside the patient. Prostate cancer surgery is his specialty.

“The old open radical prostatectomy involves an incision from the belly button down to the pubic bone. Guys did pretty well, but you know, it’s a bloody operation, and guys are pretty sore down there for a few weeks,” he says. “The robot gives you certain advantages. You can see in 3-D … and the robot instruments are also wristed. [With] standard laparoscopy instruments, you can just go in and out, and open and close, but that’s about all you can do.”

What this sort of computer-assisted operation sacrifices is the sense of touch, which has traditionally been integral to the practice of surgery. “When you’re seated at the robot,” Lee explains, “where all of these potential sensations are blunted by going from the tower, through the wires, into the surgeon console and then to your hand controllers, you really don’t have any sense of feel anymore.”

Kuchenbecker is developing a haptic interface that would restore those lost sensations. Her prototype, built around the same surgical system Lee uses (made by a company called Intuitive Surgical), deploys accelerometers and some fancy wiring to transmit vibrations in the rods back to the surgeon’s fingertips. 

“What surgeons have become accustomed to in open surgery,” she says, “is when they pull on a suture, they can feel the tension. When they cut tissue, they can feel it’s breaking through. If they cut a suture, they can tell if they cut a suture or they missed it. If they’re cutting tissue they may get a sense of is it healthy or is it diseased, as I’m interacting with it, or as I’m palpating or digging around, trying to look for something. And all of that haptic information is absent when they’re using the robot. They learn to compensate through vision, by what they can see.”

Her current model restores some (but not all) of this tactile feedback with a time delay of 1.6 milliseconds, roughly a third of the time it takes a honeybee to complete a single beat of its wings.

Lee doubts this would make much of a difference for an expert surgeon. (He reckons he’s in the top five in the world in terms of prostate cancer cases done with the robot.) But he believes it would be valuable for surgeons learning how to do robotic laparoscopy. “The first few times you sit down at the robot, you want to reach your hand in there and touch it so you know where you are,” he says. “So surgeons who are experienced at open [radical surgery] and try to switch to the robot, they have a hard time sometimes because they lose that extra feedback.”

The feedback they’d get through Kuchenbecker’s vibration sensor isn’t the same thing as putting your fingers directly on the prostate, he adds, but it could be valuable in a different way. “In the robot surgery setting, because we’re working in a narrow space and you have sometimes three or four instruments, along with a camera, working in  [that] space, you have a lot of potential instrument collisions. If you can feel, off-camera, that your instruments are bumping … that’s a place where a less experienced surgeon, if you don’t feel that at all, and start pushing, pushing—all of a sudden you could have this big release,” he says, jerking an imaginary scalpel tool through the air. “Whereas if you feel that right away, you know [that you’ve] got to back up and come in again. So I think there are a lot of benefits in helping a surgeon along that learning curve.” 

Kuchenbecker was planning to run a study this summer to measure the effect of this haptic feedback on expert surgeons and trainees. “Maybe this could make it easier to become an expert,” she says. “Or maybe it just makes surgery less stressful, less cognitively intense … I liken it a lot to driving. If you’re driving eight hours a day, if your car was just a little more comfortable, or if your mirrors were just a little better aligned, or if you had better information from the car or a better connection between you and your car, maybe it would make that experience easier.

“Or maybe,” she says, “it can let experts reach a higher level of skill.”

That’s not idle speculation, says Lee, who mentions real-time elastography as an example of where robot-assisted surgery could be headed. “With traditional ultrasound, you just get a picture,” he explains. “But with elastography, it sends certain impulses, and then through mathematical calculations it can tell you how elastic the tissue is. So it could help you feel how elastic the tissue is—or feel hard areas within the prostate, maybe even better than what your fingers can feel.”

Kuchenbecker’s prototype “is the first generation of developing tactile feedback for the robot,” he adds.  “But it could turn into a lot of different things where you develop sensors at the tip of your robot instruments that allow you to feel things or see things that you could never do [in] open [surgery]. So you could add all these extra tools and get information pumped to your eyes—and your fingers—as you’re doing the operation that you couldn’t dream of before.”


“In the end, the way you change things in the world is by moving.” 

That’s another one of Kuchenbecker’s unofficial mottos—and for most people, a banal fact of normal life. But for stroke survivors who develop apraxia, it is the defining impediment to normal life. 

Apraxic stroke patients have difficulty planning and carrying out purposeful movements. They can see a cup of water on the table before them; they can think about grabbing it and taking a sip; but something invades the space between desire and action to foil their attempts. Their shoulder might swivel the wrong way. Their elbow may overextend, or scissor shut at the wrong moment. 

Practice helps. Patients who manage to repeat such routine motions over and over can sometimes regain the ability to carry them out consistently. But showing them how to do it isn’t enough. 

“These patients can’t interpret the visual feedback,” says Kuchenbecker. “It doesn’t help them to be able to see how they’re messing up.” They need to feel their way toward success. 

It’s a daunting job for a physical therapist. Teaching someone how to relearn these motor skills involves countless repetitions—and providing too much physical help can undermine the process. 

“From the videos we’ve watched of these patients,” Kuchenbecker says, “sometimes the therapists do actually push, and do the motion. But they’re trying to get the patient to do the movement themselves. They’re trying to get them to make the new connections in their brain, to explore and figure out, How can I get my arm to move in that way?”

Some researchers have experimented with planar robots—devices that can guide a patient’s hands along certain trajectories, mechanically pushing them in the right direction when they veer off track. But that leads to another catch. 

“It turns out that having the robot help you in this way maybe makes you do a better job of the task right now, but it doesn’t transfer to real life, because the robot is doing it for you,” Kuchenbecker explains. “So we came up with this idea of a sleeve—and eventually, an entire suit—that would know how you’re moving and give you [tactile] guidance.”

As the spring semester wound down, one of her master’s students, Pulkit Kapur GME’10, demonstrated a prototype he had worked on with Kuchenbecker and a pair of clinical researchers at Philadelphia’s Moss Rehabilitation Research Institute. It was a tight-fitting sleeve embedded with sensors whose precise spatial relationships to one another can be monitored in real time by a magnetic tracking device, alongside small eccentric-mass motors (the same things that make your cell phone vibrate) that deliver little high-frequency buzzes to certain parts of the arm. 

Plugged into a laptop, the sleeve tracks the arm movements of the person wearing it, translating the sensor data into a moving image of a virtual arm on the screen. Meanwhile, whenever the patient’s arm drifts away from its intended trajectory, one or more pager motors go off, signaling the error the way a therapist might—albeit with a high-frequency vibration instead of a gentle touch of the palm—to prod a self-directed correction.
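The heart of that feedback loop can be sketched in a few lines. The example below compares measured joint angles against a target trajectory and reports which motors should buzz; the joint names, tolerance, and cue labels are invented for illustration and are not taken from the Penn prototype.

```python
# Hedged sketch of the guidance logic: compare tracked joint angles against a
# target trajectory and buzz the motors near any joint that has drifted past a
# tolerance. Joint names, threshold, and cue labels are invented for
# illustration; the article does not specify the sleeve's actual algorithm.
ERROR_THRESHOLD_DEG = 10.0   # how far a joint may drift before a buzz fires

def guidance_cues(measured: dict, target: dict) -> list:
    """Return (joint, correction) pairs whose motors should fire right now.

    measured and target map joint names ('shoulder', 'elbow') to angles in degrees.
    """
    cues = []
    for joint, target_angle in target.items():
        error = measured.get(joint, target_angle) - target_angle
        if abs(error) > ERROR_THRESHOLD_DEG:
            cues.append((joint, "decrease" if error > 0 else "increase"))
    return cues

# One frame from the tracker: the elbow angle is 18 degrees too large.
print(guidance_cues({"shoulder": 42.0, "elbow": 108.0},
                    {"shoulder": 45.0, "elbow": 90.0}))
# -> [('elbow', 'decrease')]
```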

“It has to be a little more fancy than a Wii remote because we actually need to know where’s my forearm, where’s my upper arm, where’s my torso, what are the joint angles?” Kuchenbecker says. “And then give them some feedback to help guide their motion, to help make the task more interesting and easier to do.”

“The goal is that this could be something that could be in a rehabilitation clinic,” says Kuchenbecker. While the $10,000 price ceiling set by her clinical collaborators might make the sleeve attractive for that setting, “for it to be truly, truly useful, it would be great if it was something a patient could take home with them, which is on the order of, rather than thousands of dollars, hundreds of dollars.” 

“I’m personally interested in also testing athletes,” she adds. “For a stroke patient, they’re relearning motions that they used to know. Whereas an athlete or dancer is maybe trying to really push themselves beyond what’s typical.”

That prospect is several steps ahead of current capabilities. Getting there—and achieving the sort of sophistication that might really begin to change the game in robotic surgery—will hinge to some degree on figuring out how to go from buzzing someone’s arm with a pager motor to imparting more naturalistic sensations. 

Pumping information about a tissue’s elasticity across the room to a surgeon’s fingers will require more subtlety and nuance than simulating a videogame bullet strike. After all, a gamer dodging virtual cannons and crossbows probably isn’t looking for strict verisimilitude. 

The current advantage of things like pager motors is that they’re small, cheap, and easy to program. “But they’re not what I want to use in the long term,” says Kuchenbecker. “So we’ve been starting to develop what I call new tactors—tactile actuators that either make or break contact with your skin, or vibrate but in a more interesting way, a more natural way. Like, let’s record this thump for someone thumping your arm like this,” she says, rapping a fingertip against her forearm, “and play that thump, thump, thump so it’s more natural instead of this very high-frequency, annoying zzzzzz.” 

This fall, she’s bringing a postdoctoral researcher to Penn who will focus on modular devices that can provide skin-stretch feedback. “So say you’re a transhumeral amputee, and I want you to be able to feel the elbow angle of your prosthesis without looking at it,” Kuchenbecker explains. “I could, like, put that [skin-stretch tactor] right on your upper arm so you could feel the extent that this little tactor is stretching your arm,” which would in turn enable an amputee to intuit the prosthetic limb’s spatial position. 
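The mapping she describes is, at its core, a simple proportional one. The sketch below, with made-up angle ranges and no real device interface, shows how a prosthetic elbow angle might be translated into how far a skin-stretch tactor rotates against the upper arm.

```python
# A small hedged sketch: map a prosthetic elbow angle onto how far a skin-stretch
# tactor rotates against the upper arm, so the wearer can feel joint position
# without looking. The ranges and the linear mapping are assumptions for
# illustration only.
ELBOW_RANGE_DEG = (0.0, 135.0)     # straight arm through fully flexed
TACTOR_RANGE_DEG = (0.0, 60.0)     # how far the tactor can rotate the skin

def tactor_command(elbow_angle_deg: float) -> float:
    """Return the tactor rotation (degrees) for a given elbow angle."""
    lo, hi = ELBOW_RANGE_DEG
    fraction = min(max((elbow_angle_deg - lo) / (hi - lo), 0.0), 1.0)
    t_lo, t_hi = TACTOR_RANGE_DEG
    return t_lo + fraction * (t_hi - t_lo)

print(tactor_command(90.0))   # elbow at 90 degrees -> tactor at 40 degrees
```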

The underlying challenge is partly about advancing technology, and partly about understanding how our bodies and brains convert physical stimuli into sensations. 

“For haptics,” Kuchenbecker observes, “we work on understanding the capabilities of the human sensing system so that we can try to take advantage of them, exploit them, or build on them.”

Which is just what Saurabh Palan was exploring with his Tactile Gaming Vest. It’s not terribly hard to tap into a computer game for data on what directions the bullets are flying from. The art comes in tricking someone into feeling something that doesn’t quite line up with physical reality. “You need to fool your body or your mind,” as Palan put it. And that’s exactly what he’d done to simulate the searing pain of a bullet entry. It turns out that placing a cold Peltier element right next to a hot one triggers an intense burning sensation without the slightest damage to the skin. 
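The control side of that trick is strikingly simple. The sketch below uses a placeholder driver class rather than Palan's actual hardware code, but it captures the idea: command one wafer to warm the skin and its neighbor to cool it, and let the nervous system misread the combination.

```python
# Illustrative sketch of the thermal-grill trick: drive one Peltier element to
# warm the skin while its neighbor cools it, which the nervous system reads as
# burning. The PeltierPair class is a placeholder; real hardware would use an
# H-bridge or motor-driver circuit, not these print statements.
WARM_TARGET_C = 40.0   # gentle warmth, nowhere near damaging the skin
COOL_TARGET_C = 20.0   # mild cooling on the adjacent, flipped element

class PeltierPair:
    """Two thumbnail-sized Peltier wafers taped side by side on the arm."""

    def __init__(self, warm_channel: int, cool_channel: int):
        self.warm_channel = warm_channel
        self.cool_channel = cool_channel

    def set_burn_illusion(self, on: bool) -> None:
        """Hold one wafer warm and the other cool to evoke the burning illusion."""
        self._drive(self.warm_channel, WARM_TARGET_C if on else None)
        self._drive(self.cool_channel, COOL_TARGET_C if on else None)

    def _drive(self, channel: int, target_c) -> None:
        # Placeholder for real current control; just report what would happen.
        state = "off" if target_c is None else f"hold at {target_c:.0f} C"
        print(f"channel {channel}: {state}")

pair = PeltierPair(warm_channel=0, cool_channel=1)
pair.set_burn_illusion(True)
```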

“The human central nervous system and peripheral nervous system evolved interacting with natural stimuli,” Kuchenbecker says. “Your brain is trying to construct the most likely explanation for the feedback it’s feeling … So 1,000 years ago or 2,000 years ago, your body probably would not have experienced a very warm something next to a very cold something. And so there’s this peculiar illusion where you can create a burning sensation because you’re stimulating the nerves in a way that they didn’t typically get stimulated.

“And now we can create all sorts of artificial stimuli that create contradictions, or exploit the underlying method of the sensing system,” she adds. “It’s all about, can we capture the feel of an interaction the same way that you can capture an appearance, and store the parts that are salient … and then can we recreate it, really realistically, for the user to experience later?” 

There is something at once exciting and unsettling about all of this. In the last 150 years, human beings have come to terms with the power of photography to preserve fleeting images for as long as we care to keep them. In the last 50, film and video have intensified that ability. We experience places without having visited them, remember events without having witnessed them. In our era of relentless documentation, intimate memories of wedding dances have a way of being supplanted by DVD versions viewed many times afterward, and children may remember their first home runs and ballet recitals more keenly in highlight-reel format than in subjective recollections of the experience itself. What is in store for us when our physical sensations can be distilled into portable and everlasting formats, to buy, sell, save, and replay whenever we like? It is a question that may be answered sooner than you think. The virtual world is coming ever closer. The day is coming when you will reach out and touch it.
