As our ability to peek inside the brain—and to alter it—expands, the field of neuroethics is beginning to emerge (with the help of a few Penn scholars) to study the implications for society and the individual.

By Susan Frith | Illustration by Jon Sarkin


The technology is already here.

It’s called optical imaging, and it can detect cancer, stress—possibly even deceit. At least that’s the opinion of Dr. Britton Chance CH’35 Gr’40 Hon’85, who calls it “a cheap window on the brain” that’s also noninvasive. And one day, he believes, it might be used in airport security to look for terrorists. But the question is, will it finger a flying-phobic soccer mom instead?

Chance, professor emeritus of biochemistry and biophysics, who celebrated his 90th birthday in July, is a pioneer of biomedical optics whose accomplishments leap across fields as far-ranging as radar technology, enzyme research, and magnetic resonance spectroscopy. More recently, he has amassed six years of data on how neural networks are activated in the prefrontal cortex when high-school students learn to solve problems.

After the September 11 tragedy, he turned his attention to the brain’s activity during other states, such as distress, fatigue, and deceit. “We are interested in airport security and whether or not malevolent intent could be detected readily,” he says, sitting in his lab in the School of Medicine’s Anatomy Chemistry Building. “Remote sensing of brain functions is one of our major projects in this laboratory. We’ve shown it’s mathematically possible and made a preliminary design of the apparatus. It would scan the appropriate region in the brain, and reflected and diffused light from that area would be collected by a sensor in the distance.” 

Dr. Paul Root Wolpe C’78, senior fellow at Penn’s Center for Bioethics, criticizes the idea of scanning brains for terrorists and predicts that any attempt to ferret out malevolent intent would yield a lot of false positives and negatives.

What can technology reliably tell us about the workings of the human mind—and conscience? And who has a right to use this information? How far should we go in treating or enhancing the brain? Those are a few of the concerns that make up the emerging field of neuroethics.

Over the past few years, a small group of scholars—including Penn bioethicists Arthur Caplan and Wolpe, and psychology professor Martha Farah—has begun to look at the ethical issues arising from advances in the neurosciences. “Just as the human genome is being mapped, the brain is being mapped,” says Caplan, director of Penn’s Center for Bioethics and Trustee Professor of Bioethics in Molecular and Cellular Engineering. “It doesn’t have a project and a federal budget and companies racing to do it; it’s taking place in many fields and places, but it raises as many ethical issues as the new genetic knowledge does.” Unlike the link between one’s genes and one’s behavior, which is “fairly complicated,” he says, “the link between brain and behavior is pretty tight, so it becomes more problematic when you start to have information in real time about the brain.” 

Penn hosted what was perhaps the first conference on neuroethics in February 2002 after arranging a series of meetings (funded by the Greenwall Foundation) to outline the basic issues with a working group of national leaders in the neurosciences. Members included Harvard psychologist Steven Pinker (author of The Blank Slate: The Modern Denial of Human Nature), Nobel Prize-winning neurobiologist Eric Kandel, and Steve Hyman, formerly director of the National Institute of Mental Health and now Harvard’s provost. Since then neuroethics has been the topic of conferences hosted by Stanford and the New York Academy of Sciences as well as a flurry of journal articles and speaking engagements. (The term itself was coined by New York Times columnist William Safire.)

“I think if neuroethics didn’t get born at Penn,” Caplan says, “we’re at least one of the two or three places where it started.” 

As functional MRI and other technologies have provided an increasingly detailed picture of the brain, pharmaceuticals and treatments have provided new ways to manipulate it. “Science is way ahead of the ethics and social policy on these issues,” says Wolpe, who is also an assistant professor in the departments of psychiatry and sociology and chief of bioethics at NASA. He notes that he is not opposed to the technologies themselves, in most cases, but concerned about their potential misuse.

Brain scans can provide clues about whether someone is an introvert or extrovert and, in some cases, reveal whether a person has the cravings of a recovering drug addict. New versions of lie detection are emerging that could leave the polygraph behind. And even certain new surgical techniques—such as implanting electrodes in the brain to treat motor loss in Parkinson’s disease patients—may raise neuroethical issues.

This convergence of technologies has “implications for how we think about ourselves, because in our [Western] culture, at least, we think of ourselves as brains,” Wolpe says. “We think of our brains as the locus of our identity.”


Ever since graduate school, Dr. Martha Farah has loved cognitive neuroscience “for the questions it asks, the methods it uses, and the excitement of being able to understand how the brain works.” But the Penn psychology professor and director of Penn’s Center for Cognitive Neuroscience [“The Fragile Orchestra,” March 1998] had one big regret: “My field was so ‘ivory tower’ that it didn’t seem to have any relevance to the real world and its problems,” she says. Today that’s changed.

“The last few decades of research suddenly have made possible all kinds of interventions and methods of monitoring people’s brains that have some pretty serious social implications,” Farah says, acknowledging that she has mixed feelings about those methods. It’s “sobering to realize that the effect [my field has] could be good or not good, depending on how we use these new capabilities.”

One job for neuroethicists is to perform triage—separating imminent issues from more far-fetched possibilities, she says. While implanting a computer chip in someone’s brain to control their mind is still a ways off, a growing number of people are starting to practice “neurocognitive enhancement” to improve their mood, attention, sleep, and sexual performance. Also on the horizon is the use of pharmaceuticals in the criminal-justice system. And though it’s currently impossible to put someone in a brain scanner and obtain a mental-health readout, says Farah, the increasing ability to mine the brain for “socially relevant” information raises very real questions about privacy in the new millennium.

Unless privacy protections are put in place, warns Caplan, this data easily could end up in the hands of employers, the courts, or government agencies.


Yourself, Only Better
If you think it’s tough to get your child into one of the top universities now, imagine the competition when the valedictorian and salutatorian have chips implanted in their brains to boost their academic performance. Art Caplan has no trouble envisioning a future world chock full of neuroenhancers for healthy individuals, nor any trouble predicting the objections to it and knocking them down.

“Even people comfortable with the idea of fixing obvious brain defects become much prissier when it comes to mucking with brains to make them better than good,” he writes in the September issue of Scientific American. “Americans in particular believe that people should earn what they have. [But] Would it be bad if some innovation—say a brain chip implanted in the hippocampus—enabled a person to learn French in minutes or to read novels at a faster pace? Should we shun an implant that enhances brain development in newborns?” 

To Caplan, brain engineering is no more unnatural than eyeglasses, artificial hips, plane rides, and vitamins. “It is the essence of humanness to try to improve the world and oneself.”

But does something get lost in the improving? “It’s probably one of the great worries we have,” Caplan responds in a phone interview. “If we eliminate the different and the unusual and the neurotic, will we wind up with less art and beauty and culture and zest in life? But I’m not sure that’s incompatible [with brain enhancement]. More to the point, you can still be neurotic and have a better memory and learn new languages faster.” It’s also unlikely that we’d end up the same if we had the option to improve our brains in different ways, Caplan says. “Not everyone is going to value the same things.”

If the French-facilitating chip sounds far-fetched, other examples of neuroenhancement are close at hand. U.S. fighter pilots took modafinil, originally developed for treating narcolepsy, to stay awake long hours during missions in Afghanistan. The advent of more powerful drugs with fewer side effects has led to a greater number of people using antidepressants. And Ritalin, prescribed for attention deficit disorder (ADD), has found its way into the hands of college students looking for an advantage at exam-time.

“My undergraduates tell me they’re easy to get,” says Farah of the latter drug. “And they do work for everybody.” A patient with ADD “will be able to focus on a project longer and work more effectively [on Ritalin], but it’s also the case that somebody with normal levels of attentional control will also be able to focus longer and work more efficiently with these drugs.”

Paul Wolpe points out that Ritalin use is bimodal: In inner-city schools, it’s pushed by administrators for classroom management; in suburban schools, parents seek it for their children as a “personal-enhancement” tool to improve their academic performance, he says.

“In our society, [ADD] is an illness, but what we have to remember is that in another society, it wouldn’t be.” In fact, Wolpe says, there is a theory that ADD developed in hunter-gatherer societies, because someone with the “scattered-” and “hyper-focus” of the disorder is a great hunter. “What you want to do when you’re hunting is be looking at your whole environment and then when you see the prey, put hyper-focus on it. A person with ADD can’t concentrate five minutes on their schoolwork, but they can sit with their Game Boy for 4 1/2 hours.” 

Many people who currently take antidepressants would not have been prescribed these drugs 10 years ago, according to Farah. As more selective drugs are developed, “There is no reason to predict that their ranks will not continue to swell, and to include healthier and higher-functioning people,” she writes in an article that appeared in the journal Nature last November. Studies of short-term antidepressant use in normal subjects have found that it reduced “self-reported negative affect (such as fear and hostility) while leaving positive affect (happiness, excitement) the same.” 

Pointing to the popularity and online-availability of drugs like Viagra, Wolpe says, “We will agonize over using [neuroenhancers] even as we use them.”

One concern over enhancement technologies is that the less wealthy or the less educated might be shut out of the opportunities they provide. Caplan argues that the problem “isn’t one of technology, it’s one of fairness.” And one need look only as far as private schools and the Kaplan SAT prep courses to find current inequities. “[It’s] a huge problem, but to me the [answer is figuring out] how to make things equitable.”

Though greater awareness of the brain and development of more sophisticated drugs may lessen the stigma of mental illness, Caplan acknowledges that someone who chooses not to medicate could experience discrimination. “I could imagine someone saying, ‘This is an apartment building where only the pharmacologically calm are allowed to live. We don’t want anyone yelling, screaming, or agitated.’”

Farah sees the potential for more indirect coercion. “Even the enhancement of mood, which at first glance lacks a competitive function, seems to be associated with increased social ability, which does confer an advantage in many walks of life,” she writes. But, she argues, “it would seem at least as much of an infringement on personal freedom to restrict access to safe enhancements for the sake of avoiding the indirect coercion of individuals who do not wish to partake.”

Beyond the moral objections to gain without pain, some forms of enhancement may backfire. Drugs developed for memory-loss in Alzheimer’s patients, for example, might some day improve the memories of healthy adults. But the ability to forget is sometimes crucial to our survival, she says, noting that studies have linked “prodigious memory” to “difficulties with thinking and problem solving.”

Wolpe points to a study in which mice with genetically enhanced memories showed more aversion to pain. “We have this assumption that ‘if I take this memory pill, my brain is going to be enhanced in areas which I want it to be,’” he says. “And yet the balance of memory and forgetting in our brains is very delicately programmed. Who knows what the impact would be? If the brain didn’t forget pain, women wouldn’t have more than one baby.”


Truth or Consequence
A French TV crew has taken over one of the radiology labs at the Hospital of the University of Pennsylvania. They are here to film a simulation, ironically enough, of Dr. Daniel Langleben’s latest experiments on deception.

“I’ll have to ask you to hide somewhere or wait outside,” says the TV reporter to everyone who is not needed inside the snug space for safety or dramatic purposes, including the scientist himself. We creep off to a nearby waiting room, where Langleben, an assistant professor of psychiatry, tells me about a model for recognizing deception that could prove more promising than the polygraph.

When subjects placed inside an fMRI (functional magnetic resonance imaging) scanner were asked to lie about a playing card they held (a standard paradigm known as the “Guilty Knowledge Test”) and offered a $20 reward, two areas of the brain—the anterior cingulate and left prefrontal cortex—became more active. In comparison, no parts of the brain became more active when they were telling the truth. “Deception requires extra effort,” Langleben says. “Truth is the default.”

Skillful liars can cheat a polygraph, which only measures physiologic changes such as heart rate and perspiration. But according to Langleben, “Brain activity is much harder to control.”

“I’m going to predict a TV show for you,” says Caplan, who has a keen sense of the entertainment value in over-the-top prognostications. “The show will be called Liar Liar. It’s going to be on the air within seven years. On the show, which will be hosted by Jerry Springer’s son, couples will come on and make allegations of infidelity or embezzlement or stealing against one another, and then a scanning machine will be brought on and the head stuck inside it and the host will render the verdict about whether they’re telling the truth or not.”

It’s quite likely that Langleben would change the channel. In fact, when asked about fMRI’s potential as a lie detector, he’d rather come up with reasons why it’s not ready for prime time. He has a grant from a Defense Department-linked company to look for flaws in the model. 

“If you ever want to use it appropriately, you will have to test it in the target population,” he says. “Let’s say it works for hiding a card in normal college students. That does not mean it’s applicable to a population of middle-aged convicts.” Furthermore, he notes there is “a huge difference between average group effects and individual effects. What we’re showing so far is group effects. What needs to be shown is that this technique can be accurate in single subjects within a single session. The average is never applicable to the individual until the range of the effect has been tested.” 

In contrast to Langleben, Iowa scientist Lawrence Farwell is promoting a method of “brain fingerprinting” that has been used in two criminal investigations. Using electroencephalograms (EEGs), Farwell claims he can record electric signals from the brain while a subject is exposed to sounds, words, or images on a computer screen to show the existence or absence of specific memories related to a particular crime—and thus determine a person’s guilt or innocence. His company promotes the system for forensic examinations and screening for security leaks and terrorists.

“I view this as very scary,” says Dr. Kenneth Foster, a professor of bioengineering at Penn who focuses on ethical issues surrounding the use of technology [“Science Meets Society,” February 1998] and cowrote with Wolpe and Caplan an article for IEEE Spectrum on developments like Farwell’s. “This needs to be examined carefully before precedents get established for using it.”

“This is an example of a dangerous use of this sort of thing,” Langleben adds, pointing out that Farwell “doesn’t publish” and is “not part of an academic community.” The good thing about the fMRI machine used in Langleben’s own studies is that it’s “in the hands of the medical community,” he adds. “There’s no way this machine would be in the hands of non-MDs.” (MRI monitors changes in the body’s blood oxygen levels through the use of a powerful magnetic field and radio-frequency pulses, while the subject remains still inside a protective chamber. Functional MRI is an enhanced use of existing MRI technology that provides the necessary resolution and speed to visualize brain function.)

The airport sensor described by Britton Chance wouldn’t require a medical doctor to operate, however, since it’s noninvasive and works with infrared light to detect areas of increased blood flow and altered metabolism. Chance predicts the device itself could be ready for security tests in two years, though he concedes that more data is needed to measure what he calls “malevolent intent.”

Chance pulls out a pictorial representation of one of his experiments. It’s a page covered with two rows of ovals that represent voxels, or different areas of the forebrain. Each one is differently colored, with red signifying areas of greatest activation. “This is someone who had learned to solve these problems over [many] weeks, so that now, instead of chaotically searching for the solution, they have trained neural networks in this region to function uniquely here.”

Over the past six summers, minority high-school students have taken part in research at Chance’s lab that has provided “an enormous database” on high-school student cognition. Subjects sit before a computer screen wearing a headband-like device, known as a cognosensor. They must determine whether a nonsensical jumble of letters, such as ardma, can be reordered to create an English word (drama). While they are responding to each problem, their brain activity is monitored by the cognosensor, which is studded with emitters that strike tissue or blood vessels in the brain with near-infrared light and detectors that measure the light reflected back.

Since September 11, Chance’s lab has also looked into emotional disturbances. Among other things, the subjects view pictorial displays of angry, happy, or unkind faces as well as pictures of the World Trade Center collapse.

“Each one has their own particular [response] pattern. Just like fingerprints are different, these—perhaps you could call them ‘brain prints’—are expectedly different,” Chance says. “Different voxels. Different intensities of response.”

Chance has also done experiments on deception with the high-school students and with visiting medical students from Pakistan’s Aga Khan University. “When you lie, there are larger signals over a wide area. It may be that the central conscience, which is pricked by telling a lie, is not localized. Just where the social conscience is located, we haven’t determined and suspect that it may be different for different individuals.” 

The results are preliminary. “We haven’t studied corporate executives or poker players or ladies of the night, whose job it may be to professionally prevaricate. So we don’t have access to those. Maybe we should take a trip to Atlantic City.”

Chance didn’t give a specific definition of what he means by malevolent intent—the emotional state he proposes to monitor at airports—simply saying it may or may not include deception. “We’re dealing with a characteristic which may be a unique characteristic of an individual,” he says, adding, “We do know that the signals are bigger when there is emotional stress than when there isn’t.”

Involuntary screening is “ethically problematic,” says Art Caplan. On the other hand, “There is no right to fly. If you’ve bought a ticket to board an aircraft, then you can decide whether or not to go through the rituals that get you on that aircraft. Outside of emergencies, you have a choice about flying and therefore about being screened. In other words, someone who was frightened that their malevolent intent would be detected can turn around and go away.”

Paul Wolpe predicts that the public would never stand for this kind of airport screening and argues that it would be impossible to accurately gauge something like malevolent intent. “We don’t know if it’s nervousness because you’re about to plant a bomb in the plane or nervousness because you’re scared to fly. And if you’re a really well-trained terrorist, you’re not going to exhibit.”

Chance acknowledges the need to separate fright from malevolent intent and to test a larger population, adding that he, too, is “deeply concerned” about the ethics of opening this window on the brain. “We have no idea where new technologies are going to take us. So let’s be conservative” in applying them. 

Nevertheless, the research still continues at a liberal pace in his busy lab. A different project that has grabbed the attention of the military is the development of a disposable, telemetered cognosensor that would fit on the forehead, under a soldier’s helmet, and allow commanders to remotely monitor soldiers’ fatigue on the battlefield. “We know the tired brain doesn’t work as well as the fresh brain,” Chance says. “Just at what point the brain becomes less efficient is the object of future studies.” He also hopes to use similar technology to monitor people for signs of post-traumatic stress at disaster scenes.


While Chance’s current research was motivated by the events of 9/11, Langleben’s interest in deception was sparked by an article he read several years ago. It stated that children with attention deficit disorder have trouble lying.

“I thought, ‘Now what does this mean?’” he recalls. “I’ve had some patients with ADD, and they had no trouble telling me all kinds of stories about why they didn’t show up for an appointment.” But what the article was getting at, he realized, is that children with ADD have trouble lying successfully. “They tend to just blurt [the truth] out.” That was consistent with the prevailing hypothesis that persons with ADD have trouble with response inhibition, and it got Langleben thinking about the connection between response inhibition and deception. When Langleben moved to Penn to do substance-abuse research, subjects frequently lied to him about their addictions. They would deny their substance abuse even when this denial endangered their lives. His interest in deception renewed, Langleben came up with the idea of testing a model of it with fMRI.

The areas of the brain that were activated during the card game he tested fit nicely into a neuroscience framework, he says. What the anterior cingulate does is make a selective response when you have more than one choice. 

Langleben points to a group of paintings on the wall. “Say I’m asking which one of these pictures you want to take home. You know I’d like you to take that one, but you like this other one better. The anterior cingulate is going to get busy there, because you know it is better for you to do something to please me [than tell the truth].” The left prefrontal cortex, in turn, tells the motor cortex that’s in charge of the hand to press the button in the MRI experiment to answer yes or no. In a lie, “it requires more work to redirect the thumb to some other place.” Despite the neat fit of this model, Langleben bets that different areas would activate during different kinds of deception. “This is not the lie center.”

He continues to fine-tune the study, which in its latest version gives the subject a choice of which cards to hide and separates the process of giving instructions from the testing itself. “The subject who’s doing the deception has to believe the deception is not known to the target.” To have two different people instructing and testing also helps remove an appearance of “endorsement” by the investigator, which could throw off the experiment. 

Though pure research is his first interest, Langleben says this technology could lead to a lie detector, as long as it’s used and applied judiciously. According to Langleben, defendants would have to consent to fMRI testing; otherwise it wouldn’t work. “It only takes moving around the head, and the whole thing is [thrown] off.”

“If Daniel Langleben’s technology ever works, then we have an enormous ethical issue,” says Paul Wolpe: “Where is it appropriately used? If I say I’m willing to get in the scanner and show you I’m telling the truth, that’s one thing. But can the court tell me to get in the scanner? Is that self-incrimination?” It was easier for ethicists to cope with the old polygraph machines, he suggests, since their very unreliability placed them beyond the ethical pale. 


Pills Not Prisons

“Is a world with more pharmacology and less prison better or worse than a world with more prison but less pharmacology?” Dr. Lawrence Sherman, the Albert M. Greenfield Professor of Human Relations and professor of sociology, posed that question in 2002 when he addressed the American Society of Criminology. Sherman, who directs the Fels Center of Government and the Jerry Lee Center of Criminology [“A Passion for Evidence,” March/April 2000], says that treatment for mental illness as an alternative to prison could be one part of the prescription for an “emotionally intelligent justice system.”

The combination of criminal justice and psychopharmaceuticals—or “neurocorrectors,” as Martha Farah calls them—alarms some ethicists. “Using chemicals to suppress ‘aggression’ or sexual desire is a threat to fundamental human nature and autonomy in a way that incarceration is not,” Wolpe says. “Why not surgery? How do you ensure compliance? Do doctors then become penal workers, enforcing pharmaceutical punishments? I doubt that state medical boards would allow that. It seems there are many issues here.”

Farah writes that medicating criminals makes us uneasy in ways that sentencing someone to take anger-management classes does not. “In anger-management class, a person is free to think, ‘This is stupid. No way am I going to use these methods.’ In contrast, the mechanism by which Prozac curbs impulsive violence cannot be accepted or resisted in the same way.”

But for Sherman the potential for cultivating “a range of pharmaceutical responses to serious crime” is worth exploring. “If selective serotonin reuptake inhibitor pills reduce violence 75 percent among depressed people, as tests so far suggest, then perhaps the next research question is what are the costs of producing that benefit—in side effects to people taking the pills, or in loss of other personality functions?” 

Great strides could be made by a research partnership of ethicists, biopsychologists, and criminologists, he says—provided there is no “major disaster.” Noting the lawsuits against pill manufacturers brought by the survivors of depressed patients who committed suicide while on medication, Sherman says: “What we can’t prevent is a public controversy that blames the drug. But what we can do is large field tests over a long period to be very sure that the drug produces less murder rather than more.” At stake is not just pharmacology in the abstract, he adds, “but the lives of people in prison—two million—every day getting raped or murdered because nobody believes that we can provide more effective ways to prevent crime.” 

To argue that convicted offenders aren’t capable of giving informed consent is to “take away their rights just as much as when you put them in prison,” Sherman says. The victim’s wishes should also be taken into consideration. Research shows that “when the victim is given a choice between pure retribution and opportunity to meet with the offender and to find some way to turn the offender’s life around [perhaps through medication], many victims—perhaps the majority across all crimes—will choose to meet with the offender.

“The crude picture of ‘pop a pill, solve a problem’ overstates the reliance on pharmacology that is likely to come out of this kind of research and development,” he adds. It is far more likely that pharmacology will be one part of a “combination of victim-centered responses to crime [that] fosters increased social support for offenders remaining law abiding.”

A portrait of August Vollmer, the founder of the American Society of Criminology in 1941, hangs above the fireplace in Sherman’s office. Vollmer believed that one day science—“especially interventions with violent personalities”—would be able to prevent crime, Sherman says. “Before the pharmacological revolution, the prevailing view in science was that Vollmer was writing science fiction. But just as Jules Verne’s rockets came true, perhaps a humane way of controlling violent behavior by means other than prison will also come true.” 


Getting to Know You, Getting to Know All About You

Despite an explosion in fMRI-based research, there are still limits on what a brain scan can tell us about our next-door neighbor or any individual.

“It’s not the case right now in 2003 that you can stick somebody in a scanner and say, ‘Oh yes, this person definitely suffers from bipolar disorder or attention deficit disorder, or has a history of drug abuse or what have you,’” Farah says. Most studies are only able to glean average differences among groups of people.

“But even now, scans can be somewhat informative, some of the time,” she says. “If an individual happens to be at one end or another of a continuum, you might well be able to infer something about the person from their brain scan.” One example is the work of Turhan Canli of SUNY-Stony Brook, which has shown a correlation between patterns of brain activation during certain tasks and personality traits like extroversion and neuroticism. “A lot of his scans look kind of middling and ambiguous, but certain scans just scream ‘extrovert’ or ‘introvert’ and, he tells me, those scans invariably predict the person’s personality correctly. So while the state of the art is not yet up to classifying individuals in general, it can from time to time reveal something of an individual’s psychology.”

Where the technology will take us next is hard to predict. “All I can say about the progress of cognitive neuroscience is that, so far, it has surprised us with its speed,” Farah says. Just a decade ago, “the idea of personality showing up in brain images would probably have been ridiculed as being naïve.” MRI has steadily evolved and improved, and scientists continue to learn more about where in the brain—and in what situations—to look for differences. “I see no reason that all kinds of personal psychological characteristics will not one day be reliably measured with functional neuroimaging.”


From time to time Caplan says he gets a letter from someone convinced that a machine has been implanted in his or her brain for mind-control purposes. Perhaps they heard about the Robo Rat project at SUNY Downstate Medical Center, in which investigators sent lab rats on a controlled tour of campus by remotely stimulating electrodes inserted in the animals’ brains. This and earlier animal experiments have led some to fear that it’s possible to control people’s behavior through such technology.

For the record, says Kenneth Foster, “It’s not happening now—and it may be possible conceptually, but it’s not likely to be much of an issue in the future.” Though it’s important to consider all the possible implications of this technology, he says, “The Brave New World is [not] already here.” All of which raises the point that it’s possible to worry too much about progress in the neurosciences.

“All of the journalists I’ve talked to want to hear about the dangers, the dehumanization—and the creepier the better,” says Farah. “I think [neuroethicists] would serve society better if we reframed the issues. Instead of ‘What must we guard against?’ we should ask, ‘What can this new knowledge do for us, and how can we deploy it to bring the most benefit to humanity and the least risk?’”
