
Technophobia runs rampant, but what may be scarier is a system in which scientific truth is determined in the media and the courts, says Dr. Kenneth Foster.

By Sonia K. Ellis | Illustration by Anastasia Vasikalis


It’s winter of 1992, and I’ve just found out that I’m pregnant with my first child. But in the office of the nurse practitioner, my happiness is quickly tempered when she asks if my mother ever took DES — diethylstilbestrol, a drug given routinely in the 1960s to prevent miscarriage. DES, she tells me, has been associated with a rare cancer in some women exposed to the drug in utero. The answer is no, but I’ve been infected, nevertheless — with a realization: There are risks to me and to my unborn child that I can’t control or even see.
    It’s summer of 1995, and my son is a healthy three-year-old enjoying a visit from his grandparents. I’m a second-rate cook, and most of the meals I prepare for my guests go from the freezer to the microwave oven. Watching me in the kitchen, my father-in-law mentions a report he read in the newspaper: You should stand about three feet away from a microwave oven when it’s operating, because that’s how far the waves travel before they dissipate. After that, whenever my son hears the beep of the controls, he dashes to the other end of the kitchen until the microwave is done. My own home, it seems, harbors the phantom hazards of technology.
    Now a hypothetical scenario. We’re back to 1992, but this time my visit to the doctor’s office is grim: I’ve been diagnosed with breast cancer. I elect to have a mastectomy, followed by reconstructive surgery with a silicone implant. Then I hear the news. The commissioner of the Food and Drug Administration has decided to ban silicone-gel-filled breast implants. Are the implants really dangerous? So far there’s no scientific evidence in either direction — safe or not — but there have been some stories about women developing connective-tissue disease after getting the implants. Does that mean I’m at risk? If so, then I have another question: Is there someone I can sue?

These vignettes from my life — two of them real, one that might have been — are really pieces torn from a bigger picture. All three are portraits of this techno-era, a time when technology is our intimate partner. The unsettling images you see are the clashes of science and society.
   Technology’s positive impact on our lives is ubiquitous and undeniable. But there’s a flip side: With increased knowledge comes increased awareness of potential hazards to our health. Open any newspaper or magazine on the racks at your supermarket and you’ll read about another health scare, technology’s latest threat to your well-being. The computer terminals that we use to transmit information also emit electromagnetic fields, which were reportedly associated with a cluster of miscarriages in pregnant women. The same medical X-rays used to diagnose our health also expose us to radiation, which we’ve heard can induce cancer. And what about those fen-phen drugs that help people lose weight? Just recently they’ve been linked to heart-valve damage.
   As a society we are becoming severely contaminated with technophobia. But are our fears justified? Are all these hazards real? What exactly is the truth about the scientific findings that dictate our health and habits?
   There is, it turns out, one very logical place to start looking for answers to tough questions like these. You can find Dr. Kenneth Foster, associate professor of bioengineering, on the first floor of Hayden Hall. Foster is president of the Society on Social Implications of Technology (SSIT), a 2,000-member organization formed to examine the social issues related to technology. A component of the Institute of Electrical and Electronics Engineers, the SSIT has tackled issues ranging from peace technology to professional responsibility. Foster himself — bearded, articulate, a self-proclaimed skeptic and critic — is a likely-looking oracle of scientific truth. But if, like me, you have any preconceptions about scientific knowledge, some of Foster’s judgments might surprise you.

Phantom or Fact?
   Foster’s tenure as the SSIT’s president began in January 1997, but he has been studying the intersection of science and society for 25 years. Foster’s particular focus — the medical uses of electromagnetic fields — has given him an insider’s perspective on some of the more elusive effects of technology.
   His first encounter with a technology-driven controversy came when he served with the Navy in the early 1970s. Foster’s assignment was to study the biological effects of radio-frequency energy, which is emitted from microwave ovens and other common household appliances. Researchers knew at the time that exposure to very high levels of radio-frequency energy was not healthy. “Put a rat in a microwave oven and it’s clearly dangerous to the rat,” says Foster without a hint of facetiousness. No one would argue that. But what about low levels of exposure, like living 10 miles away from a TV station?
   “There’s been a constant public and scientific debate over whether low levels of exposure can have any kind of adverse effect. It’s been an ongoing controversy for decades, with lots of anecdotal stories. But nothing has ever really been shown. After 50 years, the only hazards that have ever been clearly established are similar to those of putting a rat in a microwave oven.”
   At Penn since 1977, Foster now studies the interactions between electromagnetic fields and biological systems. One issue is whether a link exists between power-frequency magnetic fields and cancer; could workers in “electrical” occupations, for example, be at increased risk for brain cancer? Foster doubts that weak electromagnetic fields pose any real hazard, “but a scattering of weak and inconsistent positive results … helps keep the controversy alive.”
   In addressing this problem of evaluating subtle health hazards, Foster frequently uses the term phantom risk. As he explains, “Phantom risks are cause-and-effect relationships whose very existence is unproven and perhaps unprovable.” Evaluating phantom risk means determining the precise connection between a suspected environmental hazard and a health effect — a task that can often be difficult, sometimes impossible.

For a better understanding of what constitutes phantom risk, consider these examples:

   Phantom risk is not cancer from smoking. A pack-a-day smoker is roughly 14 times more likely than a nonsmoker to develop lung cancer. That’s a strong causal connection. But does second-hand smoke cause cancer? The unconfirmed hazard of passive smoking is a phantom risk.
   Phantom risk is not the use of diethylstilbestrol by pregnant women. DES was undisputedly associated with vaginal carcinoma, a rare cancer that developed in some daughters of the women who took the drug to prevent miscarriage. But Bendectin, a morning sickness remedy introduced in 1957, is an example of a phantom risk. Does Bendectin cause birth defects? That debate raged in countless lawsuits for decades, despite the lack of convincing evidence.

   So why the difficulty, for some hazards, of making a clear connection between cause and effect, or of providing a clear proof of safety? Foster can point to many sources.
   One problem is, simply, the uncertainty of science. Foster’s take on the controversy: “I concluded very early on that [these debates about electromagnetic fields] were probably more a problem of scientific ambiguity than of any real hazard. Looking for small effects is a hard thing to do. These studies tend to be inconsistent. The problem is more a limitation of what people can do with science, rather than that there’s some smoking gun out there that we haven’t really found.”
   Ambiguous, inconsistent — those aren’t words that most lay people would associate with science, the supposed realm of the absolute. But they pop up again and again in conversations with Foster. Scientific knowledge is less than certain, he says: “The fact [is] that science can’t answer many of the questions that people want answered with any reliability.”
   One reason for the ambiguity is that many of these phenomena are at the very edge of detectability. Foster’s own research, for example, looks at the potential connection between cancer and weak magnetic fields. Power lines are one of the targeted sources. Beneath high-voltage transmission lines, the magnetic field is about one-fifth as strong as the earth’s magnetic field. So it’s difficult to separate and measure the low-level effects from power lines. Any increase in risk may be barely detectable.
   Another inconsistency arises in the interpretation of epidemiologic data. Epidemiology is the study of the incidence of disease in different groups of people: clearly a critical means of inferring cause-and-effect relationships. But in epidemiologic research — in any research, in fact — sources of error may be hidden in the very structure of the study. Suppose you read a report that a medication you’ve been taking is now associated with an increase in the risk of cancer of the pancreas. Here is just a sampling of issues you should consider:

Is the result statistically significant? An association between a hazard and a disease is meaningful from a statistical standpoint only if the result is unlikely to be due to chance. This concept becomes particularly important when measuring the effect of a hazard by studying a small part of the total population.

Was the study valid? The study could have been flawed by “recall bias,” for example, meaning some people may be more likely than others to remember or to report taking a drug. Then there’s the issue of the “multiple comparison effect.” A researcher who makes enough comparisons between a population and a set of diseases will eventually find some connection.
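
To see how the multiple comparison effect can manufacture a scare out of nothing, consider this small simulation sketch. The disease rates, group sizes, and number of comparisons below are invented for illustration and are not drawn from any study discussed in this article; the point is only that a drug with no real effect, tested against enough diseases, can still hand a researcher a "statistically significant" association.

```python
# A minimal simulation of the "multiple comparison effect" (all numbers are
# hypothetical). The drug has no effect on any disease, yet testing it against
# many diseases can still produce a "significant" association by chance.
import math
import random

random.seed(1)

BASELINE_RISK = 0.02   # each disease strikes 2% of people, drug or no drug
GROUP_SIZE = 2000      # people in each of the exposed and unexposed groups
N_DISEASES = 20        # number of separate disease comparisons made

def two_proportion_p_value(cases_a, cases_b, n):
    """Two-sided p-value for a difference between two proportions
    (normal approximation, equal group sizes)."""
    p_pool = (cases_a + cases_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    if se == 0:
        return 1.0
    z = (cases_a - cases_b) / (n * se)
    return math.erfc(abs(z) / math.sqrt(2))

spurious = 0
for disease in range(1, N_DISEASES + 1):
    exposed = sum(random.random() < BASELINE_RISK for _ in range(GROUP_SIZE))
    unexposed = sum(random.random() < BASELINE_RISK for _ in range(GROUP_SIZE))
    p = two_proportion_p_value(exposed, unexposed, GROUP_SIZE)
    if p < 0.05:
        spurious += 1
        print(f"Disease {disease}: 'significant' link found by chance (p = {p:.3f})")

print(f"{spurious} of {N_DISEASES} comparisons were spuriously significant")
```

Run it a few times with different random seeds and the tally changes, but at a significance threshold of p < 0.05 roughly one comparison in twenty will look meaningful even when nothing real is happening.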
 

“Many errors,” notes Foster, “result from the fact that the human population is very complicated.” You can’t easily isolate a group of people who have a disease and have taken a particular medication but haven’t also smoked cigarettes, consumed alcohol, or been exposed to something that might, obviously or not, be implicated (all factors known to scientists as “confounding variables”). “You have to take people as they come.”
   Of course, being alert to the potential flaws in a study won’t help if the report you hear on TV or read in the newspaper is incomplete. And that raises another specter: the media’s role in the analysis of technology and risk.

Scientists and journalists often sharply conflict in their approaches to reporting on health hazards. In Phantom Risk, a book he co-edited in 1993, Foster notes that science advances by a sifting process. Each study builds on the last, so that new theories are eventually confirmed and unsound data is ultimately dismissed. By this slow and long-term process, scientific truth emerges (ideally, at least). The lay media has a greater tendency — and urgency — to focus on the short-term. In 1992, for example, a single study suggested that chick embryos were deformed by exposure to pulsed magnetic fields. That finding was widely reported by the lay media, along with discussions of whether magnetic fields from household appliances might harbor similar hazards. Four subsequent studies over the next four years could not confirm the initial report, says Foster, but that fact “received little media attention.”
    “The same kind of thing happened with silicone breast implants,” he says. “Some initial reports got picked up … and all of the sudden word got out that the implants cause a lot of different diseases. But some of the initial inferences turned out not to be true, based on later controlled studies.” Two recent books about the implant debate neatly illustrate how widely the perspectives may vary. Science on Trial was written by Marcia Angell, executive editor of The New England Journal of Medicine. She points to the need for “an unyielding commitment to scientific evidence” to stop the implant controversy. Published the same year, Informed Consent is the work of Business Week senior editor John A. Byrne, who focuses on corporate ethics and follows the case of one woman who had breast implants and developed various health problems. The book’s jacket advertises “a story of personal tragedy and corporate betrayal.”
   Given an awareness of scientific limitations and journalistic excess, it may be tempting to lean to the other extreme, to dismiss new reports of subtle health hazards. But, Foster stresses, it’s important to understand that phantom risk doesn’t mean no risk: “I’m not saying that dioxins and other chemicals in the environment are not dangerous. The fact that the epidemiological data for many suspected health risks is murky and inconsistent does not mean that the risks do not exist.” Even a small risk can have a significant health effect if it’s averaged over 250 million people.
   “Regulatory agencies,” Foster believes, “are quite justified in being very cautious.” He mentions the concern about the presence of PCBs (polychlorinated biphenyls) in the Hudson River. Scientific evidence has given no clear indication that the level of PCBs in the river is toxic to humans. Still, says Foster, “the EPA has good reason to worry.” When a large population is exposed to a hazard, even a barely detectable one, the result could be health problems in many individuals.
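
The arithmetic behind that concern fits in a few lines. In this back-of-the-envelope sketch, the 250 million population figure comes from the discussion above, while the excess-risk figure is purely hypothetical:

```python
# A back-of-the-envelope sketch. The population figure comes from the article;
# the per-person excess risk is hypothetical, chosen only for illustration.
POPULATION = 250_000_000        # people potentially exposed
EXCESS_RISK_PER_PERSON = 1e-5   # hypothetical: one extra case per 100,000 people

extra_cases = POPULATION * EXCESS_RISK_PER_PERSON
print(f"Expected extra cases across the population: {extra_cases:,.0f}")  # 2,500
```

An individual excess risk of one in 100,000 is far below what an epidemiologic study could reliably detect, yet under this assumption it would translate into roughly 2,500 cases nationwide.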
   In all of these controversies about scientific knowledge, he says, “the questions come up continually about what we know, and how we know it, and how reliable our knowledge is.” What it all boils down to, according to Foster, is this: Given the unreliable nature of much scientific data, how do we make socially responsible decisions?
   That’s a question with some pounding reverberations, because the social costs of those decisions can be enormous. The news of health risks, real or not, creates public concern. Reports of the potential adverse effects of electromagnetic waves, for instance, propagated a fear of the health hazards of power lines. That, says Foster, “is costing the U.S., by one estimate, more than $1 billion a year in litigation, ad hoc attempts at remediation, and so on.” The ongoing furor over silicone breast implants is another example. “[FDA commissioner] David Kessler may have been right to have pulled silicone breast implants off the market in 1992, on the grounds that the safety of the devices had not been proven to the satisfaction of the FDA,” says Foster. “But it spawned a multibillion-dollar litigation industry.” A similar scenario is likely to evolve with the diet drug fen-phen; the FDA banned the “fen” portion of the combination in September 1997, after a Mayo Clinic study showed a clear link to heart valve abnormalities.
   These are issues, Foster believes, that transcend science. In dealing with ambiguous scientific data, we need to move outside of science into the realms of politics, ethics, philosophy. And all of these perspectives need to be considered in the place where landmark social decisions about science and phantom risks are already being made: in the courtroom.

Technology’s Tribunal
   What if, back in 1992, I really had chosen to have silicone breast implants? After the FDA’s ban, the answer to my question — Is there someone I can sue? — would not have been long in coming, and I would not have been alone.
   In the two years following the ban, over 16,000 lawsuits were filed on behalf of women with breast implants. A class-action settlement of $4.25 billion proposed in 1994 eventually unraveled. Then, three years later, Dow Corning Corporation offered up to $2.4 billion as a settlement to claims that its silicone breast implants cause disease. In an August 26, 1997 report issued by the Associated Press, Dow Corning chief executive Richard Hazleton said, “We still believe very strongly that the scientific evidence shows there’s no connection between breast implants and medical conditions.” But over 200,000 women were involved in the class action settlement, and Dow Corning was trying to find a way out of bankruptcy court, where it had been mired since May of 1995.
   In Science on Trial, Marcia Angell views Dow Corning’s bankruptcy as “the latest, but by no means the last, example of a company brought to its knees by mass litigation.” So far, she notes, “several good epidemiologic studies have failed to show an association between implants and a host of connective tissue diseases … At most, breast implants could be a very weak risk factor for disease.” John Byrne sees things differently, concluding in Informed Consent that, “The bankruptcy maneuver appears to be part of a new defensive strategy by Dow Corning to sway public opinion to its side and position itself as a victim of greedy plaintiff’s attorneys.” Dow Corning, in citing 17 studies that showed no link between implants and disease, “was selectively quoting from the studies and declaring them to be much more favorable to the company’s case than they often were.”
   In Foster’s opinion, this case and this conflict classically illustrate two related problems. The first is the ambiguity that surrounds risk research — particularly in dealing with phantom risks. The second is the havoc and controversy that this confusion creates in our legal system.
   Courtroom controversy, he points out, is a natural outcome of the conflicting goals of science and the law. Scientists, in the long process of collecting data and proposing theories, are searching for comprehensive certainty. Lawyers are trying to win a battle. More generally, science considers the health effects of a hazard in a population and, over time, converges on an understanding. Tort law (which deals with damages in a civil court) focuses on the claim of an individual. That individual may lose or win, and the case is closed. But “for legal purposes,” Foster notes, “the issue remains open, to be raised again any number of times by others elsewhere [so] the same question can be litigated indefinitely.”
   There is a fundamental question that needs to be answered about science in the courtroom, he says: When should scientific evidence be considered reliable enough to be presented to a jury?
   There are guidelines for determining what testimony a judge may admit. The Federal Rules of Evidence explicitly govern the admissibility of evidence in federal courts. But there is still some room for interpretation. One liberal view, says Foster, is “that judges should not exclude any proffered scientific testimony from their courtrooms, on the grounds that all science is equally reliable — or unreliable — or for fear of ‘putting a thumb on the scale of justice.’ [That] seems to me to avoid judging science at all, preferring to keep standards of proof low to make it easier for people — who often do have real health problems and no insurance — to collect from corporate deep pockets. On the other hand, calls by conservatives for ‘good science’ in the courtroom may be surrogates for an unstated legal position that would raise the standard of proof. These issues can only be resolved by paying due attention to standard and burden of proof, which are entirely outside the scientific arena.”
   Peter Huber, a Senior Fellow at the Manhattan Institute and Foster’s co-author in the 1997 book Judging Science, has a name for the rationale behind the “let-it-all-in” approach. “[It’s] what I think of as the ‘Galileo’ argument: Even if scientist X is the only person who believes a theory, and even if scientist X doesn’t publish or defend it publicly, his testimony should still be admitted into court. Someone has to be ‘first’ after all. If evidence is excluded, some important truths will be lost. [But] when bad science is admitted, and falsehoods are certified by the judicial process as truths, positive harm can result.”

In civil litigation, the usual standard of proof is whether a person is “more likely than not” to have been injured by a particular hazard. The criterion of “more likely than not,” explains Foster, is roughly equivalent to a doubling of risk. And a twofold increase in risk “is often beyond the ability of science to measure with any reliability.”
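
The "doubling of risk" rule of thumb comes from a simple attribution argument: if exposed people develop a disease at RR times the background rate, then among exposed people who do get the disease, the fraction attributable to the exposure is (RR - 1) / RR, and that fraction exceeds one-half only when RR is greater than 2. Here is a minimal sketch of that arithmetic, using illustrative relative-risk values rather than figures from any particular case:

```python
# A sketch of the arithmetic behind "more likely than not." If the exposed
# develop a disease at RR times the background rate, the share of exposed cases
# attributable to the exposure is (RR - 1) / RR, which passes 50% only when
# RR exceeds 2. The RR values below are illustrative, not from any cited study.
def attributable_fraction(relative_risk: float) -> float:
    """Fraction of cases among the exposed attributable to the exposure."""
    return (relative_risk - 1.0) / relative_risk

for rr in (1.2, 1.5, 2.0, 3.0, 14.0):
    print(f"Relative risk {rr:>4}: attributable fraction = {attributable_fraction(rr):.0%}")
```

At a relative risk of 14, the figure cited earlier for pack-a-day smoking, the attributable fraction is about 93 percent; at a relative risk of 1.5, a level many epidemiologic studies can barely resolve, it is only a third.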
   There are other hurdles on the track to truth. One problem is that science can’t prove that a hazard absolutely doesn’t exist. That’s the unfortunate reality of risk research: you can prove a hazard, but not its absence. By the same token, in the courtroom, scientific experts cannot — should not — claim that a risk does or doesn’t exist. A statement that “silicone breast implants do not cause disease” is unprovable. A potentially defensible wording would be: “Epidemiologic research to date fails to show that silicone breast implants are associated with connective-tissue disease.”
   Another courtroom demon is the blatant misuse of scientific knowledge in the legal system, says Foster, “which inevitably results when science is used for advocacy.” For example, expert witnesses may find themselves under pressure to “dredge data.” Data dredging — or data torturing — is an after-the-fact interpretation of a study that manipulates the data to reveal a desired relationship. Nonscientists can also be misled with scientific-sounding arguments: A person is exposed to a chemical and develops cancer, so the chemical must have caused the cancer. Such reasoning is obviously unsound but can be very convincing to a jury.
   Besides delineating the misuses and abuses of scientific evidence in the legal system, Foster also offers some suggestions for reform: “Probably the most important thing is for judges to try to understand what scientists are trying to say, and not merely accept something because it is expressed using scientific mumbo-jumbo.” Judges, he says, need to appraise the quality of scientific evidence and develop a more sophisticated understanding of the meaning and relevance of scientific data. One route, used with success in Europe, is for judges to summon their own expert witnesses.
   Foster has addressed some of these same issues in his new bioengineering course, where he has been examining the value of medical technology. Just as sound science sometimes fails to prevail in the courtroom, so very sophisticated engineering can fail in the medical arena, he says, “and the reason has nothing to do with the engineering. The circuits can work but the equipment can still fail to help the patient.”
   As his students delved into the first case study — the debate over mammography’s sensitivity for detecting early-stage carcinoma of the breast — Foster was encouraged by their enthusiastic participation. He views the course as a useful way for these future engineers to examine technology, putting into perspective its relative risks and benefits to society. What he hopes to impart is a better perception of the social use of science. With that in mind, he makes sure his students keep asking these questions about technology: “what it can do, what it can’t do, and” — perhaps most importantly — “what society wants it to do.”

Sonia K. Ellis, EAS’86, is a freelance writer specializing in science and technology. She lives in Colorado Springs, Colorado.
