If Vaccines Don’t Cause Autism … Why Do So Many People Believe They Do?

In the book Autism’s False Prophets, pediatrician and rotavirus vaccine co-inventor Dr. Paul Offit deconstructs the popular (and false) belief that vaccines are a cause of autism. The chapter “Science and Society,” presented here in an exclusive excerpt, explores the question: If the connection between autism and vaccination has no basis in science, why do so many of us accept it as fact? Turns out the answer is far from simple – among the culprits are Oprah, Google and religion. Read on to learn how parents everywhere were duped – and for more, check out Babble’s candid interview with Dr. Offit.

In a culture dominated by cynicism and hungry for scandal, many people believe that doctors, scientists, and public health officials cater to a pharmaceutical industry willing to do anything – including promote dangerous vaccines – for profit. So it’s not hard to appeal to the notion that pharmaceutical companies are evil. Current movies certainly reflect this sentiment. In The Constant Gardener, released in 2005, a pharmaceutical company makes an antibiotic that is highly effective against multidrug-resistant tuberculosis. When the drug is found to have a fatal side effect, the company buries its victims in a mass grave outside of town and kills others who know about the problem, including the sympathetic wife of a government official. In The Fugitive, released in 1993, a pharmaceutical company hires a one-armed man to kill a doctor (Richard Kimble) when he finds that one of the company’s drugs, nearing FDA approval, causes fatal liver damage. Neither the screenwriters nor the public considered these two scenarios implausible. Viewers were perfectly willing to believe that pharmaceutical companies hire hit men to kill people.

To some extent, pharmaceutical companies have brought this upon themselves. Twenty years ago, direct-to-consumer advertising of prescription medicines was uncommon. Now television viewers encounter a barrage of advertisements from pharmaceutical companies showing that medicines can be miraculous: people with allergies run comfortably through pollen-filled fields, and women skate effortlessly despite joint pain. Also, the types of drugs that are being made have started to change: more research dollars are being spent to develop lifestyle products, like those to combat impotency or hair loss. It’s hard to argue for the special place of an industry in society when it’s hawking yet another potency product. Companies are starting to look like snake oil salesmen.

So, if everyone appears to be in someone’s pocket, who or what can be trusted? How can people best determine if the results of a scientific study are accurate? The answer is threefold: transparency of the funding source, internal consistency of the data, and reproducibility of the findings.

People have the right to know the funding source for scientific papers. For example, when Andrew Wakefield [the doctor who first proposed a connection between autism and the MMR vaccine] published his study of autistic children in the Lancet, he should have acknowledged that he had previously received money from Richard Barr and that Barr represented some of these children in a lawsuit against pharmaceutical companies. The irony in Andrew Wakefield’s case was that not only did he fail to inform the Lancet’s readership of his funding source, but he failed to inform his co-investigators, most of whom later withdrew their names from his paper. Although funding sources should be reported in every scientific paper, they’re probably the least important factor in judging a study’s worth or reliability.

More important are the strength and internal consistency of the data. When Lancet editor-in-chief Richard Horton found that Andrew Wakefield had received funds from a personal-injury lawyer, he was outraged. But Horton’s anger should have been aimed at the obvious weaknesses in Wakefield’s paper, not at his perceived motives. Andrew Wakefield had proposed that measles vaccine damaged children’s intestines, allowing entrance of harmful toxins that caused autism. It was a hypothesis for which Wakefield offered not one shred of scientific evidence. Wakefield’s paper shouldn’t have been published not because he had received funds from a personal-injury lawyer but because his assertions were based on flimsy, poorly conceived science.

Probably the most important aspect of determining whether a scientific assertion is correct is the reproducibility of its findings. Superb, reproducible studies have been funded by pharmaceutical companies and poor, irreproducible studies have been funded independently, and vice versa. In the end, it doesn’t matter who funds a scientific study. It could be funded by pharmaceutical companies, the federal government, personal-injury lawyers, parent advocacy groups, or religious organizations. Good science will be reproduced by other investigators; bad science won’t.

Other aspects of our culture also determine how people process scientific information. During the past few decades, doctors have started to treat patients differently. No longer do they always take a paternalistic, I-know-what’s-best-for-you-so-don’t-worry approach. Doctors are more apt to encourage patients to actively participate in their own medical care. And nothing has empowered people more than the internet. Now patients have ready access to a wealth of information about health, medicine, and science. During a recent segment on the Oprah Winfrey Show, a celebrity mother was asked where she had gotten her medical information. “I attended the University of Google,” she replied. J. A. Muir Gray, a British researcher and author of The Resourceful Patient, celebrates the culture of shared expertise. “In the modern world,” he said, “medicine was based on knowledge from sources from which the public was excluded – scientific journals, books, journal clubs, conferences, and libraries. Clinicians had more knowledge than patients mainly because patients were denied access to knowledge. The World Wide Web, the dominant medium of the post-modern world, has blown away the doors and walls of the locked library.”

But empowering parents to make medical decisions comes with a price. Information on the internet is typically unfiltered – anyone can say anything, and health advice can be terribly misleading. The vaccine-autism controversy is a good example. Doctors now constantly encounter parents who don’t want to give their children MMR or thimerosal-containing influenza vaccines, fearing they might cause autism. “I’ve done my research,” parents will say, “and I don’t want my child to have that shot.” By “research,” the parents usually mean that they have perused a variety of websites. But that’s not research. If parents want to do genuine research on the subject of vaccines, they should read the original studies of measles, mumps, and rubella vaccines; compare them with studies of the combined MMR vaccine; and analyze the ten epidemiological studies that examined whether MMR caused autism. If they want to research thimerosal, they should read the hundred or so studies on mercury toxicity, as well as the eight epidemiological studies that examined whether thimerosal caused harm. This would take a lot of time. And few parents have the background in statistics, virology, toxicology, immunology, pathogenesis, molecular biology, and epidemiology required to understand these studies. Instead, they read other people’s opinions about them on the internet. Parents can’t be blamed for not reading the original studies; doctors don’t read most of them either. And frankly, few doctors have the expertise necessary to fully understand them, so they rely on experts who collectively have that expertise.

The experts who are responsible for making vaccine recommendations in the United States, and for determining whether vaccines are safe, serve the Centers for Disease Control and Prevention (CDC), the American Academy of Pediatrics (AAP), the American Academy of Family Physicians, and the National Vaccine Program Office. And they do a pretty good job. During the past century, vaccines have helped to increase the life span of Americans by thirty years, and they have a remarkable record of safety. But if you’re looking for a quote guaranteed to anger the American public, you need look no further than one delivered by Congressman Henry Waxman during Dan Burton’s hearings. “Let us let the scientists explore where the real truth may be,” said Waxman. In other words, let the experts figure it out.

Waxman’s plea doesn’t have much traction in today’s society. Because of the internet, everyone is an expert (or no one is). As a consequence, for some, there are no truths, only different experiences and different ways of looking at things.

If doctors are going to encourage patients to make their own choices, they have to be willing to stand back and watch them make bad ones. They can’t have it both ways. “Patients will often choose to ignore their doctors’ advice and do something that their doctors regard as odd, even crazy,” writes Richard Smith. Michael Fitzpatrick, the author of MMR and Autism, also sees danger in a culture in which experts cede their expertise. “Doing the best for our children means concentrating on being parents and leaving science to the scientists, medicine to the doctors, and education to the teachers.” Fitzpatrick realizes that his request flies in the face of modern parenting. “So influential has the rhetoric of anti-paternalism become,” says Fitzpatrick, “that this now appears a hopelessly old-fashioned proposal. But it is both principled and pragmatic. If I am having trouble with my car, I do not take to the internet to study motor engineering; I take it to the garage and ask a mechanic to repair it. Even though I do not understand his explanation of the problem, I trust him. In a similar way, we put our trust in numerous people we encounter in our everyday lives. If we did not, society would simply collapse. The peculiarity of our current predicament is the selective withdrawal of trust from scientific and medical professionals, which is both unjustified and mutually damaging.”

For many parents, the advice given by health care professionals about vaccines is just one more opinion in a sea of opinions offered on the internet. For some, however, science is only an intrusion into beliefs that are as strong as religious convictions. Even when a particular notion is consistently refuted by scientific studies, they refuse to abandon it. During one of Dan Burton’s hearings, a clinician named Kathy Pratt, who took care of autistic patients, testified that vaccines were the culprit “regardless of what the research tells us.” Because science is the only discipline that enables one to distinguish myth from fact, Pratt’s statement was particularly unsettling. “Uncovering [the laws of nature] should be the highest goal of a civilized society,” says physicist Robert Park. “Not because scientists have a greater claim to a greater intellect or virtue, but because the scientific method transcends the flaws of individual scientists. Science is the only way we have of separating truth from ideology or fraud or mere foolishness.” And science is enormously open-minded. If people believe they have a treatment for a particular disease or that one thing causes another, the scientific method can determine whether they are right. Suspected causes will be found to be true or not, and therapies will be found to work or not. “Things that are wrong are ultimately set aside and things that are right gain traction,” said Stephen Strauss, former director of the National Center for Complementary and Alternative Medicine (NCCAM). Strauss had a framed quotation on the wall of his office: “The plural of anecdotes is not evidence.”

Although science is open-minded, the scientific method isn’t terribly politically correct. To determine whether a medicine works, scientists establish a hypothesis, formulate burdens of proof, and subject those burdens to statistical analysis. Over time, a truth emerges. Something is either true or it isn’t. And although our instinct is to be open to a wide range of attitudes and beliefs, there comes a time when it becomes clear that certain beliefs just don’t hold up. MMR and thimerosal don’t cause autism, and secretin, chelation therapy, and Lupron don’t cure it.

Although the scientific method has almost singlehandedly brought us out of the Dark Ages and into the Age of Enlightenment, it can be difficult to explain how it works. Here’s the problem. In determining whether, for example, MMR causes autism, investigators form a hypothesis. The hypothesis is always formed in the negative, known as the null hypothesis. In the MMR-causes-autism case, the hypothesis would be, “MMR does not cause autism.” Epidemiological studies have two possible outcomes:

(1) Investigators might generate data that reject the null hypothesis. Rejection would mean that the risk of autism was found to be significantly greater in children who received MMR than in those who didn’t.

(2) Investigators might generate data that do not reject the null hypothesis. In this case, the risk of autism would have been found to be statistically indistinguishable in children who did or didn’t receive MMR.

But there is one thing those who use the scientific method cannot do: they cannot accept the null hypothesis. In other words, scientists can never say never. This means that scientists can’t prove MMR doesn’t cause autism in absolute terms, because the scientific method allows them to say it only at a certain level of statistical confidence.
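
To see how this plays out in practice, here is a minimal sketch in Python of the hypothesis-testing logic described above. The counts in the table are invented purely for illustration – they are not data from any real study – and the sketch assumes the scipy library is available.

```python
# A minimal sketch of the null-hypothesis logic described above.
# The counts are invented for illustration; they are not real study data.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: autism diagnoses among children who did
# and did not receive MMR.
#             autism   no autism
table = [[50, 99_950],   # received MMR
         [25, 49_975]]   # did not receive MMR

chi2, p_value, dof, expected = chi2_contingency(table)

alpha = 0.05  # the conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis; the rates differ.")
else:
    # Note the careful wording: we FAIL TO REJECT the null hypothesis.
    # We never "accept" it. The data are merely consistent with no
    # association, at this level of statistical confidence.
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis.")
```

With these made-up numbers the diagnosis rate is identical in both groups, so the test fails to reject the null hypothesis – which, as explained above, is the strongest statement the method permits.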

An example of the problem with not being able to accept the null hypothesis can be found in an experiment some children might have tried after watching the television show Superman. Suppose a little boy believed that if he stood in his backyard and held his arms in front of him (using Superman’s interlocking thumb grip), he could fly. He could try once or twice or a thousand times. But at no time would he ever be able to prove with absolute certainty that he couldn’t fly. The more times he tried and failed, the more unlikely it would be that he would ever fly. But even if he tried to fly a billion times, he wouldn’t have disproved his contention; he would only have made it all the more unlikely. When scientists try to explain to the public the results of their studies, they always have this limitation in the back of their minds. They know the scientific method does not allow them to say, “MMR doesn’t cause autism.” So they say something like, “All of the evidence to date doesn’t support the hypothesis that MMR causes autism.” But to parents who are more concerned about autism (which they see and read about) than measles (which occurs uncommonly in the United States), this equivocation is hardly reassuring.
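
The Superman example can also be put in numbers. Below is a toy Bayesian sketch in which every figure is invented for illustration: each failed attempt drives the probability that the boy can fly toward zero, but it never reaches zero exactly.

```python
# A toy Bayesian reading of the Superman example.
# All numbers are invented for illustration.
prior = 0.5            # initial belief that the boy can fly
p_fail_if_fly = 0.1    # even a true flyer might occasionally fumble the grip
p_fail_if_not = 1.0    # a non-flyer fails every time

for attempts in (1, 5, 20, 100):
    # Posterior belief after `attempts` consecutive failures (Bayes' rule).
    numerator = prior * p_fail_if_fly ** attempts
    denominator = numerator + (1 - prior) * p_fail_if_not ** attempts
    print(f"after {attempts:>3} failed attempts: P(can fly) = "
          f"{numerator / denominator:.3e}")
```

After a hundred failures the probability is vanishingly small, but still not zero – which is exactly why scientists say “the evidence doesn’t support the hypothesis” rather than “never.”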

Another example of how scientists, respectful of the limits of the scientific method, fail to reassure the public can be found in a 2001 report from the Institute of Medicine (IOM) on the MMR vaccine and autism. This report, written after several excellent studies showed no relationship between the vaccine and the disorder, stated, “The committee notes that its conclusion does not exclude the possibility that MMR vaccine could contribute to autistic spectrum disorder in a small number of children.” Those who wrote this report failed to point out that no study could ever prove MMR didn’t cause autism in a small number of children because the scientific method would never allow it. But parents saw a door left open, and it scared them. Dan Burton picked up on this statement in one of his tirades against the IOM: “You put out a report to the people of this country saying that [the MMR vaccine] doesn’t cause autism and then you’ve got an out in the back of the thing,” screamed Burton. “You can’t tell me under oath that there is no causal link, because you just don’t know, do you?”

Another challenge for those communicating science to the public is explaining the difference between coincidence and causality. Because we’re always looking for reasons why things happen, this isn’t easy.

When Andrew Wakefield reported the stories of eight children with autism whose parents first noticed problems within one month of their children’s receiving MMR, he was observing something that statistically had to happen. At the time, 90 percent of children in the United Kingdom were getting the vaccine, and one of every 2,000 was diagnosed with autism. Because MMR is given soon after a child’s first birthday, when children first acquire language and communication skills, it was a statistical certainty that some children who got MMR would soon be diagnosed with autism. In fact, it would have been remarkable if that hadn’t happened. But parents of autistic children perceived that their children were fine, got the MMR vaccine, and then weren’t fine anymore. (Although most children with autism show problems very early in life, about 20 percent will develop normally and then regress. It was this regression during the second year of life that caused some parents to blame MMR.) “Humans evolved the ability to seek and find connections between things and events in the environment,” says Michael Shermer, author of Why People Believe Weird Things. “Those who made the best connections left behind the most offspring. We are their descendants. The problem is that causal thinking is not infallible. We make connections whether they are there or not.”
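
The arithmetic behind that statistical certainty is easy to check. In the sketch below, the coverage and diagnosis rates are the figures quoted above; the size of the annual UK birth cohort and the assumption that first signs are spread evenly across the year are illustrative assumptions, not figures from the book.

```python
# Back-of-the-envelope arithmetic for the coincidence described above.
# Coverage and diagnosis rates are the figures quoted in the text;
# the birth-cohort size and the even-spread assumption are illustrative.
births_per_year = 700_000   # assumed approximate UK annual birth cohort
mmr_coverage = 0.90         # 90 percent of children received MMR
autism_rate = 1 / 2000      # one in 2,000 diagnosed at the time

vaccinated_and_diagnosed = births_per_year * mmr_coverage * autism_rate

# MMR is given soon after the first birthday, just as language and
# communication problems become noticeable; assume first signs are
# spread uniformly over the following year.
within_one_month = vaccinated_and_diagnosed / 12

print(f"Vaccinated children later diagnosed with autism: "
      f"{vaccinated_and_diagnosed:.0f} per year")
print(f"Expected, by chance alone, to show first signs within a month "
      f"of the shot: {within_one_month:.0f} per year")
```

Even under these rough assumptions, a couple of dozen children each year would, by coincidence alone, first show signs of autism within a month of receiving MMR – so Wakefield’s eight cases were not merely possible but inevitable.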

For many parents, the association in time between their children’s receipt of vaccines and the appearance of autism is far more convincing than epidemiological studies. That’s because anecdotal experiences can be enormously powerful. Here’s another example. A pediatrician in suburban Philadelphia was preparing a vaccine for a four-month-old girl. While she was drawing the vaccine into the syringe, the child had a seizure lasting several minutes. But imagine what the mother would have thought if the pediatrician had given the vaccine five minutes earlier. No amount of statistical data showing that the risk of seizures was the same in vaccinated or unvaccinated children would have ever convinced her that the vaccine hadn’t caused the seizure. People are far more likely to be swayed by a personal, emotional experience than by the results of large epidemiological studies. “Popular induction depends upon the emotional impact of the instances,” said philosopher Bertrand Russell, “not on their number.”

Several years ago, a stand-up comedian, imitating a television commercial advertising a book about the occult, showed how hard it can be to distinguish cause from coincidence. Deepening his voice, he said, “A woman in California burns her hand on a stove. Her mother, three thousand miles away, feels pain in the same hand at the same time. Coincidence?” Here he paused for several seconds. “Yes!” he yelled, exasperated. “That’s what coincidence is!”

Yet another, more subtle, aspect of our culture appears throughout the vaccine-autism controversy. Two years after he published his paper in the Lancet claiming that MMR caused autism, Andrew Wakefield published “Measles, Mumps, Rubella Vaccine: Through a Glass Darkly.” The phrase “through a glass darkly” is taken from Saint Paul’s letter to the Corinthians and refers to man’s imperfect perception of reality. Wakefield’s implication was that science – in this case, the science that had claimed MMR was safe before licensure – couldn’t be relied upon to get it right. And Wakefield believed that the scientific studies that continued to absolve MMR couldn’t be trusted either. Wakefield’s capacity to set aside the studies that disproved his theory was based on a belief as powerful as a religious conviction. “He’s very much like my father,” said Wakefield’s mother, Bridget. “If he believed in something, he would have gone to the ends of the earth to go on believing.” When Andrew Wakefield first left England, he landed in Melbourne, Florida, with the Good News Doctor Foundation, whose logo features a stethoscope sitting on top of a Bible. The foundation describes itself as “a Christian ministry that provides hope and information on how to eat better and feel better, and minister more effectively as a result of a biblically based, healthy lifestyle.” For Andrew Wakefield, the question of whether MMR caused autism had moved into the realm of faith.

While Andrew Wakefield continues to make religious references as he exhorts listeners to believe his theories, the most prominent religious figure in the vaccine-autism controversy is Lisa Sykes, an associate pastor at the Welborne United Methodist Church in Richmond, Virginia. Sykes, who believes her son’s autism was caused by thimerosal in vaccines, often delivers fiery speeches denouncing scientists at the CDC, FDA, and IOM, calling them “modern day deceivers.” In April 2006, during an anti-vaccine demonstration in Washington, D.C., Sykes led those gathered in prayer “for the greedy and those who love power so much that they would seek profit over safety, and sacrifice children instead of wealth. We pray for those who have surrendered the truth, and government officials who have failed to seek it. These, too, like so many of our injured children, cannot see, they cannot hear, and they remain silent.” In February 2007, the United Methodist Virginia Conference published its Lenten Devotional, in which Sykes interpreted scripture and issued a call to action: “My son is disabled,” she said, “unnecessarily injured by mercury he received in vaccines. Like Abram, we are cast down. The era of administered mercury is the darkest part of the night.”

“I think about symbols,” said Kathleen Seidel, in reference to cleansing the autistic child’s body of mercury. “And there are a lot of powerful symbols that are part of this whole hysteria, the whole concern over vaccines: symbols of purity and defilement and of sin and redemption.” One of the Rescue Angels of Generation Rescue (the organization dedicated to mercury chelation) proclaimed that with chelation, “We’re helping [a child's] body do what God intended it to do.” Where science and medicine have failed to find a cause or cure for autism, some have put their trust in the certainty, absolutism, and occasional zealotry of Andrew Wakefield, Lisa Sykes, and Mark Geier, people who ask their followers to have unquestioning faith in theories contradicted by scientific evidence.

Another aspect of our culture – and one reason the MMR and thimerosal controversies gained immediate attention – is that it’s easy to scare people. For example, beginning in the 1960s and 1970s, rumors that people had put razor blades into apples or poisoned Halloween candy swept across the nation. Everyone believed it. As a consequence, parents insisted that their children eat only prepackaged candy, schools opened their doors so that trick-or-treaters could have a safe environment, and hospitals offered to X-ray candy bags. In their book, Made to Stick: Why Some Ideas Survive and Others Die, Chip and Dan Heath examined the widespread belief that trick-or-treaters were at risk. They found that since 1958, no one had ever been harmed by a stranger’s Halloween candy. The urban myth had been spawned by two events. First, a five-year-old boy had overdosed on his uncle’s heroin; to cover his tracks, the uncle put heroin on the child’s candy. Second, a father, in a twisted attempt to collect insurance money, killed his son by sprinkling cyanide on his candy. “In other words,” wrote the Heaths, “the best social science evidence reveals that taking candy from a stranger is perfectly okay. It’s your family you should worry about.”

Although the fear of tainted Halloween candy isn’t based on even a single real occurrence, it hasn’t died, and it probably never will. Both California and New Jersey have passed laws specifically designed to punish candy tamperers. Similarly, laws banning thimerosal-containing vaccines have passed in several states despite clear evidence that these vaccines aren’t harmful. It’s much easier to scare people than to unscare them.

A final cultural aspect – and yet another reason that the mercury-in-vaccines controversy stuck – is that it’s easy to appeal to the notion that we live in a sea of poisonous metals, toxic chemicals, and environmental pollutants. To be sure, some toxins in the environment can be quite dangerous. In the United States, high levels of lead in paint caused severe neurological problems in many children. And in Japan, the Minamata Bay disaster showed just how devastating large quantities of mercury can be. But these aren’t typical stories. For example, the media declared that dioxin, the chemical buried under the Love Canal in upstate New York, caused birth defects and miscarriages; that hexavalent chromium, the chemical used by Pacific Gas and Electric to coat its pipes (and the subject of the movie Erin Brockovich), caused a variety of illnesses from nosebleeds to cancer; that trichloroethylene, the chemical dumped by the W. R. Grace tannery into the local water supply (and the subject of the book and movie A Civil Action), caused a cluster of cancer cases in Woburn, Massachusetts; and that Alar, a pesticide featured on a 60 Minutes program titled “A is for Apple,” caused cancer.

None of these stories was supported by subsequent scientific studies. But studies showing that certain chemicals in the environment aren’t harmful are far less compelling than personal testaments, riveting television shows, and blockbuster movies claiming that they are. Steven Milloy, a graduate of the Johns Hopkins School of Hygiene and Public Health, the author of Junk Science Judo, and the creator of the popular website JunkScience.com, laments how the media are attracted to stories that scare people but not to those that reassure them. When Milloy approached Dateline NBC with a story about how fears of small quantities of dioxin were unfounded, he was rebuffed. “I was interviewed about our [dioxin] study by seemingly interested staff of the television news magazine Dateline NBC,” recalled Milloy. “After about twenty minutes of questions, it finally dawned on the staff person. ‘So, this isn’t a scare story?’ she said. ‘Then my producer won’t be interested.’”

Recently, the comedy team of Penn and Teller filmed a three-minute video for YouTube that showed just how easy it is to appeal to the public’s concern about chemicals in the environment. They sent a friend to a state fair to collect signatures on a petition to ban dihydrogen monoxide. Dihydrogen (two hydrogen atoms) monoxide (one oxygen atom) is H2O – water. The petitioner never lied. She said that dihydrogen monoxide was in our lakes and streams, and now it was in our sweat and urine and tears. We have to put a stop to this, she urged. Enough is enough. By using its chemical name, she was able to collect hundreds of signatures to ban water from the face of the earth.

The media bias toward stories that scare rather than reassure has left the public with a poor understanding of risk. “Hundreds of thousands of deaths a year from smoking is old hat,” writes Michael Fumento in Science Under Siege, “but possible death by toxic waste, now that’s exciting. The problem is [that] such presentations distort the ability of viewers to engage in accurate risk assessment. The average viewer who watches story after story on the latest alleged environmental terror can hardly be blamed for coming to the conclusion that cigarettes are a small problem compared with the hazards of parts per quadrillion of dioxin in the air, or for concluding that the drinking of alcohol, a known cause of low birth weight and cancer, is a small problem compared with the possibility of eating quantities of Alar almost too small to measure. This in turn results in pressure on the bureaucrats and politicians to wage war against tiny non-existent threats. The ‘war’ gets more coverage as these politicians and bureaucrats thunder that the planet could not possibly survive without their intervention, and the vicious cycle goes on.” As a consequence, people are more frightened by things that are less likely to hurt them. They are scared of pandemic flu but not epidemic flu (which kills more than 30,000 people a year in the United States); of botulism, tsunamis, and plagues but not strokes and heart attacks; of radon and dioxin but not French fries; of flying but not driving; of sharks but not deer; of MMR but not measles; and of thimerosal-containing influenza vaccine but not influenza.

The vaccine-autism controversy has shown just how difficult it can be to communicate science to the public. Fortunately, during the past few years, many studies have investigated the true causes of autism; ironically, the media’s constant focus on vaccines has made it difficult for the public to hear about them.

Condensed from Chapter 10, “Science and Society,” of Autism’s False Prophets by Paul A. Offit, MD.

Click here to read an interview with Dr. Paul Offit on Babble.

Click here to buy Autism’s False Prophets from Amazon.
