As much as I hate to see the summer end, I am always happy to see the students return to U.Va. and in particular to meet my newest class of students.
For the past 20 years I have taught first-year medical students the art and science of the practice of medicine. As you can imagine from reading my columns, my teaching style takes some getting used to. According to my former students, I once told them that first-year medical students are a necessary evil. Later when asked to evaluate the group as a whole, I tried to be reassuring by telling them that at worst they were average. One of my written evaluations compared me to crème brûlée. “Hard and crusty on the outside but soft and gooey underneath.” Weird.
But it is a great delight to teach the incredible complexities of medical practice to smart, motivated and energetic beginners.
One of the first skills I try to instill in them is the habit of healthy skepticism. This of course comes instinctively to me. One ER resident, Monica Williams-Murphy, labeled me the most skeptical person she had ever met. This is from the doctor who wrote the book It’s OK to Die. I took this as a compliment. I try not to teach the first-years to be skeptical of their patients (they need to learn that on their own) but rather of the science underpinning medicine.
In a previous column I mentioned that we tell the med students that half of what they learn in medical school is wrong; we just don’t know which half that is yet.
Well, it turns out that statement is actually wrong, too. A recent study by John Ioannidis in PLoS Medicine demonstrated mathematically that up to 90 percent of all published research results are not true. The math is complicated, but the corollaries he draws are worth going over, because all of us are bombarded daily with health claims supposedly proven by the latest research. Vitamin E prevents cancer; no, wait, now it doesn't. Alzheimer's disease is caused by cooking with aluminum pots; oops, no, it isn't. Eggs are bad for your heart; no, they don't matter. Wait, now they are bad again.
I won't cover all of the corollaries but will focus on the ones most easily applied to science in the popular press.
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. This seems obvious. If you flip a coin four times and get three heads and one tail, you might conclude (wrongly) that a coin has a 75 percent chance of landing heads up. Repeat the experiment 1,000 times and you'll see that both sides come up about equally often. That is the importance of a large sample size. The most reliable trials involve tens of thousands of patients, while studies of several hundred patients or fewer don't have the statistical power to settle competing hypotheses. When a new study is announced, look at the study size before accepting the conclusion. Many of you parents will remember the striking finding in 2002 that duct tape could cure warts. It was based on only 51 patients and was subsequently refuted.
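For readers who like to tinker, the coin-flip point can be checked with a short simulation (a minimal sketch in Python, not anything from the studies discussed; the numbers of flips are illustrative). It counts how often a tiny four-flip "study" of a fair coin reports 75 percent heads or more, compared with a study of 10,000 flips.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def heads_fraction(n_flips):
    """Flip a fair coin n_flips times and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# Run 1,000 tiny "studies" of 4 flips each...
small = [heads_fraction(4) for _ in range(1000)]
# ...and 1,000 large studies of 10,000 flips each.
large = [heads_fraction(10_000) for _ in range(1000)]

# How often does each kind of study report an extreme result (>= 75% heads)?
extreme_small = sum(f >= 0.75 for f in small) / len(small)
extreme_large = sum(f >= 0.75 for f in large) / len(large)

print(f"4-flip studies reporting >=75% heads: {extreme_small:.0%}")
print(f"10,000-flip studies reporting >=75% heads: {extreme_large:.0%}")
```

Roughly three in ten of the four-flip studies "find" a badly biased coin, while essentially none of the large studies do; small samples routinely produce dramatic results by chance alone.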
Corollary 2: The smaller the effect size (how much influence one variable has on another) in a scientific field, the less likely the research findings are to be true. Statin drugs have been widely prescribed to prevent heart attacks. In patients taking them for primary prevention (i.e., patients who have never had a heart attack, which describes most people on the drugs), the reduction in the risk of heart attack over five years of treatment is 1.6 percent; as you might guess, that is a small effect size. A benefit that small may well be spurious, and it is a far cry from the enormous benefits initially claimed. It also means that more than 98 percent of the patients on the drug aren't helped by it, and some are in fact harmed by developing diabetes or muscle problems.
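The arithmetic behind that claim is easy to check yourself (a quick sketch; the 1.6 percent figure is the one quoted above, and "number needed to treat" is the standard way of expressing it):

```python
# Absolute risk reduction (ARR) from the statin figure quoted above:
arr = 0.016  # 1.6 percent fewer heart attacks over five years of treatment

# Number needed to treat: how many patients must take the drug for
# five years so that one of them avoids a heart attack.
nnt = 1 / arr

# Fraction of treated patients who see no benefit at all.
no_benefit = 1 - arr

print(f"Number needed to treat for 5 years: about {nnt:.0f} patients")
print(f"Patients treated without benefiting: {no_benefit:.1%}")
```

One avoided heart attack for every sixty-odd patients treated for five years; the other 98.4 percent take the drug, and its side effects, for nothing.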
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. Research science is built on manipulating one variable at a time, so any study that manipulates many variables simultaneously should be viewed critically. This is why most dietary findings are so quickly overturned by the next study. A human diet is made up of hundreds of elements; hunt for an association with any one of them against that background and the associations you find are most likely to be products of random chance.
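You can watch chance manufacture "findings" with another short simulation (again a sketch of my own, not anything from a real dietary study): it compares two groups drawn from the same distribution, so any "significant" difference is pure noise, and then repeats that null comparison for 100 imaginary dietary elements.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is repeatable

def fake_trial(n=50):
    """Compare two groups drawn from the SAME distribution, so there is
    no real effect. Return True if a crude z-test on the difference of
    means nonetheless comes out 'significant' at roughly p < 0.05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.pstdev(a) ** 2 / n + statistics.pstdev(b) ** 2 / n) ** 0.5
    return abs(diff / se) > 1.96

# Test 100 dietary "elements" that in truth do nothing at all:
false_positives = sum(fake_trial() for _ in range(100))
print(f"Chance 'findings' out of 100 null comparisons: {false_positives}")
```

Even though none of the 100 "elements" does anything, a handful of them come out "statistically significant" anyway, and those are exactly the ones that make headlines.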
Corollary 4: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. This also seems obvious, especially to a skeptic like me. Merck, the maker of the painkiller Vioxx ($2.5 billion in sales per year), withheld research safety data from the public for five years, resulting in an estimated 60,000 deaths from heart attacks. Vioxx was withdrawn from the market in 2004. Roche, the maker of the anti-flu drug Tamiflu ($1 billion in sales to the U.S. government stockpile alone), has refused to release data from multiple studies it conducted on the drug that purported to show benefit in patients with the flu. Roche never published the studies but cited them as proof that the drug was effective. A recent review in the British Medical Journal reversed a decade of favorable research results, concluding that clear evidence of benefit from Tamiflu during flu season does not exist and that its risks are not well known.
Overall, increasing recognition of these and other corollaries is slowly improving the quality of the research being produced and has allowed us to more carefully evaluate the research claims. I took some time to explain all this to my students last year in hopes that they would become more critical readers of the medical literature. I think it worked because one of them remarked that if Ioannidis is right, it is likely that his article is also not true. Touché, young man, you could have a career in emergency medicine!