Part 1: In the beginning — Framingham’s Inception
On April 12, 1945, in Warm Springs, Georgia, President Franklin D. Roosevelt spent the morning going over matters of state and meeting with guests, some of whom had “commented on how well he looked.” Shortly after lunch, while sitting for an artist, the president complained of a headache. Two hours later, he was dead from a massive cerebral hemorrhage (1).
The untimely death of the president at 63 set in motion forces that would bring about the most famous study in history: the Framingham Heart Study (FHS) (2). The FHS laid the groundwork for the obsession we’ve had with cholesterol and saturated fat and may well be the study that has been most damaging to the health of the U.S. population. This damage comes not necessarily from the study’s data but from the misreporting, deception, dissembling and outright prevarication about the data that have made it the wellspring for both the diabetes and obesity epidemics that afflict us.
It’s difficult to overestimate the impact this one study has had on the way physicians and patients view the causes and treatments of heart disease. The FHS is revered in most scientific and medical circles as the finest long-term observational study ever performed. The data generated from this ongoing study have been published in over 1,000 scientific papers and were used to create the Framingham Risk Score, a gender-specific algorithm used by physicians around the world to estimate the 10-year risk for cardiovascular disease in patients.
Just over 70 years ago, on Oct. 11, 1948, physicians examined the first Framingham subject and the study officially began (3). Since that day, not only has the original group of subjects been repeatedly examined, but the children and grandchildren of the original cohort have also been examined. In the early 2000s, other cohorts were added to increase diversity; the original cohorts were mainly Caucasian and of European origin. Due to the length of the study and the prestigious academic credentials of those directing it over the years, the FHS is considered a landmark study—if not the landmark study—on the risks for developing cardiovascular disease. Many researchers hold that the Framingham data strongly support the lipid hypothesis—the notion that cholesterol in the blood leads to the development of heart disease.
Dr. William Castelli, one of its early directors, said this of the FHS: “It is a place that discovers, proves, establishes in an epistemological sense what are the risk factors for heart disease. The findings of Framingham have already helped millions of people around the world, and even if the older generation is not helped directly, their children, grandchildren and great grandchildren will be helped.” (4)
But not all the researchers who were deeply involved in the process feel the same way. Dr. George Mann, an early Framingham researcher whose name is on many FHS articles, has been outspoken about his lack of regard for the way the National Heart, Lung, and Blood Institute (NHLBI) disregards data that don’t confirm the lipid hypothesis. According to Mann, failure to report the enormous amount of contrary data “is a form of cheating” indulged in frequently by the NHLBI, the government-funded organization that runs the FHS (5).
It can doubtless be said of any large study—especially one such as the FHS—that there will be glitches along with a smattering of malcontents and detractors. If you are a doctor or scientist, the path of least resistance is clearly to go along with the crowd and embrace the findings of the FHS, which has been funded to the tune of millions upon millions of dollars and produced a mountain of papers authored by esteemed investigators from prestigious institutions. But, as everyone knows, the path of least resistance isn’t always the correct path.
In order to make any kind of an intelligent determination about the validity of the FHS, we need to examine what’s really going on.
Cargo Cult Science
In my view, the entire FHS is what Nobel laureate Richard Feynman called “cargo cult science” in a commencement address in 1974 at Caltech. This is a pursuit that is not really science but has all the trappings and the outward appearance of science (6).
When Feynman coined the term, he was referring to a South Sea culture that blossomed financially during World War II. Military cargo planes brought goods and created a booming economy for the island, and when the economy crumbled after the war, the islanders decided to try to get the cargo planes to return. They did so by trying to recreate the situation that prevailed during the war. They “arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land.”
Feynman called this kind of effort cargo cult science “because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.”
So what essential quality is missing from the FHS, and why won’t the planes land, so to speak?
How can 1,000-plus papers—many of which are filled with all the differential equations, matrices and statistical analyses that are the “apparent precepts and forms of scientific investigation”—be flawed?
First, the FHS is an observational study, which, by definition, can’t prove causality. Observational studies are valuable for formulating hypotheses that can then be tested by more rigorous means to try to determine causality. An observational study alone won’t do it.
Observational Studies and Causality
The notion that an observational study can’t prove causality is foreign to many people. Intuitively, it seems that causality would be proven if researchers show a correlation between some risk factor and a disease it is thought to cause. But it doesn’t work that way. One of the axioms of science is that correlation is not causation.
To better understand this concept, consider a made-up observational study.
Imagine a doctor who is part of an eight-physician practice back in the early 1950s. He has been seeing patients for 30 years, as have the other doctors in the practice. Our doc notices that he’s been seeing a lot of bronchitis cases, and he hears from his patients who are smokers (as most adults were in the 1950s) that they’ve cut down while they’re sick because cigarettes seem to make their bronchitis worse. Hearing this repeatedly, the doctor wonders if maybe smoking makes people more prone to bronchitis. He rounds up the medical records of patients going back to the 1920s, when the medical practice first formed, and separates them into two stacks: one stack of charts for patients still with the clinic, the other for patients who are no longer in the practice because they died, moved or found another doctor.
He then takes the charts of those still in the practice and separates them into two groups: one group of patients who are smokers and a much smaller pile of those who are not. After scouring these records for episodes of bronchitis he and the other physicians have treated over the past 30 years, he discovers that the clinic’s smoking patients got bronchitis 15 times more often than the nonsmoking patients. Smoking definitely correlates with risk for bronchitis. Knowing the hazards of tobacco, it’s easy to conclude that smoking causes bronchitis. But that conclusion can’t be drawn from this kind of observational study, because observational studies can show only correlations; i.e., this risk factor correlates with that disease. And remember, correlation is not causation.
But it seems so obvious that smoking causes bronchitis. Given the data provided above, how could anyone not come to that conclusion? Because there may always be another factor we don’t know about, one that causes both the smoking and the bronchitis.
I come from a family of smokers. Both my parents smoked, my four siblings smoked, my grandparents smoked, all my aunts and uncles smoked, and yet I never had any inclination whatsoever to smoke. My wife has the same history. What makes the two of us different from the rest of our family members?
Going back to the patients in our imaginary study, maybe they harbor some third factor that makes them likely to smoke and prone to bronchitis. Or maybe a propensity to develop bronchitis due to some slight change in the chemistry of lung secretions causes mild symptoms that are relieved by smoking. In the former case, smoking might not cause bronchitis even though the observational evidence strongly points to the idea that it does. In the latter case, the propensity for bronchitis could actually cause smoking.
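The confounding scenario described above is easy to demonstrate with a quick simulation. The sketch below is entirely hypothetical and the probabilities are arbitrary: it builds a population in which a hidden third factor drives both smoking and bronchitis, smoking itself has no effect on bronchitis whatsoever, and yet a chart-review-style tally still shows smokers getting bronchitis far more often.

```python
import random

random.seed(42)
N = 100_000  # simulated patient charts

smokers = smokers_bronchitis = 0
nonsmokers = nonsmokers_bronchitis = 0

for _ in range(N):
    z = random.random() < 0.3                       # hidden third factor (30% of people)
    smokes = random.random() < (0.8 if z else 0.2)  # z makes smoking much more likely
    # Bronchitis depends ONLY on z, never on smoking:
    bronchitis = random.random() < (0.4 if z else 0.05)
    if smokes:
        smokers += 1
        smokers_bronchitis += bronchitis
    else:
        nonsmokers += 1
        nonsmokers_bronchitis += bronchitis

rr = (smokers_bronchitis / smokers) / (nonsmokers_bronchitis / nonsmokers)
print(f"observed rate ratio: {rr:.2f}")  # well above 1 despite zero causal effect
```

With these made-up numbers the observed rate ratio comes out around 3, purely because the hidden factor is concentrated among the smokers. An observational tally cannot distinguish this world from one in which smoking really does cause bronchitis.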
So how do we prove causality? In theory, you take the hypothesis developed from the observational study and do a randomized controlled trial (RCT), the so-called gold standard of experiments to determine causality.
In our example above, we can start with the hypothesis that smoking causes bronchitis. To prove this, we would need to recruit a number of subjects who are nonsmokers into our trial. Then we would have to randomize them into two groups and have one group start smoking a couple of packs a day while the other group continued to abstain from cigarettes. We would follow these two groups closely to see if the smokers experienced more cases of bronchitis than the nonsmokers. If they did, we could say that smoking causes bronchitis with some certainty—especially if other researchers repeated the study with other subjects and got the same findings.
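The randomization step described above is mechanically simple, and it is the whole point of an RCT: random assignment spreads any hidden third factor evenly across the two arms, on average. Here is a minimal sketch using made-up subject IDs:

```python
import random

random.seed(1)
# 200 hypothetical nonsmoking volunteers
recruits = [f"subject-{i:03d}" for i in range(1, 201)]

random.shuffle(recruits)            # random assignment balances unknown confounders, on average
midpoint = len(recruits) // 2
smoking_arm = recruits[:midpoint]   # would be assigned to smoke (ethically impossible in reality)
control_arm = recruits[midpoint:]   # continue abstaining

assert len(smoking_arm) == len(control_arm) == 100
```

Because neither the subjects nor any hidden factor chooses the groups, a difference in bronchitis rates between the arms could be attributed to smoking itself rather than to a confounder.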
It should be immediately obvious that this kind of study couldn’t be done for ethical reasons. In many situations, researchers are stuck with observational data, such as most of the data on smoking and bronchitis, lung cancer, heart disease, etc.
There are other ways to make evaluations that bolster the case for causality. Animals can be studied to see if they respond to tobacco by developing bronchitis. The doctors in our observational study above could simply ask their bronchitic patients to quit smoking. Some would, some wouldn’t. If the ones who did quit reduced the rates at which they were afflicted with bronchitis as compared to the ones who continued to smoke, and even as compared to themselves back when they smoked, the hypothesis that smoking causes bronchitis would be strengthened.
Our imaginary observational study shows how science usually works. First, someone makes an observation. In our case, the physician noticed that his patients who were smokers came down with bronchitis more often than his nonsmoking patients. He generated a preliminary hypothesis: Smoking causes bronchitis. He devised a way to test the hypothesis with an observational study using his 30 years of patient data. His preliminary hypothesis held up in that his data showed an enormous increase in bronchitis episodes in his smoking patients.
But what we’ve got to remember is that despite how strongly the data implicate smoking as a cause of bronchitis, we have only a hypothesis at this point. And it must be noted that the data in this imaginary observational study are vastly stronger than the data in most studies of this kind. Our smoking/bronchitis study above showed smokers developing bronchitis at 15 times the rate of nonsmokers. That is huge! In most observational studies, the rate ratio between one group and another is maybe 1.2 or even 1.15, which, translated to our case, would mean that for every nonsmoker who got bronchitis, only 1.2 smokers got it. That’s pretty weak gruel, yet those are the findings of most observational studies.
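The arithmetic behind those figures is just rate-ratio division: the attack rate in the exposed group divided by the attack rate in the unexposed group. Using hypothetical chart counts invented to match the 15-fold figure in our made-up example:

```python
# Hypothetical chart counts, invented to match the text's 15x example
smokers, smokers_with_bronchitis = 300, 150      # 50% of smokers got bronchitis
nonsmokers, nonsmokers_with_bronchitis = 90, 3   # ~3.3% of nonsmokers got bronchitis

rate_ratio = (smokers_with_bronchitis / smokers) / (nonsmokers_with_bronchitis / nonsmokers)
print(rate_ratio)  # roughly 15

# A typical observational finding is far weaker, e.g. attack rates of 6% vs. 5%:
weak_ratio = 0.06 / 0.05  # about 1.2
```

The contrast between a ratio of 15 and a ratio of 1.2 is the point: the smaller the ratio, the more easily the entire "effect" could be produced by a modest, unmeasured confounder.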
And, by the way, when you read press reports of studies saying, for instance, that people who avoid red meat live longer or people who eat bacon die sooner, you’re almost certainly reading about an observational study. You can always tell by the weasel words used in the article. The authors never say bacon causes early death; they say bacon is “linked” to earlier death or that red meat is “associated” with a shorter life. Or they say that avoiding red meat is “correlated” with increased lifespan. Those words—linked, associated, correlated and a handful of others—are a dead giveaway that the study in question is observational and thus worthless at proving causality.
It’s crucial to understand that these kinds of studies cannot prove causality no matter how meticulously they are performed.
If the FHS is an observational study—which it most assuredly is—how can the results be used to develop a risk-factor scoring system? That would imply someone had proven causality. How can I know that my risk for heart disease has increased because my cholesterol is a little high if no one has proven that cholesterol in the blood causes heart disease?
That’s a good question—and one that remains unanswered by the FHS. We’ll examine this and other flaws and failings of arguably the most famous and widely disseminated observational study ever published in more specific detail in Part 2.
1. Bruenn HG. Clinical notes on the illness and death of President Franklin D. Roosevelt. Annals of Internal Medicine 72(4): 579-591, 1970.
2. Dawber TR. The Framingham Study: The Epidemiology of Atherosclerotic Disease. Cambridge, Mass.: Harvard University Press, 1980.
3. Mahmood SS, Levy D, Vasan R et al. The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. The Lancet 383(9921): 999-1008, 2014. Available here.
4. Brody JE. Scientist at work: William Castelli; preaching the gospel of healthy hearts. The New York Times. Feb. 8, 1994. Available here.
5. Mann GV. Coronary Heart Disease: The Dietary Sense and Nonsense. Cambridge, England: Janus Publishing Company, 1993.
6. Feynman RP. Cargo cult science. Engineering and Science 37(7): 10-13, 1974. Available here.
All links accessed Jan. 14, 2019.