This article is Part 1 of a two-part series about the problems with nutrition research. For more on why you should be skeptical of the latest nutrition headlines, check out Part 2 of this series.
Nutritional epidemiology is basically the board game equivalent of a Ouija board—whatever you want it to say, it will say. – Dr. Peter Attia
Every week, we’re bombarded with splashy headlines in the media about the latest nutrition research. Here’s a sampling from the last few weeks alone:
- “Low-carb diets could shorten life, study suggests” (BBC News)
- “Eating cheese and butter every day linked to living longer” (Newsweek)
- “A New Study Says Any Amount of Drinking Is Bad for You. Here's What Experts Say” (Time)
- “Whole grains one of the most important food groups for preventing type 2 diabetes” (Science Daily)
- “Low carb diet ‘should be first line of approach to tackle type 2 diabetes’ and prolong lifespan” (iNews)
Within a six-week period, we learned that low-carb diets will both lengthen and shorten your lifespan and that they’re both good and bad for diabetes. We also learned that consuming even small amounts of alcohol, long regarded as health promoting, is in fact unhealthy.
For decades, we were told to limit dietary fat and cholesterol because they would clog our arteries, give us heart attacks, and send us to an early grave. Yet in 2010, the federal government removed its restriction on total fat from the U.S. Dietary Guidelines, and in 2015, they did the same thing for cholesterol, remarking that it is “not a nutrient of concern for overconsumption.” (1)
If you’re confused by this, or you’ve just stopped listening altogether, you’re not alone. And who could blame you? In a recent, scathing critique of nutrition research in JAMA, Dr. John Ioannidis, a professor at the Stanford School of Medicine, said:
Nutritional research may have adversely affected the public perception of science.
… the emerging picture of nutritional epidemiology is difficult to reconcile with good scientific principles. The field needs radical reform. (2)
In other words, you’re not crazy for doubting the latest media headlines or just throwing up your hands in frustration! In this article, I’m going to explore the reasons why skepticism is an appropriate response when it comes to most nutrition studies. Armed with this information, you’ll be better able to protect yourself and your family from the latest media hype and focus on what really matters when it comes to diet and nutrition.
Why You Can’t Trust Observational Studies as “Proof”
An observational study is one that draws inferences about the effect of an exposure or intervention on subjects whose behavior the researchers do not control. It’s not an experiment where researchers are directing a specific intervention (like a low-carb diet) and making things happen. Instead, they are just looking at populations of people and making guesses about the effects of a diet or lifestyle variable.
That is the domain of a randomized controlled trial (RCT), which randomly assigns participants to two groups—a treatment group that receives the intervention being studied and a control group that does not—and then observes them for a specific period of time.
Every scientist knows this, and most journalists should as well. Yet today, it’s not uncommon to see headlines like “Low-carb diet shortens your lifespan” and “Eating processed meat increases your risk of cancer,” which imply that the studies proved a causal relationship when, in fact, all they did was establish a correlation.
Correlation Is Not Causation
The problem is that two variables that are correlated, or associated together, do not always have a causal relationship. Consider the following examples, from Tyler Vigen’s excellent webpage called Spurious Correlations:
- U.S. spending on space, science, and technology is 99.8 percent correlated with suicides by hanging, strangulation, and suffocation.
- Per capita consumption of margarine in the United States and the divorce rate in the state of Maine are correlated at 99.3 percent.
- Total revenue generated by arcades is 98.5 percent correlated with computer science doctorates awarded in the United States.
Those are incredibly strong correlations, but I think it’s fairly obvious that consumption of margarine in the United States has absolutely no impact on the divorce rate in Maine … right?
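To see how little even a very strong correlation can mean, here’s a minimal Python sketch (all numbers invented) in which two series with no causal connection correlate almost perfectly, simply because both happen to trend over the same decade:

```python
# Two hypothetical series with no causal link -- each just trends
# steadily over the decade, plus a little noise.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2010)

margarine_lbs = 8.0 - 0.4 * (years - 2000) + rng.normal(0, 0.1, len(years))
divorce_rate = 5.0 - 0.2 * (years - 2000) + rng.normal(0, 0.05, len(years))

r = np.corrcoef(margarine_lbs, divorce_rate)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # typically > 0.97
```

Any two quantities that drift in the same (or opposite) directions over time will correlate strongly, which is exactly why correlation alone proves nothing about causation.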
Another great example of how easy it is to derive spurious correlations—especially when you set out with an agenda—comes from a large study of the most common diagnoses for hospitalization in 10.6 million Canadians. The researchers found that 24 diagnoses were significantly associated with the participants’ astrological signs: (3)
- People born under Leo had a 15 percent higher risk of hospitalization due to gastrointestinal hemorrhage compared to other residents of Ontario.
- People born under Sagittarius had a 38 percent higher risk of hospitalization for arm fractures compared to people with other signs.
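Findings like these are the signature of multiple comparisons: test enough hypotheses on data that contains no real effects, and roughly 5 percent of the tests will cross the conventional p < 0.05 threshold by chance alone. Here’s a minimal Python sketch of that trap (the number of tests and the group sizes are made up for illustration, and SciPy is assumed to be available):

```python
# Simulate many hypothesis tests where the null is TRUE by design,
# and count how many come up "significant" anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests = 288  # e.g., 12 signs x 24 diagnoses, none with a real effect

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution
    a = rng.normal(0, 1, 100)
    b = rng.normal(0, 1, 100)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests 'significant' at p < 0.05")
# Expect roughly 14 false positives (~5 percent), from pure noise.
```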
In Dr. Ioannidis’s editorial in JAMA, he notes:
Almost all nutritional variables are correlated with one another; thus, if one variable is causally related to health outcomes, many other variables will also yield significant associations in large enough data sets.
As an example of just how absurd this can become, he notes that, if taken at face value, observational studies have inferred that:
… eating 12 hazelnuts daily (1 oz) would prolong life by 12 years (ie, 1 year per hazelnut), drinking 3 cups of coffee daily would achieve a similar gain of 12 extra years, and eating a single mandarin orange daily (80g) would add 5 years of life. Conversely, consuming 1 egg daily would reduce life expectancy by 6 years, and eating 2 slices of bacon (30g) daily would shorten life by a decade, an effect worse than smoking.
Are these relationships truly causal? Of course not, Ioannidis says. Yet study authors often use causal language when reporting the findings from these studies.
In fact, according to an analysis in 2013, authors of observational studies made medical or nutritional recommendations (suggesting their data showed a causal relationship) in 56 percent of cases. (4) The study authors summed up their findings as follows:
In conclusion, our empirical evaluation shows that linking observational results to recommendations regarding medical practice is currently very common in highly influential journals. Such recommendations frequently represent logical leaps. As such, if they are correct, they may accelerate the translation of research but, if they are wrong, they may cause considerable harm. [emphasis added]
I should note that it’s at least possible to become reasonably confident of a causal association between variables in an observational study using what is known as the Bradford Hill criteria:
- Strength of the association
- Consistency
- Specificity
- Temporality
- Biological gradient
- Plausibility
- Coherence
- Experiment
- Analogy
The more of these criteria that are met, the more likely causation is present.
However, observational nutrition studies rarely satisfy these criteria, which makes the frequent claims of causality even more dubious.
There Are Problems with Data Collection Methods Too
Way back in the 13th century, the English philosopher and Franciscan friar Roger Bacon said that scientific data must be: (5)
- Independently observable
- Measurable
- Falsifiable
- Valid
- Reliable
To use a simple example, if someone is eating an apple right in front of you, you can observe, measure, and either verify or refute that they’re doing so. But if they simply tell you that they ate an apple at some time in the past, you can neither observe, measure, verify, nor refute their story. You just have to take their word for it—and that is not science.
The term “observational nutrition study” is a misnomer because it suggests that researchers are actually observing what participants eat. But of course that’s not true; researchers aren’t standing around in people’s kitchens and going out to restaurants with them.
Instead, they are collecting data on what people eat by giving them questionnaires to fill out. There are different versions of these used in research, from food frequency questionnaires (FFQs), which may ask people to recall what they ate months or even years prior, to 24-hour recall surveys where people are asked what they ate over the past 24 hours.
These “memory-based dietary assessment methods,” or “M-BMs,” bear little relation to actual calorie or nutrient consumption. Why? Because memory is not a literal, accurate, or even precise reproduction of past events. (6)
In a paper criticizing the validity of M-BMs for data collection, Edward Archer pointed out:
When a person provides a dietary report, the data collected are not actual food or beverage consumption but rather an error-prone and highly edited anecdote regarding memories of food and beverage consumption. (7)
Going back to the apple example above, researchers aren’t watching participants eat an apple. They’re relying on the participants’ reports of eating apples—sometimes several years prior!
We Can’t Rely on Memory When It Comes to Nutrition Research
But just how inaccurate are M-BMs? To find out, Archer analyzed questionnaires from participants in the National Health and Nutrition Examination Survey (NHANES), which is a long-running series of studies on the health and nutritional status of the American public. NHANES has served as the basis of dietary guidelines and public health recommendations.
Archer found that, over the then 39-year history of NHANES, the calorie intakes self-reported by the majority of respondents (67 percent of women and 59 percent of men) were not physiologically plausible, and the average calorie intakes reported by overweight and obese people (i.e., the majority of Americans) were incompatible with life.
In other words, a bedridden, frail, elderly woman (i.e., a person with the lowest possible calorie requirements) could not survive on the number of calories reported by the average person in the NHANES survey!
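If you want to gut-check a reported intake yourself, here is a rough sketch of this kind of plausibility test: compare the reported calories against an estimate of resting energy needs. It uses the Mifflin-St Jeor equation, a standard formula for basal metabolic rate (not necessarily the method Archer used), and the respondent’s numbers are hypothetical:

```python
# Compare a self-reported calorie intake against an estimated basal
# metabolic rate (BMR) using the Mifflin-St Jeor equation.
# All inputs below are hypothetical.

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age: int, female: bool) -> float:
    """Estimated calories/day burned at complete rest."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base - 161 if female else base + 5

reported_kcal = 1200  # hypothetical NHANES-style self-report
bmr = bmr_mifflin_st_jeor(weight_kg=90, height_cm=165, age=45, female=True)

print(f"Estimated BMR: {bmr:.0f} kcal/day; reported intake: {reported_kcal}")
if reported_kcal < bmr:
    print("Reported intake is below resting energy needs, which is "
          "physiologically implausible as a long-term average.")
```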
And this isn’t just a problem in the United States. The inaccuracy of M-BMs has been replicated consistently over three decades and in multiple countries around the world. (8)
Can you see why this would be a problem?
What’s more, certain subgroups are more prone to underreporting, including people who are obese or have a high calorie intake. Obese subjects have been found to underreport up to half of their calorie intake, and in particular, they underreport fat and carbs. (9)
One consequence of this is that the health risks associated with a high fat (or carb) intake would be overestimated. Imagine that someone reports a saturated fat intake of 50 grams and has a total cholesterol of 200 mg/dL. But say they underreported their saturated fat intake by 40 percent, and their actual intake was 80 grams. Any analysis of the data would then overestimate the effect of saturated fat on total cholesterol, because it would attribute a total cholesterol of 200 mg/dL to an intake of 50 grams rather than the actual 80 grams.
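Here’s a toy simulation of that distortion. The 0.5 mg/dL-per-gram “true effect” and the uniform 40 percent underreporting are invented for illustration; the point is that a naive analysis of reported intake overstates the per-gram effect by the reciprocal of the reporting fraction:

```python
# Show how systematic underreporting inflates an estimated diet effect.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

true_satfat = rng.uniform(20, 100, n)   # actual grams/day
true_effect = 0.5                       # hypothetical mg/dL per gram
cholesterol = 160 + true_effect * true_satfat + rng.normal(0, 5, n)

reported_satfat = true_satfat * 0.6     # everyone underreports by 40%

slope_actual = np.polyfit(true_satfat, cholesterol, 1)[0]
slope_reported = np.polyfit(reported_satfat, cholesterol, 1)[0]

print(f"Effect per gram, actual intake:   {slope_actual:.2f} mg/dL")
print(f"Effect per gram, reported intake: {slope_reported:.2f} mg/dL")
# The reported-intake slope is ~0.83 (i.e., 0.5 / 0.6): the apparent
# effect of saturated fat is inflated by two-thirds.
```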
Where does that leave us? Archer doesn’t pull any punches:
Data collected from M-BM are pseudoscientific and inadmissible in scientific research and the formulation of national dietary guidelines. (10)
… the uncritical faith in the validity and value of M-BM has wasted significant resources and continues to be the single greatest impediment to actual scientific progress in the fields of obesity and nutrition research. (11)
Most people have no idea that the entire field of observational nutrition research—and all of the media headlines that come out of it—is based on questionnaires about what people eat. Now that you know, will you ever look at nutrition headlines in the same way again?
How the “Healthy-User” Bias Impacts Findings
The healthy-user bias is the tendency of people who engage in one behavior perceived as healthy (or unhealthy) to engage in other behaviors perceived the same way. For example, because red meat has been perceived as “unhealthy” for so many years, people who eat more red meat are, on average, more likely to: (12)
- Smoke
- Be physically inactive
- Eat fewer fruits and vegetables
- Be less educated
Of course, most researchers are well aware of the influence of confounding factors and the healthy-user bias, and good ones do their best to control for as many of these factors as they can. But even in the best studies, researchers can’t control for all possible confounding factors because our lives are simply too complex. As Norman Breslow, a former biostatistician at the University of Washington, once said:
People think they may have been able to control for things that aren’t inherently controllable.
One of the inevitable results of the healthy-user bias is that many observational studies end up comparing two groups of people that are not at all similar, and this casts doubt on the findings.
For example, early studies suggested that vegetarians live longer than omnivores. However, these studies compared Seventh-day Adventists (SDA)—a religious group that advocates a vegetarian diet and a healthy lifestyle as part of its belief system—with the general population.
That introduces serious potential for healthy-user bias, because members of the SDA church engage in lifestyle behaviors—like not smoking or drinking alcohol, eating more fresh fruits and vegetables, and getting more exercise—that have been shown to reduce the risk of death from cardiovascular disease and all causes. So, we can’t possibly know whether the reduction in deaths observed in these studies was related to the vegetarian diet or to these other behaviors, and thus the findings are not generalizable to the wider population.
(As a side note, four later studies that compared vegetarians with a more health-conscious population of omnivores found that both groups lived longer than the general population, but there was no difference in lifespan between the vegetarians and healthy omnivores. You can read more about this in my article “Do Vegetarians and Vegans Really Live Longer than Meat Eaters?”)
The healthy-user bias plagues most observational nutrition studies, and yet we hardly ever hear it mentioned when these studies are reported in the media. Now that you know about it, how might you respond differently to some of the headlines I shared at the beginning of the article?
- “Low-carb diets could shorten life, study suggests”
- “Eating cheese and butter every day linked to living longer”
- “Whole grains one of the most important food groups for preventing type 2 diabetes”
Would you ask questions like:
- Since fat has been perceived as unhealthy and low-carb diets are high in fat, were the people eating low-carb diets also engaging in other behaviors perceived as unhealthy?
- Were the people eating more cheese and butter doing anything else that might have contributed to a longer lifespan?
- Were the people who were eating more whole grains exercising more or engaging in other behaviors perceived as healthy (since eating whole grains is perceived as healthy)?
The “Risks” Are Often Pure Chance
In 2015, the International Agency for Research on Cancer (IARC) issued a report suggesting that every 50 grams of processed meat consumed daily increased the relative risk of colorectal cancer by 18 percent compared to those who ate the least processed meat. (13)
How confident can we be of that claim? In epidemiology outside the field of nutrition (and even within the nutrition field until recently), the threshold for confidence in relative risk is between 100 and 300 percent. In other words, we’d need to see an increase or decrease of risk of between 100 and 300 percent for a given intervention before we could be confident that the change observed was due to the intervention and not simply to chance.
According to the late epidemiologist Syd Shapiro, cofounder of the Slone Epidemiology Center, at the higher end of this range, one can be guardedly confident, but “we can hardly ever be confident about estimates of less than 100 percent, and when estimates are much below 100 percent, we are simply out of business.” (14)
Marcia Angell, the former editor of the New England Journal of Medicine, was quoted saying much the same thing in a 1995 Science article called “Epidemiology Faces Its Limits”:
As a general rule of thumb, we are looking for a relative risk of three or more [before accepting a paper for publication], particularly if it is biologically implausible or if it’s a brand-new finding.
And Robert Temple, who was the director of drug evaluation at the Food and Drug Administration (FDA) at the time, put it even more bluntly in the same Science article:
My basic rule is if the relative risk isn’t at least three or four, forget it.
Most epidemiologists who were interviewed for the Science article said they would not take seriously a single study reporting a new potential cause of cancer unless the increase in risk was at least three-fold.
This is bad news for observational nutrition studies since the vast majority of relative risks reported fall well below this threshold. Most are well below 100 percent, and many—like the IARC finding on processed meat and cancer—are below 25 percent.
To put this in perspective, the increased relative risk of lung cancer from smoking cigarettes is between 1,000 and 3,000 percent. The increased relative risk of liver cancer from eating grains contaminated with aflatoxin is 600 percent.
It’s also important to consider the difference between absolute and relative risk. Researchers often use relative risk statistics to report the results of nutrition studies. For example, the IARC report said that every 50 grams of processed meat consumed daily increased the risk of colorectal cancer by 18 percent. But when that increase in relative risk is stated in absolute terms, it doesn’t sound quite as impressive. The lifetime absolute risk of colon cancer in vegetarians is 4.5 out of 100; in people eating 50 grams of processed meat every day for a lifetime, the risk is 5.3 out of 100. (15)
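The arithmetic behind those figures is simple enough to check yourself. A quick sketch using the numbers from the article:

```python
# Convert the IARC relative-risk figure into absolute lifetime risk.
baseline_risk = 4.5 / 100   # lifetime colon cancer risk, no processed meat
relative_increase = 0.18    # +18% per 50 g of processed meat daily (IARC)

exposed_risk = baseline_risk * (1 + relative_increase)

print(f"Exposed lifetime risk: {exposed_risk:.1%}")                  # ~5.3%
print(f"Absolute increase:     {exposed_risk - baseline_risk:.2%}")  # ~0.81%
```

An 18 percent relative increase sounds alarming; an absolute increase of less than one percentage point over a lifetime does not.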
All of this suggests that most findings in observational nutrition studies are indistinguishable from chance and are unlikely to be validated by RCTs (which is exactly what has happened in most cases, as I’ll explain shortly). Yet despite this, many of these studies are highly publicized in the media and often reported as if they conclusively discovered a causal relationship.
The current climate in both academia and the media, unfortunately, contributes to this. Null results—when researchers don’t find a positive or negative association—are significantly less likely to be published, and without publication, researchers are out of a job. (16, 17) And the pressure to get clicks and generate advertising revenue in the digital media world leads to splashy headlines that overstate or distort what the study actually found. One study found that 43 percent of front-page stories reporting on medical research are based on research with mostly preliminary findings (i.e., they were observational studies that didn’t prove causality). (18)
“The sin,” Dr. Sander Greenland, an epidemiologist at UCLA, has said, “comes from believing a causal hypothesis is true because your study came up with a positive result.” (19)
Sadly, this is more the rule than the exception today.
I hope this article has given you some reasons to remain skeptical about nutrition research. For more information on this topic, check out Part 2 of this article series—and let me know what you think in the comments below!