Interesting thread by @bryan_johnson on #RedMeat
As irony would have it, my view on the potential harm of red meat consumption is one of the biggest changes of opinion I've made to date.
From my earliest education, I was certain that red meat was effectively a slow-acting poison, starting in middle school with my eighth-grade science teacher, who stated, "Red meat is bad for your health, it's established fact now."
In high school, I still remember a "Healthful Tips" poster next to the lunch hallway that included "Skip the steak, have a salad."
And indeed, while I never became anything close to a vegan, when I was trying to “be good” about my diet, I would focus more on plants and less on red meat, in particular.
But even back then, I kept thinking: won't this be a problem for studying diets with observational data? Doesn't this create a feedback loop of people who seek to be healthy doing what they're told is good for them on multiple fronts, including "skipping the steak for a salad"?
As it turns out, there's a name for this: "Healthy User Bias." Understandably, people who generally seek out healthy habits (no smoking, light to no alcohol, regular exercise, etc.) are also more prone to follow health guidelines and advice from their doctors.
So how do researchers deal with the problem of healthy user bias? They use statistical tools such as sensitivity analyses. They attempt to isolate the harm of these other unhealthy habits and quantify their contribution to the overall harm. Then, per the speculation, it's just a simple matter of math: remove those contributions so that the variable of interest, such as red meat, is all that remains.
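To make the adjustment idea concrete, here's a minimal simulation sketch in Python. Every probability and effect size here is invented purely for illustration (the `health_seeker` trait, the smoking rates, the mortality model); it's not drawn from any real study. In this toy world, red meat has zero true effect on mortality, yet the crude comparison shows meat eaters dying more often, because health seekers both avoid red meat and avoid smoking. Comparing only within smoking strata recovers the null:

```python
# Minimal simulation sketch of healthy user bias. All probabilities and
# effect sizes are invented for illustration; they are not from any study.
import random

random.seed(0)

N = 200_000
deaths = {True: 0, False: 0}   # deaths by red meat status
counts = {True: 0, False: 0}
deaths_strat = {}              # deaths keyed by (eats_red_meat, smokes)
counts_strat = {}

for _ in range(N):
    health_seeker = random.random() < 0.5
    # Health seekers both eat less red meat AND smoke less.
    eats_red_meat = random.random() < (0.2 if health_seeker else 0.6)
    smokes = random.random() < (0.05 if health_seeker else 0.35)
    # True model: mortality depends on smoking only; red meat has NO effect.
    died = random.random() < 0.02 + 0.08 * smokes

    counts[eats_red_meat] += 1
    deaths[eats_red_meat] += died
    key = (eats_red_meat, smokes)
    counts_strat[key] = counts_strat.get(key, 0) + 1
    deaths_strat[key] = deaths_strat.get(key, 0) + died

crude_meat = deaths[True] / counts[True]
crude_none = deaths[False] / counts[False]
print(f"crude mortality, red meat:    {crude_meat:.3f}")
print(f"crude mortality, no red meat: {crude_none:.3f}")

# "Adjusting": compare only within smoking strata; the gap disappears.
for smokes in (False, True):
    m = deaths_strat[(True, smokes)] / counts_strat[(True, smokes)]
    nm = deaths_strat[(False, smokes)] / counts_strat[(False, smokes)]
    print(f"smokes={smokes}: red meat {m:.3f} vs no red meat {nm:.3f}")
```

The catch is that this kind of adjustment only works for confounders that are actually measured and correctly modeled, which is exactly where the questions below come in.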
But does this work? Can researchers actually quantify all of the relevant cultural, socioeconomic, and environmental variables for their actual independent, causal contributions?
For that matter, do these variables function linearly, such that their dose responses can likewise be understood and accounted for?
I'm certainly not the first to raise these questions; they've been explored by many, including the towering epidemiologist Bradford Hill, who himself had very high standards for what could be considered a threshold for claiming high confidence in causality from observational data.
(Or at least, he had high standards by today’s baseline.)
Obviously, I have a deeper connection to this in my spearheading the #LMHRstudy. While I am blinded from the dietary data, we know it's far more commonplace to have #carnivore or near-#carnivore levels of animal protein and red meat in this population.
Will that result in the expected higher rates of cardiovascular disease (given high LDL as well) or, for that matter, all-cause mortality?
I can’t say much about the mortality question, but we do have more data dropping soon on the question of cardiovascular disease.
Ours is exactly the kind of study I'm interested in, given it will include folks who are unlikely to share the poor health habits (smoking, heavy drinking, etc.) typical of others who consume a diet high in red meat. This helps separate the variable of interest from the other confounders that statistics works so hard to account for.
That said, I'm also aware this diet space is different from other areas of science. As I discussed above, the level of confidence placed in statistical speculation is perhaps the highest it's ever been. To repeat what I've said many times over, I still think observational data are quite crucial, particularly for hypothesis-building (and I don't state that with any dismissal; it's a very important aspect of science).
However, a less discussed aspect of observational data is that while it isn't as strong at proving causality, it's quite excellent at knocking down claims of causality when the data show no association where one is highly expected (again, see Bradford Hill).
For me personally, I'm excited to see where our study data will take us.