Joe Maller.com

Trust in Science

The other day John Gruber linked to Alan Boyle’s MSNBC column reporting on how conservatives have lost confidence in science. Gruber adds “No other trend has done more harm to the U.S. than this one.” I generally cringe at Gruber’s political posts, and I can think of a half-dozen more destructive trends off the top of my head, but that’s all beside the point.

The paper itself is locked behind the American Sociological Review paywall but the full text PDF is available from the American Sociological Association: Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010.

The MSNBC story, and the dozens of others which popped up to echo the partisan spin, present the paper in a way that is impossible to argue against. Science says people who don’t trust science can’t be trusted to comment on science. Disagree at your own peril.

Ignorance more frequently begets confidence than does knowledge.
— Charles Darwin, Descent of Man, 1871

I didn’t dig in too deeply, but the paper itself seemed to possess a partisan viewpoint the author either wasn’t aware of or was incapable of stepping outside. His interpretation of Table 3’s data was telling:

These results are quite profound, because they imply that conservative discontent with science was not attributable to the uneducated but to rising distrust among educated conservatives. Put another way, educated conservatives appear to be more culturally engaged with the ideology and […] more politically sophisticated.

Parse that. The author expected conservative distrust of science to result from a lack of education. Instead, the most educated conservatives had the strongest distrust of science. When the presumption of stupidity didn’t pan out, the author wrote off the correlation as resulting from ideological brainwashing. Because doubt and skepticism couldn’t possibly come from a place of knowledge or experience. Those graduate degrees are meaningless without the appropriate political affiliation.

Taking political ideology out of the picture, what’s potentially more troubling is that the total population’s overall confidence in science was only 43.6%. However, if we’re supposed to just accept “the cultural authority of science” without question, I’m not certain that’s a bad thing.

The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.
— Bertrand Russell, from The Triumph of Stupidity, 1933

Scientific knowledge is not simply a collection of immutable laws. Science is not finished. Accepting theories without question is dogma, not science.

It is imperative in science to doubt; it is absolutely necessary, for progress in science, to have uncertainty as a fundamental part of your inner nature. To make progress in understanding we must remain modest and allow that we do not know. Nothing is certain or proved beyond all doubt.
— Richard Feynman, Caltech Lunch Forum, 1956

Two days earlier, Physorg ran an article about a dramatic increase in retractions in scientific journals. Has modern science become dysfunctional? (via Tuck & John)

In the past decade the number of retraction notices for scientific journals has increased more than 10-fold while the number of journal articles published has only increased by 44%. While retractions still represent a very small percentage of the total, the increase is still disturbing because it undermines society’s confidence in scientific results and in public policy decisions that are based on those results.
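
Those two figures imply something the article doesn’t spell out: the retraction rate per published article grew far faster than publishing itself. A back-of-the-envelope sketch, using only the 10-fold and 44% numbers quoted above:

```python
# Back-of-the-envelope: if retraction notices grew more than 10-fold
# while published articles grew only 44%, the retraction *rate*
# (retractions per article) grew roughly sevenfold.
retraction_growth = 10.0  # retraction notices: a 10-fold increase
article_growth = 1.44     # published articles: a 44% increase

rate_growth = retraction_growth / article_growth
print(f"Retraction rate grew roughly {rate_growth:.1f}x")  # ~6.9x
```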

Gary Taubes’ recent definition of science is worth repeating:

Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis — force x causes observation y — and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis was right.

For as long as I can remember, I’ve loved science. But I’m skeptical of science because it demands nothing less. Scientific knowledge is the beauty that remains after wonder is stripped bare by doubt.

The important thing is not to stop questioning. Curiosity has its own reason for existing.
— Albert Einstein, as quoted in LIFE magazine, 1955


Red meat and bad science

The news lit up last week with a study purportedly showing that eating red meat will kill us. The story was immediately picked up by all the major news organizations and seemingly everyone was talking about it. I found this troubling because I’ve come to believe quality red meat is not only one of the best, most nutritious foods we can eat, but is also central to the very existence of Homo sapiens. I was also worried because my mother was probably watching CNN and emptying her meat freezer into the garbage.

Media coverage was immediately hyperbolic as outlets rushed to reprint Harvard’s press release. The ones I saw linked most frequently were the BBC, CNN and NPR. I’m more than a little suspicious of how quickly this story took off and how broadly it spread.

Whenever science is in the news, my first reaction is to try to dig up the actual study and see how far off the reporting was. Thankfully, the full text of the article is freely available: Red Meat Consumption and Mortality

The study itself is garbage. Actually, it’s barely a study; it’s a spreadsheet exercise. Author and information bias is rampant, the conclusions are suspect and the claims are exaggerated.

This post is divided into three parts:

  1. Untwisting the data – Takes the data at face value and finds inconsistencies and questionable conclusions.
  2. The data is meaningless – Looks at the integrity of the data and finds it absent of accuracy or rigor.
  3. Science starts here – Addresses the central failing: This study is a hypothesis based on a trivial association found in questionable data. To interpret any of this as conclusive fundamentally misunderstands the noble practice of scientific inquiry.

1. Untwisting the data

Reading the study, it didn’t take me long to find a number of confounding variables. Starting with Table 1, which breaks down the sample sets into quintiles by meat consumption, several markers jumped out. Why meat? No reason is given.

In the Health Professionals data set, the fifth quintile, those most likely to die, were also nearly 3x more likely to be smokers. They consumed 70% more calories, were less likely to use a multivitamin, drank more alcohol and were almost twice as likely to be diabetic (3.5% vs 2%). Amusingly, high cholesterol had an inverse correlation; those reporting the highest cholesterol had the lowest mortality.

The study’s conclusion could just as easily have been “diabetic smokers who don’t exercise or take multivitamins and eat a lot show increased risk of death.”
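
To make the confounding argument concrete, here’s a minimal simulation sketch in Python. Every number in it is hypothetical, invented for illustration rather than taken from the paper; the point is only that a variable like smoking, which tracks meat consumption across quintiles, can manufacture a meat-mortality gradient entirely on its own:

```python
import random

random.seed(42)

# Hypothetical illustration (not the study's data): meat intake has NO
# effect on mortality here, but smoking is both more common in the
# high-meat quintiles (as in Table 1) and independently raises risk.
def simulate(n=1_000_000):
    deaths = [0] * 5
    counts = [0] * 5
    for _ in range(n):
        q = random.randrange(5)            # meat-consumption quintile 0..4
        smoking_rate = 0.05 + 0.08 * q     # smokers cluster in upper quintiles
        smoker = random.random() < smoking_rate
        risk = 0.008 * (2.0 if smoker else 1.0)  # smoking doubles a ~0.8% base risk
        deaths[q] += random.random() < risk
        counts[q] += 1
    return [d / c for d, c in zip(deaths, counts)]

for q, rate in enumerate(simulate(), start=1):
    print(f"Quintile {q}: mortality {rate:.3%}")
# Mortality climbs with meat quintile even though meat never entered
# the risk calculation; smoking alone creates the gradient.
```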

A number of thoughtful responses to this paper have been posted in the past few days.

Denise Minger noticed many of the same confounders that I did and graphed them.

Zoë Harcombe looked at the numbers and found the researchers’ conclusions didn’t match the data.

The article says that the multivariate analysis adjusted for energy intake, age, BMI, race, smoking, alcohol intake and physical activity level. However, I don’t see how this can have been done–certainly not satisfactorily.

She then went on to re-plot their data and found that death rates actually decreased in the middle quintiles–more meat consumed resulted in less mortality. I made this chart from her data.

According to this interpretation, increasing all meat consumption from baseline initially reduced mortality. Also note that mortality in the fourth quintile, despite higher BMI, more smokers and all the rest, shows essentially the same risk level as the first.

Her summary notes the study’s results are based on very small numbers:

The overall risk of dying was not even one person in a hundred over a 28 year study. If the death rate is very small, a possible slightly higher death rate in certain circumstances is still very small. It does not warrant a scare-tactic, 13% greater risk of dying headline–this is “science” at its worst.

Zoë also notes a potential conflict of interest:

one of the authors (if not more) is known to be vegetarian and speaks at vegetarian conferences[ii] and the invited ‘peer’ review of the article has been done by none other than the man who claims the credit for having turned ex-President Clinton into a vegan – Dean Ornish.

Ornish and Clinton are a whole other essay.

Marya Zilberberg posted another takedown of the numbers, including this analysis:

The study further reports that at its worst, meat increases this risk by 20% (95% confidence interval 15-24%, for processed meat). If we use this 0.8% risk per year as the baseline, and raise it by 20%, it brings us to 0.96% risk of death per year. Still, below 1%. Need a magnifying glass? Me too. Well, what if it’s closer to the upper limit of the 95% confidence interval, or 24%? The risk still does not quite get up to 1%, but almost. And what if it is closer to the lower limit, 15%? Then we go from 0.8% to 0.92%.
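
Her arithmetic is easy to check. A minimal sketch using the ~0.8% annual baseline and the study’s reported relative increases for processed meat:

```python
# Relative vs. absolute risk, using Zilberberg's numbers: a ~0.8%
# annual risk of death raised by the reported relative increases.
baseline = 0.008  # ~0.8% risk of death per year

for label, rel in [("point estimate", 0.20),
                   ("lower 95% CI bound", 0.15),
                   ("upper 95% CI bound", 0.24)]:
    print(f"{label}: {baseline:.2%} -> {baseline * (1 + rel):.2%}")
# point estimate: 0.80% -> 0.96%
# lower 95% CI bound: 0.80% -> 0.92%
# upper 95% CI bound: 0.80% -> 0.99%
```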

2. The data is meaningless

So there’s all of that, but it’s almost not worth arguing about. The study’s primary data sets have been shown to be wildly inaccurate.

As noted in the original paper’s abstract:

Diet was assessed by validated food frequency questionnaires [FFQs] and updated every 4 years.

Four years? What did you have for lunch yesterday? How about the Thursday prior? What was dinner last October 12th? Looking at the actual 2010 survey forms (HPFS and NHS, basically the same), the questions are even more absurd. Over the past year, how frequently did you consume 1/2 cup of yams or sweet potatoes? Kale? I’m pretty mindful of what I eat, and I don’t think I could answer those questions accurately for the past two weeks, let alone an entire year. Anything beyond 5-6 per week is going to be a wild guess.

The two cohort groups were the Nurses’ Health Study (NHS), which is all female, and the Health Professionals Follow-Up Study (HPFS), which is all male. The HPFS has a helpful Question Index on their site, though the collected data appears to be even spottier than I would have guessed. Quite a few questions and topics have just come and gone over the years; are they just making it up as they go along?

Is this really the pinnacle of epidemiological data from one of America’s premier universities?

The study’s authors did include a citation to a paper (their own) justifying the validity of FFQ data in the NHS. Walter Willett and Meir Stampfer, both of the Harvard School of Public Health, are authors on both papers.

In response to a similar meat-phobic article a few years ago, Chris Masterjohn looked at the accuracy of this particular validation study and found it lacking:

the ability of the FFQ to predict true intake of meats was horrible. It was only 19 percent for bacon, 14 percent for skinless chicken, 12 percent for fish and meat, 11 percent for processed meats, 5 percent for chicken with skin, 4 percent for hot dogs, and 1.4 percent for hamburgers.

An Australian validation study based on NHS found similar discrepancies between FFQ and food diary intake. Fruits and vegetables were overestimated while bread, poultry and processed meats were underestimated. Curiously, in the Australian study, meat was overestimated. (see Table 1)
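
For a sense of what these validation studies actually compute, here’s a minimal sketch: correlate each food’s FFQ-reported intake against its diary-recorded intake across participants. The data below is made up, and reading Masterjohn’s percentages as shared variance (a squared correlation) is my assumption:

```python
import statistics

# Hypothetical validation-study sketch: compare FFQ-reported servings
# per week against diary-recorded servings for the same participants.
# Every number below is invented for illustration.
ffq_reported = [2, 0, 5, 1, 3, 7, 0, 4, 2, 6]    # servings/week per FFQ
diary_recorded = [1, 1, 3, 0, 5, 4, 2, 2, 1, 3]  # servings/week per diary

r = statistics.correlation(ffq_reported, diary_recorded)  # Pearson's r
print(f"r = {r:.2f}, shared variance r^2 = {r * r:.1%}")
# A low r^2 means the FFQ explains little of the variation in what
# people actually ate, which is the failure Masterjohn describes.
```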

Another particularly compelling paper out of Cambridge tested FFQ validity by comparing sampled biomarkers against FFQ and food diary intake data. From the results:

There were strong (P < 0.001) associations between biomarkers and intakes as assessed by food diary. Coefficients were markedly attenuated for data obtained from the FFQ, especially so for vitamin C, potassium and phytoestrogens

This paper deeply undermined the credibility of FFQs and clearly struck a nerve. Unsurprisingly, Willett was defensive of his data and deflected by attacking the study’s statistical modeling methodology in the same journal.

From an outsider’s view, it seems like Willett, Stampfer and the Harvard School of Public Health are actively subverting the entire scientific journal publishing ecosystem to advance their own causes and careers. They get their names on hundreds of published papers and cross-reference their own work repeatedly, thereby inflating their citation scores. Then they put out press releases touting themselves as the most-cited scientists of the previous decade.

3. Science starts here

Gary Taubes wrote a lengthy but worthwhile response, Science, Pseudoscience, Nutritional Epidemiology, and Meat. Throughout his career, Taubes has shown himself to care deeply for the practice and integrity of science. His essay starts out by addressing HSPH’s record:

every time in the past that these researchers had claimed that an association observed in their observational trials was a causal relationship, and that causal relationship had then been tested in experiment, the experiment had failed to confirm the causal interpretation — i.e., the folks from Harvard got it wrong. Not most times, but every time. No exception.

By example, he defines exactly why this study is a failure. If anything, this study is a hypothesis; no conclusions can be drawn from tiny statistical correlations:

Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis — force x causes observation y — and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis was right. […] Making the observations and crafting them into a hypothesis is easy. Testing them ingeniously and severely to see if they’re right is the rest of the job — say 99 percent of the job of doing science, of being a scientist.

The problem with observational studies like those run by Willett and his colleagues is that they do none of this. That’s why it’s so frustrating. The hard part of science is left out and they skip straight to the endpoint, insisting that their interpretation of the association is the correct one and we should all change our diets accordingly.

Perhaps most interesting is Taubes’ explanation of Compliance Bias. Noting that the survey period covers the 1990s, an era of skinless chicken and egg whites, he points out an obvious problem with the data:

when we compare people who ate a lot of meat and processed meat in this period to those who were effectively vegetarians, we’re comparing people who are inherently incomparable. We’re comparing health conscious compliers to non-compliers; people who cared about their health and had the income and energy to do something about it and people who didn’t. And the compliers will almost always appear to be healthier in these cohorts because of the compliance effect if nothing else.

J Stanton wrote a good explanation of observational studies and their faults. He also points out that the Hormone Replacement Therapy debacle of the 1990s started with HSPH’s Meir Stampfer and the Nurses’ Health Study. Their 1991 paper proudly declared:

Overall, the bulk of the evidence strongly supports a protective effect of estrogens that is unlikely to be explained by confounding factors. […] A quantitative overview of all studies taken together yielded a relative risk of 0.56 (95% confidence interval 0.50-0.61), […] the relative risk was 0.50 (95% confidence interval 0.43-0.56).

Stampfer et al. believed they’d found a 50% risk reduction for coronary heart disease (CHD) in the NHS data. When this hypothesis was tested in a randomized controlled trial (RCT), CHD risk actually increased by 30%. They weren’t just off, they were completely and totally wrong.

In fact, the therapy was causing so many people to become sick that in 2002 the trial was stopped early by the safety monitoring board. Not only did actual CHD risk measure at +30%, invasive breast cancer came in at +26%, stroke at +41% and pulmonary embolism at a terrifying +113%. Remember, these are not estimates; these numbers represent actual clinical diagnoses from a controlled trial.
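
Laid out as relative risks (RR below 1.0 means protection, above 1.0 means harm), the size of the reversal is hard to miss. A small sketch using only the numbers above:

```python
# Observational estimate vs. randomized trial results, as relative risks.
nhs_observational_chd_rr = 0.50  # 1991 NHS estimate: 50% CHD risk reduction

trial_results_rr = {
    "coronary heart disease": 1.30,  # +30%
    "invasive breast cancer": 1.26,  # +26%
    "stroke": 1.41,                  # +41%
    "pulmonary embolism": 2.13,      # +113%
}

print(f"Observational CHD estimate: RR {nhs_observational_chd_rr}")
for outcome, rr in trial_results_rr.items():
    print(f"Trial, {outcome}: RR {rr} ({rr - 1:+.0%})")
# The observational data predicted half the CHD risk; the trial
# measured roughly a third more. Wrong in direction, not just degree.
```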

Now, years later, Stampfer and the HSPH have yet another paper using the same NHS data, this time telling us eating red meat will shorten our lives. Are we supposed to believe them this time because the numbers are so much smaller and less significant?

No.


Are Lavender and Tea Tree Oils estrogenic?

There’s a bit of a monster pesticide-resistant lice epidemic going around New York City; it seems like every school near us is infested. Last week, a third of my younger daughter’s class had lice. We didn’t.

Besides regular comb outs and wearing their hair up or in braids, we’ve been applying aromatic oils to our daughters’ heads before school. The mix of oils was recommended by a friend:

  • Tea tree, lemongrass & lavender in apricot kernel oil (25% dilution)
  • Put a couple of drops on your hands, rub palms together & then pat it on the hair.
  • Avoid contact with skin
  • Definitely avoid contact with eyes!

I mentioned the oils to some other parents and emailed the recipe to the class. This morning the classroom smelled like tea tree oil.

But one parent mentioned some concern about the estrogenic qualities of lavender and tea tree oils. This was troubling, so I did some research.

From what I found, the concern about tea tree and lavender originated with this 2007 observational study published in The New England Journal of Medicine (NEJM):

Prepubertal Gynecomastia Linked to Lavender and Tea Tree Oils

NEJM received several critical letters about the study, which are also worth reading.

This was foremost an observational study, and the authors’ conclusions seem loosely drawn from the results of three cases. (Gynecomastia is enlarged breast tissue in males.) From their abstract:

We investigated possible causes of gynecomastia in three prepubertal boys who were otherwise healthy and had normal serum concentrations of endogenous steroids. In all three boys, gynecomastia coincided with the topical application of products that contained lavender and tea tree oils. Gynecomastia resolved in each patient shortly after the use of products containing these oils was discontinued.

The first issue with the study is that not all three cases were exposed to both tea tree and lavender oils; here’s what they mention in the text:

  • patient 1: “healing balm” containing lavender oil
  • patient 2: regular use of styling gel and shampoo containing tea tree and lavender oils
  • patient 3: lavender scented soap and occasional lavender lotions

Only one of the three observed subjects even recorded contact with tea tree oil.

As pointed out in the letters, there’s virtually no mention of dietary factors. Soy is known to have estrogenic effects and processed soy products are in everything these days.

Experiments using breast cancer cells to measure estrogenic effects seem only vaguely applicable to gene expression in boys.

Both oils stimulate ERE-dependent luciferase activity in a dose-dependent manner, with the maximum activity observed at 0.025% volume per volume (vol/vol) for each oil, corresponding to approximately 50% of the activity elicited by 1 nM 17β-estradiol. Treatment with higher doses of the oils was cytotoxic.

The most extreme numbers were collected at the maximum oil dose the cells could tolerate before dying of toxicity. I have no idea what that dosing would mean for a human, but I suspect there’d be a significant physical reaction well before that point.

Presenting their findings as “Average fold increase above control” without the actual numbers can be suspect. An increase from 0.02 to 0.06 is a three-fold increase, but still relatively insignificant.
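
To see why fold-change framing can mislead, a trivial sketch: the same three-fold increase looks very different depending on the baseline it starts from (the 0.02 to 0.06 pair is from the paragraph above; the second pair is invented for contrast):

```python
# Fold change hides absolute scale: a 3x increase from a tiny baseline
# is still tiny, while a 3x increase from a large baseline is not.
def describe(before, after):
    print(f"{before} -> {after}: {after / before:.0f}-fold, "
          f"absolute change {after - before:g}")

describe(0.02, 0.06)  # 3-fold, absolute change 0.04
describe(20.0, 60.0)  # also 3-fold, absolute change 40
```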

Also, the delivery vehicle used in testing, dimethylsulfoxide, is suspected of having estrogenic effects:

Our data show that DMSO-induced significant increase in ERα, ERβ, Vtg and Zr-protein genes in a time-dependent manner. Indirect ELISA analysis showed a time-specific effect of DMSO. The use of DMSO as carrier solvent in fish endocrine disruption studies should be re-evaluated.

Most tea tree oil studies in PubMed seem to be related to its anti-fungal qualities or efficacy as a delivery vehicle for topical medications. I did find one study which looked at transdermal absorption of tea tree oil and found that very little passes through the skin:

…only a small quantity of TTO components, 1.1–1.9% and 2–4% of the applied amount following application of a 20% TTO solution and pure TTO, respectively, penetrated into or through human epidermis.

I believe this study looking at the effects of dietary soy proteins on tumor growth demonstrates greater estrogenic effects from dietary soy protein isolate than the tea tree oil study showed with direct in vitro exposure.


A brief history of barefoot running research

The indictment of contemporary running shoes in Born to Run is contributing to a radical transformation of the running world and athletic shoe industry. Chris McDougall’s book deserves credit for bringing barefoot running out of the shadows and into the mainstream, but challenging the conventional wisdom about athletic shoes is not a new idea.

While it never explicitly argues against shoes, Dr. Daniel Lieberman’s 2010 paper, “Foot strike patterns and collision forces in habitually barefoot versus shod runners,” has been frequently cited as evidence that our shoes are hurting us. The article, along with Nature’s summary review and the companion Barefoot Professor video, has, not undeservedly, garnered significant attention thanks to Dr. Lieberman’s role in Born to Run.

The Foot Strike paper focuses primarily on impact force generated by different foot-strikes, and also measures the incidence of various landings in several small sample running populations. These strike-plate graphs showing barefoot vs. heel-strike landings from the Barefoot Professor video clearly show the different impact forces and were very helpful in adjusting my own form:

Dr. Lieberman’s team at the Harvard Skeletal Biology Lab has also put together a companion Barefoot Running website which presents numerous videos and additional research describing the biomechanics of foot strike.

Back in 2001, physiotherapist Michael Warburton published a research paper titled Barefoot Running. His introductory paragraph lays out the entire case against shoes:

Well-known international athletes have successfully competed barefoot, most notably Zola Budd-Pieterse from South Africa and the late Abebe Bikila from Ethiopia. Running in bare feet in long distance events is evidently not a barrier to performance at the highest levels. Indeed, in this review I will show that wearing running shoes probably reduces performance and increases the risk of injury.

Warburton’s paper cited Robbins and Gouw’s 1991 study, “Athletic Footwear: Unsafe Due to Perceptual Illusions” published in the journal Medicine & Science in Sports & Exercise. This frequently referenced study appears to be among the first to clinically link the hyper-sensitive densely-packed nerve-endings in our feet with our body’s ability to properly accommodate impact stresses. The abstract goes so far as to close with this:

“…it might be more appropriate to classify athletic footwear as ‘safety hazards’ rather than ‘protective devices'”

Dr. Benno Nigg, professor of biomechanics and founder of the University of Calgary’s Human Performance Lab, has been researching and publishing papers about the kinetics of the lower leg for 40 years. (The man is a science-publishing machine.) Dr. Nigg published a number of papers starting in 2000 which examine plantar sensory input, impact forces and kinematics related to running barefoot and in shoes. Unfortunately, the papers I most wanted to read were only freely available as abstracts.

Dr. Nigg’s work on muscle tuning has proposed a connection between the reaction of nerves in our feet and muscle pre-activation, to reduce impact force and “soft-tissue vibration” while traversing various surfaces. This 2008 Science of Sport article discusses the detrimental biomechanical effect of motion control shoes and orthotics based on Dr. Nigg’s theories.

On his Science of Running site, Steve Magness recently summarized Dr. Nigg’s muscle tuning theory as it relates to running:

An example of [muscle tuning] can be seen with barefoot running, the diminished proprioception (sensory feedback) of wearing a shoe negates the cushioning of the shoe. Studies using minimal shoes/barefoot have shown that the body seems to adapt the impact forces/landing based on feedback and feedforward data. When running or landing from a jump, the body takes in all the sensory info, plus prior experiences, and adjusts to protect itself/land optimally.

Years prior to Dr. Lieberman’s research, Dr. Nigg’s studies or Warburton’s paper, in the mid-1980s at the latest, Olympic runner Gordon Pirie’s book “Running Fast and Injury Free” unflinchingly blamed “overstuffed, wedge-heeled” running shoes for the high rate of running injuries.

Pirie described cushioned running shoes as “orthopedic running boots.”

“The human foot is the result of millions of years of evolution,” to quote Mr. Pirie again. One quarter of the bones in our bodies are in our feet; that level of complexity doesn’t happen without a reason. Running shoes as we’ve come to know them have only existed for a few decades. The big athletic shoe companies have finally, if not caught on, then at least recognized there’s a lot of money to be made with minimalist shoes. Either way, our feet win.