Trust in Science

The other day John Gruber linked to Alan Boyle’s MSNBC column reporting on how conservatives have lost confidence in science. Gruber adds “No other trend has done more harm to the U.S. than this one.” I generally cringe at Gruber’s political posts, and I can think of a half-dozen more destructive trends off the top of my head, but that’s all beside the point.

The paper itself is locked behind the American Sociological Review paywall but the full text PDF is available from the American Sociological Association: Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010.

The MSNBC story, and the dozens of others that popped up to echo the partisan spin, present the paper in a way that is impossible to argue against. Science says people who don’t trust science can’t be trusted to comment on science. Disagree at your own peril.

Ignorance more frequently begets confidence than does knowledge.
— Charles Darwin, Descent of Man, 1871

I didn’t dig in too deeply, but the paper itself seemed to carry a partisan viewpoint that the author either wasn’t aware of or couldn’t step outside. His interpretation of Table 3’s data was telling:

These results are quite profound, because they imply that conservative discontent with science was not attributable to the uneducated but to rising distrust among educated conservatives. Put another way, educated conservatives appear to be more culturally engaged with the ideology and […] more politically sophisticated.

Parse that. The author expected conservative distrust of science to result from a lack of education. Instead, the most educated conservatives had the strongest distrust of science. When the presumption of stupidity didn’t pan out, the author wrote off the correlation as resulting from ideological brainwashing. Because doubt and skepticism couldn’t possibly come from a place of knowledge or experience. Those graduate degrees are meaningless without the appropriate political affiliation.

Taking political ideology out of the picture, what’s potentially more troubling is that the total population’s overall confidence in science was only 43.6%. However, if we’re supposed to just accept “the cultural authority of science” without question, I’m not certain that’s a bad thing.

The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.
— Bertrand Russell, from The Triumph of Stupidity, 1933

Scientific knowledge is not simply a collection of immutable laws. Science is not finished. Accepting theories without question is dogma, not science.

It is imperative in science to doubt; it is absolutely necessary, for progress in science, to have uncertainty as a fundamental part of your inner nature. To make progress in understanding we must remain modest and allow that we do not know. Nothing is certain or proved beyond all doubt.
— Richard Feynman, Caltech Lunch Forum, 1956

Two days earlier, Physorg ran an article about a dramatic increase in retractions in scientific journals. Has modern science become dysfunctional? (via Tuck & John)

In the past decade the number of retraction notices for scientific journals has increased more than 10-fold while the number of journal articles published has only increased by 44%. While retractions still represent a very small percentage of the total, the increase is still disturbing because it undermines society’s confidence in scientific results and in the public policy decisions that are based on those results.

Gary Taubes’ recent definition of science is worth repeating:

Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis — force x causes observation y — and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis was right.

For as long as I can remember, I’ve loved science. But I’m skeptical of science because it demands nothing less. Scientific knowledge is the beauty that remains after wonder is stripped bare by doubt.

The important thing is not to stop questioning. Curiosity has its own reason for existing.
— Albert Einstein, as quoted in LIFE magazine, 1955


link: Apr 02, 2012 8:15 am
posted in: misc.

Incidental Running

“In the early morning you could encounter whole family groups of Mayan Indians, kids to grans, trotting heavily laden along the mountain trails on their way to market.”
— Recounting the environment of the Tarahumara in Running Times

My free time has been a disaster lately. Between construction on the house, Michelle being swamped at work and general parenting insanity, I haven’t been finding much time to run.

But I’ve been running. Everywhere.

To and from school, the office, the market, across the street. Everywhere. In whatever I’m wearing (usually jeans) and carrying a bag of some sort.

Out of curiosity, I started logging these runs. I haven’t been updating Dailymile because I don’t want to flood my stream, but I’ve decided to start recording summaries totaling up a day or two of incidental runs.

While my overall mileage is down, I’m feeling better physically than I have in years–no aches and no complaints. Some of this comes from focused strengthening and improved form, but I suspect it’s simply from moving constantly and not overloading anything.

The only drawback I’ve noticed is that I’m less inclined to go out for a late run on days when I’ve already gotten in a few miles. This might make marathon training as much a discipline challenge as anything else.

Even though I only managed five deliberate runs in March, I still pulled in nearly 46 miles for the month. For the past two weeks, counting weekdays only, I ran 19 times for a total of 14 miles (roughly 0.75 miles per run). Here’s a table showing what all those runs looked like:


Distance (miles)


Pace (per mile)














































































link: Apr 01, 2012 11:40 pm
posted in: misc.

Red meat and bad science

The news lit up last week with a study purportedly showing that eating red meat will kill us. The story was immediately picked up by all the major news organizations and seemingly everyone was talking about it. I found this troubling because I’ve come to believe quality red meat is not only one of the best, most nutritious foods we can eat, but also central to the very existence of Homo sapiens. I was also worried because my mother was probably watching CNN and emptying her meat freezer into the garbage.

Media coverage was immediately hyperbolic as outlets rushed to reprint Harvard’s press release. The stories I saw linked most frequently were from the BBC, CNN and NPR. I’m more than a little suspicious of how quickly this story took off and how broadly it spread.

Whenever science is in the news, my first reaction is to try and dig up the actual study and see how far off the reporting was. Thankfully, the full text of the article is freely available: Red Meat Consumption and Mortality

The study itself is garbage. Actually, it’s barely a study; it’s a spreadsheet exercise. Author and information bias are rampant, the conclusions are suspect and the claims are exaggerated.

This post is divided into three parts:

  1. Untwisting the data – Takes the data at face value and finds inconsistencies and questionable conclusions.
  2. The data is meaningless – Looks at the integrity of the data and finds it absent of accuracy or rigor.
  3. Science starts here – Addresses the central failing: This study is a hypothesis based on a trivial association found in questionable data. To interpret any of this as conclusive fundamentally misunderstands the noble practice of scientific inquiry.

1. Untwisting the data

Reading the study, I didn’t take long to find a number of confounding variables. Starting with Table 1, which breaks down the sample sets into quintiles by meat consumption, several markers jumped out. Why meat? No reason is given.

In the Health Professionals data set, the fifth quintile, those most likely to die, were also nearly 3x more likely to be smokers. They consumed 70% more calories, were less likely to use a multi-vitamin, drank more alcohol and, playing statistical games, were almost twice as likely to be diabetic (3.5% vs 2%). Amusingly, high cholesterol had an inverse correlation; those reporting the highest cholesterol had the lowest mortality.

The study’s conclusion could just as easily have been “diabetic smokers who don’t exercise or take multivitamins and eat a lot show increased risk of death.”

A number of thoughtful responses to this paper have been posted in the past few days.

Denise Minger noticed many of the same confounders that I did and graphed them:

Zoë Harcombe looked at the numbers and found the researchers’ conclusions didn’t match the data.

The article says that the multivariate analysis adjusted for energy intake, age, BMI, race, smoking, alcohol intake and physical activity level. However, I don’t see how this can have been done–certainly not satisfactorily.

She then went on to re-plot their data and found that death rates actually decreased in the middle quintiles–more meat consumed resulted in less mortality. I made this chart from her data:

According to this interpretation, increasing all meat consumption from baseline initially reduced mortality. Also note that mortality in the fourth quintile, despite higher BMI, more smokers and all the rest, shows essentially the same risk level as the first.

Her summary notes the study’s results are based on very small numbers:

The overall risk of dying was not even one person in a hundred over a 28 year study. If the death rate is very small, a possible slightly higher death rate in certain circumstances is still very small. It does not warrant a scare-tactic, 13% greater risk of dying headline–this is “science” at its worst.

Zoë also notes a potential conflict of interest:

one of the authors (if not more) is known to be vegetarian and speaks at vegetarian conferences[ii] and the invited ‘peer’ review of the article has been done by none other than the man who claims the credit for having turned ex-President Clinton into a vegan – Dean Ornish.

Ornish and Clinton are a whole other essay.

Marya Zilberberg posted another takedown of the numbers, including this analysis:

The study further reports that at its worst, meat increases this risk by 20% (95% confidence interval 15-24%, for processed meat). If we use this 0.8% risk per year as the baseline, and raise it by 20%, it brings us to 0.96% risk of death per year. Still, below 1%. Need a magnifying glass? Me too. Well, what if it’s closer to the upper limit of the 95% confidence interval, or 24%? The risk still does not quite get up to 1%, but almost. And what if it is closer to the lower limit, 15%? Then we go from 0.8% to 0.92%.
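Her point about absolute versus relative risk is easy to check. Here’s a quick sketch in Python (the ~0.8% annual baseline and the 15–24% confidence interval come straight from the quoted passage):

```python
# Annual baseline risk of death from the quoted analysis (~0.8%)
baseline = 0.008

# Reported relative increases for processed meat: lower bound of the
# 95% confidence interval, point estimate, and upper bound
for relative_increase in (0.15, 0.20, 0.24):
    absolute = baseline * (1 + relative_increase)
    print(f"+{relative_increase:.0%} relative -> {absolute:.2%} absolute annual risk")
```

However you slice it, the absolute risk of death stays below 1% per year, which is exactly Zilberberg’s point.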

2. The data is meaningless

So there’s all of that, but it’s almost not worth arguing about. The study’s primary data sets have been shown to be wildly inaccurate.

As noted in the original paper’s abstract:

Diet was assessed by validated food frequency questionnaires [FFQs] and updated every 4 years.

Four years? What did you have for lunch yesterday? How about the Thursday prior? What was dinner last October 12th? Looking at the actual 2010 survey forms (HPFS and NHS, basically the same), the questions are even more absurd. Over the past year, how frequently did you consume 1/2 cup of yams or sweet potatoes? Kale? I’m pretty mindful of what I eat, and I don’t think I could answer those questions accurately for the past two weeks, let alone an entire year. Anything beyond 5-6 servings per week is going to be a wild guess.

The two cohort groups were the Nurses’ Health Study (NHS), which is all female, and the Health Professionals Follow-Up Study (HPFS), which is all male. The HPFS has a helpful Question Index on their site, though the collected data appears to be even spottier than I would have guessed. Quite a few questions and topics have simply come and gone over the years; are they just making it up as they go along?

Is this really the pinnacle of epidemiological data from one of America’s premier universities?

The study’s authors did include a citation to a paper (their own) justifying the validity of FFQ data in the NHS. Walter Willett and Meir Stampfer, both of the Harvard School of Public Health, are authors on both papers.

In response to a similar meat-phobic article a few years ago, Chris Masterjohn looked at the accuracy of this particular validation study and found it lacking:

the ability of the FFQ to predict true intake of meats was horrible. It was only 19 percent for bacon, 14 percent for skinless chicken, 12 percent for fish and meat, 11 percent for processed meats, 5 percent for chicken with skin, 4 percent for hot dogs, and 1.4 percent for hamburgers.

An Australian validation study based on NHS found similar discrepancies between FFQ and food diary intake. Fruits and vegetables were overestimated while bread, poultry and processed meats were underestimated. Curiously, in the Australian study, meat was overestimated. (see Table 1)

Another particularly compelling paper out of Cambridge tested FFQ validity by comparing sampled biomarkers against FFQ and food diary intake data. From the results:

There were strong (P < 0.001) associations between biomarkers and intakes as assessed by food diary. Coefficients were markedly attenuated for data obtained from the FFQ, especially so for vitamin C, potassium and phytoestrogens

This paper deeply undermined the credibility of FFQs and clearly struck a nerve. Unsurprisingly, Willett was defensive of his data and deflected by attacking the study’s statistical modeling methodology in the same journal.

From an outsider’s view, it seems like Willett, Stampfer and the Harvard School of Public Health are actively subverting the entire scientific journal publishing ecosystem to advance their own causes and careers. They get their names on hundreds of published papers and cross-reference their own work repeatedly, thereby inflating their citation scores. Then they put out press releases touting themselves as the most-cited scientists of the previous decade.

3. Science starts here

Gary Taubes wrote a lengthy but worthwhile response, Science, Pseudoscience, Nutritional Epidemiology, and Meat. Throughout his career, Taubes has shown himself to care deeply for the practice and integrity of science. His essay starts out by addressing HSPH’s record:

every time in the past that these researchers had claimed that an association observed in their observational trials was a causal relationship, and that causal relationship had then been tested in experiment, the experiment had failed to confirm the causal interpretation — i.e., the folks from Harvard got it wrong. Not most times, but every time. No exception.

By example, he defines exactly why this study is a failure. At best, this study is a hypothesis; no conclusions can be drawn from tiny statistical correlations:

Science is ultimately about establishing cause and effect. It’s not about guessing. You come up with a hypothesis — force x causes observation y — and then you do your best to prove that it’s wrong. If you can’t, you tentatively accept the possibility that your hypothesis was right. […] Making the observations and crafting them into a hypothesis is easy. Testing them ingeniously and severely to see if they’re right is the rest of the job — say 99 percent of the job of doing science, of being a scientist.

The problem with observational studies like those run by Willett and his colleagues is that they do none of this. That’s why it’s so frustrating. The hard part of science is left out and they skip straight to the endpoint, insisting that their interpretation of the association is the correct one and we should all change our diets accordingly.

Perhaps most interesting is Taubes’ explanation of Compliance Bias. Noting that the survey period covers the 1990s, an era of skinless chicken and egg whites, he points out an obvious problem with the data:

when we compare people who ate a lot of meat and processed meat in this period to those who were effectively vegetarians, we’re comparing people who are inherently incomparable. We’re comparing health conscious compliers to non-compliers; people who cared about their health and had the income and energy to do something about it and people who didn’t. And the compliers will almost always appear to be healthier in these cohorts because of the compliance effect if nothing else.

J Stanton wrote a good explanation of observational studies and their faults. He also points out that the Hormone Replacement Therapy debacle of the 1990s started with HSPH’s Meir Stampfer and the Nurses Health Study. Their 1991 paper proudly declared:

Overall, the bulk of the evidence strongly supports a protective effect of estrogens that is unlikely to be explained by confounding factors. […] A quantitative overview of all studies taken together yielded a relative risk of 0.56 (95% confidence interval 0.50-0.61), […] the relative risk was 0.50 (95% confidence interval 0.43-0.56).

Stampfer et al. believed they’d found a 50% risk reduction for coronary heart disease (CHD) in the NHS data. When this hypothesis was tested in a randomized controlled trial (RCT), CHD risk actually increased by 30%. They weren’t just off, they were completely and totally wrong.

In fact, the treatment was making so many people sick that in 2002 the trial was stopped early by the safety monitoring board. Not only did actual CHD risk measure at +30%: invasive breast cancer came in at +26%, stroke at +41% and pulmonary embolism at a terrifying +113%. Remember, these are not estimates; these numbers represent actual clinical diagnoses from a controlled trial.
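To put the reversal in one place, here’s a quick sketch treating both estimates as relative risks (my framing; the 0.56 and +30% figures come from the sources above):

```python
# NHS observational estimate: relative risk of CHD on estrogen therapy
observational_rr = 0.56   # i.e. roughly 44% lower risk
# The 2002 RCT result, expressed the same way
trial_rr = 1.30           # i.e. roughly 30% higher risk

print(f"Observational: {1 - observational_rr:.0%} apparent risk reduction")
print(f"Trial:         {trial_rr - 1:.0%} actual risk increase")
# The observational estimate wasn't off by a few points; it pointed
# in the wrong direction entirely.
```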

Now, years later, Stampfer and the HSPH have yet another paper using the same NHS data, this time telling us eating red meat will shorten our lives. Are we supposed to believe them this time because the numbers are so much smaller and less significant?


Stoppers, change and transition plans

How many blog posts have started with some variation of “It’s been too quiet around here”, followed by an apology? Too many. This might be one of those, but I learned a long time ago not to apologize for creative lulls. The process is inherently too fickle and just can’t be counted upon.

Truth is, I’ve been writing a lot, I just haven’t posted anything. The reasons are kind of dumb, but mostly there are several “big” pieces getting in the way. I’ve somehow convinced myself that those pieces are foundational to putting other pieces in context. It sounds ridiculous, but there it is. Essentially, I’m creatively constipated, there’s lots in the pipe, but nothing’s getting out.

Nice metaphor there.

In my own head I’ve been calling these things “stoppers”. Until said whatever-it-is is completed, I can’t move onto the next one. I have a piece mostly written about this, but, of course, that too is stuck behind a few other pieces. Curiosity doesn’t schedule, and I find myself drifting off and starting numerous new projects before the old ones are completed.

This site is going to change soon, and somewhat radically. I’ve got a piece half-written about that too. I want to strip things down, remove the barriers (this is a theme) and get back the joy of making for the web; it’s not just about writing.

Welcome to the stopgap.

This is my near-term plan going forward, I’m not setting any time frames or due dates because there are just too many unknowns and outside interruptions. Right now, this is my intent, but things might always change.

I’ve been tired of WordPress for years, but felt stuck and was never quite sure what to do about it. The current plan is to switch to either Octopress or Jekyll. These should allow me the freedom to write when I feel like writing, hack when I feel like hacking, and create whatever crazy half-breed functional post I want– a current near-impossibility without substantially gutting WordPress.

Technically there’s nothing wrong with WordPress; it’s unarguably better than it’s ever been. But, as a full-fledged platform, it brings its own innate complexity. I don’t want to have to deal with that extra layer of middleware, I just want to make stuff.

But there’s one big stopper: Both Octopress and Jekyll are Ruby projects. A few years back, feeling burnt out and very tired of PHP, I sat down to learn either Ruby or Python. I chose Python.

I do know about the Hyde project, which started as a Python port of Jekyll, but I didn’t have much luck experimenting with it and the Octopress/Jekyll communities are much more active. Sure I could potentially jump in and contribute to Hyde, but I don’t have the personal bandwidth and would rather start out with a mostly-working framework instead of trying to hack around and fix another which doesn’t quite meet my needs.

As a way of tip-toeing into the Ruby garden, I’m rebuilding a friend’s site using Jekyll. It involves some backend hacking and is letting me get my hands dirty with a small data set.

In the meantime, I’ve started writing everything using Markdown. I first tried Markdown a very, very long time ago–2006-ish–but gave it up because I wasn’t sure it would stick. It stuck.

For implementation during the transition, I’m using the Markdown on Save plugin. This was written by one of the lead WordPress devs, so I trust it’ll work without screwing everything up. Also, since I’m dealing with a somewhat large dataset, the conditional formatting checkbox is a smart solution.

Finally, I’m trying to approach all my writing with an exit strategy. Once I’ve got something down, I immediately start thinking about how to finish it and get out. Editing is no longer just about clarity and polish, it’s a means of escaping from a death spiral.

There’s plenty more I’ve been doing which I’ve been remiss in sharing. Lots of research into health, nutrition, movement, anthropology, running, feet and a bunch of other stuff I’m forgetting. I’m looking forward to having my voice back.

Existential note: Just as I clicked publish, Safari decided to crash.

2012 Manhattan Half Marathon

This was not the triumph I’d been hoping for.

9:07am, about 6 miles in, 23°F on the CNN clock

I’d built this race up in my head quite a bit, convincing myself that this one would redeem last year’s injury-hampered race. Despite those thoughts, I wasn’t able to train adequately. It likely wouldn’t have mattered anyway; temperatures dropped suddenly and my body never had a chance to adjust to running below freezing. My left knee threw a cold-related tantrum near mile 8, tightening up and never letting go. The last few miles were something of a death march. These things happen sometimes, even with perfect training.

So why did I build this one up? After a fantastic running year in 2010, I went into 2011’s Manhattan Half with a mild foot ache and finished with a stress fracture. Probably two fractures, but the X-ray only showed one at the time. The spot where my foot gave out in 2011 has been haunting me ever since. Just past Cedar Hill behind the Met. 2012’s race was supposed to be when I confronted that demon and put it to rest.

The weather would have none of it. Saturday was the first snowstorm of the winter. Not a big storm, but cold, windy and with enough snow to mess things up a bit. Like the subway, but more on that later. NYRR switched the race to an unscored, non-competitive run; participants would get 9+1 credit whether we ran it or not. After the first lap one of the NYRR organizers was telling people to bag it at 7 miles. I didn’t. Again at 12 miles, an organizer said the last mile was too slippery and to stop early. I’d been fighting my stupid knee for too long to quit there, so again, I kept going. After two hours in a blizzard with a crap leg, this level of psychological torture was sort of existentially comical.

The race started out well; it was snowing, windy and 23°, but everyone who braved the elements was in a great mood. This was my first snow run of the year, so my footwear situation was untested. This seemed like too long a run, and likely too wet, to try huaraches and socks in the snow for the first time. I ended up wearing a pair of wool Injinji socks and Soft Star DASH moccasins. The combination seemed fine; I had a similar cold knee issue a few weeks ago in huaraches, so I don’t blame the shoes.

Running in snow is hard work. At the finish I heard someone say it was like running on sand. I didn’t think it was that bad, but my heart rate was significantly elevated the whole time. Trouble sleeping the night before also didn’t help. Still, I didn’t “bonk” or run out of energy, and had my knee cooperated, I don’t think I would have.

Even after finishing with my worst Half Marathon time ever, the day just wouldn’t let up. Thanks to subway and bus troubles, I ended up finishing the morning with a 1.5 mile slog across 14th St. By the time I got home my feet were soaked, I was very cold and very tired.

But in the end, none of that mattered. It was an insane, amazing morning, and while not the triumph I was hoping for, it was a triumph nonetheless.

I’m looking forward to doing it again–and better–in 2013.

Postscript: Two days later I found myself running along the East River, no shirt, no shoes, and smiling in the sun. This has been a crazy winter.

