Randall Hoven grossly misrepresented a recent article in the scientific literature: Sander Nieuwenhuis, Birte U. Forstmann & Eric-Jan Wagenmakers, “Erroneous analyses of interactions in neuroscience: a problem of significance,” Nature Neuroscience 14, 1105–1107 (2011), doi:10.1038/nn.2886.
Here is what Randall Hoven had to say:
Do you know how many doctors, some literally brain surgeons, made an important statistical mistake in their studies? Half of them. These were studies trying to prove that some medical treatment was actually effective.
Yes, half the studies showing that some medical treatment is effective are in error. We just found that out this week (at least for neuroscience journals).
This is either stupid or dishonest. I cannot tell anymore. Really: stupid? Dishonest? Dishonest? Stupid? It is so hard.
Randall quoted a secondary news article, “Study finds statistical error in large numbers of neuroscience papers” by Bob Yirka (PhysOrg, September 13, 2011). The quote is:
Sander Nieuwenhuis and his associates from the Netherlands have done a study on one particular type of statistical error that apparently crops up in an inordinately large number of papers published in neuroscience journals. In their paper, published in Nature Neuroscience, they claim that up to half of all papers published in such journals contain the error.
Well, Bob is a jerk (see below). So, does that let Randall off the hook? Only if he was too lazy to read the original article and expects his readers to excuse his grossly incompetent reading. Probably that is a safe assumption, since none of these Bozos seem able to read a scientific paper.
The authors of the Nature Neuroscience article actually had a fairly modest goal: to teach that, “when making a comparison between two effects, researchers should report the statistical significance of their difference rather than the difference between their significance levels.”
So, there is some event, E, and it is the possible result of variables A and B. You need to look not only at the independent effects of A and B on E, but also at the interaction A×B. (I offer the crude example of “attractiveness of date” = Horniness + Beer + (Horniness × Beer).) Each of the three has a statistical probability, p, and by conventional practice, only variables with a probability less than five percent, p < 0.05, are called “significant.” The paper’s authors correctly place greater emphasis on the interaction effect, (Horniness × Beer). So, any paper they reviewed that didn’t make enough effort to examine the interaction effect was rated as “ERROR, Will Robinson, ERROR!” (Actually, I fully agree.) But here is the short-form conclusion from the original article: “Are all these articles wrong about their main conclusions? We do not think so.”
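To see the mistake in miniature, here is a minimal sketch in Python (the numbers are invented and scipy is assumed available; none of this comes from the paper itself). The wrong approach compares the two p-values; the right approach tests the difference between the effects, which is what a proper interaction test does:

```python
# A toy sketch of the error, with invented numbers (not from the paper).
# Suppose an effect is estimated in two conditions -- an estimate b and
# its standard error se in each -- as in the paper's mouse example.
from math import sqrt
from scipy.stats import norm

b1, se1 = 2.0, 0.9   # condition 1
b2, se2 = 1.4, 0.8   # condition 2

p1 = 2 * norm.sf(abs(b1 / se1))   # ~0.026 -> "significant"
p2 = 2 * norm.sf(abs(b2 / se2))   # ~0.080 -> "not significant"

# WRONG: "p1 < 0.05 but p2 > 0.05, therefore the conditions differ."
# That compares the significance levels, not the effects themselves.

# RIGHT: test the difference between the effects directly.
z = (b1 - b2) / sqrt(se1**2 + se2**2)
p_diff = 2 * norm.sf(abs(z))   # ~0.62: no evidence the effects differ

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, p(difference) = {p_diff:.3f}")
```

With those made-up numbers, one condition clears p < 0.05 and the other doesn’t, yet the direct test of the difference is nowhere near significant. That is the whole point of the Nature Neuroscience article.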
Before I continue with the stupid, dishonest, or lazy Mr. Hoven, I want to just spend a few electrons on what the real scientific paper had to say. It is interesting. Most researchers in medicine like to keep things very simple. I was a professor of medicine, but not a clinician; I am a scientist. The sort of people who make good clinical workers (or at least good medical students) mostly don’t like things that are abstract. So, I found that presenting research results as a series of “if … then …, if not … then …” decisions was very successful.
The “keep it simple” habit extends into the presentation of research statistics, to the overall detriment of the research. In the actual, original research this right-wing, fundamentalist jerkwad has mangled, out of thousands of published articles from five major journals, only 513 even fit the statistical selection criteria. Of these, in only
… 157 of these 513 articles (31%), the authors describe at least one situation in which they might be tempted to make the error. In 50% of these cases (78 articles), the authors used the correct approach: they reported a significant interaction. This may be followed by the report of the simple main effects (that is, separate analyses for the main effect of training in the mutant mice and control mice). In the other 50% of the cases (79 articles), the authors made at least one error of the type discussed here: they reported no interaction effect, but only the simple main effects, pointing out the qualitative difference between their significance values (for example, vehicle infusions were associated with a statistically significant increase in freezing behavior; muscimol infusions were not associated with a reliable increase in freezing behavior).”
Let’s review those numbers: thousands of papers published, and only 513 even had data that fit the topic. Of that fraction, only 157 presented a situation that called for an A×B interaction analysis, and of those, the correct analysis was used half the time. So at MOST, about 15% of studies (79 of 513) in a very narrow subdiscipline of neuroscience used statistical methods weaker than those recommended by the study’s authors.
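If you want to check my arithmetic, here are the paper’s reported counts run through once more (nothing but the numbers quoted above):

```python
# Recomputing the fractions from the counts quoted from the paper.
eligible = 513   # articles that fit the selection criteria at all
at_risk  = 157   # articles describing a situation where the error was possible
erred    = 79    # articles that made the error at least once

print(f"at risk, of eligible: {at_risk / eligible:.1%}")  # ~30.6%, the paper's 31%
print(f"erred, of at risk:    {erred / at_risk:.1%}")     # ~50.3%, the "half"
print(f"erred, of eligible:   {erred / eligible:.1%}")    # ~15.4%, my ~15%
```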
That is way fucking better than I would have expected.
What Randall Hoven stupidly wonders is,
“So how much can we trust an NAS study that is a study of studies, when half of those underlying studies contain a major error?”
So, OK. Hoven fails to do even a minimal check on sources. Even a dumb undergraduate should know that you do not cite papers you have never read. The article’s real position, which I quoted above, was on the first page of the Nature Neuroscience article, and it did not require any statistical or scientific background to understand. Basic reading comprehension would have been adequate to grasp:
“Are all these articles wrong about their main conclusions? We do not think so.”
But he expects us to apply his ignorant version of the Nature article to the NAS Institute of Medicine study on vaccination. And he concludes that his kids (and his readers’ kids) shouldn’t be vaccinated.
What a dumb ass.