
Anti-GMO study is appropriately dismissed as biased, poorly-performed


The anti-GMO study released late last week has raised so many bad-science red flags that I’m losing count. Orac and Steve Novella have both discussed fatal flaws in the research, and the New Scientist has discussed the researchers’ history of inflating insignificant results into hysterical headlines. All this new paper seems to prove is that these researchers have become more savvy at manipulating press coverage. The result of this clever manipulation of the press embargo, and of news-release stenography by the press, is predictable. The internet food crackpot army has a bogus paper to flog eternally, with Mike Adams predicting the end of humanity and Joe Mercola hailing this as the bestest study of GMO Evar. Lefty publications that are susceptible to this nonsense, like Mother Jones, have run largely uncritical coverage repeating the researchers’ bogus talking points. It’s a wonder Mark Bittman, organic food booster and anti-GMO half-wit, hasn’t yet used it to back his assertion that the evidence against GMO is “damning”. He substantiates that claim, by the way, by linking to an article without a single scientific citation, just links to crankier and crankier websites.

Orac and Steve Novella do a good job dissecting many of the methodological flaws of this paper. Similarly, my read (or reads, since the paper is unnecessarily obtuse in its data presentation) is that it is so flawed as to be meaningless.

Critically, the spontaneous tumor rates in this rat strain (Sprague-Dawley) were well established in the pre-GMO era. If anything, this paper is exceptional for the low rate of tumor formation in its controls compared with historical controls and what is known about tumor formation in the strain.
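To make that concrete, here’s one way to ask whether a control group is unusually tumor-free relative to the strain’s historical record: an exact binomial test. This is an illustrative sketch only; the ~70% historical rate and the counts below are assumptions for the example, not figures from the paper.

```python
from scipy.stats import binomtest

# Hypothetical numbers for illustration -- not taken from the paper.
# Suppose the historical spontaneous tumor rate in this strain is ~70%
# over a two-year lifespan, and a control group of 10 rats shows only
# 3 tumors. Is that control group unusually "clean"?
historical_rate = 0.70
controls_with_tumors = 3
control_group_size = 10

result = binomtest(controls_with_tumors, control_group_size,
                   historical_rate, alternative="less")
print(f"p-value = {result.pvalue:.3f}")
# A small p-value would suggest the controls had fewer tumors than the
# strain's historical record predicts -- which makes any "excess" in
# the treated groups look artificially large.
```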

Second, the sample groups were small and the number of parameters measured was large, almost guaranteeing that false-positive events would outnumber true-positive events. Take a data set like the one they generated, perform subgroup analysis, and false-positive yet statistically significant events are going to jump out at you like mad. The researchers do indeed seem to engage in this behavior, selecting a single time point at which to present their measurements of various biomarkers rather than showing them over time; this is particularly notable in figure 5 and table 3. It is a sign of sloppy thinking, sloppy experimental design, and a failure to understand Bayesian probabilities. If you test 100 variables at random at a threshold of p < 0.05, you should expect roughly five false-positive “statistically significant” results even when there is no actual difference between groups. The pre-test probability of an effect being meaningful should determine whether a test is performed and reported, and this “fishing trip” kind of experiment should only be the beginning of the process. It’s simply not possible to know the relevance of any of these ostensibly significant results found by subgroup analysis until they are subsequently studied as the primary endpoints of a study.
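A minimal simulation makes the point. This is a sketch in Python: the group size of ten matches the study’s arms, but everything else here is an illustrative assumption, not data from the paper.

```python
import numpy as np
from scipy.stats import ttest_ind

# The multiple-comparisons trap: two groups drawn from the SAME
# distribution (zero true effect), measured on 100 independent
# variables, n = 10 per group.
rng = np.random.default_rng(42)
n_per_group, n_variables = 10, 100

false_positives = 0
for _ in range(n_variables):
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(0, 1, n_per_group)  # identical distribution
    _, p = ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_variables} variables 'significant' "
      "despite zero true effect")
# Expect ~5 spurious hits per run. Pick the time point or subgroup
# where they happen to cluster, and a null result looks like a finding.
```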

The histology in figure 3 demonstrates nothing, and the scary rat tumor pictures notably lack a control rat, even though we know the controls develop tumors too. So why aren’t any control tumors shown? Given the concern for bias throughout this paper, I find the entire figure to be of no value: it’s purely qualitative and highly susceptible to bias. Histology slides should be used to show something meaningful in terms of large qualitative effects, unusual structure, or a specific pathology. If you’re going to make claims about differences between groups based on histology, you still have to subject the slides to rigorous, blinded analysis. I’ve done it, published on it, etc. It can be done. Worse, we know the controls have tumors, and that in this strain tumors are frequent. Why are the control samples always completely normal, if not for biased selection of samples? Don’t show me one kidney, show me all the kidneys. Don’t show me one control slide, show me ten, or ideally the results of a blinded quantitative evaluation of tumor incidence or histopathologic grade.
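For a sense of what the rigorous version looks like: a blinded pathologist scores every section as tumor/no-tumor, and the group counts are compared directly, for instance with Fisher’s exact test. A sketch with entirely hypothetical counts:

```python
from scipy.stats import fisher_exact

# Blinded tumor/no-tumor scores for every kidney section, tallied by
# group. All counts are hypothetical, purely to show the comparison.
#                 [tumor, no tumor]
control_counts = [6, 4]   # e.g. 6 of 10 control kidneys with tumors
treated_counts = [8, 2]   # e.g. 8 of 10 treated kidneys with tumors

odds_ratio, p = fisher_exact([control_counts, treated_counts])
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
# With spontaneous tumors this common in both groups, a pair of
# cherry-picked slides proves nothing; only full blinded counts can.
```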

Similarly, with figure 4 I don’t see a significant difference between the fields examined, and looking back at previous papers from the same group, none of their ultrastructural evaluations of cells and animals exposed to glyphosate or glyphosate-resistant feed appears consistent or convincing. I don’t think many people use electron microscopy (EM) as an assay anymore, but having performed it, I can say it’s very hard to draw anything quantitative or meaningful from it. You’re going to find something in every grid; it largely serves as a qualitative evaluation of cellular ultrastructure. I’m very wary of someone declaring, upon presentation of a couple of EM slides, that two groups of cells are “different”, and I’m confused by the assertion that the areas they describe represent aggregates of glycogen. What is the significance of glycogen being more dispersed in one cell versus another? You found some residual bodies; so what? They’re everywhere. Is this really a consistent effect? Show me numbers: summaries from ten grids. Is there any clinical significance to such a change? The answer is no. If I were a reviewer, I would have told them to junk the figure unless they wanted it to serve as evidence of no difference between the cells.
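Here is the sort of summary that claim would need: blinded counts of the structures of interest per grid, roughly ten grids per group, compared with a simple nonparametric test. Every number below is invented for illustration:

```python
from scipy.stats import mannwhitneyu

# Blinded per-grid counts of glycogen aggregates (or residual bodies),
# ~10 grids per group. All values hypothetical.
control_grids = [4, 7, 5, 6, 3, 8, 5, 4, 6, 5]
treated_grids = [5, 6, 8, 4, 7, 5, 6, 9, 4, 6]

stat, p = mannwhitneyu(control_grids, treated_grids,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
# If the distributions overlap like this, "the cells look different"
# is an impression, not a finding.
```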

In general, the paper is confusing and poorly written. Others have pointed out that figure 1 is unnecessarily complex and that a better representation of the same data shows no consistent pattern of effect. Given the sample sizes and effect sizes, the likelihood is that the researchers are studying noise. There is simply no signal: if there were, there would be a consistent dose-response effect, rather than the “low dose” group in many cases having more tumors than the “high dose” groups. Without error bars it’s hard to be sure, but my read of figure 1, in particular the inset panels, is that there really is no difference between any of the groups in terms of tumor formation.
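One quick way to check for the dose-response signature a real toxic effect should leave is a rank correlation between dose and tumor count across groups. In this sketch the 11/22/33 percent feed levels match the study design, but the tumor counts are invented to mimic the non-monotonic pattern described above:

```python
from scipy.stats import spearmanr

doses = [0, 11, 22, 33]      # percent GMO in feed, per the study design
tumor_counts = [5, 8, 6, 7]  # tumors per group -- hypothetical values

rho, p = spearmanr(doses, tumor_counts)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A weak, non-monotonic correlation is what noise looks like; a real
# toxic effect should climb with dose.
```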

We also have to consider that, in the end, this whole idea is kind of dumb. Is there really a plausible explanation for how eating feed containing an enzyme that’s resistant to glyphosate generates more tumors in rats, and how exposure to glyphosate does the same? Why would this protein be tumorigenic? And if the Roundup Ready crop carries residual glyphosate, and that’s the explanation for the similarity between groups, then aren’t you just admitting you’ve done a completely uncontrolled analysis of exposure to the compound? Couldn’t and shouldn’t residue levels have been assayed? Isn’t this whole study kind of crap?

This paper should not have passed peer review; its publication represents a failure by the editors and reviewers to adequately vet it.

