Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which result in a varying impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, and the space around neurons doesn’t have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I’ll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. Either way, some of the studies will have been shown to be “wrong”, but that is beside the point. They will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
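To make the attenuation concrete, here is a minimal sketch of the power attenuation of a first-order low-pass filter – a toy stand-in for a frequency-dependent impedance, not either of the actual published models; the cutoff frequency is an arbitrary illustrative value, not a measured tissue property:

```python
import math

def lowpass_power_gain(f_hz, fc_hz=40.0):
    """Power attenuation |H(f)|^2 of a first-order low-pass filter.

    A toy stand-in for a frequency-dependent impedance: fc_hz is an
    arbitrary illustrative cutoff, not a measured tissue property.
    """
    return 1.0 / (1.0 + (f_hz / fc_hz) ** 2)

# Power at low frequencies passes almost untouched; above the cutoff
# it falls off steeply, as in the measured spectra.
for f in (1.0, 10.0, 100.0, 1000.0):
    db = 10 * math.log10(lowpass_power_gain(f))
    print(f"{f:7.1f} Hz -> {db:7.2f} dB")
```

Whatever its physical cause turns out to be, this is the qualitative shape the two competing theories are both trying to explain.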

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are the result of publication bias. High-profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high-profile journals in order to progress in their careers (in some cases even just to graduate), so they are incentivised to fudge statistics, fish for p-values, or simply not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or unknown to many study designers). Pharmaceutical companies have further obvious incentives to publish only positive results from trials (visit www.alltrials.net!). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.
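Ioannidis’ core argument is really just arithmetic, and a back-of-envelope simulation makes it concrete. Here’s a sketch – all the numbers (prior probability that a tested hypothesis is true, statistical power, significance threshold) are illustrative assumptions, not estimates for any real field:

```python
import random

def false_positive_fraction(n_studies=100_000, prior_true=0.1,
                            power=0.8, alpha=0.05, seed=1):
    """Fraction of 'significant' results that are actually false.

    Toy model of Ioannidis' argument: if only a small fraction of
    tested hypotheses are true, and only positive (p < alpha) results
    get published, many published findings are false positives.
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    true_hits = false_hits = 0
    for _ in range(n_studies):
        if rng.random() < prior_true:       # hypothesis really is true
            if rng.random() < power:        # and the study detects it
                true_hits += 1
        else:                               # hypothesis is false
            if rng.random() < alpha:        # false positive at rate alpha
                false_hits += 1
    return false_hits / (true_hits + false_hits)

print(f"{false_positive_fraction():.0%} of 'positive' findings are false")
```

With these assumptions, over a third of the positive findings that reach publication are false – before any p-hacking or selective reporting makes things worse.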

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now – not least because, as an early career researcher, it’s something I don’t feel I have much power over.

Note: this is far from comprehensive and mostly focuses on the work of two groups.

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Anti-optogenetics 2

This is a response to John Horgan’s response to the responses to his original anti-optogenetics-hype article that I blogged about. The comments section is worth reading, but I thought I’d respond to a couple of points here, too.

Neuroscientist Richard Tomsett says one of my examples of hype—a TED talk by Ed Boyden, another leader of optogenetics—doesn’t count because “the whole point of such talks is hype and speculation.” Really? So scientists shouldn’t be criticized for hyping their research in mass-media venues like TED—which reaches gigantic audiences—because no one is taking them seriously? Surely that can’t be right.

I perhaps wasn’t clear enough here – my point was that it seemed silly to refer to a TED talk as an example of hype when all TED talks hype their particular topics. Scientists certainly should be criticised for hyping research, but this is a problem with the TED format rather than optogenetics.

…the abysmal state of health care in the U.S. should have a bearing on discussions about biomedical research. I’m not saying that journalists, every time they report on a biomedical advance, need to analyze its potential impact on our health-care problems. But knowledge of these woes should inform coverage of biomedical advances, especially since technological innovation is arguably contributing to our high health care costs.

I agree, but again this is not a problem with optogenetics, or even the scientists that try to hype it.

John’s posts touch on an issue with the way that science is funded, which (in the UK at least, and I assume elsewhere) requires an “impact” assessment to try to ensure that research spending isn’t a waste of money. This is a big problem because it can be very difficult to predict what impact most research will have in the short term, let alone the long term. The most obvious way to demonstrate “impact” in neuroscience is to refer to potential treatments for brain disorders, though such treatments might be years or decades away. The brain is so complex that it’s impossible to predict how a particular piece of research might impact medical practice, but you are compelled to spin your case because of this demand for “impact” – which is why nearly every neuroscience press release refers to potential treatments, no matter how relevant the research is to medicine. I completely agree that if scientists want to justify receiving public money then they need to justify their research to the public, but the current incentives promote hype – particularly medical hype. Note that I don’t offer a solution to this problem…

As I said in the previous post, there are good points to be made about the hype surrounding optogenetics (as in this post), it’s just unfortunate that John instead went for criticisms that could be leveled at any hyped science. Rather than attacking a particular field with some quite shaky points, it would have been much more interesting to address why scientists feel the need to hype their work in the first place.


Anti-optogenetics

I read an article that annoyed me a bit. It’s a rant by John Horgan explaining why the author is vexed by breathless reports of manipulating brain functions using light. (Optogenetics is where you genetically modify brain cells to enable you to manipulate their behaviour – stimulating or suppressing their firing – using light. This is particularly cool because it allows much better targeted control of brain cells than implanted electrodes or injected drugs, the other most precise methods of controlling the activity of many cells.) I love me a good rant, and here is a nicely considered article about the limits of, and hype over, optogenetics [NB: I am not an expert in optogenetics], but this was neither good nor considered.

The first half of the article raises a complaint about the hype, which might have been legitimate if it had not misrepresented said hype. It grumbles that articles about optogenetics tout its therapeutic potential for human patients, but we don’t know enough about the mechanisms underlying mental illnesses to treat them with optogenetics. While this latter point is certainly true, it’s a straw man: read the articles linked to in the first half and see which ones you think are about human therapeutic potential (I’ve included the links at the bottom*). They all clearly report on animal studies, though of course make reference to the potential for helping to treat human illnesses (not necessarily using optogenetics directly, but by better understanding the brain through optogenetics). Indeed, this point was made to John on Twitter, so his article now includes a clarification at the end admitting as much, but still making some unconvincing points, which we’ll come to later.

The second part of the article addresses John’s “meta-problem” with optogenetics: he “can’t get excited about an extremely high-tech, blue-sky, biomedical ‘breakthrough’ – involving complex and hence costly gene therapy and brain surgery – when tens of millions of people in this country [USA] still can’t afford decent health care.” Surely this is a problem with all medical (and, indeed, basic) research that doesn’t address the very largest problems in the health system? I agree totally that this is a massive problem, but it is entirely socio-political, not scientific. Moaning that optogenetic treatments will be expensive is like criticising NASA because only a few lucky astronauts get to go into space.**

John has been good enough to add some “examples of researchers discussing therapeutic applications” to his post. Briefly looking through these, we have a 2011 article in the Journal of Neuroscience, which uses optogenetics to study the role of a particular brain area in depression (it doesn’t mention therapeutic optogenetics in the abstract, only as a potential avenue for further research in the conclusion); a 2011 TED talk (the whole point of such talks is hype and speculation); this press release from the University of Oxford (which alludes to possible therapeutic use “in the more distant future” in one paragraph of a sixteen-paragraph article); a 2011 article in Medical Hypotheses (a non-peer-reviewed journal whose entire point is to publish speculative articles that propose potentially fanciful hypotheses); and this article in the New York Times (I can’t argue with this – there is a fair bit on therapies for humans; John’s main gripe here, from his comments about this article, appears to be with the military funding that one of the several mentioned projects is receiving).

In the second amendment to the article – labeled “clarification” – John admits that he “overstated the degree to which coverage of optogenetics has focused on its potential as a treatment rather than research tool”, which is nice, but then criticises the potential insights from optogenetics research, saying:

But the insights-into-mental-illness angle has also been over-hyped, for the following reasons: First, optogenetics is so invasive that it is unlikely to be tested for research purposes on even the most disabled, desperate human patients any time soon, if ever. Second, research on mice, monkeys and other animals provides limited insights–at best–into complex human illnesses such as depression, bipolar disorder and schizophrenia (or our knowledge of these disorders wouldn’t still be so appallingly primitive). Finally, optogenetics alters the cells and circuits it seeks to study so much that experimental results might not apply to unaltered tissue.

Regarding point one: this is still about therapeutic, not research, uses of optogenetics; it also ignores that many patients undergo invasive surgery for epilepsy (which involves actually cutting bits of brain out – surely optogenetics could be a bit better here?) as well as for deep brain stimulation to treat severe depression and Parkinson’s symptoms. Regarding point two: this is a criticism of using animal models in any kind of research rather than optogenetic research in particular – it is valid, but totally beside the point. Regarding point three: if we’re looking to modify the cells in therapies anyway, why does this matter? Stimulating cells with electrodes or drugs changes the way they behave compared to “unaltered” tissue, too!

TL;DR – read this article instead, and don’t pay much attention to this one. It could have made some good points about optogenetics-hype, but didn’t.

*Links from the original article:

OCD and Optogenetics (Scicurious blog)

Implanting false memories in mice (MIT technology review)

Breaking habits with a flash of light (Not Exactly Rocket Science blog)

Optogenetics relieves depression in mouse trial (Neuron Culture blog)

How to ‘take over’ a brain (CNN)

A laser light show in the brain (The New Yorker)

** yeah I know, tenuous analogy – but let’s face it, all analogies are pretty shite