Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong to say that the prevailing trend in published research (that most studies turn out to be wrong) just reflects scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted.

Currently, two theories can explain the observed data: either the high-frequency reduction is a result of the properties of the medium around neurons (made up mostly of glial cells), which give rise to a frequency-dependent impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, and the medium around neurons has no effect. Both explanations are plausible, both are supported by theoretical models, and both have some experimental data behind them. This is a good case of scientific disagreement, and it will be resolved by further, more refined models and experiments (I’ll put some links below). It may be that aspects of both theories become accepted, or that one is rejected outright. In either case, some of the studies will have been shown to be “wrong”, but that is beside the point: they will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
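To illustrate the basic observation (not either theory in particular), here is a minimal sketch with arbitrary made-up numbers: a low-pass filter of any physical origin leaves a signal with far less power at high frequencies than at low ones, qualitatively like the spectra measured from extracellular recordings. The cutoff frequency and sampling rate below are purely illustrative choices.

```python
# A minimal sketch with made-up numbers: any low-pass filter, whatever its
# physical origin, leaves a signal with less power at high frequencies than
# at low ones. The 10 Hz cutoff and 1 kHz sampling rate are arbitrary.
import numpy as np
from scipy.signal import butter, lfilter, welch

fs = 1000.0                                   # sampling rate (Hz), arbitrary
rng = np.random.default_rng(0)
white = rng.standard_normal(60 * int(fs))     # 60 s of flat-spectrum noise

# First-order Butterworth low-pass filter (stand-in for either mechanism)
b, a = butter(1, 10.0 / (fs / 2.0), btype="low")
filtered = lfilter(b, a, white)

# Power spectral densities before and after filtering
f, p_white = welch(white, fs=fs, nperseg=4096)
_, p_filt = welch(filtered, fs=fs, nperseg=4096)

# The filtered spectrum falls off with frequency; the unfiltered one is flat.
for freq in (1, 10, 100):
    i = np.argmin(np.abs(f - freq))
    print(f"{freq:>4} Hz  unfiltered: {p_white[i]:.2e}  filtered: {p_filt[i]:.2e}")
```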

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are the result of publication bias. High-profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high-profile journals in order to progress in their careers (in some cases even just to graduate), so they are incentivised to fudge statistics, fish for p-values, or simply not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or unknown to many study designers). Pharmaceutical companies have further obvious incentives to publish only the positive results from trials (visit www.alltrials.net!). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.
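A toy simulation makes the point concrete. All the numbers below are assumptions chosen for illustration (the proportion of tested hypotheses that are real, the effect size, the sample size), not estimates from any particular field: if most tested effects are null, studies are underpowered, and only “significant” results get published, then a large fraction of the published positive findings are false.

```python
# A toy model of publication bias, with assumed numbers: a low prior that a
# tested effect is real, small underpowered studies, and only p < 0.05
# results getting "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies = 20000
prior_true = 0.1       # assume 10% of tested hypotheses are actually true
effect_size = 0.3      # assumed modest true effect (in SD units)
n_per_group = 30       # assumed small sample size -> low statistical power
alpha = 0.05

true_pos = false_pos = 0
for _ in range(n_studies):
    is_real = rng.random() < prior_true
    mu = effect_size if is_real else 0.0
    treated = rng.normal(mu, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < alpha:                      # only "positive" studies get published
        if is_real:
            true_pos += 1
        else:
            false_pos += 1

published = true_pos + false_pos
print(f"published positive findings: {published}")
print(f"fraction that are false positives: {false_pos / published:.2f}")
```

With these made-up numbers roughly two thirds of the published positive findings come out false, which is the flavour of Ioannidis’ argument – and the null results that would put them in context never appear.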

Hopefully it’s clear why the second kind of wrong is much more damaging than the first (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now; as an early-career researcher it’s not something I feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

  • Sylvia McLain

    Thanks for the article about my article – it’s nice to stir a civil debate.
    I think there is a very large difference between fraud and mistakes – which is how I interpret your different kinds of being wrong.

    There have always been crappy scientists making bold claims using tiny bits of data which sort of show something – and they are easily proven wrong. However, these things don’t stand the test of time. Good science and theories have to stand the test of time, and often it takes years to discover what is a good theory and what is a crap theory. Also, as you say, wrong results will kick-start a field into looking into that area and saying hey, wait a minute – there are other answers. Most debates in any field are about just this.

    This in a way is a separate issue – yes, we are under an enormous amount of pressure to publish high-impact work, but does this necessarily mean that people are ‘cheating’ (for lack of a better word)? Also, to be fair, there are many more people doing science now than there were say even 50 years ago – so the sheer number of bad theories being on the increase (if they really are) may just be in the noise. We don’t know.

    My main point in the Guardian article was that I don’t think this is a huge change from how science has always been – there have always been folks that make big crazy claims, and in the end these theories fall by the wayside…

    • Hi Sylvia, thanks very much for taking the time to reply! I think my initial explanation perhaps didn’t quite capture my thoughts – I think fraud is a different issue entirely.

      I agree with your second paragraph that it takes time to form a strong consensus on a particular theory, but I don’t think I agree that certain claims based on weak data are easy to prove wrong, especially in the “softer” sciences. So many variables can be invoked to explain away disagreeing results from other scientists, and further datasets can be gathered that, given the right experimental design and statistical treatment, are bound to agree with the original weak claim. Misuse of statistics is a big problem here, but there are no incentives for anyone to be cautious with their stats. I wouldn’t call this fraud, though (unless the statistics are wilfully misused).
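      As a concrete (and entirely made-up) example of the kind of statistical misuse I mean, here is a quick sketch of “collecting data until it agrees”: re-testing after every new batch of data and stopping at the first p < 0.05, even when there is no real effect, inflates the false-positive rate well above the nominal 5%. The batch size and stopping limit below are arbitrary.

```python
# A made-up illustration of "collect more data until it agrees": under a true
# null effect, re-running a t-test after every extra batch of data and
# stopping at the first p < 0.05 inflates the false-positive rate well above
# the nominal 5%. Batch size and the stopping limit are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments = 2000
batch = 10            # observations added per extra "dataset" (assumed)
max_batches = 20      # give up after 200 observations per group (assumed)

false_claims = 0
for _ in range(n_experiments):
    a = np.empty(0)
    b = np.empty(0)
    for _ in range(max_batches):
        a = np.concatenate([a, rng.normal(0.0, 1.0, batch)])  # no real effect
        b = np.concatenate([b, rng.normal(0.0, 1.0, batch)])
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:          # stop and report at the first significant peek
            false_claims += 1
            break

print(f"false-positive rate with optional stopping: {false_claims / n_experiments:.2f}")
print("nominal rate with a fixed, pre-specified sample size: 0.05")
```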

      But yes, I can’t say whether this is a change from the past. My guess is that the spread of quantitative methods into “softer” fields has increased the number of definitive claims made when they aren’t actually warranted; but, as you say, the number of scientists has also increased dramatically, so the amount of noise increases too.

      The impression I get from people who’ve been around longer than I have is that the introduction of journal/author metrics *has* changed things. Hiring committees can now look at numbers rather than trying to assess quality themselves. Numbers are boosted by publications in glamour journals, and, as mentioned above, the incentives for the kind of science that glamour journals publish are all wrong. So you now have people not only producing incorrect science, but also operating under a different incentive regime from before. I guess time will tell how big a change this will turn out to be…

  • Gaute T. Einevoll

    Hello Richard; I was just notified about your blog post. I take it as a compliment that the on-going discussion regarding the origin of filtering of extracellular potentials was used as an example of how science should be done and make progress. Thanks! In fact, I just wrote an email to Alain about it. Best regards Gaute

    • Hi Gaute, thanks for reading! I hope you think I’ve given a fair (if overly brief) representation of the current discussion; if there’s anything I’ve obviously missed or misunderstood then let me know.

      As it happens, I’m not convinced myself by Alain’s argument against the influence of dendritic filtering under in vivo conditions (e.g. http://autap.se/LFP ). In their example model (Fig. 2 of the 2010 J Comput Neurosci paper “Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials”), as I understand it the conductances were distributed evenly along the length of the ball-and-stick model. However, synapses onto pyramidal cells (in layer 5 at least) are mostly onto the basal dendrites, or the basal dendrites plus the apical tuft (e.g. Binzegger et al. 2004), so the situations in your 2013 PLoS CB paper showing frequency filtering with basal input seem more appropriate for the general “decorrelated” in vivo case, and even more so when the input is correlated. Though of course, if the new findings/theories about monopolar current sources hold up, then perhaps this observation would have to be revised…
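      For anyone following along who isn’t familiar with the cable-theory intuition behind intrinsic dendritic filtering, here is a rough toy sketch (my own illustrative numbers, not taken from any of the papers above): the effective length constant of a passive dendrite shrinks with frequency, so high-frequency return currents exit the membrane closer to the input site, reducing the far-field contribution of fast events.

```python
# A rough cable-theory sketch with my own toy numbers (not taken from the
# papers discussed): the effective length constant of a passive dendrite
# shrinks with frequency, so high-frequency return currents stay closer to
# the input site, which is the intuition behind intrinsic dendritic filtering.
import numpy as np

tau_m = 0.02         # membrane time constant, 20 ms (assumed)
lambda_dc = 500e-6   # DC length constant, 0.5 mm (assumed)
x = 500e-6           # distance along the dendrite we look at (assumed)

freqs = np.array([1.0, 10.0, 100.0, 1000.0])   # Hz
omega = 2 * np.pi * freqs

# Semi-infinite passive cable in the frequency domain:
# V(x) ~ exp(-x * sqrt(1 + i*omega*tau_m) / lambda_dc); the real part of the
# square root sets how quickly the signal attenuates with distance.
gamma = np.sqrt(1 + 1j * omega * tau_m).real / lambda_dc
attenuation = np.exp(-gamma * x)

for f, att in zip(freqs, attenuation):
    print(f"{f:>6.0f} Hz: attenuation at {x * 1e6:.0f} um ~ {att:.4f}")
```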

  • Hello everyone,

    Thanks Gaute for mentioning the blog, and thanks to everyone for considering us as an example of how science should progress in a civil way!

    It is true that our labs defend different hypotheses, and that different experiments support both hypotheses, and yet we are good friends and collaborate as partners in European grants.

    On a technical note, at the SFN meeting in San Diego, we will present new direct measurements of the extracellular impedance, in vivo and in vitro, showing frequency filtering. This will make the debate even hotter.

    Of course it does not show that the other hypothesis is wrong. As always, the truth may be that both are true — or both wrong (!) — we’ll see…

    best,
    Alain

    • Hi Alain, thanks for commenting! Unfortunately I can’t make it to SfN this year but it’ll be interesting to hear your new results. I should have made more effort to discuss this with you at last year’s Hertie winter school but I was a bit distracted by my 2nd day leg-breaking incident…