My week on Biotweeps

Last Sunday I finished a week curating the Biotweeps Twitter account. Biotweeps features a different researcher each week, tweeting about their particular areas of interest. It’s a great account to follow for broadening your biology knowledge (as I’m a fake biologist mine is extremely limited). I tweeted about my PhD research, the work I’m currently doing, and some interesting projects and papers from my previous colleagues at Newcastle University (particularly the CANDO project, which aims to create an implantable device for preventing seizures).

My tweets are archived on Storify here.

Publishing on The Winnower

I submitted a review of a paper for publication in the Journal of Neuroscience’s Journal Club series. The original paper is here (sadly paywalled) – it’s an interesting modelling study that attempts to fit simplified models of neuron population dynamics to experimental recordings to shed some light on the neural network dynamics underlying those recordings. Unfortunately J. Neurosci. made a terrible mistake and chose not to publish my submission, so instead of letting that work go to waste I’ve published it on The Winnower, an innovative open platform for publishing papers without pre-publication peer review. Instead, your article is public immediately, and readers can submit public post-publication reviews (and the article can be updated). This is clearly the future of scientific publishing, and some other bigger publishers (e.g. F1000) are already using similar models. The Winnower is a cool independent alternative, and is currently free to use. I really hope it takes off, but in the current impact-obsessed environment it’s fighting an uphill battle.

Are replication efforts pointless?

A couple of people have tweeted/blogged (EDIT: additional posts from Neuroskeptic, Drugmonkey, Jan Moren, Chris Said, Micah Allen EDIT 2: more, by Sanjay Srivastava, Pete Etchells, Neuropolarbear EDIT 3: more, by Sean Mackinnon) about a recent essay by Jason Mitchell, Professor of Psychology at Harvard, titled On the emptiness of failed replications. Its primary thesis is that efforts to replicate experimental results are pointless, “because unsuccessful experiments have no meaningful scientific value”. This is, of course, counter to the recent replication drive in social psychology – and to my understanding of how experimental science should be done (caveat: I am not an experimental scientist).

I disagree with the above quotation, and thought I would counter a couple of his arguments that stuck out to me as wrong or misleading:

…A replication attempt starts with good reasons to run an experiment: some theory predicts positive findings, and such findings have been reported in the literature, often more than once. Nevertheless, the experiment fails. In the normal course of science, the presumption would be that the researcher flubbed something important (perhaps something quite subtle) in carrying out the experiment, because that is far-and-away the most likely cause of a scientific failure.

In the case of a very well established result, the most likely cause of scientific failure would certainly be experimental error. But for most hypotheses and theories this is surely not true. The likelihood of each possible cause of a negative result depends on the prior likelihood of the hypothesis itself, and on any unidentified variables. Consider homeopathy: any number of “positive” results are much better explained by experimental error, bad design, or bad analysis than by the hypothesis that homeopathy is effective at curing X – indeed, later in the essay Mitchell acknowledges that spurious positive results can and do come about through bad statistical practices. In much “frontier science” the likelihood of the theory is not well known (or even slightly known), and the unidentified variables can be many because the theory is incomplete. We’re getting into experimenter’s regress territory.
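
A rough way to make this concrete is Bayes’ rule: how much a failed experiment should shift our belief depends on the prior plausibility of the hypothesis. A minimal sketch (all the numbers are made up for illustration, not taken from any real study):

```python
# Illustrative Bayes calculation: how the interpretation of a failed
# replication depends on the prior plausibility of the hypothesis.
# All probabilities below are invented for illustration.

def p_effect_given_failure(prior, p_fail_if_real, p_fail_if_absent):
    """P(effect is real | experiment failed to find it), via Bayes' rule."""
    num = p_fail_if_real * prior
    denom = num + p_fail_if_absent * (1 - prior)
    return num / denom

# Failed replication of a well-established result: high prior, so the
# effect probably still exists and experimenter error is the better
# explanation for the failure (posterior ~0.86).
print(p_effect_given_failure(prior=0.95, p_fail_if_real=0.3, p_fail_if_absent=0.95))

# An implausible hypothesis (homeopathy, say): low prior, so after a
# failure the effect is almost certainly not real (posterior ~0.02).
print(p_effect_given_failure(prior=0.05, p_fail_if_real=0.3, p_fail_if_absent=0.95))
```

The asymmetry in the two outputs is the whole point: the same failed experiment warrants very different conclusions depending on how plausible the hypothesis was to begin with.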

… if a replication effort were to be capable of identifying empirically questionable results, it would have to employ flawless experimenters. Otherwise, how do we identify replications that fail simply because of undetected experimenter error?

Adversarial collaboration would address this; alas, it happens infrequently, and apparently not in many current replication efforts. This is a legitimate criticism of the replication movement: collaboration is essential to avoid experimenter’s regress.

…And here is the rub: if the most likely explanation for a failed experiment is simply a mundane slip-up, and the replicators are themselves not immune to making such mistakes, then the replication efforts have no meaningful evidentiary value outside of the very local (and uninteresting) fact that Professor So-and-So’s lab was incapable of producing an effect.

*Why* they weren’t able to produce the result should be very interesting, but can only really be investigated with collaboration, data sharing etc.

There are three standard rejoinders to these points. The first is to argue that because the replicator is closely copying the method set out in an earlier experiment, the original description must in some way be insufficient or otherwise defective…

…there is more to being a successful experimenter than merely following what’s printed in a method section…Collecting meaningful neuroimaging data, for example, requires that participants remain near-motionless during scanning, and thus in my lab, we go through great lengths to encourage participants to keep still. We whine about how we will have spent a lot of money for nothing if they move, we plead with them not to sneeze or cough or wiggle their foot while in the scanner, and we deliver frequent pep talks and reminders throughout the session. These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session…

But I can conceive of scenarios where all this pleading and pressure on the participant may in fact cause them to move differently in the scanner from other labs’ approaches to dealing with participant movement, or to perform differently on tasks because they are so distracted by not moving. However, wearing clothes and not smoking indoors is common in many societies. If the participants were naked, that should definitely be reported, as nakedness in front of strangers is often considered socially uncomfortable (perhaps the participants at Harvard have transcended cultural norms around nudity).

A second common rejoinder is to argue that if other professional scientists cannot reproduce an effect, then it is unlikely to be “real.”…

This is a slightly more seductive argument, but it, too, falls short. Many of the most robust and central phenomena in psychology started life as flimsy and capricious effects, their importance only emerging after researchers developed more powerful methods with which to study them.

I agree with this, but I would again suggest that if competent scientists are producing contradicting results, they should collaborate and run experiments together using protocols they both agree on.

A third rejoinder argues that the replication effort ought to be considered a counterweight to our publication bias in favor of positive results… if an effect has been reported twice, but hundreds of other studies have failed to obtain it, isn’t it important to publicize that fact?

No, it isn’t.


Although the notion that negative findings deserve equal treatment may hold intuitive appeal, the very foundation of science rests on a profound asymmetry between positive and negative claims. Suppose I assert the existence of some phenomenon, and you deny it; for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample…Thus, negative findings—such as failed replications—cannot bear against positive evidence for a phenomenon…Positive scientific assertion cannot be reversed solely on the basis of null observations.

But most experiments do not give us a “positive” result in this sense – they tell us the probability of obtaining a result at least as extreme as the one observed, assuming the data were generated by a null distribution; they say nothing directly about the truth of our hypothesis. “Positive” experimental studies cannot be reasoned about in the same way as this illustration of the limits of induction.
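
To see what the p-value actually controls, here is a quick simulation (a sketch with synthetic data, not any particular study): every experiment below has the null hypothesis true by construction, yet a standard significance test still flags roughly 5% of them.

```python
# Simulate many "experiments" where there is genuinely NO effect, and
# count how often a two-sided z-test on the difference of group means
# still comes out "significant" at alpha = 0.05.
import math
import random

random.seed(1)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def one_null_experiment(n=50):
    """Two groups drawn from the SAME unit-normal distribution; returns
    the two-sided p-value for the difference of their means."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - phi(abs(z)))

p_values = [one_null_experiment() for _ in range(2000)]
false_positives = sum(p < 0.05 for p in p_values) / len(p_values)
# The null is true in every single experiment, yet ~5% are "significant":
# the p-value bounds the false-positive rate under the null -- it does
# not measure the probability that the hypothesis is true.
print(f"false positive rate: {false_positives:.3f}")
```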

Replications are not futile, but they are perhaps being conducted sub-optimally (and certainly ruffling some feathers). Adversarial collaboration and data sharing would maximise the benefit of replication experiments.

Says the non-experimentalist.

Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which result in a varying impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, and the space around neurons doesn’t have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I’ll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. In that case, the studies will have been shown to be “wrong”, but that is beside the point. They will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
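
For a feel of what “low-pass filtering” means here, a minimal sketch in the spirit of the passive-membrane explanation (the time constant below is an illustrative assumption, not a value from the papers linked further down): a first-order RC filter passes slow frequencies almost unattenuated and strongly suppresses fast ones.

```python
# Power transfer of a first-order RC low-pass filter, a crude stand-in
# for passive membrane filtering. Parameter values are illustrative.
import math

def rc_power_transfer(f, tau=0.01):
    """Power transfer |H(f)|^2 of an RC low-pass filter with time
    constant tau = R*C (seconds), at frequency f (Hz)."""
    return 1.0 / (1.0 + (2 * math.pi * f * tau) ** 2)

# tau = 10 ms gives a cutoff near 16 Hz: slow LFP-band frequencies pass
# nearly untouched, spike-band frequencies are heavily attenuated.
for f in [1, 10, 100, 1000]:
    print(f"{f:>5} Hz: {rc_power_transfer(f):.4f}")
```

Either theory has to reproduce this qualitative fall-off in power with frequency; the disagreement is over which physical mechanism produces it.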

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are results of publication bias. High profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high profile journals in order to progress in their careers (in some cases even just to graduate), so are incentivised to fudge statistics, fish for p-values, or just not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or not known about by many study designers). Pharmaceutical companies have further obvious incentives only to publish positive results from trials. This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.
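
The core of the Ioannidis argument fits in a few lines of arithmetic (the numbers below are illustrative assumptions, not his estimates): if only significant results get published, false positives can dominate the literature even when everyone’s statistics are formally correct.

```python
# Back-of-envelope publication-bias calculation: what fraction of
# published (i.e. significant) findings are false positives?
# All input numbers are invented for illustration.

def false_published_fraction(prior_true, power, alpha):
    """Fraction of significant findings that are false positives, if
    only significant results are published."""
    true_pos = prior_true * power          # real effects, detected
    false_pos = (1 - prior_true) * alpha   # null effects, "detected"
    return false_pos / (true_pos + false_pos)

# If 10% of tested hypotheses are true, studies have 50% power, and
# alpha = 0.05, then nearly half of the published positives are false:
print(false_published_fraction(prior_true=0.1, power=0.5, alpha=0.05))  # ~0.47
```

And that is before any p-hacking or selective reporting, which only push the fraction higher.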

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Anti-optogenetics 2

This is a response to John Horgan’s response to the responses to his original anti-optogenetics-hype article that I blogged about. The comments section is worth reading, but I thought I’d respond to a couple of points here, too.

Neuroscientist Richard Tomsett says one of my examples of hype—a TED talk by Ed Boyden, another leader of optogenetics—doesn’t count because “the whole point of such talks is hype and speculation.” Really? So scientists shouldn’t be criticized for hyping their research in mass-media venues like TED—which reaches gigantic audiences–because no one is taking them seriously? Surely that can’t be right.

I perhaps wasn’t clear enough here – my point was that it seemed silly to refer to a TED talk as an example of hype when all TED talks hype their particular topics. Scientists certainly should be criticised for hyping research, but this is a problem with the TED format rather than optogenetics.

…the abysmal state of health care in the U.S. should have a bearing on discussions about biomedical research. I’m not saying that journalists, every time they report on a biomedical advance, need to analyze its potential impact on our health-care problems. But knowledge of these woes should inform coverage of biomedical advances, especially since technological innovation is arguably contributing to our high health care costs.

I agree, but again this is not a problem with optogenetics, or even the scientists that try to hype it.

John’s posts touch on an issue with the way that science is funded, which (in the UK at least, and I assume elsewhere) requires an “impact” assessment to try to ensure that research spending isn’t a waste of money. This is a big problem because it can be very difficult to predict what impact most research will have in the short term, let alone the long term. The most obvious way to demonstrate “impact” in neuroscience is to refer to potential treatments for brain disorders, though such treatments might be years or decades away. The brain is so complex that it’s impossible to predict how a particular piece of research might impact medical practice, but you are compelled to spin your case because of this demand for “impact” – hence why all neuroscience press-releases will refer to potential treatments, no matter how relevant the research is to medicine. I completely agree that if scientists want to justify receiving public money then they need to justify their research to the public, but the current incentives promote hype – particularly medical hype. Note that I don’t offer a solution to this problem…

As I said in the previous post, there are good points to be made about the hype surrounding optogenetics (as in this post), it’s just unfortunate that John instead went for criticisms that could be leveled at any hyped science. Rather than attacking a particular field with some quite shaky points, it would have been much more interesting to address why scientists feel the need to hype their work in the first place.


I read an article that annoyed me a bit. It’s a rant by John Horgan against optogenetics, explaining why the author is vexed by breathless reports of manipulating brain functions using light (optogenetics is where you genetically modify brain cells to enable you to manipulate their behaviour – stimulating or suppressing their firing – using light. This is particularly cool because it allows much better targeted control of brain cells than using implanted electrodes or injecting drugs, the other most precise methods of controlling the activity of many cells). I love me a good rant, and here is a nicely considered article about the limits and hype over optogenetics [NB: I am not an expert in optogenetics], but this was neither good nor considered.

The first half of the article raises a complaint about the hype, which might have been legitimate if it had not misrepresented said hype. It grumbles that articles about optogenetics tout its therapeutic potential for human patients, but we don’t know enough about the mechanisms underlying mental illnesses to treat them with optogenetics. While this latter point is certainly true, it’s a straw man: read the articles linked to in the first half and see which ones you think are about human therapeutic potential (I’ve included the links at the bottom*). They all clearly report on animal studies, though of course they make reference to the potential for helping to treat human illnesses (not necessarily using optogenetics directly, but by better understanding the brain through optogenetics). Indeed, this point was made to John on Twitter, so his article now includes a clarification at the end admitting as much, but still making some unconvincing points, which we’ll come to later.

The second part of the article addresses John’s “meta-problem” with optogenetics: he “can’t get excited about an extremely high-tech, blue-sky, biomedical ‘breakthrough’-involving complex and hence costly gene therapy and brain surgery-when tens of millions of people in this country [USA] still can’t afford decent health care.” Surely this is a problem with all medical (and, indeed, basic) research that doesn’t address the very largest problems in the health system? I agree totally that this is a massive problem, but it is entirely socio-political, not scientific. Moaning that optogenetic treatments will be expensive is like criticising NASA because only a few lucky astronauts get to go into space.**

John has been good enough to add some “examples of researchers discussing therapeutic applications” to his post. Briefly looking through these, we have a 2011 article in the Journal of Neuroscience, which uses optogenetics to study the role of a particular brain area in depression (doesn’t mention therapeutic optogenetics in the abstract, only as a potential avenue for further research in the conclusion); a 2011 TED talk (the whole point of such talks is hype and speculation); this press release from the University of Oxford (which alludes to possible therapeutic use “in the more distant future” in one paragraph of a sixteen-paragraph article); a 2011 article in Medical Hypotheses (a non-peer-reviewed journal whose entire point is to publish speculative articles that propose potentially fanciful hypotheses); and this article in the New York Times (I can’t argue with this – there is a fair bit on therapies for humans; John’s main gripe here, from his comments about this article, appears to be with the military funding that one of the several mentioned projects is receiving).

In the second amendment to the article – labeled “clarification” – John admits that he “overstated the degree to which coverage of optogenetics has focused on its potential as a treatment rather than research tool”, which is nice, but then criticises the potential insights from optogenetics research, saying:

But the insights-into-mental-illness angle has also been over-hyped, for the following reasons: First, optogenetics is so invasive that it is unlikely to be tested for research purposes on even the most disabled, desperate human patients any time soon, if ever. Second, research on mice, monkeys and other animals provides limited insights–at best–into complex human illnesses such as depression, bipolar disorder and schizophrenia (or our knowledge of these disorders wouldn’t still be so appallingly primitive). Finally, optogenetics alters the cells and circuits it seeks to study so much that experimental results might not apply to unaltered tissue.

Regarding point one: this is still about therapeutic, not research, uses of optogenetics; it also ignores that many patients undergo invasive surgery for epilepsy (which involves actually cutting bits of brain out – surely optogenetics could be a bit better here?) as well as for deep brain stimulation to treat severe depression and Parkinson’s symptoms. Regarding point two: this is a criticism of using animal models in any kind of research rather than optogenetic research in particular – it is valid, but totally beside the point. Regarding point three: if we’re looking to modify the cells in therapies anyway, why does this matter? Stimulating cells with electrodes or drugs changes the way they behave compared to “unaltered” tissue, too!

TL;DR – read this article instead, and don’t pay much attention to this one. It could have made some good points about optogenetics-hype, but didn’t.

*Links from the original article:

OCD and Optogenetics (Scicurious blog)

Implanting false memories in mice (MIT technology review)

Breaking habits with a flash of light (Not Exactly Rocket Science blog)

Optogenetics relieves depression in mouse trial (Neuron Culture blog)

How to ‘take over’ a brain (CNN)

A laser light show in the brain (The New Yorker)

** yeah I know, tenuous analogy – but let’s face it, all analogies are pretty shite

Policy and Bright Club Cambridge

Alas my neglect for this web-site has been wanton and unmerciful, sorry little autapses. I am currently in Cambridge (UK) doing a policy placement at the wonderful Centre for Science and Policy (CSaP). Part of what they do is help civil servants network with academics to provide government departments with direct access to the best available research. Before I joined I did wonder why they couldn’t just use Google Scholar, but I quickly learned this was a fabulously naive point of view. Civil servants are often very busy, inexperienced with research, unable to devote the time to finding the best and most relevant information in the mountains of muck that populate the literature, and as a result of moving between departments have great breadth but not so much depth of knowledge. The direct links to relevant academics that CSaP provides are really important for getting good research knowledge into government.

Anyway, evangelism section is now complete, advertising commences: I will be “doing a bit” at Cambridge Bright Club at the Portland Arms tomorrow (Friday 14th June) evening, I think there are still some tickets left (just checked, yes there are, BUT NOT MANY). Professional funny people will be there to make you laugh, and six researchers will be there to try to make you laugh. At the very least, you may learn something.

More soon…


The other day I finished reading Complexity: The Emerging Science at the Edge of Order and Chaos by M. Mitchell Waldrop (hilarious back-page plug: “if you liked chaos, you’ll love complexity!”). I’m not much of a bookworm usually, but very occasionally I will relentlessly read something from cover to cover in a short space of time, which I managed to do in this instance (this also happened recently with Robin Ince’s Bad Book Club [which is hilarious, and you should buy it immediately], so it’s not just a science book phenomenon). Complexity was written in the early 90s, shortly after the Santa Fe Institute for complexity research was founded. It serves partly as a documentary of the founding of the institute and its early years, partly as a biography of the key figures involved in its development, and partly as an introduction to the ideas of the new kind of science that the institute was trying to pursue. This works remarkably well – you get a good overview of the relevant scientific ideas, but almost feel like you’re reading a novel. The constant breathless excitement in the tone does begin to grate a little, but it manages to convey the passion of the researchers and the thrill of scientific discovery. It certainly rekindled my excitement with some of the ideas that made me interested in computational modelling and biological systems in the first place.

As the book was written at the height of enthusiasm of the scientists involved, it unfortunately lacks a critical view – no stories of the less successful avenues of research that were pursued, and little mention of criticisms by “mainstream science”. Interestingly, the Santa Fe Institute’s own history page mentions this kind of criticism in passing:

As the Institute’s research interests and reputation grew, so did its list of detractors. Exploring new scientific territory meant that some lines of inquiry failed to live up to expectations. Some researchers in mainstream science felt that complexity science was long on promise but short on results. The criticism culminated in a June 1995 Scientific American article by senior writer John Horgan that openly mocked not only the science of complexity but the scientists doing it. The article, today regarded at the Institute as a wakeup call, caused many in the complexity community to do some soul searching about their field.

I am definitely sympathetic to the approaches pioneered by the Santa Fe Institute (amongst other places, of course), though. I particularly love the emergence of complexity in the behaviour of cellular automata – such simple rules can lead to such interesting behaviours. You can’t prove truths about the universe from such simulations, but I think they can give you very deep insights into problems that you just can’t get from other approaches. But that’s just, like, my opinion, man. Go and have a play with a game of life simulator, like this one, to see what I mean.
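
To show just how little machinery those rules need, here is a minimal Game of Life step in Python (one possible implementation, representing the grid as a set of live-cell coordinates):

```python
# One generation of Conway's Game of Life: a live cell survives with
# 2-3 live neighbours; a dead cell becomes live with exactly 3.
from collections import Counter

def step(live):
    """Advance a set of (x, y) live-cell coordinates by one generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between horizontal and vertical with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                    # vertical: {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)   # True
```

Four lines of rules, and out come oscillators, gliders, and (with enough cells) universal computation.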

Anyway, I recommend it, but bear in mind it is quite un-critical, and quite out-of-date. If you want to have a hands-on play with some of the ideas of complexity theory and know a bit of programming, you could do a lot worse than pick up Allen Downey’s book “Think Complexity” (available for free from his website).

I’m now onto an old software classic, The Cathedral and The Bazaar, which actually has some surprising parallels – complex emergent behaviour in software development processes, defying standard theory through the collective behaviour of individual programmers…

Ridiculous thesis requirements: impact factors (again)

The thesis: a culmination of your years of hard toil as a PhD student, representing a significant original contribution to your field of study. So sayeth the dictionary (probably; I haven’t checked), but one can go about structuring a thesis in various ways. Some universities offer the option of submitting a collection of published papers, together with an introduction and conclusion to tie together the various works, as an alternative to writing a thesis from scratch and having to reformat your data and ideas from your papers into a new entity that few will ever read. This seems like a sensible compromise to me – if you’ve proven that you can produce research of suitable interest and originality to be published in peer-reviewed journals, it seems somewhat superfluous to have to spend further effort writing a thesis (though of course there are good arguments for writing a separate thesis, which I won’t go into now because I’ll lose the impetus behind my indignant rant).

A friend of mine doing their PhD at a university in East Asia is allowed to submit a thesis in this form, and is currently finishing writing up several papers to include. However, the regulations from their university demand that the combined impact factor for the articles be 10 or more. The problems with impact factors have been well documented elsewhere, and it’s really worrying that this is being asked for as a requirement for a PhD degree when it makes little sense to apply impact factors for journals to individual papers from those journals, let alone to the researchers themselves. These kinds of demands are apparently quite common across universities in the country my friend is working in (apologies for clumsy anonymiserating…), resulting in instances of research fraud – I’ve heard of a supervisor who submitted a review article to a journal with their student listed as first author, though they hadn’t worked on the article, just to bump up the student’s impact for their thesis.

Hopefully my friend won’t have too many problems meeting this ludicrous requirement, as they are doing good research in a “high impact field”, but I wonder how many students, having published good work, can’t achieve this target simply because the good journals in their specialism don’t have comparatively high impact factors.