CNS 2015 in Prague, deadline extended

It seems unlikely that I’ll be able to attend CNS (the annual computational neuroscience meeting) this year, but they’ve just announced that the abstract submission deadline has been extended to 1st March. I went to the Quebec meeting last year and it was great (my posters are on figshare); it’s a cool meeting with some fun and interesting people, and Prague is swell. I feel I should be pimping it out, too, as I did the poster:

CNS 2015 Prague

Obviously the abstract deadline shown on the poster is now out of date. Anyway.


Consciousness, memory, booze

I was having a read of Neuroskeptic’s interview with Dr Srivas Chennu on PLOS Neuro earlier (recommended), and found myself nodding vigorously in agreement on reading this quotation:

Consciousness is not just being aware of something, but also being aware that you were aware of it yesterday.

I don’t read much about consciousness research, but I sometimes find that consciousness researchers neglect the crucial aspect of memory. It is quite possible to appear conscious (moving, responding to stimuli, holding conversations) without storing memories. Consider becoming black-out drunk. If you’ve never done this, I wouldn’t necessarily recommend it (unless you are a consciousness researcher, in which case it’s completely essential research), but it is interesting from a philosophical perspective. At some point during an evening of drinking, you completely cease forming memories, yet your behaviour is not dramatically different from your usual drunken antics. Are you conscious during this period?

What about when you were an infant, unable to form longer-term memories: were you conscious then? No one can remember, so you can’t just ask someone “were you conscious when you were 3 months old?” Yet most people will answer affirmatively if you ask “were you conscious when you were 6 years old?”, even if they’ve forgotten much of what they did at that age.

One of the currently popular theories of consciousness, integrated information theory (IIT), doesn’t take memory into account (Scott Aaronson’s detailed post on IIT is great). It makes other strange predictions too, such as attributing small amounts of consciousness to things that intuitively would not be conscious in any way, but I suppose this just helps to show that whether something is conscious is determinable only by the something itself. If it can remember(?). Medical consciousness researchers have their work cut out for them.

Are replication efforts pointless?

A couple of people have tweeted/blogged (EDIT: additional posts from Neuroskeptic, Drugmonkey, Jan Moren, Chris Said, Micah Allen EDIT 2: more, by Sanjay Srivastava, Pete Etchells, Neuropolarbear EDIT 3: more, by Sean Mackinnon) about a recent essay by Jason Mitchell, Professor of Psychology at Harvard, titled On the emptiness of failed replications. Its primary thesis is that efforts to replicate experimental results are pointless, “because unsuccessful experiments have no meaningful scientific value”. This is, of course, counter to the recent replication drive in social psychology – and to how I understand experimental science should be done (caveat: I am not an experimental scientist).

I disagree with the above quotation, and thought I would counter a couple of his arguments that stuck out to me as wrong or misleading:

…A replication attempt starts with good reasons to run an experiment: some theory predicts positive findings, and such findings have been reported in the literature, often more than once. Nevertheless, the experiment fails. In the normal course of science, the presumption would be that the researcher flubbed something important (perhaps something quite subtle) in carrying out the experiment, because that is far-and-away the most likely cause of a scientific failure.

In the case of a very well established result, the most likely cause of scientific failure would indeed be experimental error. But for most hypotheses and theories this is surely not true. The likelihood of each possible cause of a negative result depends on the likelihood of the hypothesis itself, and on potentially unidentified variables. Consider homeopathy: any number of “positive” results are much better explained by experimental error, bad design, or bad analysis than by the hypothesis that homeopathy is effective at curing X. Indeed, later in the essay Mitchell acknowledges that spurious positive results can and do come about through bad statistical practices. In much “frontier science” the likelihood of the theory is not well known (or even slightly known), and the unidentified variables can be many because the theory is incomplete. We’re getting into experimenter’s regress territory.
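To make this concrete, here is a toy Bayesian sketch in Python. Every number in it (the chance of a botched experiment, the power, the false positive rate, the priors) is an illustrative assumption, not an estimate from any real study:

```python
def cause_of_failure(p_true, p_botch=0.1, power=0.9, alpha=0.05):
    """Toy model of a failed experiment. Assumes a botched experiment
    always fails; all parameter values are illustrative, not empirical."""
    # probability a correctly run experiment fails anyway
    p_fail_valid = (1 - power) * p_true + (1 - alpha) * (1 - p_true)
    p_fail = p_botch + (1 - p_botch) * p_fail_valid
    p_error = p_botch / p_fail                                         # experimenter flub
    p_no_effect = (1 - p_botch) * (1 - alpha) * (1 - p_true) / p_fail  # effect isn't real
    return p_error, p_no_effect

# a well-established result vs a typical "frontier science" hypothesis
for p_true in (0.95, 0.3):
    p_error, p_no_effect = cause_of_failure(p_true)
    print(f"P(hypothesis)={p_true}: P(flub | failure)={p_error:.2f}, "
          f"P(no real effect | failure)={p_no_effect:.2f}")
```

For a well-established result (prior 0.95), an experimenter flub is the single most likely explanation of a failure, just as Mitchell presumes; for a frontier hypothesis (prior 0.3), “the effect isn’t real” becomes by far the more probable explanation.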

… if a replication effort were to be capable of identifying empirically questionable results, it would have to employ flawless experimenters. Otherwise, how do we identify replications that fail simply because of undetected experimenter error?

Adversarial collaboration. Alas, this happens infrequently, and apparently not in many current replication efforts. This is a legitimate criticism of the replication movement: collaboration is essential to avoid experimenter’s regress.

…And here is the rub: if the most likely explanation for a failed experiment is simply a mundane slip-up, and the replicators are themselves not immune to making such mistakes, then the replication efforts have no meaningful evidentiary value outside of the very local (and uninteresting) fact that Professor So-and-So’s lab was incapable of producing an effect.

*Why* they weren’t able to produce the result should be very interesting, but it can only really be investigated through collaboration, data sharing, and the like.

There are three standard rejoinders to these points. The first is to argue that because the replicator is closely copying the method set out in an earlier experiment, the original description must in some way be insufficient or otherwise defective…

…there is more to being a successful experimenter than merely following what’s printed in a method section…Collecting meaningful neuroimaging data, for example, requires that participants remain near-motionless during scanning, and thus in my lab, we go through great lengths to encourage participants to keep still. We whine about how we will have spent a lot of money for nothing if they move, we plead with them not to sneeze or cough or wiggle their foot while in the scanner, and we deliver frequent pep talks and reminders throughout the session. These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session…

But I can conceive of scenarios where all this pleading and pressure may in fact cause participants to move differently in the scanner than they would under other labs’ approaches to dealing with movement, or to perform differently on tasks because they are so distracted by trying to keep still. Wearing clothes and not smoking indoors, however, are common to many societies. If the participants were naked, that should definitely be reported, as nakedness in front of strangers is often considered socially uncomfortable (perhaps the participants at Harvard have transcended cultural norms around nudity).

A second common rejoinder is to argue that if other professional scientists cannot reproduce an effect, then it is unlikely to be “real.”…

This is a slightly more seductive argument, but it, too, falls short. Many of the most robust and central phenomena in psychology started life as flimsy and capricious effects, their importance only emerging after researchers developed more powerful methods with which to study them.

I agree with this, but I would again suggest that if competent scientists are producing contradictory results, they should collaborate and run experiments together using protocols they both agree on.

A third rejoinder argues that the replication effort ought to be considered a counterweight to our publication bias in favor of positive results… if an effect has been reported twice, but hundreds of other studies have failed to obtain it, isn’t it important to publicize that fact?

No, it isn’t.

Eh?

Although the notion that negative findings deserve equal treatment may hold intuitive appeal, the very foundation of science rests on a profound asymmetry between positive and negative claims. Suppose I assert the existence of some phenomenon, and you deny it; for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample…Thus, negative findings—such as failed replications—cannot bear against positive evidence for a phenomenon…Positive scientific assertion cannot be reversed solely on the basis of null observations.

But most experiments do not give us a “positive” result in this sense. A significance test tells us the probability of obtaining data at least as extreme as ours under the assumption that they were generated by a null distribution; it tells us nothing directly about the truth of our hypothesis. “Positive” experimental studies cannot be reasoned about in the same way as this illustration of the limits of induction.
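A toy calculation makes the gap between “p < 0.05” and “the hypothesis is true” explicit. The prior, power, and threshold below are made-up numbers for illustration, not estimates for any real field:

```python
# illustrative assumptions: a field where 10% of tested hypotheses are true,
# studies have 80% power, and the significance threshold is 0.05
prior = 0.10   # P(hypothesis true)
power = 0.80   # P(significant result | hypothesis true)
alpha = 0.05   # P(significant result | null true)

# Bayes' rule: probability the hypothesis is true given a significant result
ppv = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(hypothesis true | p < 0.05) = {ppv:.2f}")  # ~0.64, not 0.95
```

Even in this generous scenario, a “positive” result leaves a roughly one-in-three chance that the hypothesis is false – hardly the kind of assertion that stands immune to failed replications.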

Replications are not futile, but they are perhaps being conducted sub-optimally (and certainly ruffling some feathers). Adversarial collaboration and data sharing would maximise the benefit of replication experiments.

Says the non-experimentalist.

PhD done, moved to Okinawa

Hello everybody!

I realise I haven’t done a post since last September, which I believe is long enough to declare this blog legally dead. Fortunately I am trained in internet CPR and am able to kickstart the heart of this here blogging enterprise using my natural guile, expert medical training, and the WordPress “add new post” button. In my defence, I have been finishing off my thesis, which has now been submitted, scrutinised, corrected, resubmitted, re-scrutinised, and finally deemed worthy by the Powers That Be, which means I am now officially Dr. Richard Tömsett, PhD.

In more interesting developments, I have moved away from the City of Dreams to the wonderful island of Okinawa to start a one-year postdoctoral research thingy at OIST (many thanks to the Japan Society for the Promotion of Science for moneys). I’m lucky enough to have been to Okinawa before, in 2011, when I did the Okinawa Computational Neuroscience Course. They have a beer vending machine. Insanity. Anyway, a big attraction of Okinawa is the beaches and sunshine, but of course it’s been raining pretty much constantly since I arrived. The university itself is pretty sexy though.

There are already plenty of pictures of how sexy and nice Okinawa is, so I thought I’d mainly post pictures of things that tickled me about Japan. Behold, the chewing gum that, when you put it in your mouth, gives you “special breath”:

Special Breath Chewing Gum

Your breath, it will be special

As I haven’t posted for ages, you get the bonus treat of the magical toilet that has a sink on top of it, so when you flush, you can wash your hands and not waste any water. Ingenious!

Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong to say that the prevailing trend in published research – that most studies turn out to be wrong – just reflects scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies: slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: either the high-frequency reduction results from the properties of the space around neurons (made up mostly of glial cells), which produce a varying impedance that attenuates higher frequencies; or it results from passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, with the space around neurons having no effect. Both explanations are plausible, both are supported by theoretical models, and both have some experimental data behind them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I’ll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. In that case, some of the studies will have been shown to be “wrong”, but that is beside the point: they will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
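For a feel of the dendritic-filtering side of the argument, here is a minimal sketch of why a passive membrane attenuates high frequencies, treating the membrane as a simple first-order RC low-pass filter. The 10 ms time constant is just a typical textbook value, not a figure from any of the studies linked below:

```python
import numpy as np

tau = 0.010  # assumed membrane time constant: 10 ms
freqs = np.array([1.0, 10.0, 100.0, 1000.0])  # Hz

# gain of a first-order RC low-pass filter: |H(f)| = 1/sqrt(1 + (2*pi*f*tau)^2)
gain = 1.0 / np.sqrt(1.0 + (2 * np.pi * freqs * tau) ** 2)
for f, g in zip(freqs, gain):
    print(f"{f:7.0f} Hz: |H| = {g:.3f}")
```

Even this crude model passes slow signals almost untouched while attenuating 100 Hz components severalfold; the real debate is over whether filtering like this in dendrites, or a frequency-dependent extracellular impedance, accounts for what the electrodes actually measure.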

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are the result of publication bias. High-profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high-profile journals in order to progress in their careers (in some cases even just to graduate), so they are incentivised to fudge statistics, fish for p-values, or simply not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or unknown to many study designers). Pharmaceutical companies have further obvious incentives to publish only positive results from trials (visit www.alltrials.net!). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.
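To see how easily “fishing for p-values” manufactures positive findings, here is a small simulation of one common practice, optional stopping: testing after every few participants and stopping as soon as p dips below 0.05. The sample sizes and number of peeks are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
false_positives = 0

for _ in range(n_experiments):
    data = rng.normal(0.0, 1.0, 100)  # the null is true: there is no effect
    # peek after every 10 participants, stop at the first "significant" result
    for n in range(10, 101, 10):
        if stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05:
            false_positives += 1
            break

print(f"false positive rate with optional stopping: {false_positives / n_experiments:.2f}")
# comes out well above the nominal 0.05 (roughly 0.17-0.20 here)
```

That is with completely honest data collection; add selective reporting on top and the literature fills up with spurious “effects”.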

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive, and mostly focuses on the work of two groups.

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Policy and Bright Club Cambridge

Alas my neglect for this web-site has been wanton and unmerciful, sorry little autapses. I am currently in Cambridge (UK) doing a policy placement at the wonderful Centre for Science and Policy (CSaP). Part of what they do is help civil servants network with academics, providing government departments with direct access to the best available research. Before I joined I did wonder why they couldn’t just use Google Scholar, but I quickly learned this was a fabulously naive point of view. Civil servants are often very busy, inexperienced with research, and unable to devote the time to finding the best and most relevant information in the mountains of muck that populate the literature; and because they move between departments, they have great breadth but not much depth of knowledge. The direct links to relevant academics that CSaP provides are really important for getting good research knowledge into government.

Anyway, evangelism section complete; advertising commences: I will be “doing a bit” at Cambridge Bright Club at the Portland Arms tomorrow (Friday 14th June) evening. I think there are still some tickets left (just checked: yes there are, BUT NOT MANY). Professional funny people will be there to make you laugh, and six researchers will be there to try to make you laugh. At the very least, you may learn something.

More soon…

Ridiculous thesis requirements: impact factors (again)

The thesis: a culmination of your years of hard toil as a PhD student, representing a significant original contribution to your field of study. So sayeth the dictionary (probably; I haven’t checked), but one can go about structuring a thesis in various ways. Some universities offer the option of submitting a collection of published papers, together with an introduction and conclusion to tie together the various works, as an alternative to writing a thesis from scratch and reformatting the data and ideas from your papers into a new entity that few will ever read. This seems like a sensible compromise to me: if you’ve proven that you can produce research of sufficient interest and originality to be published in peer-reviewed journals, it seems somewhat superfluous to have to spend further effort writing a thesis (though of course there are good arguments for writing a separate thesis, which I won’t go into now because I’ll lose the impetus behind my indignant rant).

A friend of mine doing their PhD at a university in East Asia is allowed to submit a thesis in this form, and is currently finishing writing up several papers to include. However, their university’s regulations demand that the combined impact factor of the journals their articles appear in be 10 or more. The problems with impact factors have been well documented elsewhere, and it’s really worrying that this is a requirement for a PhD degree, when it makes little sense to apply a journal’s impact factor to individual papers from that journal, let alone to the researchers themselves. These kinds of demands are apparently quite common across universities in the country my friend is working in (apologies for clumsy anonymiserating…), and they result in instances of research fraud – I’ve heard of a supervisor who submitted a review article to a journal with their student listed as first author, though the student hadn’t worked on the article, just to bump up the student’s combined impact factor for their thesis.

Hopefully my friend won’t have too many problems meeting this ludicrous requirement, as they are doing good research in a “high impact field”, but I wonder how many students, having published good work, can’t achieve this target simply because the good journals in their specialism don’t have comparatively high impact factors.

On mud and blog titles

I was away this past weekend doing Tough Mudder in Scotland. It was fun, but I could barely move afterwards. We ran on Saturday and I’m still aching on Tuesday. Then again, I am very unfit. I would recommend it if you fancy a nice long run but find the thought of a marathon tedious, or if you are a masochist.

I’ll have something to post on our wonderful EURO 2012 league soon, complete with analysis of the non-linear complex scoring function, but I still need to make some graphs. In the meantime, here’s a little something about autapses: Massive Autaptic Self-Innervation of GABAergic Neurons in Cat Visual Cortex. It’s an oldish paper quantifying the number of connections that different types of neurons make back onto themselves. (Background: most current brain theories consider the brain to generate and process information in networks of neurons, which communicate by sending electrical and chemical signals to each other – more here. In most of the brain, neurons can be divided into two categories, excitatory and inhibitory, depending on whether their signals make other neurons more or less likely to send on signals of their own.) The authors found that, in cat visual cortex at least, inhibitory (GABAergic) neurons made substantially more self-connections than excitatory neurons. When these neurons “spike” and send inhibitory signals to other neurons, they also inhibit their own spiking, stopping themselves from sending out more signals. This gives inhibitory neurons a mechanism for controlling their own output that is separate from the inhibition they receive through connections from other inhibitory neurons in the network.
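As a rough illustration of the mechanism (not a model from the paper – just a leaky integrate-and-fire neuron with a made-up self-inhibitory synapse), here is a sketch showing how an inhibitory autapse throttles a neuron’s own firing:

```python
def simulate(w_autapse, T=200.0, dt=0.1):
    """Leaky integrate-and-fire neuron with an inhibitory autapse.
    All parameter values are illustrative, not fitted to any data."""
    tau_m, tau_syn = 10.0, 5.0                       # membrane/synaptic time constants (ms)
    v_rest, v_thresh, v_reset = -70.0, -50.0, -65.0  # mV
    drive = 25.0                                     # constant input, in mV equivalent

    v, i_syn, n_spikes = v_rest, 0.0, 0
    for _ in range(int(T / dt)):
        i_syn -= dt * i_syn / tau_syn                # autaptic inhibition decays away
        v += dt * (-(v - v_rest) + drive - i_syn) / tau_m
        if v >= v_thresh:                            # spike: reset, then self-inhibit
            n_spikes += 1
            v = v_reset
            i_syn += w_autapse
    return n_spikes

print("spikes in 200 ms, no autapse:         ", simulate(0.0))
print("spikes in 200 ms, inhibitory autapse: ", simulate(10.0))
```

Each spike feeds an inhibitory current back onto the neuron itself, so the autaptic version fires noticeably less often under identical drive.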

I’m unaware of how much work has been done on the functional significance of autapses, but they are a rather interesting concept and usually ignored in the kind of neuronal network research that I am involved in. More digging required.

Networking with myself

These are the papers I referred to at Bright Club:

The Web of Human Sexual Contacts: this paper was published in Nature in 2001 (the link is to a preprint version). The authors analysed a 1996 Swedish survey of sexual behaviour (2,810 respondents) and found that the number of sexual partners reported, both in the short term (the 12 months prior to the survey) and the long term (lifetime), followed a power law. This means that most people haven’t had many sexual partners, a few people have had a few more, and a very small number of people have had very many. In the picture on the Wikipedia page (showing an idealised power law distribution), the x-axis would represent the number of sexual partners and the y-axis the complementary cumulative distribution: the proportion of people reporting at least that many partners. When plotted on a log-log scale (the linked-to graph shows example simulated data), the curve becomes a straight line with a negative gradient, and that gradient gives the exponent of the power law. A network with this kind of degree distribution is called scale-free, because whatever scale you examine it at, its statistics look similar.
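Here’s a quick sketch of what that looks like, using simulated data rather than the survey’s (the exponent of 2.5 is an arbitrary choice in a plausible range; the slope of the complementary CDF on log-log axes then comes out near -(2.5 - 1) = -1.5):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, x_min, n = 2.5, 1.0, 2810  # assumed exponent; n matches the survey size

# inverse-transform sampling from a continuous power law p(x) ~ x^(-alpha)
partners = x_min * (1 - rng.uniform(size=n)) ** (-1 / (alpha - 1))

# empirical complementary CDF: fraction of people with at least x partners
x = np.sort(partners)
ccdf = 1.0 - np.arange(n) / n

# on log-log axes the CCDF is close to a straight line of slope -(alpha - 1)
slope, _ = np.polyfit(np.log(x), np.log(ccdf), 1)
print(f"log-log CCDF slope: {slope:.2f} (expected about {-(alpha - 1):.2f})")
```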

The small number of people with a very large number of connections to others are referred to as network ‘hubs’, analogous to a transport hub, as disparate parts of the network are linked up through them. Knowing the structure of a sexual network is very important for targeting effective interventions dealing with the spread of sexually transmitted infections, so this research has serious implications for public health policy. An important feature of scale-free networks is their resilience against random ‘node deletions’: removing a random person from the sexual network (I know what you’re thinking – no, not in any sinister way) will have very little effect on how disease spreads. However, by specifically targeting the network hubs, disease spread can be reduced dramatically just by influencing a small number of hub people, simultaneously reducing cost and improving efficacy. The trick is successfully identifying your hub nodes…
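This resilience is easy to demonstrate on a synthetic network. The sketch below builds a scale-free-ish graph by preferential attachment (a stand-in for the real sexual network, which of course we don’t have) and compares deleting random nodes with deleting the highest-degree hubs:

```python
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)  # preferential-attachment graph

def largest_component(G, removed):
    """Size of the largest connected component after removing some nodes."""
    H = G.copy()
    H.remove_nodes_from(removed)
    return len(max(nx.connected_components(H), key=len))

k = 50  # remove 5% of nodes either way
random_nodes = random.sample(list(G.nodes), k)
hubs = sorted(G.nodes, key=G.degree, reverse=True)[:k]

print("largest component, random removal:", largest_component(G, random_nodes))
print("largest component, hub removal:   ", largest_component(G, hubs))
```

Random deletions barely dent the giant component; removing the same number of hubs shrinks it much more, which is exactly the logic behind targeting interventions at the hubs.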

Hubs are also a frequent (though not defining) feature in small-world networks.

Sexual network analysis of a gonorrhoea outbreak: an analysis of a gonorrhoea outbreak using network theory. The authors trace the initial spread to patrons of a certain motel bar in Alberta, which they don’t actually name in the paper, presumably for legal reasons. The main interesting finding was that cheaper network analysis methods could be used in place of standard case-control analysis to arrive at similar results, including the identification of the causal link between several seemingly isolated outbreaks.

Chains of Affection: an analysis of a high-school “romance network”. This revealed a very different network structure, with long chains of links between students rather than clear hubs, and correspondingly different implications for STI spread through the network. The authors suggest the different structure arises from the social rules that operate in high school: not dating your friend’s ex, for example.

Finally, I used this lovely picture from the Human Connectome Project. Yes, your brain is riddled with STIs*.

*not really. Probably.

Newcastle Bright Club

IT’S TONIGHT, AND I’M SPEAKING! On Wed 27th June 2012 at 7.30pm at the Black Swan (on Westgate Road near the Academy), the “thinking person’s variety night” returns to bring you a fragrant blend of music, research and comedy.

We had a little rehearsal last night and I can tell you all now that you are in for a right good treat, as the other speakers at least are fabulous. Helen Keen returns to compère – she’s been helping us out with our sets, so to be honest you can blame her if you don’t enjoy it. I have to say, talking to a comedian about your own jokes is a strangely intimidating experience, even though she is both very nice and very helpful.

I’m going to be speaking about network science and the brain. It will be sexed-up to the point that Labour will want to use it to force through policy decisions if they ever manage to weasel back into power. Sod the football, come to Bright Club.

P.S. If you’re interested in either network science or the brain, or both, have a look at our lab’s web-site. These articles are good overviews of using network science to learn more about the brain:

Organization, development and function of complex brain networks

A tutorial in connectome analysis