SPAUN versus the Human Brain Project

November 27th, 2014 § 3 comments § permalink

As mentioned in my previous-but-one post, in August I went to Quebec City for the 2014 Computational Neuroscience meeting. It went splendidly from my perspective – I got to meet and chat with some interesting people, hopefully starting some collaborations, as well as eat inordinate quantities of poutine and sample some good beer. This was also the first meeting for which I had actually finished my posters in advance, so I didn't have to search frantically for a printer in Quebec City the day before I presented.

Along with some nice talks and workshops, the keynote speakers were all Big Names this year: Frances Skinner, Christof Koch, Chris Eliasmith and Henry Markram. Of course, given the recent furore sparked by the open letter to the European Commission expressing doubts about both the conception and implementation of the Human Brain Project (HBP), everyone was keen to hear Markram speak. Sadly he couldn't make it, but Sean Hill (co-director of Neuroinformatics for the HBP) stepped in at the last minute in his stead.

He gave a good talk, clear and well organised – you don't get a billion euro grant without being good at presentation. The Q&A session was necessarily quite brief, but still interesting from a sociological perspective. The same point was made several times by different people: the HBP will never succeed in understanding the brain, because examining all the minute details of neural circuits and then rebuilding them in a huge, complicated model is a bad approach for this purpose. It's an easy criticism to make – people made it frequently about the HBP's predecessor, the Blue Brain Project (BBP) – but by now it's been made pretty clear that this is not the entirety of what the HBP aims to do. The HBP is funded from an informatics grant: the initial stages will involve building the IT infrastructure and tools to allow data integration and modelling across all scales, not just a BBP-style megamodel (NB: any assertions I make about the goals of projects are purely my interpretation – I'm not affiliated with anything myself so can't speak authoritatively – though if my tweets at the time are accurate then Hill said himself that he was frustrated by the repeated assertion that the HBP was just going to try to build a brain in a computer).

Hill's explanation of the critical open letter was that an internal disagreement about the details of this initial ramp-up phase had gone public. It's hardly surprising that there are internal disagreements; it's more unusual for them to go public, which suggests quite a serious level of dispute. Gaute Einevoll made a point from the floor about petitioning against projects: in his previous life as a physicist, he had seen a big project publicly attacked and its funding taken away. The money wasn't put back into that area of physics; it was simply lost. The same seems likely if the HBP loses its funding: as it's funded from money earmarked for IT projects, how likely is it that neuroscientists would get any of that money back if it were reallocated? Another voice from the floor contended that the open letter was not a petition against the HBP, but a genuine request for clarification of the project's goals given the removal of the cognitive science subproject from the second round of funding. Hill's response was that, while the questions raised were legitimate, the open letter approach is portrayed in the media as an attack, so it could certainly have implications for HBP funding and potentially for the public image of neuroscience. I think this is fair enough, really. You only write an open letter if you're trying to put aggressive pressure on something. Here is a longer article outlining some of the concerns regarding the project's funding. Since I started writing this post, the project has celebrated its first anniversary and an external mediator has been appointed to "develop and realize proposals for restructuring the project governance", whatever that means.

Politics aside, I want to go into the criticism that keeps coming up: we're not going to learn much from a BBP type model. I suppose that some of Markram's media quotes about other brain models haven't helped the HBP out here. There was the infamous Modha–Markram incident (in which Dharmendra Modha of IBM overhyped their model, and Henry Markram responded with an amusingly aggressive open letter), and Markram also described the aforementioned Chris Eliasmith's SPAUN functional brain model as "…not a brain model". Markram clearly has a very set idea of what he thinks a brain model is (or at least, what a brain model isn't). One can see why some may be wary of his leadership of the HBP, then, given that it is meant to be considering a variety of approaches, and that when the submission for the second round of HBP funding was made, the cognitive science aspects had been dropped.

[Image: a plasticine model of a brain, captioned "A brain model"]

Assuming Markram means "not a good brain model" when describing SPAUN, rather than literally "not a brain model" (a lump of plasticine, or a piece of knitting, could be a brain model if we use it to improve our understanding of the brain), then why does he think these other approaches are no good? Given his criticisms of Modha's work, one might assume that his issue is with the lack of biological detail in these models. But lack of detail is something every model "suffers" from, even BBP models – and "suffers" is the wrong word: abstracting away non-essential details to provide an understandable description of a phenomenon is a crucial part of modelling. Who gets to say what the "right" level of detail is to "understand" something?

Level of detail is not the fundamental difference between a BBP style model and a SPAUN style model. Rather, they represent different philosophies regarding how models should be used to investigate reality. With SPAUN, the model creators have specific hypotheses about how the brain implements certain functions mathematically, and about how these functions can be computed by networks of neuron-like units using their Neural Engineering Framework (NEF). SPAUN is remarkably successful: it can perform 8 different tasks, after learning, using the same model without modification (though it cannot learn new tasks). The basic idea behind how the functions are implemented neurally is explained in the supplementary material [may be paywalled] of the original article in Science:

The central idea behind the NEF is that a group of spiking neurons can represent a vector space over time, and that connections between groups of neurons can compute functions on those vectors. The NEF provides a set of methods for determining what the connections need to be to compute a given function on the vector space represented by a group of neurons. Suppose we wish to compute the function y = f(x), where vector space x is represented in population A, and vector space y is represented in population B. To do so, the NEF assumes that each neuron in A and B has a “preferred direction vector” (1). The preferred direction vector is the vector (i.e. direction in the vector space) for which that neuron will fire most strongly. This is a well-established way to characterize the behavior of motor neurons (2), because the direction of motion – hence the vector represented in the neural group in motor cortex – is directly observable. This kind of characterization of neural response has also been used in the head direction system (3), visual system (4), and auditory system (5). The NEF generalizes this notion to all neural representation.

(my emphasis; references renumbered). The emphasised claim – that connections between groups of neurons compute functions on the vector spaces those groups represent – is key here: Eliasmith et al. are stating a hypothesis about how populations of neurons compute functions, and SPAUN represents their hypotheses about which functions the brain computes. They go from function to implementation by considering some biological constraints: neurons are connected with synapses, they communicate using spikes, there are two main types of neuron (inhibitory and excitatory), etc. In addition to the behavioural output, we can then see how well the model captures the brain's dynamics by comparing measures of the model's activity against measures of brain activity that haven't been used to constrain the model. Are the temporal patterns of the spikes in line with experimental data (and if not, why not)? What happens when you remove bits of the model (analogous to lesion studies)? What kind of connectivity structure does the model predict, and is this realistic? This last question in particular I think is important, and as far as I can tell it isn't addressed in the SPAUN paper or in subsequent discussion. Given that SPAUN optimises neural connectivity to perform particular functions, comparing that connectivity against real brain connectivity seems like one fairly obvious test of how well SPAUN captures real brain computation.
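
To make the quoted idea a little more concrete, here is a minimal rate-based sketch of the principle (my own toy illustration – it is not code from SPAUN or from the NEF software, and it uses rectified-linear rather than spiking neurons): each neuron gets a preferred direction, gain and bias, and least-squares decoders are found so that the population's activity reads out a chosen function of the represented value.

```python
# Toy, rate-based sketch of the NEF decoding idea (not SPAUN/NEF library code).
# Each neuron has a "preferred direction" (here just +1 or -1 on a scalar),
# a gain and a bias; linear decoders found by regularised least squares read
# out a chosen function f(x) from the population activity.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear tuning curves (real NEF models use spiking neurons)."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Sample the represented range and solve for decoders approximating f(x) = x**2
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])           # activities: (200, n_neurons)
f_target = xs ** 2
reg = 0.01 * A.shape[0] * np.max(A) ** 2       # ridge regularisation term
decoders = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ f_target)

# The connection weights to a downstream population would be built from these
# decoders; here we just check the decoded estimate of x**2.
x_test = 0.7
print(rates(x_test) @ decoders)   # should print a value close to 0.49
```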

The Blue Brain Project is quite a different undertaking. The models developed in the BBP attempt to capture as much of the low-level biological detail as possible. The neurons are represented by sets of equations that describe their electrical dynamics, including as many experimentally constrained details about 3D structure and interactions between nonlinear membrane conductances as possible. These neuron models are connected together by model synapses, the numbers, positions and dynamics of which are again constrained by experimental measurements. The result is a fantastically detailed and complex model that is as close as we can currently get to the physics of the system, but with no hypotheses about the network's function. Building this takes meticulous work and produces a model incorporating much of our current low-level neuroscience knowledge. The process of building it can also reveal aspects of the physiology that are unknown, suggesting further experiments – or show that the model is unable to capture a particular physical phenomenon. We can potentially learn a lot about biophysics and brain structure from this kind of approach.
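
To give a flavour of what "sets of equations describing electrical dynamics" means in practice, here is a generic single-compartment Hodgkin–Huxley sketch with the textbook sodium, potassium and leak conductances. This is not BBP code – real BBP neuron models have detailed 3D morphologies, many more channel types and experimentally fitted parameters – but the mathematical ingredients are of this kind.

```python
# Generic single-compartment Hodgkin-Huxley model (textbook squid-axon
# parameters, forward Euler integration). Illustrative only; not BBP code.
import numpy as np

C = 1.0                                  # membrane capacitance, uF/cm^2
g_na, g_k, g_l = 120.0, 36.0, 0.3        # maximal conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.387    # reversal potentials, mV

def alpha_beta(v):
    """Voltage-dependent channel gating rates (1/ms)."""
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

dt, t_stop, i_inj = 0.01, 50.0, 10.0     # ms, ms, uA/cm^2
v, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
spikes, above = 0, False
for _ in range(int(t_stop / dt)):
    am, bm, ah, bh, an, bn = alpha_beta(v)
    m += dt * (am * (1 - m) - bm * m)    # gating variable dynamics
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v += dt * (i_inj - i_ion) / C        # membrane potential update
    if v > 0 and not above:              # crude spike detection
        spikes += 1
    above = v > 0
print(f"{spikes} spikes in {t_stop} ms at {i_inj} uA/cm^2")
```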

Using this model to address function, though, is much more tricky. The hope (of some) is that, when the brain is simulated fully in this manner, function will emerge because the model of the underlying physics is so accurate (presumably this will require adding a virtual body and environment to assess function – another difficult task). I am sceptical that this can work, for various reasons. You will always leave something out of your model and your parameter fits will often be poorly constrained, certainly, but there's a bit more to it than that. Here's what Romain Brette has to say on his blog; I'm going to reproduce a large part of it because he makes the relevant points very clearly.

Such [data-driven] simulations are based on the assumption that the laws that govern the underlying processes are very well understood. This may well be true for the laws of neural electricity… However, in biology in general and in neuroscience in particular, the relevant laws are also those that describe the relations between the different elements of the model. This is a completely different set of laws. For the example of action potential generation, the laws are related to the co-expression of channels, which is more related to the molecular machinery of the cell than to its electrical properties.

Now these laws, which relate to the molecular and genetic machinery, are certainly not so well known. And yet, they are more relevant to what defines a living thing than those describing the propagation of electrical activity, since indeed these are the laws that maintain the structure that maintain the cells alive. Thus, models based on measurements attempt to reproduce biological function without capturing the logics of the living, and this seems rather hopeful.

…I do not want to sound as if I were entirely dismissing data-driven simulations. Such simulations can still be useful, as an exploratory tool. For example, one may simulate a neuron using measured channel densities and test whether the results are consistent with what the actual cell does. If they are not, then we know we are missing some important property. But it is wrong to claim that such models are more realistic because they are based on measurements. On one hand, they are based on empirical measurements, on the other hand, they are dismissing mechanisms (or “principles”), which is another empirical aspect to be accounted for in living things.

This is what I see as the main purpose of Blue Brain style models: cataloguing knowledge, and exploration through "virtual experiments". The models will always be missing details and will contain poorly constrained parameters (e.g. fits of ionic conductances in a 3D neuron model using measurements made only at a real neuron's soma [or, if you're lucky, soma and apical dendrite]), but they probably represent the most detailed description of what we know about neurophysics at the moment. However, even if function does "emerge", how much does this kind of model really help with our understanding of how the function emerges? You still have a lot of work to do to get there – hopefully the HBP will help with this by incorporating many different modelling approaches, and by providing the IT tools and data sharing to facilitate this effort (as Brette also points out, we supposedly already have loads of data for testing models, but getting at it is a pain in the arse).

Ultimately both SPAUN and the BBP have some utility, but they represent fundamentally different ways of describing the brain. The question of whether SPAUN or a BBP type model is “more realistic” doesn’t really make much sense; rather, we should ask how different models help us to understand the phenomena we are interested in. Equally, the criticism that we won’t learn much about the brain from a BBP style model isn’t necessarily true – it depends on what you’re interested in knowing about the brain and whether the model helps you to understand that. I’m keeping my fingers crossed that the Human Brain Project will facilitate this variety of approaches.

References from the SPAUN paper appendix quote:

1. T. C. Stewart, T. Bekolay, C. Eliasmith, Neural representations of compositional structures: Representing and manipulating vector spaces with spiking neurons. Connection Sci. 23, 145 (2011).
2. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey, Mental rotation of the neuronal population vector. Science 243, 234 (1989). doi:10.1126/science.2911737
3. J. S. Taube, The head direction signal: Origins and sensory-motor integration. Annu. Rev. Neurosci. 30, 181 (2007). doi:10.1146/annurev.neuro.29.051605.112854
4. N. C. Rust, V. Mante, E. P. Simoncelli, J. A. Movshon, How MT cells analyze the motion of visual patterns. Nat. Neurosci. 9, 1421 (2006). doi:10.1038/nn1786
5. B. J. Fischer, J. L. Peña, M. Konishi, Emergence of multiplicative auditory responses in the midbrain of the barn owl. J. Neurophysiol. 98, 1181 (2007). doi:10.1152/jn.00370.2007

Apologies, most (all?) of these are paywalled :\

Consciousness, memory, booze

October 28th, 2014 § 6 comments § permalink

I was having a read of Neuroskeptic‘s interview with Dr Srivas Chennu on PLOS Neuro earlier (recommended), and found myself nodding vigorously in agreement on reading this quotation:

Consciousness is not just being aware of something, but also being aware that you were aware of it yesterday.

I don't read much about consciousness research, but I sometimes find that consciousness researchers neglect the crucial aspect of memory. It is quite possible to appear to be conscious (moving, responding to stimuli, holding conversations) without storing memories. Consider becoming black-out drunk. If you've never done this, I wouldn't necessarily recommend it (unless you are a consciousness researcher, in which case it's completely essential research), but it is interesting from a philosophical perspective. At some point during an evening of drinking, you completely cease forming memories, yet your behaviour is not dramatically different from your usual drunken antics. Are you conscious during this period?

What about when you were an infant, unable to form longer-term memories – were you conscious then? No one can remember, so you can't just ask someone "were you conscious when you were 3 months old?" – but most people will answer affirmatively if you ask "were you conscious when you were 6 years old?", even if they've forgotten much of what they did at age 6.

One of the currently popular theories of consciousness, integrated information theory (IIT), doesn't take memory into account (Scott Aaronson's detailed post on IIT is great). It does many other strange things, like predicting small amounts of consciousness for items that intuitively would not be conscious in any way, but I suppose this just helps to show that whether something is conscious or not is determinable only by the something itself. If it can remember(?). Medical consciousness researchers have got their work cut out for them.

New paper: simulating electrode recordings in the brain

August 3rd, 2014 § 1 comment § permalink

I was at the Organization for Computational Neuroscience annual meeting (CNS 2014) in Quebec City all last week, which I aim to blog about in the near future (if you’re keen you can see my posters on figshare), but before that, I should write about our paper. It’s been available online since the end of May (open access) but I’ve been tidying up bits and pieces of the code so haven’t got round to advertising it much.

The basic motivation behind the paper is the current lack of knowledge about the relationship between the voltage measurements made using extracellular electrodes (local field potentials – LFPs) and the neuronal activity that underlies those measurements. It is very difficult to infer how currents are flowing in groups of neurons given a set of extracellular voltage measurements, as an infinite number of arrangements of current sources can give rise to the same LFP. Our approach instead was to take a particular pattern of activity in a neural network that was already well characterised experimentally and, given this pattern, to predict what the extracellular voltage measurements would be according to the physics of current flow in biological tissue. This is the "forward modelling" approach used previously in various studies (see here for a recent review and description of this approach). Our paper describes a simulation tool for performing these simulations (the Virtual Electrode Recording Tool for EXtracellular potentials: VERTEX), as well as some results from a large network model that we compared directly with experimental data.
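
The physics behind the forward model is simple in principle: if the tissue is approximated as an infinite, homogeneous resistive medium with conductivity σ, each membrane current source contributes I/(4πσr) to the potential at an electrode a distance r away, and the simulated LFP is the sum of these contributions. Here is a toy sketch of that idea (not VERTEX code – the source positions and currents below are made up purely for illustration):

```python
# Toy illustration of the LFP forward-modelling idea (not VERTEX code):
# point current sources in an infinite homogeneous resistive medium each
# contribute I / (4*pi*sigma*r) to the potential at an electrode.
import numpy as np

sigma = 0.3                      # extracellular conductivity, S/m (typical value)
rng = np.random.default_rng(1)

# Hypothetical set-up: 500 point current sources scattered in a 200 um cube,
# with currents that sum to zero (charge conservation across each neuron).
source_pos = rng.uniform(-100e-6, 100e-6, size=(500, 3))   # metres
source_i = rng.normal(0.0, 1e-9, size=500)                  # amperes
source_i -= source_i.mean()                                 # enforce sum = 0

def lfp_at(electrode_pos):
    """Extracellular potential (volts) at one electrode position."""
    r = np.linalg.norm(source_pos - electrode_pos, axis=1)
    r = np.maximum(r, 10e-6)     # avoid singularities very close to a source
    return np.sum(source_i / (4.0 * np.pi * sigma * r))

# A virtual linear probe along the depth axis, like a multi-electrode shank
for z in np.linspace(-200e-6, 200e-6, 9):
    v = lfp_at(np.array([150e-6, 0.0, z]))
    print(f"z = {z * 1e6:6.1f} um : LFP = {v * 1e6:7.2f} uV")
```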

The simulation tool is written in Matlab (sorry Python aficionados…) and can run in parallel if you have the Parallel Computing Toolbox installed. Matlab is often thought of as being slow, but if you’re cunning you can get things to run surprisingly speedily, which we have managed to do to a reasonable extent with VERTEX I think. You can download it from www.vertexsimulator.org – the download also includes files to run the model described in the paper, as well as some tutorials for setting up simulations. We’ve also made the experimental data available on figshare.

So now you know a little bit about what I was doing all those years up in the City of Dreams…

Reference:

Tomsett RJ, Ainsworth M, Thiele A, Sanayei M et al. Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): comparing multi-electrode recordings from simulated and biological mammalian cortical tissue. Brain Structure and Function (2014) doi:10.1007/s00429-014-0793-x

2013 in metal

July 12th, 2014 § 0 comments § permalink

I totally forgot to post my 2013 Top 20 Heavy Metal Albums list earlier in the year (my lists for 2012, 2011 and 2010 are here)! Good year, was 2013. So, here it is:

1. Beaten to Death – Dødsfest!
2. In Solitude – Sister
3. Stomach Earth – Stomach Earth
4. Altar of Plagues – Teethed Glory and Injury
5. Ayreon – The Theory of Everything
6. Ulcerate – Vermis
7. The Ruins of Beverast – Blood Vaults
8. Oranssi Pazuzu – Valonielu
9. VHÖL – Vhöl
10. Summoning – Old Mornings Dawn
11. Beastmilk – Beastmilk
12. Carcass – Surgical Steel
13. Coheed and Cambria – The Afterman: Descension*
14. Lychgate – Lychgate
15. Sahg – Delusions of Grandeur
16. Gorguts – Colored Sands
17. Alpha Tiger – Beneath the Surface
18. Power Trip – Manifest Decimation
19. Ihsahn – Das Seelenbrechen
20. Kvelertak – Meir

*I’m aware I lose metal points for this one.

Spotify playlist

Bonus albums:

Antigama – Meteor
ASG – Blood Drive
Author & Punisher – Women & Children
Autopsy – The Headless Ritual
Batillus – Concrete Sustain
Coliseum – Sister Faith
Convulse – Evil Prevails
Domovoyd – Oh Sensibility
Doomriders – Grand Blood
Hell – Curse and Chapter
Hybris – Heavy Machinery
Ghost – Infestissumam
KEN Mode – Entrench
Kylesa – Ultraviolet
Magister Templi – Lucifer Leviathan Logos
Monolithe – IV
Nails – Abandon All Life
Orchid – The Mouths of Madness
Portal – Vexovoid
Primitive Man – Scorn
Shining (NOR) – One One One
Sodom – Epitome of Torture
SubRosa – More Constant Than The Gods
Sulphur Aeon – Swallowed By The Ocean’s Tide
The Dillinger Escape Plan – One Of Us Is The Killer
The Meads of Asphodel – Sonderkommando
Toxic Holocaust – Chemistry of Consciousness
Twilight of the Gods – Fire On The Mountain
Uncle Acid & The Deadbeats – Mind Control
Voivod – Target Earth
Woe – Withdrawal

The Japanese have the best language logic

July 10th, 2014 § 0 comments § permalink

In Japan, they call a buffet バイキング – ba i ki n gu, which is as close as you can get to the word Viking using the sounds available. This is because at a buffet you can eat as much as you like. You can be a glutton. Just like the Vikings.

Japanese language logic is fabulous.

Are replication efforts pointless?

July 7th, 2014 § 6 comments § permalink

A couple of people have tweeted/blogged (EDIT: additional posts from Neuroskeptic, Drugmonkey, Jan Moren, Chris Said, Micah Allen EDIT 2: more, by Sanjay Srivastava, Pete Etchells, Neuropolarbear EDIT 3: more, by Sean Mackinnon) about a recent essay by Jason Mitchell, Professor of Psychology at Harvard, titled On the emptiness of failed replications. Its primary thesis is that efforts to replicate experimental results are pointless, “because unsuccessful experiments have no meaningful scientific value”. This is, of course, counter to the recent replication drive in social psychology – and to how I understand experimental science should be done (caveat: I am not an experimental scientist).

I disagree with the above quotation, and thought I would counter a couple of his arguments that stuck out to me as wrong or misleading:

…A replication attempt starts with good reasons to run an experiment: some theory predicts positive findings, and such findings have been reported in the literature, often more than once. Nevertheless, the experiment fails. In the normal course of science, the presumption would be that the researcher flubbed something important (perhaps something quite subtle) in carrying out the experiment, because that is far-and-away the most likely cause of a scientific failure.

In the case of a very well established result, the most likely cause of a scientific failure would certainly be experimental error. But for most hypotheses and theories this is surely not true. The likelihood of each possible cause of a negative result depends on the prior plausibility of the hypothesis (consider homeopathy: any number of "positive" results are much better explained by experimental error/bad design/bad analysis than by the hypothesis that homeopathy is effective at curing X – indeed, later in the essay Mitchell acknowledges that spurious positive results can and do come about through bad statistical practices) and on potentially unidentified variables. In much "frontier science" the plausibility of the theory is not well known (or even slightly known), and the unidentified variables can be many because the theory is incomplete. We're getting into experimenter's regress territory.
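
To put toy numbers on that argument, here is a crude Bayesian sketch of how the prior plausibility of a hypothesis changes what a single failed replication most likely means (all of the rates below are made-up illustrative values, not estimates from any real literature):

```python
# Toy Bayesian illustration (my own example, not from Mitchell's essay):
# how the prior plausibility of an effect changes what one null result means.

def p_true_given_failure(prior, power=0.8, alpha=0.05, p_flub=0.1):
    """Posterior probability that the effect is real after one null result.

    prior  - prior probability that the effect is real
    power  - probability of a positive result if the effect is real
             and the experiment is run correctly
    alpha  - false-positive rate if the effect is not real
    p_flub - probability that experimenter error wrecks the study
             (a wrecked study is assumed to return a null result)
    """
    # P(null result | effect real): either the study was flubbed,
    # or it was run correctly but missed the effect
    p_null_if_true = p_flub + (1 - p_flub) * (1 - power)
    # P(null result | no effect)
    p_null_if_false = p_flub + (1 - p_flub) * (1 - alpha)
    p_null = prior * p_null_if_true + (1 - prior) * p_null_if_false
    return prior * p_null_if_true / p_null

# Well-established effects survive a failed replication; speculative ones don't
for prior in (0.9, 0.5, 0.1):
    print(f"prior {prior:.1f} -> posterior {p_true_given_failure(prior):.2f}")
```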

… if a replication effort were to be capable of identifying empirically questionable results, it would have to employ flawless experimenters. Otherwise, how do we identify replications that fail simply because of undetected experimenter error?

Adversarial collaboration – alas, this happens infrequently, and apparently not in many current replication efforts. This is a legitimate criticism of the replication movement: collaboration is essential to avoid experimenter's regress.

…And here is the rub: if the most likely explanation for a failed experiment is simply a mundane slip-up, and the replicators are themselves not immune to making such mistakes, then the replication efforts have no meaningful evidentiary value outside of the very local (and uninteresting) fact that Professor So-and-So’s lab was incapable of producing an effect.

*Why* they weren’t able to produce the result should be very interesting, but can only really be investigated with collaboration, data sharing etc.

There are three standard rejoinders to these points. The first is to argue that because the replicator is closely copying the method set out in an earlier experiment, the original description must in some way be insufficient or otherwise defective…

…there is more to being a successful experimenter than merely following what’s printed in a method section…Collecting meaningful neuroimaging data, for example, requires that participants remain near-motionless during scanning, and thus in my lab, we go through great lengths to encourage participants to keep still. We whine about how we will have spent a lot of money for nothing if they move, we plead with them not to sneeze or cough or wiggle their foot while in the scanner, and we deliver frequent pep talks and reminders throughout the session. These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session…

But I can conceive of scenarios where all this pleading and pressure on the participants may in fact cause them to move differently in the scanner than participants in labs that deal with movement in other ways, or to perform differently on tasks because they are so distracted by trying not to move. Wearing clothes and not smoking indoors, however, are common to many societies. If the participants were naked, that should definitely be reported, as nakedness in front of strangers is often considered socially uncomfortable (perhaps the participants at Harvard have transcended cultural norms around nudity).

A second common rejoinder is to argue that if other professional scientists cannot reproduce an effect, then it is unlikely to be “real.”…

This is a slightly more seductive argument, but it, too, falls short. Many of the most robust and central phenomena in psychology started life as flimsy and capricious effects, their importance only emerging after researchers developed more powerful methods with which to study them.

I agree with this, but I would again suggest that if competent scientists are producing contradicting results, they should collaborate and run experiments together using protocols they both agree on.

A third rejoinder argues that the replication effort ought to be considered a counterweight to our publication bias in favor of positive results… if an effect has been reported twice, but hundreds of other studies have failed to obtain it, isn’t it important to publicize that fact?

No, it isn’t.

Eh?

Although the notion that negative findings deserve equal treatment may hold intuitive appeal, the very foundation of science rests on a profound asymmetry between positive and negative claims. Suppose I assert the existence of some phenomenon, and you deny it; for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample…Thus, negative findings—such as failed replications—cannot bear against positive evidence for a phenomenon…Positive scientific assertion cannot be reversed solely on the basis of null observations.

But most experiments do not give us a "positive" result in this sense – they tell us the probability of observing our data (or data more extreme) given that they were generated by a null distribution, not the probability that our hypothesis is true. "Positive" experimental studies cannot be reasoned about in the same way as this illustration of the limits of induction.
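
A quick simulation makes the distinction concrete (my own toy example, not Mitchell's): when the null hypothesis is true, about 5% of experiments still come out "positive" at p < 0.05, because the p-value describes the data under the null rather than the probability that the hypothesis is true.

```python
# Toy simulation: p-values from two-sample t-tests when the null is true.
# About 5% come out "significant" at the 0.05 level, by construction -
# the p-value describes the data under the null, not the hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments, n_per_group = 10_000, 30
false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)   # both groups drawn from the
    b = rng.normal(0.0, 1.0, n_per_group)   # same distribution: no real effect
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1
print(false_positives / n_experiments)      # ~0.05
```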

Replications are not futile, but they are perhaps being conducted sub-optimally (and certainly ruffling some feathers). Adversarial collaboration and data sharing would maximise the benefit of replication experiments.

Says the non-experimentalist.

PhD done, moved to Okinawa

April 1st, 2014 § 1 comment § permalink


Hello everybody!

I realise I haven't done a post since last September, which I believe is long enough to declare this blog legally dead. Fortunately I am trained in internet CPR and am able to kickstart the heart of this here blogging enterprise using my natural guile, expert medical training, and the WordPress "add new post" button. In my defence, I have been finishing off my thesis, which has now been submitted, scrutinised, corrected, resubmitted, re-scrutinised, and finally deemed worthy by the Powers That Be, which means I am now officially Dr. Richard Tömsett, PhD.

In more interesting developments, I have moved away from the City of Dreams to the wonderful island of Okinawa to start a 1-year postdoctoral research thingy at OIST (many thanks to the Japan Society for the Promotion of Science for moneys). I'm lucky enough to have been to Okinawa before, in 2011, when I did the Okinawa Computational Neuroscience Course. They have a beer vending machine. Insanity. Anyway, a big attraction of Okinawa is the beaches and sunshine, but of course it's been raining pretty much constantly since I arrived. The university itself is pretty sexy though.

There are already plenty of pictures of how sexy and nice Okinawa is so I thought I’d mainly post pictures of things that tickled me about Japan. Behold, the chewing gum that, when you put it in your mouth, gives you “special breath”:

[Image: Special Breath chewing gum – "Your breath, it will be special"]

As I haven't posted for ages, you also get the bonus treat of the magical toilet that has a sink on top of it, so when you flush you can wash your hands and not waste any water! Ingenious.

Wrong studies: is this how science progresses?

September 18th, 2013 § 6 comments § permalink

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequency content of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: either the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which produce a varying impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons' dendrites, and the space around neurons doesn't have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I'll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. Either way, some of the studies will have been shown to be "wrong", but that is beside the point: they will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
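
For a feel of the phenomenon being argued over, here is a toy demonstration of how a simple first-order low-pass filter attenuates the high-frequency content of a signal's power spectrum – a crude stand-in for filtering in general, not a model of either proposed mechanism:

```python
# Toy demonstration of low-pass filtering of a signal's power spectrum.
# A generic first-order filter with a 10 Hz corner frequency stands in for
# "some low-pass mechanism"; it is not a model of tissue impedance or of
# dendritic filtering specifically.
import numpy as np
from scipy import signal

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)         # 60 s of data
rng = np.random.default_rng(3)
white = rng.normal(size=t.size)      # flat-spectrum "input" signal

b, a = signal.butter(1, 10.0, btype="low", fs=fs)   # first-order low-pass
filtered = signal.lfilter(b, a, white)

for x, label in ((white, "unfiltered"), (filtered, "filtered")):
    f, pxx = signal.welch(x, fs=fs, nperseg=4096)
    low = pxx[(f > 1) & (f < 5)].mean()
    high = pxx[(f > 100) & (f < 200)].mean()
    print(f"{label:>10}: power ratio (1-5 Hz vs 100-200 Hz) = {low / high:8.1f}")
```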

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are results of publication bias. High profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high profile journals in order to progress in their careers (in some cases even just to graduate), so are incentivised to fudge statistics, fish for p-values, or just not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or not known about by many study designers). Pharmaceutical companies have further obvious incentives only to publish positive results from trials (visit www.alltrials.net !). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Bright Club audio

September 17th, 2013 § 0 comments § permalink

A little while ago I posted about doing Cambridge Bright Club – well here‘s the podcast from that event, which includes excellent pieces from the other performers. I’m on first.

Anti-optogenetics 2

September 7th, 2013 § 0 comments § permalink

This is a response to John Horgan's response to the responses to his original anti-optogenetics-hype article, which I blogged about. The comments section is worth reading, but I thought I'd respond to a couple of points here, too.

Neuroscientist Richard Tomsett says one of my examples of hype—a TED talk by Ed Boyden, another leader of optogenetics—doesn’t count because “the whole point of such talks is hype and speculation.” Really? So scientists shouldn’t be criticized for hyping their research in mass-media venues like TED—which reaches gigantic audiences–because no one is taking them seriously? Surely that can’t be right.

I perhaps wasn’t clear enough here – my point was that it seemed silly to refer to a TED talk as an example of hype when all TED talks hype their particular topics. Scientists certainly should be criticised for hyping research, but this is a problem with the TED format rather than optogenetics.

…the abysmal state of health care in the U.S. should have a bearing on discussions about biomedical research. I’m not saying that journalists, every time they report on a biomedical advance, need to analyze its potential impact on our health-care problems. But knowledge of these woes should inform coverage of biomedical advances, especially since technological innovation is arguably contributing to our high health care costs.

I agree, but again this is not a problem with optogenetics, or even the scientists that try to hype it.

John's posts touch on an issue with the way that science is funded, which (in the UK at least, and I assume elsewhere) requires an "impact" assessment to try to ensure that research spending isn't a waste of money. This is a big problem because it can be very difficult to predict what impact most research will have in the short term, let alone the long term. The most obvious way to demonstrate "impact" in neuroscience is to refer to potential treatments for brain disorders, though such treatments might be years or decades away. The brain is so complex that it's impossible to predict how a particular piece of research might affect medical practice, but you are compelled to spin your case because of this demand for "impact" – which is why all neuroscience press releases refer to potential treatments, no matter how relevant the research is to medicine. I completely agree that if scientists want to justify receiving public money then they need to justify their research to the public, but the current incentives promote hype – particularly medical hype. Note that I don't offer a solution to this problem…

As I said in the previous post, there are good points to be made about the hype surrounding optogenetics (as in this post), it’s just unfortunate that John instead went for criticisms that could be leveled at any hyped science. Rather than attacking a particular field with some quite shaky points, it would have been much more interesting to address why scientists feel the need to hype their work in the first place.