New paper: simulating electrode recordings in the brain

I was at the Organization for Computational Neuroscience annual meeting (CNS 2014) in Quebec City all last week, which I aim to blog about in the near future (if you’re keen you can see my posters on figshare), but before that, I should write about our paper. It’s been available online since the end of May (open access) but I’ve been tidying up bits and pieces of the code so haven’t got round to advertising it much.

The basic motivation behind the paper is the current lack of knowledge about the relationship between the voltage measurements made using extracellular electrodes (local field potentials – LFPs) and the underlying neuronal activity that generates them. It is very difficult to infer how currents are flowing in groups of neurons from a set of extracellular voltage measurements, as an infinite number of arrangements of current sources can give rise to the same LFP. Our approach instead was to take a particular pattern of activity in a neural network that was already well characterised experimentally, and to predict, from this pattern and the physics of current flow in biological tissue, what the extracellular voltage measurements would be. This is the “forward modelling” approach used previously in various studies (see here for a recent review and description of this approach). Our paper describes a tool for performing these simulations (the Virtual Electrode Recording Tool for EXtracellular potentials: VERTEX), as well as some results from a large network model that we compared directly with experimental data.
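
For readers unfamiliar with forward modelling, the core idea can be written down in a few lines. The sketch below is just my toy illustration of the principle (it is not VERTEX code, and all the numbers are made up): in an infinite, homogeneous, purely resistive medium, each compartmental membrane current contributes I/(4πσr) to the potential at an electrode a distance r away, and the contributions sum linearly.

```matlab
% Toy illustration of the forward-modelling idea (NOT VERTEX code):
% extracellular potential at one electrode from point current sources in an
% infinite, homogeneous, purely resistive medium: phi = sum_i I_i/(4*pi*sigma*r_i)
sigma = 0.3;                                   % extracellular conductivity (S/m), a commonly assumed value

% Hypothetical compartmental current sources at a single time step:
sourcePos = [0 0 0; 50e-6 0 0; 0 100e-6 0];    % positions (m), one source per row
sourceI   = [1e-9; -0.6e-9; -0.4e-9];          % currents (A); they sum to zero (charge conservation)

electrodePos = [200e-6 0 0];                   % electrode location (m)

r   = sqrt(sum(bsxfun(@minus, sourcePos, electrodePos).^2, 2));  % source-electrode distances (m)
phi = sum(sourceI ./ (4*pi*sigma*r));                            % extracellular potential (V)

fprintf('Potential at electrode: %.3g microvolts\n', phi*1e6);
```

Roughly speaking, VERTEX does the bookkeeping of this kind of calculation for every compartment of every neuron, at every electrode and every time step; the details of what it actually computes are in the paper.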

The simulation tool is written in Matlab (sorry Python aficionados…) and can run in parallel if you have the Parallel Computing Toolbox installed. Matlab is often thought of as being slow, but if you’re cunning you can get things to run surprisingly speedily, which we have managed to do to a reasonable extent with VERTEX I think. You can download it from www.vertexsimulator.org – the download also includes files to run the model described in the paper, as well as some tutorials for setting up simulations. We’ve also made the experimental data available on figshare.

So now you know a little bit about what I was doing all those years up in the City of Dreams…

Reference:

Tomsett RJ, Ainsworth M, Thiele A, Sanayei M et al. Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): comparing multi-electrode recordings from simulated and biological mammalian cortical tissue. Brain Structure and Function (2014) doi:10.1007/s00429-014-0793-x

Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: either the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which give rise to a frequency-dependent impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, and the space around neurons doesn’t have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I’ll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. In the latter case, the rejected studies will have been shown to be “wrong”, but that is beside the point: they will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
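
To make the distinction between the two explanations concrete, here is a toy numerical sketch of my own (it is not taken from any of the papers listed below, and the impedance form is entirely made up): if the tissue impedance varies with frequency, the measured power spectrum is the source spectrum scaled by |Z(f)|²; if the impedance is purely resistive, any low-pass shape in the measurement must already be present in the neuronal current sources themselves.

```matlab
% Toy illustration (not from any of the cited papers): how a frequency-
% dependent tissue impedance would reshape a measured power spectrum.
f = logspace(0, 3, 200);              % frequency axis, 1 Hz to 1 kHz

P_source = ones(size(f));             % hypothetical flat (white) source spectrum

Z_resistive = ones(size(f));          % Case 1: purely resistive medium, impedance flat
Z_filtering = 1 ./ sqrt(f);           % Case 2: made-up impedance falling as ~1/sqrt(f)

P_resistive = P_source .* abs(Z_resistive).^2;   % no extra attenuation from the medium
P_filtering = P_source .* abs(Z_filtering).^2;   % high frequencies attenuated as ~1/f

loglog(f, P_resistive, f, P_filtering);
xlabel('Frequency (Hz)'); ylabel('Relative power');
legend('resistive medium', 'frequency-dependent impedance');
```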

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are the result of publication bias. High-profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high-profile journals in order to progress in their careers (in some cases even just to graduate), so they are incentivised to fudge statistics, fish for p-values, or simply not publish their negative results (not to mention the problems inherent in null hypothesis testing, which many study designers ignore or are unaware of). Pharmaceutical companies have further obvious incentives to publish only positive results from trials (visit www.alltrials.net!). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”; it is a result of the incentive structure scientists currently face.
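
As a crude illustration of why unchecked fishing is a problem (my own toy simulation, with entirely arbitrary numbers): test enough comparisons where there is genuinely no effect, and the conventional 5% significance threshold guarantees a steady supply of spurious “positive” findings to publish.

```matlab
% Toy simulation (my own illustration): "fishing" across many null comparisons.
% Every effect here is genuinely zero, yet ~5% of tests come out "significant".
rng(1);                                 % for reproducibility
nStudies  = 1000;                       % hypothetical independent comparisons
nPerGroup = 20;

pvals = zeros(nStudies, 1);
for s = 1:nStudies
    a = randn(nPerGroup, 1);            % group 1: pure noise, no true effect
    b = randn(nPerGroup, 1);            % group 2: pure noise, no true effect
    [~, pvals(s)] = ttest2(a, b);       % two-sample t-test (Statistics Toolbox)
end

fprintf('"Significant" null results: %.1f%%\n', 100*mean(pvals < 0.05));
% If only these few percent were written up and published, the resulting
% literature would consist entirely of false positives.
```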

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Measuring brain electricity

Ah yes, I was going to write about data analysis, but I got distracted by more data analysis. Anyway, here’s a bit of information on how we measure what’s going on in the brain, and how we interpret those measurements.

The brain generates and processes information using electrical signals, as far as our current understanding goes. For example, neurons in the eye respond to light by sending electrical signals to the visual cortex (via some other brain areas), where the signals are processed, interpreted, and distributed to other parts of the brain for integration with our other senses, further interpretation, etc. These signals are very small, but measurable, even from the scalp – this is called electroencephalography (EEG). Each electrode of an EEG measures contributions from hundreds of thousands, perhaps millions, of neurons, so it only provides a very coarse measurement of what’s really going on. At the other end of the scale, the electrical responses of an individual neuron can be measured by attaching a very small electrode to its cell membrane, revealing the electrical activity inside it (or even the currents flowing through individual ion channels in the membrane). This gives you information on what a single neuron is doing, but neurons never work in isolation, so you’re missing out on a lot of information about how the rest of the neuronal network is behaving.

Various types of measurement are available to bridge this scale divide. The type that I’m working with comes from Utah arrays – 3.6 mm square grids of 100 small electrodes that measure electrical activity from the space around neurons. This kind of measurement is similar to an EEG in that each electrode measures activity from many surrounding neurons, but because the electrodes are placed so close to the neurons, spikes from individual neurons can also be picked up. The smaller scale allows the construction of a detailed picture of the local brain dynamics. Utah arrays are also particularly cool because they are one of the only electrode types that provide information at this spatial scale and have also been approved for use in humans. They have already provided previously inaccessible information about epileptic brain activity in humans, and can be used to create brain-machine interfaces.
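
For a flavour of how spikes get picked out of these recordings, here is a generic sketch of the standard approach (this is not our actual analysis pipeline, and all the numbers are illustrative): high-pass filter the raw signal to separate the fast spike band from the slower LFP, estimate the noise level, and detect threshold crossings at some multiple of it.

```matlab
% Generic sketch of simple spike detection (not our actual pipeline).
fs  = 30e3;                                   % assumed sampling rate (Hz)
t   = (0:1/fs:1)';                            % 1 s of fake data for illustration
raw = 20e-6*randn(size(t));                   % hypothetical noise-only signal (V)

% High-pass filter to isolate the spike band (> ~300 Hz) from the LFP
[b, a]    = butter(4, 300/(fs/2), 'high');    % Signal Processing Toolbox
spikeBand = filtfilt(b, a, raw);

% Threshold at a multiple of a robust estimate of the noise standard deviation
noiseSD   = median(abs(spikeBand)) / 0.6745;  % common robust noise estimate
threshold = 5 * noiseSD;                      % multiplier is a fairly arbitrary convention

% Extracellular spikes are usually negative-going: find downward crossings of -threshold
crossings = find(spikeBand(1:end-1) > -threshold & spikeBand(2:end) <= -threshold);
fprintf('Detected %d threshold crossings\n', numel(crossings));
```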

The faintly terrifying-looking Utah array (image source: http://www.sci.utah.edu/~gk/abstracts/bisti03/). Don’t worry, it’s quite small.

The data I get is from less glamorous locations than behaving humans, but in future I may get my grubby paws on recordings from brain tissue that has been removed during epilepsy surgery. Currently I’m looking at how brain activity varies in space over the small scale that the Utah array provides, and trying to match a computer model to the information provided by the recordings. The idea then is to investigate aspects of the model that cannot be changed in an experiment, such as how neurons are connected together, in order to guide future experimental research into unhealthy brain activity.

Neglect

I have been neglecting my writing duties, for which I apologise. Sorry you two.

I’ve been all over the place of late: Copenhagen visiting friends at the end of July, from whence we drove to the Wacken metal festival in Germany (best shows: Nasum, Kylesa, Volbeat). This proved true and brutal, as usual, but particularly so this year because of the mud baths created by the aggressive precipitation. Still managed to come back with a sun tan. After a fleeting visit to Hamburg, I spent most of the next week recovering, then undid most of that recovery with a visit to my sister for birthday celebrations, followed by a truly wonderful gig by Refused in London. I have a total man crush on the vocalist. The return to Newcastle on Monday took six hours: our train broke down. I managed to read a Viz from cover-to-cover though, so it wasn’t time entirely wasted.

This week I have been mostly analysing electrophysiology data from our experimental collaborators (details to follow, tomorrow maybe…), partly using a nice little toolbox for Matlab called Chronux. Unfortunately, it doesn’t seem to be under active development – the last release was in 2008 – which is a shame, as it has some nice functionality but needs a little tidying up in places (one of the main authors, Partha Mitra, has co-authored a very interesting and useful book on neural data analysis and its philosophical background, but his website seems mysteriously inactive). If anyone knows any more details, please comment!
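
For anyone curious, this is the sort of thing Chronux makes easy – a multitaper power spectrum of a continuous LFP trace. The snippet below is only an illustration based on my reading of the Chronux documentation (check the docs for the exact parameter fields; the data here are placeholders):

```matlab
% Illustrative Chronux usage (see the Chronux documentation for details):
% multitaper power spectrum of a continuous LFP signal.
fs  = 1000;                          % assumed sampling rate (Hz)
lfp = randn(10*fs, 1);               % placeholder data; substitute a real recording

params.Fs       = fs;                % sampling frequency
params.tapers   = [3 5];             % time-bandwidth product and number of tapers
params.fpass    = [1 100];           % frequency range of interest (Hz)
params.trialave = 0;                 % no averaging over trials/channels

[S, f] = mtspectrumc(lfp, params);   % Chronux multitaper spectrum estimate

loglog(f, S);
xlabel('Frequency (Hz)'); ylabel('Power');
```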

Do not pass out at Wacken.