CNS 2015 in Prague, deadline extended

It seems unlikely that I’ll be able to attend CNS (the annual computational neuroscience meeting) this year (I went to the Quebec meeting last year and it was great – my posters are on figshare), but they’ve just announced that the abstract submission deadline has been extended to 1st March. It’s a cool meeting with some fun and interesting people, and Prague is swell. I feel I should be pimping it out, too, as I did the poster:

[Image: CNS 2015 Prague poster]

Obviously the poster is now out of date, since it shows the original abstract deadline. Anyway.

My week on Biotweeps

Last Sunday I finished a week curating the Biotweeps Twitter account. Biotweeps features a different researcher each week, tweeting about their particular areas of interest. It's a great account to follow for broadening your biology knowledge (as I'm a fake biologist, mine is extremely limited). I tweeted about my PhD research, the work I'm currently doing, and some interesting projects and papers from my previous colleagues at Newcastle University (particularly the CANDO project, which aims to create an implantable device for preventing seizures).

My tweets are archived on Storify here.

Publishing on The Winnower

I submitted a review of a paper for publication in the Journal of Neuroscience's Journal Club series. The original paper is here (sadly paywalled) – it's an interesting modelling study that attempts to fit simplified models of neuron population dynamics to experimental recordings, to shed some light on the neural network dynamics underlying those recordings. Unfortunately J. Neurosci. made a terrible mistake and chose not to publish my submission, so instead of letting that work go to waste I've published it on The Winnower, an innovative open platform for publishing papers without pre-publication peer review. Instead, your article is public immediately, and readers can submit public post-publication reviews (and the article can be updated). This is clearly the future of scientific publishing, and some bigger publishers (e.g. F1000) are already using similar models. The Winnower is a cool independent alternative, and is currently free to use. I really hope it takes off, but in the current impact-obsessed environment it's fighting an uphill battle.

SPAUN versus the Human Brain Project

As mentioned in my previous-but-one post, in August I went to Quebec City for the 2014 Computational Neuroscience meeting. It went splendidly from my perspective – I got to meet and chat with some interesting people, hopefully starting some collaborations, as well as eating inordinate quantities of poutine and sampling some good beer. This was also the first meeting where I had actually completed work in advance for my posters, so I didn’t have to search frantically for a printer in Quebec City the day before I presented.

Along with some nice talks and workshops, the keynote speakers were all Big Names this year: Frances Skinner, Christof Koch, Chris Eliasmith and Henry Markram. Of course, given the recent furore sparked by the open letter to the European Commission expressing doubts about both the conception and implementation of the Human Brain Project (HBP), everyone was keen to hear Markram speak. Sadly he couldn't make it, but Sean Hill (co-director of Neuroinformatics for the HBP) stepped in at the last minute in his stead.

He gave a good talk, clear and well organised – you don't get a billion euro grant without being good at presentation. The Q&A session was necessarily quite brief, but still interesting from a sociological perspective. The same point was made several times by different people: the HBP will never succeed in understanding the brain, because examining all the minute details of neural circuits, then rebuilding them in a huge, complicated model, is a bad approach for this purpose. It's an easy criticism to make, and people made it frequently about the HBP's predecessor, the Blue Brain Project (BBP), but by now it's been made pretty clear that this is not the HBP's main goal. The HBP is funded from an informatics grant – the initial stages will involve building the IT infrastructure and tools to allow data integration and modelling across all scales, not just a BBP-style megamodel (NB: any assertions I make about the goals of projects etc. are purely my interpretation; I'm not affiliated with anything myself so can't speak authoritatively – though if my tweets at the time are accurate, Hill said himself that he was frustrated by the repeated assertion that the HBP was just going to try to build a brain in a computer).

Hill's explanation of the critical open letter was that an internal disagreement about the details of this initial ramp-up phase had gone public. It's hardly surprising that there are internal disagreements; it's more unusual for them to go public, and it suggests quite a serious level of dispute. Gaute Einevoll made a point from the floor about petitioning against projects: in his previous life as a physicist, he had seen a big project publicly attacked and the funding taken away. The money wasn't put back into that area of physics; it was just lost. This seems likely to happen if the HBP loses its funding: as it's funded from money earmarked for IT projects, how likely is it that neuroscientists would get any of that money back if it were reallocated? Another voice from the floor contended that the open letter was not a petition against the HBP, but a genuine request for clarification of the project's goals given the removal of the cognitive science subproject from the second round of funding. Hill's response was that, while the questions raised were legitimate, the open letter approach is portrayed in the media as an attack, so it could certainly have implications for HBP funding and potentially the public image of neuroscience. I think this is fair enough, really. You only write an open letter if you're trying to put aggressive pressure on something. Here is a longer article outlining some of the concerns regarding the project's funding. Since I started writing this post, the project has celebrated its one-year anniversary and an external mediator has been appointed to "develop and realize proposals for restructuring the project governance", whatever that means.

Politics aside, I want to go into the criticism that keeps coming up: we're not going to learn much from a BBP-type model. I suppose that some of Markram's media quotes about other brain models haven't helped the HBP out here. There was the infamous Modha-Markram incident (in which Dharmendra Modha of IBM overhyped their model, and Henry Markram responded with an amusingly aggressive open letter), as well as Markram describing the aforementioned Chris Eliasmith's SPAUN functional brain model as "…not a brain model". Markram clearly has a very set idea of what he thinks a brain model is (or at least, what a brain model isn't). One can see why some may be wary of his leadership of the HBP, then, given that it is meant to be considering a variety of approaches, and that when the submission for the second round of HBP funding was made, the cognitive science aspects had been dropped.

[Image: a plasticine brain model. Caption: "A brain model"]

Assuming Markram means “not a good brain model” when describing SPAUN, rather than literally “not a brain model” (a lump of plasticine, or piece of knitting, could be brain models if we use them to improve our understanding of the brain), then why does he think these other approaches are no good? Given his criticisms of Modha’s work, one might assume that his issue is with the lack of biological detail used in these models. But lack of detail is something every model suffers from, even BBP models (“suffers” is the wrong word; abstracting away non-crucial details to provide an understandable description of a phenomenon is a crucial part of modelling). Who gets to say what the “right” level of detail is to “understand” something?

Level of detail is not the fundamental difference between a BBP-style model and a SPAUN-style model. Rather, they represent different philosophies regarding how models should be used to investigate reality. With SPAUN, the model creators have specific hypotheses about how the brain implements certain functions mathematically, and about how these functions can be computed by networks of neuron-like units, via their Neural Engineering Framework (NEF). SPAUN is remarkably successful at several tasks – notably, it can perform 8 different tasks, after learning, using the same model without modification (though it cannot learn new tasks). The basic idea behind how the functions are implemented neurally is explained in the supplementary material [may be paywalled] of the original article in Science:

The central idea behind the NEF is that a group of spiking neurons can represent a vector space over time, and that connections between groups of neurons can compute functions on those vectors. The NEF provides a set of methods for determining what the connections need to be to compute a given function on the vector space represented by a group of neurons. Suppose we wish to compute the function y = f(x), where vector space x is represented in population A, and vector space y is represented in population B. To do so, the NEF assumes that each neuron in A and B has a “preferred direction vector” (1). The preferred direction vector is the vector (i.e. direction in the vector space) for which that neuron will fire most strongly. This is a well-established way to characterize the behavior of motor neurons (2), because the direction of motion – hence the vector represented in the neural group in motor cortex – is directly observable. This kind of characterization of neural response has also been used in the head direction system (3), visual system (4), and auditory system (5). The NEF generalizes this notion to all neural representation.

(my emphasis; references renumbered). The bold sentence is key here – Eliasmith et al. are stating a hypothesis about how populations of neurons compute functions, and SPAUN represents their hypotheses about which functions the brain computes. They go from function to implementation by considering some biological constraints: neurons are connected with synapses, they communicate using spikes, there are two main types of neuron (inhibitory and excitatory), etc. In addition to the behavioural output, we can then see how well the model captures the brain’s dynamics by comparing measures of the model’s activity against measures of brain activity that haven’t been used to constrain the model. Are the temporal patterns of the spikes in line with experimental data (and if not, why not)? What happens when you remove bits of the model (analogous to lesion studies)? What kind of connectivity structure does the model predict, and is this realistic? This last question in particular I think is important, and as far as I can tell isn’t addressed in the SPAUN paper or in subsequent discussion. Given that SPAUN optimises neural connectivity to perform particular functions, comparing the connectivity against real brain connectivity seems one fairly obvious test of how well SPAUN captures real brain computation.
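To make the encode/decode idea concrete, here is a minimal numerical sketch of the NEF's core recipe in Python. This is my own toy illustration – all the tuning parameters are made up, and it is emphatically not the actual SPAUN/Nengo code:

```python
import numpy as np

# NEF sketch: a population of rate "neurons" represents a 1-D value x via
# preferred directions (here just +1 or -1) and rectified-linear tuning
# curves; linear decoders recovering f(x) are found by least squares.

rng = np.random.default_rng(0)
n_neurons = 100
encoders = rng.choice([-1.0, 1.0], n_neurons)  # preferred direction vectors
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Population firing rates for represented value x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Sample the represented space and solve for decoders approximating f(x) = x^2
xs = np.linspace(-1.0, 1.0, 200)
A = np.array([rates(x) for x in xs])           # (samples x neurons) activities
target = xs ** 2
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

print("max decoding error:", np.max(np.abs(A @ decoders - target)))
```

In the full NEF, the connection weights from population A to a population B computing y = f(x) are then (roughly) the outer product of B's encoders with A's decoders – spiking dynamics and synaptic filtering complicate the details, but this is the core idea.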

The Blue Brain Project is quite a different undertaking. The models developed in the BBP attempt to capture as much of the low-level biological detail as possible. The neurons are represented by sets of equations that describe their electrical dynamics, including as many experimentally constrained details about 3D structure and interactions between nonlinear membrane conductances as possible. These neuron models are connected together by model synapses, the numbers, positions and dynamics of which are again constrained by experimental measurements. The result is a fantastically detailed and complex model that is as close as we can currently get to the physics of the system, but with no hypotheses about the network's function. Building this takes meticulous work and produces a model incorporating much current low-level neuroscience knowledge. The process of building it can also reveal aspects of the physiology that are unknown, suggesting further experiments – or reveal if a model is unable to capture a particular physical phenomenon. We can potentially learn a lot about biophysics and brain structure from this kind of approach.
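To give a flavour of what "sets of equations that describe their electrical dynamics" means in practice, here is the simplest possible instance: a single-compartment Hodgkin-Huxley neuron with the classic squid-axon parameters. BBP neurons are, roughly speaking, multi-compartment elaborations of this with many more conductance types fit to data; the crude forward-Euler integration here is just for brevity:

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron: C dV/dt = I - INa - IK - IL,
# with voltage-dependent gating variables m, h, n.

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2; mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4         # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_inj = 0.01, 50.0, 10.0          # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
spikes, prev_V = 0, V

for _ in range(int(T / dt)):             # forward Euler integration
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_inj - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if prev_V < 0.0 <= V:                # count upward zero-crossings as spikes
        spikes += 1
    prev_V = V

print(f"{spikes} spikes in {T} ms with {I_inj} uA/cm^2 injected")
```

A BBP-style model strings together hundreds of such compartments per neuron, each with its own set of conductances, which is part of why the parameter-fitting problem discussed below is so hard.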

Using this model to address function, though, is much more tricky. The hope (of some) is that, when the brain is simulated fully in this manner, function will emerge because the model of the underlying physics is so accurate (presumably this will require adding a virtual body and environment to assess function – another difficult task). I am sceptical that this can work for various reasons. You will always miss something out of your model and your parameter fits will often be poorly constrained, certainly, but there’s a bit more to it than that. Here’s what Romain Brette has to say on his blog, which I am going to reproduce a large part of because he makes the relevant points very clearly.

Such [data-driven] simulations are based on the assumption that the laws that govern the underlying processes are very well understood. This may well be true for the laws of neural electricity… However, in biology in general and in neuroscience in particular, the relevant laws are also those that describe the relations between the different elements of the model. This is a completely different set of laws. For the example of action potential generation, the laws are related to the co-expression of channels, which is more related to the molecular machinery of the cell than to its electrical properties.

Now these laws, which relate to the molecular and genetic machinery, are certainly not so well known. And yet, they are more relevant to what defines a living thing than those describing the propagation of electrical activity, since indeed these are the laws that maintain the structure that maintain the cells alive. Thus, models based on measurements attempt to reproduce biological function without capturing the logics of the living, and this seems rather hopeful.

…I do not want to sound as if I were entirely dismissing data-driven simulations. Such simulations can still be useful, as an exploratory tool. For example, one may simulate a neuron using measured channel densities and test whether the results are consistent with what the actual cell does. If they are not, then we know we are missing some important property. But it is wrong to claim that such models are more realistic because they are based on measurements. On one hand, they are based on empirical measurements, on the other hand, they are dismissing mechanisms (or “principles”), which is another empirical aspect to be accounted for in living things.

This is what I see as being the main purpose of the Blue Brain style models: cataloguing knowledge, and exploration through “virtual experiments.” The models will always be missing details, and contain poorly constrained parameters (e.g. fits of ionic conductances in a 3D neuron model using measurements only made at a real neuron’s soma [or if you’re lucky, soma and apical dendrite]), but they represent probably the most detailed description of what we know about neurophysics at the moment. However, even if function does “emerge”, how much does this kind of model really help with our understanding of how the function emerges? You still have a lot of work to do to get there – hopefully the HBP will help with this by incorporating many different modelling approaches, and providing the IT tools and data sharing to facilitate this effort (as Brette also points out, we supposedly already have loads of data for testing models, but getting at it is a pain in the arse).

Ultimately both SPAUN and the BBP have some utility, but they represent fundamentally different ways of describing the brain. The question of whether SPAUN or a BBP type model is “more realistic” doesn’t really make much sense; rather, we should ask how different models help us to understand the phenomena we are interested in. Equally, the criticism that we won’t learn much about the brain from a BBP style model isn’t necessarily true – it depends on what you’re interested in knowing about the brain and whether the model helps you to understand that. I’m keeping my fingers crossed that the Human Brain Project will facilitate this variety of approaches.

References from the SPAUN paper appendix quote:

1. T. C. Stewart, T. Bekolay, C. Eliasmith, Neural representations of compositional structures: Representing and manipulating vector spaces with spiking neurons. Connection Sci. 23, 145 (2011).
2. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey, Mental rotation of the neuronal population vector. Science 243, 234 (1989). doi:10.1126/science.2911737
3. J. S. Taube, The head direction signal: Origins and sensory-motor integration. Annu. Rev. Neurosci. 30, 181 (2007). doi:10.1146/annurev.neuro.29.051605.112854
4. N. C. Rust, V. Mante, E. P. Simoncelli, J. A. Movshon, How MT cells analyze the motion of visual patterns. Nat. Neurosci. 9, 1421 (2006). doi:10.1038/nn1786
5. B. J. Fischer, J. L. Peña, M. Konishi, Emergence of multiplicative auditory responses in the midbrain of the barn owl. J. Neurophysiol. 98, 1181 (2007). doi:10.1152/jn.00370.2007

Apologies, most (all?) of these are paywalled :\

Consciousness, memory, booze

I was having a read of Neuroskeptic's interview with Dr Srivas Chennu on PLOS Neuro earlier (recommended), and found myself nodding vigorously in agreement on reading this quotation:

Consciousness is not just being aware of something, but also being aware that you were aware of it yesterday.

I don’t read too much about consciousness research, but I find sometimes that consciousness researchers neglect the crucial aspect of memory. It is quite possible to appear to be conscious (moving, responding to stimuli, holding conversations) without storing memories. Consider becoming black-out drunk. If you’ve never done this, I wouldn’t necessarily recommend it (unless you are a consciousness researcher, in which case it’s completely essential research), but it is interesting from a philosophical perspective. At some point during an evening of drinking, you completely cease forming memories, but your behaviour is not dramatically different from your usual drunken antics. Are you conscious during this period?

What about when you were an infant and unable to form longer-term memories – were you conscious then? No one can remember, so you can't just ask someone "were you conscious when you were 3 months old?" – but most people will answer affirmatively if you ask "were you conscious when you were 6 years old?", even if they've forgotten much of what they did at age 6.

One of the current popular theories of consciousness, integrated information theory (IIT), doesn't take memory into account (Scott Aaronson's detailed post on IIT is great). It does many other strange things, like predicting small amounts of consciousness for items that intuitively would not be conscious in any way, but I suppose this just helps to show that whether something is conscious or not is determinable only by the something itself. If it can remember(?). Medical consciousness researchers have got their work cut out for them.

New paper: simulating electrode recordings in the brain

I was at the Organization for Computational Neurosciences annual meeting (CNS 2014) in Quebec City all last week, which I aim to blog about in the near future (if you're keen you can see my posters on figshare), but before that, I should write about our paper. It's been available online since the end of May (open access), but I've been tidying up bits and pieces of the code so haven't got round to advertising it much.

The basic motivation behind the paper is the current lack of knowledge about the relationship between the voltage measurements made using extracellular electrodes (local field potentials – LFPs) and the activity of the neurons that underlies those measurements. It is very difficult to infer how currents are flowing in groups of neurons given a set of extracellular voltage measurements, as an infinite number of arrangements of current sources can give rise to the same LFP. Our approach instead was to take a particular pattern of activity in a neural network that was already well characterised experimentally, and to predict, from this pattern and the physics of current flow in biological tissue, what the extracellular voltage measurements would be. This is the "forward modelling" approach used previously in various studies (see here for a recent review and description of this approach). Our paper describes a simulation tool for performing these simulations (the Virtual Electrode Recording Tool for EXtracellular potentials: VERTEX), as well as some results from a large network model that we compared directly with experimental data.
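For intuition about the forward calculation: the simplest version treats each neural current source as a point in an infinite, homogeneous, resistive medium, where a current I at distance r from the electrode contributes I/(4πσr) to the potential. This Python sketch is just my toy illustration of that textbook approximation – VERTEX itself uses more detailed compartment-based calculations, and the numbers below are placeholders:

```python
import numpy as np

sigma = 0.3  # extracellular conductivity in S/m (a typical value for cortex)

def extracellular_potential(electrode_pos, source_pos, source_currents):
    """Potential (volts) at electrode_pos summed over point current sources.

    electrode_pos: (3,) position in metres
    source_pos: (n, 3) source positions in metres
    source_currents: (n,) transmembrane currents in amperes
    """
    dists = np.linalg.norm(source_pos - electrode_pos, axis=1)
    return np.sum(source_currents / (4.0 * np.pi * sigma * dists))

# Example: a +1/-1 nA source/sink pair (a current dipole) 100 um apart
sources = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 100e-6]])
currents = np.array([1e-9, -1e-9])
electrode = np.array([50e-6, 0.0, 0.0])
print(f"{extracellular_potential(electrode, sources, currents) * 1e6:.2f} uV")
```

Summing these contributions over every current source in every neuron of the network, at every electrode position, gives the simulated LFP.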

The simulation tool is written in Matlab (sorry Python aficionados…) and can run in parallel if you have the Parallel Computing Toolbox installed. Matlab is often thought of as being slow, but if you’re cunning you can get things to run surprisingly speedily, which we have managed to do to a reasonable extent with VERTEX I think. You can download it from www.vertexsimulator.org – the download also includes files to run the model described in the paper, as well as some tutorials for setting up simulations. We’ve also made the experimental data available on figshare.

So now you know a little bit about what I was doing all those years up in the City of Dreams…

Reference:

Tomsett RJ, Ainsworth M, Thiele A, Sanayei M et al. Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): comparing multi-electrode recordings from simulated and biological mammalian cortical tissue. Brain Structure and Function (2014) doi:10.1007/s00429-014-0793-x

Wrong studies: is this how science progresses?

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which result in a varying impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons' dendrites, and the space around neurons doesn't have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I'll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. In the latter case, the rejected studies will have been shown to be "wrong", but that is beside the point. They will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
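To see what this low-pass filtering looks like in a toy setting – emphatically not either group's actual model, just a first-order filter standing in for a passive membrane – compare the power spectra of a white-noise input and its filtered version:

```python
import numpy as np
from scipy import signal

# Toy low-pass filtering demo: white-noise "current" passed through a
# one-pole filter with time constant tau loses high-frequency power.

fs, tau = 10_000.0, 0.01                 # sample rate (Hz), time constant (s)
rng = np.random.default_rng(1)
current = rng.standard_normal(int(100 * fs))   # 100 s of white noise

# Discretise dV/dt = (-V + I) / tau as a one-pole IIR filter
a = np.exp(-1.0 / (fs * tau))
voltage = signal.lfilter([1 - a], [1, -a], current)

f, p_in = signal.welch(current, fs, nperseg=4096)
_, p_out = signal.welch(voltage, fs, nperseg=4096)

# Power ratio falls off roughly as 1/f^2 above the cutoff 1/(2*pi*tau) ~ 16 Hz
for freq in (10, 100, 1000):
    i = np.argmin(np.abs(f - freq))
    print(f"{freq:>5} Hz: power ratio {p_out[i] / p_in[i]:.2e}")
```

Measured LFP spectra fall off with frequency in roughly this fashion; the debate is over whether the "filter" is the extracellular medium, the neurons' own membranes, or both.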

The kind of "wrong" study that Ioannidis describes is quite different. His hypothesis is that many published positive findings are false, largely as a result of publication bias. High profile journals want to publish exciting results, and exciting results are usually positive findings ("we found no effect" is rarely exciting). Scientists are under pressure to publish in high profile journals in order to progress in their careers (in some cases even just to graduate), so are incentivised to fudge statistics, fish for p-values, or just not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored or not known about by many study designers). Pharmaceutical companies have further obvious incentives only to publish positive results from trials (visit www.alltrials.net!). This doesn't lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – "business as usual", but is a result of the incentive structure scientists currently face.
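Ioannidis' statistical argument is easy to demonstrate with a small simulation. Here is a deliberately crude sketch (my own toy example, not Ioannidis' actual analysis): many labs test an effect that is truly zero, and only "significant" results get written up:

```python
import numpy as np
from scipy import stats

# Publication bias in miniature: 1000 labs each test a null effect with a
# one-sample t-test; only p < 0.05 results are "published".

rng = np.random.default_rng(42)
n_labs, n_subjects = 1000, 20
published_effects = []

for _ in range(n_labs):
    data = rng.normal(0.0, 1.0, n_subjects)  # true effect size is zero
    _, p = stats.ttest_1samp(data, 0.0)
    if p < 0.05:                             # negative results go in the drawer
        published_effects.append(data.mean())

print(f"published: {len(published_effects)}/{n_labs} studies, all false positives")
print(f"mean |effect| in the 'literature': {np.mean(np.abs(published_effects)):.2f} SD")
```

Roughly 5% of these null studies get published, every one of them wrong, with an average apparent effect size of around half a standard deviation. Add p-hacking on top and the picture gets worse.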

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Anti-optogenetics 2

This is a response to John Horgan's response to the responses to his original anti-optogenetics-hype article, which I blogged about. The comments section is worth reading, but I thought I'd respond to a couple of points here, too.

Neuroscientist Richard Tomsett says one of my examples of hype—a TED talk by Ed Boyden, another leader of optogenetics—doesn’t count because “the whole point of such talks is hype and speculation.” Really? So scientists shouldn’t be criticized for hyping their research in mass-media venues like TED—which reaches gigantic audiences–because no one is taking them seriously? Surely that can’t be right.

I perhaps wasn’t clear enough here – my point was that it seemed silly to refer to a TED talk as an example of hype when all TED talks hype their particular topics. Scientists certainly should be criticised for hyping research, but this is a problem with the TED format rather than optogenetics.

…the abysmal state of health care in the U.S. should have a bearing on discussions about biomedical research. I’m not saying that journalists, every time they report on a biomedical advance, need to analyze its potential impact on our health-care problems. But knowledge of these woes should inform coverage of biomedical advances, especially since technological innovation is arguably contributing to our high health care costs.

I agree, but again this is not a problem with optogenetics, or even the scientists that try to hype it.

John's posts touch on an issue with the way that science is funded, which (in the UK at least, and I assume elsewhere) requires an "impact" assessment to try to ensure that research spending isn't a waste of money. This is a big problem because it can be very difficult to predict what impact most research will have in the short term, let alone the long term. The most obvious way to demonstrate "impact" in neuroscience is to refer to potential treatments for brain disorders, though such treatments might be years or decades away. The brain is so complex that it's impossible to predict how a particular piece of research might impact medical practice, but you are compelled to spin your case because of this demand for "impact" – which is why all neuroscience press releases will refer to potential treatments, however tenuous the research's relevance to medicine. I completely agree that if scientists want to justify receiving public money then they need to justify their research to the public, but the current incentives promote hype – particularly medical hype. Note that I don't offer a solution to this problem…

As I said in the previous post, there are good points to be made about the hype surrounding optogenetics (as in this post); it's just unfortunate that John instead went for criticisms that could be levelled at any hyped science. Rather than attacking a particular field with some quite shaky points, it would have been much more interesting to address why scientists feel the need to hype their work in the first place.

Anti-optogenetics

I read an article that annoyed me a bit. It's a rant by John Horgan about optogenetics, and about why he is vexed by breathless reports of manipulating brain functions using light (optogenetics is where you genetically modify brain cells so that you can manipulate their behaviour – stimulating or suppressing their firing – using light. This is particularly cool because it allows much better targeted control of brain cells than implanted electrodes or injected drugs, the other most precise methods of controlling the activity of many cells). I love me a good rant, and here is a nicely considered article about the limits of and hype over optogenetics [NB: I am not an expert in optogenetics], but this was neither good nor considered.

The first half of the article raises a complaint about the hype, which might have been legitimate if it had not misrepresented said hype. It grumbles that articles about optogenetics tout its therapeutic potential for human patients, but we don't know enough about the mechanisms underlying mental illnesses to treat them with optogenetics. While this latter point is certainly true, it's a straw man: read the articles linked to in the first half and see which ones you think are about human therapeutic potential (I've included the links at the bottom*). They all clearly report on animal studies, though of course make reference to the potential for helping to treat human illnesses (not necessarily using optogenetics directly, but by better understanding the brain through optogenetics). Indeed, this point was made to John on Twitter, so his article now includes a clarification at the end admitting as much, but still making some unconvincing points, which we'll come to later.

The second part of the article addresses John’s “meta-problem” with optogenetics: he “can’t get excited about an extremely high-tech, blue-sky, biomedical ‘breakthrough’-involving complex and hence costly gene therapy and brain surgery-when tens of millions of people in this country [USA] still can’t afford decent health care.” Surely this is a problem with all medical (and, indeed, basic) research that doesn’t address the very largest problems in the health system? I agree totally that this is a massive problem, but it is entirely socio-political, not scientific. Moaning that optogenetic treatments will be expensive is like criticising NASA because only a few lucky astronauts get to go into space.**

John has been good enough to add some "examples of researchers discussing therapeutic applications" to his post. Briefly looking through these, we have a 2011 article in the Journal of Neuroscience, which uses optogenetics to study the role of a particular brain area in depression (it doesn't mention therapeutic optogenetics in the abstract, only as a potential avenue for further research in the conclusion); a 2011 TED talk (the whole point of such talks is hype and speculation); this press release from the University of Oxford (which alludes to possible therapeutic use "in the more distant future" in one paragraph of a sixteen-paragraph article); a 2011 article in Medical Hypotheses (a non-peer-reviewed journal whose entire point is to publish speculative articles that propose potentially fanciful hypotheses); and this article in the New York Times (I can't argue with this one – there is a fair bit on therapies for humans; John's main gripe here, judging from his comments about this article, appears to be with the military funding that one of the several mentioned projects is receiving).

In the second amendment to the article – labelled "clarification" – John admits that he "overstated the degree to which coverage of optogenetics has focused on its potential as a treatment rather than research tool", which is nice, but then criticises the potential insights from optogenetics research, saying:

But the insights-into-mental-illness angle has also been over-hyped, for the following reasons: First, optogenetics is so invasive that it is unlikely to be tested for research purposes on even the most disabled, desperate human patients any time soon, if ever. Second, research on mice, monkeys and other animals provides limited insights–at best–into complex human illnesses such as depression, bipolar disorder and schizophrenia (or our knowledge of these disorders wouldn’t still be so appallingly primitive). Finally, optogenetics alters the cells and circuits it seeks to study so much that experimental results might not apply to unaltered tissue.

Regarding point one: this is still about therapeutic, not research, uses of optogenetics; it also ignores the fact that many patients undergo invasive surgery for epilepsy (which involves actually cutting bits of brain out – surely optogenetics could be a bit better here?) as well as for deep brain stimulation to treat severe depression and Parkinson's symptoms. Regarding point two: this is a criticism of using animal models in any kind of research rather than optogenetic research in particular – it is valid, but totally beside the point. Regarding point three: if we're looking to modify the cells in therapies anyway, why does this matter? Stimulating cells with electrodes or drugs changes the way they behave compared to "unaltered" tissue, too!

TL;DR – read this article instead, and don’t pay much attention to this one. It could have made some good points about optogenetics-hype, but didn’t.

*Links from the original article:

OCD and Optogenetics (Scicurious blog)

Implanting false memories in mice (MIT technology review)

Breaking habits with a flash of light (Not Exactly Rocket Science blog)

Optogenetics relieves depression in mouse trial (Neuron Culture blog)

How to ‘take over’ a brain (CNN)

A laser light show in the brain (The New Yorker)

** yeah I know, tenuous analogy – but let’s face it, all analogies are pretty shite

Measuring brain electricity

Ah yes, I was going to write about data analysis, but I got distracted by more data analysis. Anyway, here’s a bit of information on how we measure what’s going on in the brain, and how we interpret those measurements.

The brain generates and processes information using electrical signals, as far as our current understanding goes. For example, neurons in the eye respond to light by sending electrical signals to the visual cortex (via some other brain areas), where the signals are processed, interpreted, and distributed to other parts of the brain for integration with our other senses, further interpretation etc. These signals are very small, but measurable, even from the scalp – this is called electroencephalography (EEG). Each electrode of an EEG measures contributions from hundreds of thousands, perhaps millions, of neurons, so only provides a very coarse measurement of what's really going on. At the other end of the scale, the electrical responses of an individual neuron can be measured by attaching a very small electrode to its cell membrane, revealing the electrical activity inside it (or even the currents flowing through individual channels in the membrane). This gives you information on what a single neuron is doing, but neurons never work in isolation, so you're missing out on a lot of information about how the rest of the neuronal network is behaving.

Various types of measurement are available to bridge this scale divide. The type that I’m working with is from Utah arrays – 3.6mm square grids of 100 small electrodes that measure electrical activity from the space around neurons. This kind of measurement is similar to an EEG in that each electrode measures activity from many neurons surrounding it, but because the electrodes are placed so close to the neurons, spikes from individual neurons can also be picked up. The smaller scale allows the construction of a detailed picture of the local brain dynamics. Utah arrays are also particularly cool, because they are one of the only types of electrode that provide information on this kind of spatial scale that have also been approved for use in humans. They have already provided previously inaccessible information about epileptic brain activity in humans, and can be used to create brain-machine interfaces.

[Image: Utah array. Source: http://www.sci.utah.edu/~gk/abstracts/bisti03/]

The faintly terrifying-looking Utah array. Don’t worry, it’s quite small.

The data I get is from less glamorous locations than behaving humans, but in future I may get my grubby paws on recordings from brain tissue that has been removed during epilepsy surgery. Currently I’m looking at how brain activity varies in space over the small scale that the Utah array provides, and trying to match a computer model to the information provided by the recordings. The idea then is to investigate aspects of the model that cannot be changed in an experiment, such as how neurons are connected together, in order to guide future experimental research into unhealthy brain activity.