As mentioned in my previous-but-one post, in August I went to Quebec City for the 2014 Computational Neuroscience meeting. It went splendidly from my perspective – I got to meet and chat with some interesting people, hopefully starting some collaborations, as well as eat inordinate quantities of poutine and sample some good beer. This was also the first meeting where I had actually completed work in advance for my posters, so I didn’t have to search frantically for a printer in Quebec City the day before I presented.
Along with some nice talks and workshops, this year’s keynote speakers were all Big Names: Frances Skinner, Christof Koch, Chris Eliasmith and Henry Markram. Of course, given the recent furore sparked by the open letter to the European Commission expressing doubts about both the conception and implementation of the Human Brain Project (HBP), everyone was keen to hear Markram speak. Sadly he couldn’t make it, but Sean Hill (co-director of Neuroinformatics for the HBP) stepped in at the last minute in his stead.
He gave a good talk, clear and well organised – you don’t get a billion euro grant without being good at presentation. The Q&A session was necessarily quite brief, but still interesting from a sociological perspective. The same point was made several times by different people: the HBP will never succeed in understanding the brain, because examining all the minute details of neural circuits, then rebuilding them in a huge, complicated model, is a bad approach for that purpose. It’s an easy criticism to make, and people made it frequently about the HBP’s predecessor, the Blue Brain Project (BBP), but by now it’s been made pretty clear that this is not the HBP’s main goal, nor the entirety of what it aims to do. The HBP is funded from an informatics grant – the initial stages will involve building the IT infrastructure and tools to allow data integration and modelling across all scales, not just a BBP-style megamodel. (NB: any assertions I make about the goals of projects etc. are purely my interpretation – I’m not affiliated with anything myself, so can’t speak authoritatively – though if my tweets at the time are accurate, Hill said himself that he was frustrated by the repeated assertion that the HBP was just going to try to build a brain in a computer.)
Hill’s explanation of the critical open letter was that an internal disagreement about the details of this initial ramp-up phase had gone public. It’s hardly surprising that there are internal disagreements; it’s more unusual for them to go public, which suggests quite a serious level of dispute. Gaute Einevoll made a point from the floor about petitioning against projects: in his previous life as a physicist, he had seen a big project publicly attacked and the funding taken away. The money wasn’t put back into that area of physics; it was just lost. The same seems likely to happen if the HBP loses its funding: as it’s funded from money earmarked for IT projects, how likely is it that neuroscientists would get any of that money back if it were reallocated? Another voice from the floor contended that the open letter was not a petition against the HBP, but a genuine request for clarification of the project’s goals given the removal of the cognitive science subproject from the second round of funding. Hill’s response was that, while the questions raised were legitimate, the open letter approach is portrayed in the media as an attack, so it could certainly have implications for HBP funding and potentially for the public image of neuroscience. I think this is fair enough, really – you only write an open letter if you’re trying to put aggressive pressure on something. Here is a longer article outlining some of the concerns regarding the project’s funding. Since I started writing this post, the project has celebrated its 1-year anniversary and an external mediator has been appointed to “develop and realize proposals for restructuring the project governance”, whatever that means.
Politics aside, I want to go into the criticism that keeps coming up: we’re not going to learn much from a BBP-type model. I suppose that some of Markram’s media quotes about other brain models haven’t helped the HBP out here. There was the infamous Modha-Markram incident (in which Dharmendra Modha of IBM overhyped their model, and Henry Markram responded with an amusingly aggressive open letter), as well as Markram describing the aforementioned Chris Eliasmith’s SPAUN functional brain model as “…not a brain model”. Markram clearly has a very set idea of what he thinks a brain model is (or at least, what a brain model isn’t). One can see why some may be wary of his leadership of the HBP, then, given that it is meant to be considering a variety of approaches, and that when the submission for the second round of HBP funding was made, the cognitive science aspects had been dropped.
A brain model
Assuming Markram means “not a good brain model” when describing SPAUN, rather than literally “not a brain model” (a lump of plasticine, or piece of knitting, could be brain models if we use them to improve our understanding of the brain), then why does he think these other approaches are no good? Given his criticisms of Modha’s work, one might assume that his issue is with the lack of biological detail used in these models. But lack of detail is something every model suffers from, even BBP models (“suffers” is the wrong word; abstracting away non-crucial details to provide an understandable description of a phenomenon is a crucial part of modelling). Who gets to say what the “right” level of detail is to “understand” something?
Level of detail is not the fundamental difference between a BBP-style model and a SPAUN-style model. Rather, they represent different philosophies regarding how models should be used to investigate reality. With SPAUN, the model’s creators have specific hypotheses about how the brain implements certain functions mathematically, and about how those functions can be computed by networks of neuron-like units via their Neural Engineering Framework (NEF). SPAUN is remarkably successful at several tasks – notably, it can perform 8 different tasks, after learning, using the same model without modification (though it cannot learn new tasks). The basic idea behind how the functions are implemented neurally is explained in the supplementary material [may be paywalled] of the original article in Science:
**The central idea behind the NEF is that a group of spiking neurons can represent a vector space over time, and that connections between groups of neurons can compute functions on those vectors.** The NEF provides a set of methods for determining what the connections need to be to compute a given function on the vector space represented by a group of neurons. Suppose we wish to compute the function y = f(x), where vector space x is represented in population A, and vector space y is represented in population B. To do so, the NEF assumes that each neuron in A and B has a “preferred direction vector” (1). The preferred direction vector is the vector (i.e. direction in the vector space) for which that neuron will fire most strongly. This is a well-established way to characterize the behavior of motor neurons (2), because the direction of motion – hence the vector represented in the neural group in motor cortex – is directly observable. This kind of characterization of neural response has also been used in the head direction system (3), visual system (4), and auditory system (5). The NEF generalizes this notion to all neural representation.
(my emphasis; references renumbered). The bold sentence is key here – Eliasmith et al. are stating a hypothesis about how populations of neurons compute functions, and SPAUN represents their hypotheses about which functions the brain computes. They go from function to implementation by considering some biological constraints: neurons are connected with synapses, they communicate using spikes, there are two main types of neuron (inhibitory and excitatory), etc. In addition to the behavioural output, we can then see how well the model captures the brain’s dynamics by comparing measures of the model’s activity against measures of brain activity that haven’t been used to constrain the model. Are the temporal patterns of the spikes in line with experimental data (and if not, why not)? What happens when you remove bits of the model (analogous to lesion studies)? What kind of connectivity structure does the model predict, and is this realistic? This last question in particular I think is important, and as far as I can tell isn’t addressed in the SPAUN paper or in subsequent discussion. Given that SPAUN optimises neural connectivity to perform particular functions, comparing the connectivity against real brain connectivity seems one fairly obvious test of how well SPAUN captures real brain computation.
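To make the quoted idea concrete, here is a minimal NumPy sketch of NEF-style decoding. Everything here – the rectified-linear tuning curves, the parameter ranges, the choice of f(x) = x² – is my own illustrative choice, not taken from the NEF or SPAUN implementations, which use spiking neuron models:

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of neuron-like units representing a 1-D vector space x in [-1, 1].
n_neurons = 100
# "Preferred direction vectors" (encoders): in 1-D, just +1 or -1.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)    # illustrative ranges
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear tuning curves: each unit fires most strongly
    when x points along its preferred direction."""
    current = gains * encoders * x + biases
    return np.maximum(current, 0.0)

# Sample the represented space and solve for linear decoders by least
# squares, so that the population's activity "computes" y = f(x) = x**2.
xs = np.linspace(-1.0, 1.0, 200)
A = np.array([rates(x) for x in xs])             # (200, n_neurons) activities
target = xs ** 2
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

estimate = A @ decoders
print("max decoding error:", np.max(np.abs(estimate - target)))
```

The point is that the decoders found by least squares define the connection weights that would make a downstream population represent f(x); the NEF applies this idea with spiking neurons rather than the rate-based units used here.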
The Blue Brain Project is quite a different undertaking. The models developed in the BBP attempt to capture as much of the low-level biological detail as possible. The neurons are represented by sets of equations that describe their electrical dynamics, including as many experimentally constrained details about 3D structure and interactions between nonlinear membrane conductances as possible. These neuron models are connected together by model synapses, the numbers, positions and dynamics of which are again constrained by experimental measurements. The result is a fantastically detailed and complex model that is as close as we can currently get to the physics of the system, but with no hypotheses about the network’s function. Building this takes meticulous work and produces a model incorporating much current low-level neuroscience knowledge. The process of building it can also reveal aspects of the physiology that are unknown, suggesting further experiments – or reveal that a model is unable to capture a particular physical phenomenon. We can potentially learn a lot about biophysics and brain structure from this kind of approach.
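As an illustration of what “sets of equations describing electrical dynamics” means, here is a single-compartment Hodgkin-Huxley neuron with the classic squid-axon parameters, stepped forward with Euler integration. This is vastly simpler than the multi-compartment, many-conductance models the BBP builds, and none of the parameter values are BBP-specific:

```python
import numpy as np

def vtrap(x, y):
    """x / (exp(x/y) - 1), with the x -> 0 singularity handled (limit y - x/2)."""
    if abs(x) / y < 1e-6:
        return y - x / 2.0
    return x / np.expm1(x / y)

dt, tstop = 0.01, 50.0               # ms
V = -65.0                            # membrane potential (mV)
m, h, n = 0.053, 0.596, 0.317        # gating variables at their resting values
spikes, prev_above = 0, False

for t in np.arange(0.0, tstop, dt):
    I = 10.0 if t >= 5.0 else 0.0    # step current, uA/cm^2
    # Classic Hodgkin-Huxley rate functions (V in mV).
    am, bm = 0.1 * vtrap(-(V + 40.0), 10.0), 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah, bh = 0.07 * np.exp(-(V + 65.0) / 20.0), 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an, bn = 0.01 * vtrap(-(V + 55.0), 10.0), 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Gating dynamics and membrane equation (Cm = 1 uF/cm^2).
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    I_ion = (120.0 * m**3 * h * (V - 50.0)    # Na conductance
             + 36.0 * n**4 * (V + 77.0)       # K conductance
             + 0.3 * (V + 54.387))            # leak
    V += dt * (I - I_ion)
    # Count upward crossings of 0 mV as spikes.
    above = V > 0.0
    if above and not prev_above:
        spikes += 1
    prev_above = above

print("spikes in 50 ms:", spikes)
```

A BBP-style model is essentially this, multiplied: thousands of coupled compartments per neuron, many more conductance types per compartment, and experimentally constrained synapses wiring the neurons together.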
Using this model to address function, though, is much trickier. The hope (of some) is that, when the brain is simulated fully in this manner, function will emerge because the model of the underlying physics is so accurate (presumably this will require adding a virtual body and environment to assess function – another difficult task). I am sceptical that this can work, for various reasons. You will always miss something out of your model, and your parameter fits will often be poorly constrained, certainly, but there’s a bit more to it than that. Here’s what Romain Brette has to say on his blog, a large part of which I am going to reproduce because he makes the relevant points very clearly.
Such [data-driven] simulations are based on the assumption that the laws that govern the underlying processes are very well understood. This may well be true for the laws of neural electricity… However, in biology in general and in neuroscience in particular, the relevant laws are also those that describe the relations between the different elements of the model. This is a completely different set of laws. For the example of action potential generation, the laws are related to the co-expression of channels, which is more related to the molecular machinery of the cell than to its electrical properties.
Now these laws, which relate to the molecular and genetic machinery, are certainly not so well known. And yet, they are more relevant to what defines a living thing than those describing the propagation of electrical activity, since indeed these are the laws that maintain the structure that maintain the cells alive. Thus, models based on measurements attempt to reproduce biological function without capturing the logics of the living, and this seems rather hopeful.
…I do not want to sound as if I were entirely dismissing data-driven simulations. Such simulations can still be useful, as an exploratory tool. For example, one may simulate a neuron using measured channel densities and test whether the results are consistent with what the actual cell does. If they are not, then we know we are missing some important property. But it is wrong to claim that such models are more realistic because they are based on measurements. On one hand, they are based on empirical measurements, on the other hand, they are dismissing mechanisms (or “principles”), which is another empirical aspect to be accounted for in living things.
This is what I see as being the main purpose of the Blue Brain style models: cataloguing knowledge, and exploration through “virtual experiments.” The models will always be missing details, and contain poorly constrained parameters (e.g. fits of ionic conductances in a 3D neuron model using measurements only made at a real neuron’s soma [or if you’re lucky, soma and apical dendrite]), but they represent probably the most detailed description of what we know about neurophysics at the moment. However, even if function does “emerge”, how much does this kind of model really help with our understanding of how the function emerges? You still have a lot of work to do to get there – hopefully the HBP will help with this by incorporating many different modelling approaches, and providing the IT tools and data sharing to facilitate this effort (as Brette also points out, we supposedly already have loads of data for testing models, but getting at it is a pain in the arse).
Ultimately both SPAUN and the BBP have some utility, but they represent fundamentally different ways of describing the brain. The question of whether SPAUN or a BBP type model is “more realistic” doesn’t really make much sense; rather, we should ask how different models help us to understand the phenomena we are interested in. Equally, the criticism that we won’t learn much about the brain from a BBP style model isn’t necessarily true – it depends on what you’re interested in knowing about the brain and whether the model helps you to understand that. I’m keeping my fingers crossed that the Human Brain Project will facilitate this variety of approaches.
References from the SPAUN paper appendix quote:
1. T. C. Stewart, T. Bekolay, C. Eliasmith, Neural representations of compositional structures: Representing and manipulating vector spaces with spiking neurons. Connection Sci. 23, 145 (2011).
2. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey, Mental rotation of the neuronal population vector. Science 243, 234 (1989). doi:10.1126/science.2911737
3. J. S. Taube, The head direction signal: Origins and sensory-motor integration. Annu. Rev. Neurosci. 30, 181 (2007). doi:10.1146/annurev.neuro.29.051605.112854
4. N. C. Rust, V. Mante, E. P. Simoncelli, J. A. Movshon, How MT cells analyze the motion of visual patterns. Nat. Neurosci. 9, 1421 (2006). doi:10.1038/nn1786
5. B. J. Fischer, J. L. Peña, M. Konishi, Emergence of multiplicative auditory responses in the midbrain of the barn owl. J. Neurophysiol. 98, 1181 (2007). doi:10.1152/jn.00370.2007
Apologies, most (all?) of these are paywalled :\