Snoopers’ Charter

Just a short one – I finished my time in Okinawa on 31st March and have since made my triumphant return to the UK. Things have been remarkably hectic since then – I’ve been job hunting, house hunting, and everyone I know seems to be getting married and having stag parties. All fun and games, but my productivity has taken a serious hit… at some point I’ll get round to posting more neuroscience stuff. Possibly.

Before then, the Snoopers’ Charter is back in the news after the recent Tory election victory, with Theresa May yearning to gorge herself on our beautiful, private data. I wrote to my MP Steve Brine, though his parliamentary voting record goes against pretty much all of my opinions, so I don’t have much hope that I’ll make any difference. I can but try. You should too, it’s pretty easy: just follow this link, enter your postcode and mash your face into your keyboard for a few minutes. Here’s the text of my message:

Dear Mr. Brine,

Judging by your voting record we disagree on many issues, but I hope we can find some common ground here. I am extremely concerned about the proposed Investigatory Powers Bill (aka Snoopers’ Charter). It appears to be a gross breach of civil liberties and an attack on our Article 8 right to privacy. It runs counter to the traditional Conservative opposition to government overreach and to conservative values of individual liberty, effectively treating everyone as a suspected criminal.

On a more practical note, I am unconvinced by Theresa May’s arguments that the mass collection of our data is in any way necessary for our security, especially given the large cost of implementing such a system. The government did not make a compelling case for this when the original legislation was proposed a few years ago, and nothing has changed since then.

I urge you not to support the Charter in any form it might take, and ask that you clarify your own position on the issue.

If you would like further information on the Snoopers’ Charter, please visit

Yours sincerely,

Dr Richard Tomsett

Neuroscience papers to read 4

Bit of a long one this time as I’ve missed a few weeks. Hectic times in my last month in Okinawa…

Note: these lists are for my own reference, but hopefully you’ll find something interesting in there too.

Free access:


Neuroscience papers to read 3

Neuroscience papers to read this week. Note: these lists are for my own reference, but hopefully you’ll find something interesting in there too.

Free access:


CNS 2015 in Prague, deadline extended

It seems unlikely that I’ll be able to attend CNS (the annual computational neuroscience meeting) this year (I went to the Quebec meeting last year and it was great – my posters are on figshare), but they’ve just announced that the abstract submission deadline has been extended to 1st March. It’s a cool meeting with some fun and interesting people, and Prague is swell. I feel I should be pimping it out, too, as I did the poster:

CNS 2015 Prague

Obviously the poster is now out of date, with the old abstract deadline on it. Anyway.


Neuroscience papers to read 2

More exciting (possibly) neuroscience papers for this week. Note: these lists are for my own reference, but hopefully you’ll find something interesting in there too.

Free access:


2014 in metal

[Image: La Fin Absolue du Monde album cover]

2014 perhaps wasn’t a vintage year for metal (certainly not up to 2013’s high standards), but there were still some excellent releases. Future generations will no doubt remember 2014 as the year I started reviewing albums for the finest music web-site on the internet, Angry Metal Guy. I did a top 10 list there, and here I’ve expanded it to my usual top 20 plus stragglers (top 20s were customary at the Snakenet Metal Radio forums – 2014 will also be remembered as the year that web-site finally kicked the bucket, which is sad). Since writing my AMG list I’ve listened to Opeth’s new record a lot more and it’s climbed a few positions, knocking Blut Aus Nord out of the top 10.

1. Enabler – La Fin Absolue du Monde
2. Voices – London
3. Pallbearer – Foundations of Burden
4. Emptiness – Nothing But the Whole
5. Soen – Tellurian
6. Giant Squid – Minoans
7. Opeth – Pale Communion
8. Gazpacho – Demon
9. Psalm Zero – The Drain
10. Hail Spirit Noir – Oi Magoi
11. Blut Aus Nord – Memoria Vetusta III – Saturnian Poetry
12. Martyrdöd – Elddop
13. Godflesh – A World Lit Only by Fire
14. Horrendous – Ecdysis
15. Portrait – Crossroads
16. Slugdge – Gastronomicon
17. Pyrrhon – The Mother of Virtues
18. Artificial Brain – Labyrinth Constellation
19. Menace Ruine – Venus Armata
20. Solitary Sabred – Redemption Through Force

Spotify playlist (this is missing Slugdge, whose album is on Bandcamp)

Bonus albums:

The Atlas Moth – The Old Believer
Audrey Horne – Pure Heavy
Behemoth – The Satanist
Bloodbath – Grand Morbid Funeral
Dawnbringer – Night of the Hammer
Doom:VS – Earthless
Elvenking – The Pagan Manifesto
Fallujah – The Flesh Prevails
Goatwhore – Constricting Rage of the Merciless
The Great Old Ones – Tekeli-Li
Hoth – Oathbreaker
Impaled Nazarene – Vigorous and Liberating Death
Kobra and the Lotus – High Priestess
Lantlôs – Melting Sun
Mastodon – Once More ‘Round The Sun
Morbus Chron – Sweven
Mortals – Cursed to See the Future
Panopticon – Roads to the North
Persuader – The Fiction Maze
Primordial – Where Greater Men Have Fallen
Sólstafir – Ótta
Thantifaxath – Sacred White Noise
Vainaja – Kadotetut
YOB – Clearing a Path to Ascend

Loved but not metal:

Devin Townsend – Casualties of Cool
Le Cassette – Left to Our Own Devices
Mild Peril – Matter
Perturbator – Dangerous Days
Wolves in the Throne Room – Celestite

Neuroscience papers to read

Inspired by Thomas Weisswange’s Interesting (Computational) Neuroscience Papers Tumblr, and because I don’t currently have a system for remembering interesting neuroscience papers I want to check out (shame on me), I thought I would start posting weekly lists on this blog. These are mostly recent publications, with the odd one from a few months ago sneaking in. These lists will be for my own reference, but if you discover some neuroscience gems along the way then BONUS!

Free access:


My week on Biotweeps

Last Sunday I finished a week curating the Biotweeps Twitter account. Biotweeps features a different researcher each week, tweeting about their particular areas of interest. It’s a great account to follow for broadening your biology knowledge (as I’m a fake biologist mine is extremely limited). I tweeted about my PhD research, the work I’m currently doing, and some interesting projects and papers from my previous colleagues at Newcastle University (particularly the CANDO project, which aims to create an implantable device for preventing seizures).

My tweets are archived on Storify here.

Publishing on The Winnower

I submitted a review of a paper for publication in the Journal of Neuroscience’s Journal Club series. The original paper is here (sadly paywalled) – it’s an interesting modelling study that attempts to fit simplified models of neuron population dynamics to experimental recordings to shed some light on the neural network dynamics underlying those recordings. Unfortunately J. Neurosci. made a terrible mistake and chose not to publish my submission, so instead of letting that work go to waste I’ve published it on The Winnower, an innovative open platform for publishing papers without pre-publication peer review. Instead, your article is public immediately, and readers can submit public post-publication reviews (and the article can be updated). This is clearly the future of scientific publishing, and some other bigger publishers (e.g. F1000) are already using similar models. The Winnower is a cool independent alternative, and is currently free to use. I really hope it takes off, but in the current impact-obsessed environment it’s fighting an uphill battle.

SPAUN versus the Human Brain Project

As mentioned in my previous-but-one post, in August I went to Quebec City for the 2014 Computational Neuroscience meeting. It went splendidly from my perspective – I got to meet and chat with some interesting people, hopefully starting some collaborations, as well as eating inordinate quantities of poutine and sampling some good beer. This was also the first meeting where I had actually completed work in advance for my posters, so I didn’t have to search frantically for a printer in Quebec City the day before I presented.

Along with some nice talks and workshops, the keynote speakers were all Big Names this year: Frances Skinner, Christoph Koch, Chris Eliasmith and Henry Markram. Of course, given the recent furore sparked by the open letter to the European Commission expressing doubts about both the conception and implementation of the Human Brain Project (HBP), everyone was keen to hear Markram speak. Sadly he couldn’t make it, but Sean Hill (co-director of Neuroinformatics for the HBP) stepped in at the last minute in his stead.

He gave a good talk, clear and well organised – you don’t get a billion euro grant without being good at presentation. The Q&A session was necessarily quite brief, but still interesting from a sociological perspective. The same point was made several times by different people: the HBP will never succeed in understanding the brain, because examining all the minute details of neural circuits and then rebuilding them in a huge, complicated model is a bad approach for that purpose. It’s an easy criticism to make – people made it frequently about the HBP’s predecessor, the Blue Brain Project (BBP) – but by now it’s been made pretty clear that this is not the entirety of what the HBP aims to do. The HBP is funded from an informatics grant – the initial stages will involve building the IT infrastructure and tools to allow data integration and modelling across all scales, not just a BBP-style megamodel (NB: any assertions I make about the goals of projects etc. are purely my interpretation; I’m not affiliated with anything myself so can’t speak authoritatively – though if my tweets at the time are accurate then Hill said himself that he was frustrated by the repeated assertion that the HBP was just going to try to build a brain in a computer).

Hill’s explanation of the critical open letter was that an internal disagreement about the details of this initial ramp-up phase had gone public. It’s hardly surprising that there are internal disagreements; it’s more unusual for them to go public and suggests quite a serious level of dispute. Gaute Einevoll made a point from the floor about petitioning against projects: in his previous life as a physicist, he had seen a big project publicly attacked and the funding taken away. The money wasn’t put back into that area of physics, it was just lost. This seems likely to happen if the HBP loses its funding: as it’s funded from money earmarked for IT projects, how likely is it that neuroscientists would get any of that money back if it were reallocated? Another voice from the floor contended that the open letter was not a petition against the HBP, but a genuine request for clarification of the project’s goals given the removal of the cognitive science subproject from the second round of funding. Hill’s response was that, while the questions raised were legitimate, the open letter approach is portrayed in the media as an attack, so it could certainly have implications for HBP funding and potentially the public image of neuroscience. I think this is fair enough, really. You only write an open letter if you’re trying to put aggressive pressure on something. Here is a longer article outlining some of the concerns regarding the project’s funding. Since I started writing this post, the project has celebrated its 1-year anniversary and an external mediator has been appointed to “develop and realize proposals for restructuring the project governance”, whatever that means.

Politics aside, I want to go into the criticism that keeps coming up: we’re not going to learn much from a BBP-type model. I suppose that some of Markram’s media quotes about other brain models haven’t helped the HBP out here. There was the infamous Modha-Markram incident (in which Dharmendra Modha of IBM overhyped their model, and Henry Markram responded with an amusingly aggressive open letter), as well as Markram describing the aforementioned Chris Eliasmith’s SPAUN functional brain model as “…not a brain model”. Markram clearly has a very set idea of what he thinks a brain model is (or at least, what a brain model isn’t). One can see why some may be wary of his leadership of the HBP, then, given that it is meant to be considering a variety of approaches, and that when the submission for the second round of HBP funding was made, the cognitive science aspects had been dropped.

[Image: a plasticine brain model]

A brain model

Assuming Markram means “not a good brain model” when describing SPAUN, rather than literally “not a brain model” (a lump of plasticine, or piece of knitting, could be brain models if we use them to improve our understanding of the brain), then why does he think these other approaches are no good? Given his criticisms of Modha’s work, one might assume that his issue is with the lack of biological detail used in these models. But lack of detail is something every model suffers from, even BBP models (“suffers” is the wrong word; abstracting away non-crucial details to provide an understandable description of a phenomenon is a crucial part of modelling). Who gets to say what the “right” level of detail is to “understand” something?

Level of detail is not the fundamental difference between a BBP style model and a SPAUN style model. Rather, they represent different philosophies regarding how models should be used to investigate reality. With SPAUN, the model creators have specific hypotheses about how the brain implements certain functions mathematically, and how these functions can be computed by networks of neuron-like units using their Neural Engineering Framework (NEF). SPAUN is remarkably successful at several tasks – notably, it can perform 8 different tasks, after learning, using the same model without modification (though it cannot learn new tasks). The basic idea behind how the functions are implemented neurally is explained in the supplementary material [may be paywalled] of the original article in Science:

The central idea behind the NEF is that a group of spiking neurons can represent a vector space over time, and that connections between groups of neurons can compute functions on those vectors. The NEF provides a set of methods for determining what the connections need to be to compute a given function on the vector space represented by a group of neurons. Suppose we wish to compute the function y = f(x), where vector space x is represented in population A, and vector space y is represented in population B. To do so, the NEF assumes that each neuron in A and B has a “preferred direction vector” (1). The preferred direction vector is the vector (i.e. direction in the vector space) for which that neuron will fire most strongly. This is a well-established way to characterize the behavior of motor neurons (2), because the direction of motion – hence the vector represented in the neural group in motor cortex – is directly observable. This kind of characterization of neural response has also been used in the head direction system (3), visual system (4), and auditory system (5). The NEF generalizes this notion to all neural representation.

(my emphasis; references renumbered). The bold sentence is key here – Eliasmith et al. are stating a hypothesis about how populations of neurons compute functions, and SPAUN represents their hypotheses about which functions the brain computes. They go from function to implementation by considering some biological constraints: neurons are connected with synapses, they communicate using spikes, there are two main types of neuron (inhibitory and excitatory), etc. In addition to the behavioural output, we can then see how well the model captures the brain’s dynamics by comparing measures of the model’s activity against measures of brain activity that haven’t been used to constrain the model. Are the temporal patterns of the spikes in line with experimental data (and if not, why not)? What happens when you remove bits of the model (analogous to lesion studies)? What kind of connectivity structure does the model predict, and is this realistic? This last question in particular I think is important, and as far as I can tell isn’t addressed in the SPAUN paper or in subsequent discussion. Given that SPAUN optimises neural connectivity to perform particular functions, comparing the connectivity against real brain connectivity seems one fairly obvious test of how well SPAUN captures real brain computation.

The Blue Brain Project is quite a different undertaking. The models developed in the BBP attempt to capture as much of the low-level biological detail as possible. The neurons are represented by sets of equations that describe their electrical dynamics, including as many experimentally constrained details about 3D structure and interactions between nonlinear membrane conductances as possible. These neuron models are connected together by model synapses, the numbers, positions and dynamics of which are again constrained by experimental measurements. The result is a fantastically detailed and complex model that is as close as we can currently get to the physics of the system, but with no hypotheses about the network’s function. Building this takes meticulous work and produces a model incorporating much current low-level neuroscience knowledge. The process of building it can also reveal aspects of the physiology that are unknown, suggesting further experiments – or reveal if a model is unable to capture a particular physical phenomenon. We can potentially learn a lot about biophysics and brain structure from this kind of approach.
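To give a flavour of what those “sets of equations” look like: below is a toy single-compartment Hodgkin-Huxley simulation with the standard textbook squid-axon parameters and forward-Euler integration. BBP-style models solve equations like these in hundreds of coupled compartments per neuron, with many more channel types, but the basic maths is the same.

```python
import math

C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # max channel conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

def vtrap(x, y):
    """x / (1 - exp(-x/y)), safe as x -> 0 (the limit is y)."""
    return y if abs(x) < 1e-7 else x / (1.0 - math.exp(-x / y))

def gate_rates(v):
    """Voltage-dependent opening/closing rates for the m, h and n gates."""
    a_m = 0.1 * vtrap(v + 40.0, 10.0)
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * vtrap(v + 55.0, 10.0)
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

# Start at rest, with the gating variables at their steady-state values.
v = -65.0
a_m, b_m, a_h, b_h, a_n, b_n = gate_rates(v)
m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)

dt, t_stop, i_inj = 0.01, 50.0, 10.0   # ms, ms, uA/cm^2 injected current
n_spikes, above = 0, False
for _ in range(int(t_stop / dt)):
    a_m, b_m, a_h, b_h, a_n, b_n = gate_rates(v)
    # Forward-Euler updates for the gating variables and the voltage.
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    n += dt * (a_n * (1.0 - n) - b_n * n)
    i_ion = (G_NA * m**3 * h * (v - E_NA)
             + G_K * n**4 * (v - E_K)
             + G_L * (v - E_L))
    v += dt * (i_inj - i_ion) / C_M
    # Count upward crossings of 0 mV as spikes.
    if v > 0.0 and not above:
        n_spikes += 1
    above = v > 0.0

print("spikes in", t_stop, "ms:", n_spikes)
```

Note what is absent: nothing in these equations says anything about what the neuron is *for* – the model is pure (electro)physics, which is exactly the contrast with SPAUN drawn above.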

Using this model to address function, though, is much trickier. The hope (of some) is that, when the brain is simulated fully in this manner, function will emerge because the model of the underlying physics is so accurate (presumably this will require adding a virtual body and environment to assess function – another difficult task). I am sceptical that this can work for various reasons. You will always miss something out of your model and your parameter fits will often be poorly constrained, certainly, but there’s a bit more to it than that. Here’s what Romain Brette has to say on his blog, a large part of which I’m going to reproduce because he makes the relevant points very clearly.

Such [data-driven] simulations are based on the assumption that the laws that govern the underlying processes are very well understood. This may well be true for the laws of neural electricity… However, in biology in general and in neuroscience in particular, the relevant laws are also those that describe the relations between the different elements of the model. This is a completely different set of laws. For the example of action potential generation, the laws are related to the co-expression of channels, which is more related to the molecular machinery of the cell than to its electrical properties.

Now these laws, which relate to the molecular and genetic machinery, are certainly not so well known. And yet, they are more relevant to what defines a living thing than those describing the propagation of electrical activity, since indeed these are the laws that maintain the structure that maintain the cells alive. Thus, models based on measurements attempt to reproduce biological function without capturing the logics of the living, and this seems rather hopeful.

…I do not want to sound as if I were entirely dismissing data-driven simulations. Such simulations can still be useful, as an exploratory tool. For example, one may simulate a neuron using measured channel densities and test whether the results are consistent with what the actual cell does. If they are not, then we know we are missing some important property. But it is wrong to claim that such models are more realistic because they are based on measurements. On one hand, they are based on empirical measurements, on the other hand, they are dismissing mechanisms (or “principles”), which is another empirical aspect to be accounted for in living things.

This is what I see as being the main purpose of the Blue Brain style models: cataloguing knowledge, and exploration through “virtual experiments.” The models will always be missing details, and contain poorly constrained parameters (e.g. fits of ionic conductances in a 3D neuron model using measurements only made at a real neuron’s soma [or if you’re lucky, soma and apical dendrite]), but they represent probably the most detailed description of what we know about neurophysics at the moment. However, even if function does “emerge”, how much does this kind of model really help with our understanding of how the function emerges? You still have a lot of work to do to get there – hopefully the HBP will help with this by incorporating many different modelling approaches, and providing the IT tools and data sharing to facilitate this effort (as Brette also points out, we supposedly already have loads of data for testing models, but getting at it is a pain in the arse).

Ultimately both SPAUN and the BBP have some utility, but they represent fundamentally different ways of describing the brain. The question of whether SPAUN or a BBP type model is “more realistic” doesn’t really make much sense; rather, we should ask how different models help us to understand the phenomena we are interested in. Equally, the criticism that we won’t learn much about the brain from a BBP style model isn’t necessarily true – it depends on what you’re interested in knowing about the brain and whether the model helps you to understand that. I’m keeping my fingers crossed that the Human Brain Project will facilitate this variety of approaches.

References from the SPAUN paper appendix quote:

1. T. C. Stewart, T. Bekolay, C. Eliasmith, Neural representations of compositional structures: Representing and manipulating vector spaces with spiking neurons. Connection Sci. 23, 145 (2011).
2. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey, Mental rotation of the neuronal population vector. Science 243, 234 (1989). doi:10.1126/science.2911737
3. J. S. Taube, The head direction signal: Origins and sensory-motor integration. Annu. Rev. Neurosci. 30, 181 (2007). doi:10.1146/annurev.neuro.29.051605.112854
4. N. C. Rust, V. Mante, E. P. Simoncelli, J. A. Movshon, How MT cells analyze the motion of visual patterns. Nat. Neurosci. 9, 1421 (2006). doi:10.1038/nn1786
5. B. J. Fischer, J. L. Peña, M. Konishi, Emergence of multiplicative auditory responses in the midbrain of the barn owl. J. Neurophysiol. 98, 1181 (2007). doi:10.1152/jn.00370.2007

Apologies, most (all?) of these are paywalled :\