Woman’s bag

I went to the post office this morning to pick up my fabulous new shoes:

My lovely new shoes

A child there, seemingly the son of the people running the shop, accused me of having a “girl’s bag”. This is my bag:

My nice bag

I would suggest that this bag could be worn proudly by someone of any gender, but apparently I am mistaken. Unfortunately I couldn’t say what I was thinking (“yeah, well you’ve got a girl’s FACE”) as his dad was there, and he looked significantly harder than me. I’m never sure quite how far you can take banter with seven-year-olds, anyway. I tried to assure him it was definitely a man sack but he was having none of it. “Hahah, girl’s bag, girl’s bag!” So I picked up my parcel and left with my tail between my legs.

It could have been worse – his mum told me that last week he’d asked a lady posting a letter why she was so fat. Oh, sweet innocence.

Reproducing computational research

Reproducing results is a crucial part of the scientific process – given uncertainties in measurements, inherent variability and often randomness in systems under investigation, and the likelihood of human error, the only way to establish the truth of a reported result is by repeating the experiment to see if the results match. In the ironically named “soft” sciences (many of the -ologies) in particular, the systems under investigation are highly variable and almost always involve elements of randomness, so even the most careful experiments can only produce tentative results (though you’d believe otherwise from reading the news), thus requiring many repetitions in order for them to be accepted as reliable.

Computing has helped to eliminate some human errors from research, and has enabled increasingly complex and large experiments (e.g. the Human Genome Project and the Large Hadron Collider would not have been possible without advances in computer science). Computers have become essential for instrument control, data collection and storage, and automated data analysis. Additionally, computers allow very detailed and complex systems to be simulated, helping to generate or refine hypotheses that can then be tested experimentally. This is what I try to do – simulate the electrical activity in brain tissue in order to investigate hypotheses about the causes of diseases like epilepsy.

Unlike experiments in the soft sciences, computer simulations are easily reproducible*: a computer runs calculations reliably, so the same code run many times should give the same results. Unfortunately, the reality is far removed from this ideal. Complex systems require complex software to simulate them, and the more complex a piece of software, the more likely it is to contain errors. Different scientists using different operating systems with different software versions installed may not be able to run each other’s code reliably. Simulations will contain so many parameters that it is impossible to remember them all, especially when some are changed in order to alter the simulation behaviour. Even something as seemingly simple as a change in the numerical method used to solve equations can have drastic consequences on the simulation results.
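To make those last points concrete, here's a minimal sketch (a toy leaky-integrator model with invented parameter values, not my actual simulation code) showing both sides of the coin: a fixed random seed makes a stochastic simulation exactly repeatable, while changing the integration step size, even over the same total simulated time, changes the result.

```python
import random

def simulate(n_steps, dt, seed=42):
    """Forward Euler integration of a toy noisy ODE: dv/dt = -v + noise."""
    rng = random.Random(seed)       # fixed seed: deterministic noise sequence
    v = 1.0
    for _ in range(n_steps):
        noise = rng.gauss(0.0, 0.1)
        v += dt * (-v + noise)      # one Euler step
    return v

# Same seed, same step size: results match exactly.
assert simulate(1000, 0.001) == simulate(1000, 0.001)

# Same simulated time (1 unit) but a different step size: results differ,
# because both the integration error and the noise sequence change.
assert simulate(1000, 0.001) != simulate(100, 0.01)
```

This is the benign version of the problem; with a chaotic system, the divergence from a changed solver or step size is not a small numerical wobble but a qualitatively different trajectory.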

Conventional software engineering techniques exist to help prevent these kinds of problems and to ensure software reliability across different computers and operating systems, but many scientists have never learned anything about software engineering. ALL IS NOT LOST! Andrew Davison at the Centre National de la Recherche Scientifique in Paris has written a tutorial on best practices for writing code with reproducibility in mind. The examples are computational neuroscience oriented, but the observations and advice should apply to any scientific computing area. He makes the important point that the lack of reproducibility of many results can seriously damage the field’s credibility (cf. “Climategate” in climate science) as well as hindering scientific progress.
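As one small example of the kind of practice advocated (this is my own minimal sketch, not Andrew's tooling), simply recording every parameter plus some basic environment details alongside each run already goes a long way towards making a simulation re-creatable later:

```python
import json
import platform
import sys
import time

def record_run(params, results_file="run_record.json"):
    """Save all simulation parameters plus basic environment info,
    so a run can be re-created (or at least understood) later."""
    record = {
        "params": params,                    # every simulation parameter
        "python": sys.version,               # interpreter version
        "platform": platform.platform(),     # OS details
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with open(results_file, "w") as f:
        json.dump(record, f, indent=2)
    return record

rec = record_run({"dt": 0.001, "n_cells": 100, "seed": 42})
assert rec["params"]["seed"] == 42
```

A fuller version would also record the exact code version (e.g. a version-control revision id) and the versions of every library used.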

I am certainly guilty of ignoring many of the good practices Andrew wisely advocates, so it’s great to have a concise tutorial specific to scientific computing to refer to in future. I am currently in the process of restructuring a lot of my code, too, so it couldn’t have come at a better time from my perspective…

 

*hahahahahahahahahahahahahahahahahahahahahahahaha!!!!!!!!

Music in the brain

Well not really, but indulge the metaphor because it’s a nice description of a phenomenon observed in epileptic brains. A recent paper in the journal Epilepsia (unfortunately behind a paywall, sorry) describes a pattern of electrical activity recorded from the brains of epilepsy patients that may be helpful in predicting when seizures occur. The paper is a good example of useful cross-disciplinary work: it combines clinical, basic and computational research to arrive at a convincing mechanistic explanation for the observed electrical patterns, and proposes some hypotheses about epileptic seizures that, if correct, could lead to better treatments for patients [conflict of interest warning: one of the authors is my second supervisor].

The pattern described is an electrical rhythm that rapidly increases in frequency over a time span of about one second, and it seems to occur shortly before a seizure. In our musical analogy, this electrical pattern is similar to the sound produced when you rapidly slide your hand up the notes of a piano, from low to high. In music, this slide is called a glissando, and the authors adopted this name to describe the pattern of brain activity. The pattern was noticed in electrocorticogram (ECoG) signals – recordings made by placing electrodes directly on the surface of the brain, usually performed before epilepsy surgery to give the surgical team information on the location of the small region of brain tissue that initiates seizures (the seizure focus). After surgery, the tissue removed from the seizure focus was kept alive and studied in the lab. Recordings from this removed brain tissue also displayed the glissando pattern of activity.

The mechanism proposed by the authors is complex and derives from a lot of the previous literature on the subject of brain rhythms in epilepsy (for a technical review, see this book). Much previous work on fast rhythms like glissandi has pointed towards electrical connections between neurons, called gap junctions, as being crucial, rather than the more conventional chemical synapses. Specifically, the proposed mechanism requires gap junctions to exist between the axons of excitatory neurons (the axon being the part of the neuron that transmits electrical signals to other neurons, which then receive these signals via chemical synapses). When a neuron sends a signal down its axon in the form of an electrical spike and the axon is connected to other axons by gap junctions, under some conditions the spike can be transmitted into these other axons, causing a wide spread of excitatory signals in the neuron network. This spread is fast, as the direct electrical connection of a gap junction transmits signals more rapidly than conventional chemical synapses. A proposed property of the gap junctions between axons is that their electrical resistance decreases the more alkaline their surroundings are*. As shown in the paper, this decrease in resistance allows even faster propagation of electrical spikes between axons, so when the alkalinity of the tissue increases, the frequency of the activity increases, creating the glissando effect.
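The chain of reasoning (alkalinity up → resistance down → faster propagation → frequency up) can be caricatured in a few lines. This is emphatically not the paper's model; every number and functional form below is invented purely to illustrate the direction of the effect:

```python
# Toy illustration: suppose network oscillation frequency scales inversely
# with axonal gap-junction resistance, and resistance falls as pH rises.
# All numbers are made up for illustration.
base_resistance = 1.0
pHs = [7.0, 7.1, 7.2, 7.3, 7.4]            # rising alkalinity
resistances = [base_resistance / (1 + 2 * (pH - 7.0)) for pH in pHs]
freqs = [80.0 / r for r in resistances]    # frequency ~ 1 / resistance (Hz)

# Frequency climbs monotonically as the tissue becomes more alkaline:
# the glissando.
assert all(f2 > f1 for f1, f2 in zip(freqs, freqs[1:]))
```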

Previous clinical reports have suggested that increased alkalinity can contribute to starting seizures in some cases, and this paper proposes a mechanism for how this alkalinisation could contribute. Further experiments are required to establish this proposed mechanism as correct, but if it is proved right, it opens up some interesting possibilities for treatment in some patients. If suitable monitoring equipment could be worn, glissandi could be used to predict seizures, and local brain alkalinity controlled in response. Alternatively, drugs acting on gap junctions between excitatory cell axons could be used – though the significance of axonal gap junctions in healthy brain function is unknown.

You can read the Newcastle University press release about this research here.

*other types of gap junction apparently increase in resistance under alkaline conditions

On mud and blog titles

I was away this past weekend doing Tough Mudder in Scotland. It was fun, but I could barely move afterwards. We ran on Saturday and I’m still aching on Tuesday. Then again, I am very unfit. I would recommend it if you fancy a nice long run but find the thought of a marathon tedious, or if you are a masochist.

I’ll have something to post on our wonderful EURO 2012 league soon, complete with analysis of the non-linear complex scoring function, but I still need to make some graphs. In the meantime, here’s a little something about autapses: Massive Autaptic Self-Innervation of GABAergic Neurons in Cat Visual Cortex. It’s an oldish paper quantifying the number of connections that different types of neurons make back onto themselves (background: most current brain theories consider the brain to generate and process information in networks of neurons, which communicate by sending electrical and chemical signals to each other – more here. In most of the brain, neurons can be divided into two categories, excitatory and inhibitory, depending on whether they send signals that make other neurons more or less likely to send on signals of their own). The authors found that, in cat visual cortex at least, inhibitory (GABAergic) neurons made substantially more self-connections than excitatory neurons, meaning that when they “spike” and send inhibitory signals to other neurons, they also inhibit their own spiking, thus stopping themselves from sending out more signals. This gives inhibitory neurons an extra mechanism for controlling their own output, separate from the inhibition they receive via the many connections from other inhibitory neurons in the network.

I’m unaware of how much work has been done on the functional significance of autapses, but they are a rather interesting concept and usually ignored in the kind of neuronal network research that I am involved in. More digging required.

Networking with myself

These are the papers I referred to at Bright Club:

The Web of Human Sexual Contacts: this paper was published in Nature in 2001 (the link is to a preprint version). The authors analysed a 1996 Swedish survey of sexual behaviour (2810 respondents) and found that the number of sexual partners reported, both in the short term (12 months prior to survey) and long term (lifetime), varied according to a power law. This means that most people haven’t had that many sexual partners, a few people have had a few more, but a very small number of people have had very many partners; in the picture on the Wikipedia page (showing an idealised power law distribution), the x-axis would represent number of sexual partners, and the y-axis the cumulative distribution (i.e. as you go up the y-axis, you see more and more people having had a smaller number of partners). When plotted on a log-log scale (linked-to graph shows example simulated data), the curve becomes a straight line with a negative gradient – the gradient is the exponent of the power law. This kind of network is called scale-free, because whatever scale you consider the network at, its statistics are similar.
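The straight-line property is easy to check numerically. A quick sketch with an idealised power law (the exponent 2.1 here is chosen arbitrarily for illustration, not taken from the paper): on log-log axes, the slope between any pair of points is constant and equals minus the exponent.

```python
import math

# Idealised power-law distribution: P(x) = x**(-alpha)
alpha = 2.1
xs = [1, 2, 4, 8, 16, 32]           # e.g. number of partners
ys = [x ** (-alpha) for x in xs]    # proportion with at least x partners

# On log-log axes the curve is a straight line: log y = -alpha * log x,
# so the slope between any two consecutive points equals -alpha.
slopes = [
    (math.log(ys[i + 1]) - math.log(ys[i])) /
    (math.log(xs[i + 1]) - math.log(xs[i]))
    for i in range(len(xs) - 1)
]
assert all(abs(s + alpha) < 1e-9 for s in slopes)
```

Real survey data is of course noisy, so in practice the exponent is estimated by fitting rather than read off exactly like this.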

The small number of people with a very large number of connections to others are referred to as network ‘hubs’, analogous to a transport hub, as disparate parts of the network are linked up through them. Knowing the structure of a sexual network is very important for targeting effective interventions dealing with the spread of sexually transmitted infections, so this research has serious implications for public health policy. An important feature of scale-free networks is their resilience against random ‘node deletions’: removing a random person from the sexual network (I know what you’re thinking – no, not in any sinister way) will have very little effect on how disease spreads. However, by specifically targeting the network hubs, disease spread can be reduced dramatically just by influencing a small number of hub people, simultaneously reducing cost and improving efficacy. The trick is successfully identifying your hub nodes…
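That resilience-versus-vulnerability contrast can be demonstrated with a deliberately crude toy network (one extreme hub plus a sprinkling of random links among everyone else; an illustrative caricature, not real data): deleting a random node barely dents the network, while deleting the hub shatters it.

```python
import random

# Toy hub-dominated network: node 0 links to everyone, plus a few sparse
# random links among the other 49 nodes.
n = 50
edges = {(0, i) for i in range(1, n)}          # node 0 is the hub
rng = random.Random(1)
for _ in range(20):
    a, b = rng.sample(range(1, n), 2)
    edges.add((min(a, b), max(a, b)))

def largest_component(removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    adj = {v: set() for v in range(n) if v not in removed}
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in adj:                          # depth-first search per component
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            size += 1
            stack.extend(adj[v] - seen)
        best = max(best, size)
    return best

# Removing a random non-hub node barely shrinks the connected network...
assert largest_component({7}) >= n - 2
# ...but removing the hub shatters it into small fragments.
assert largest_component({0}) < n // 2
```

Real scale-free networks have many hubs of varying size rather than a single extreme one, but the qualitative picture, robustness to random failure and fragility to targeted attack, is the same.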

Hubs are also a frequent (though not defining) feature in small-world networks.

Sexual network analysis of a gonorrhoea outbreak: analysis of a gonorrhoea outbreak using network theory. The authors trace the initial spread to patrons frequenting a certain motel bar in Alberta, which they don’t actually name in the paper, presumably for legal reasons. The main interesting finding was that cheaper network analysis methods could be used instead of standard case-control analysis to arrive at similar results, including the identification of the causal link between several seemingly isolated disease outbreaks.

Chains of Affection: analysis of a high-school “romance network”. This revealed a very different network structure, with long chains of links between students rather than clear hubs, which has obviously different implications for STI spread through the network. The authors suggest the different structure arises from the social rules that operate at high school: not dating your friend’s ex, for example.

Finally I used this lovely picture from the Human Connectome Project. Yes, your brain is riddled with STIs*.

*not really. Probably.