2013 in metal

July 12th, 2014

I totally forgot to post my 2013 Top 20 Heavy Metal Albums list earlier in the year (my lists for 2012, 2011 and 2010 are here)! Good year, was 2013. So, here it is:

1. Beaten to Death – Dødsfest!
2. In Solitude – Sister
3. Stomach Earth – Stomach Earth
4. Altar of Plagues – Teethed Glory and Injury
5. Ayreon – The Theory of Everything
6. Ulcerate – Vermis
7. The Ruins of Beverast – Blood Vaults
8. Oranssi Pazuzu – Valonielu
9. VHÖL – Vhöl
10. Summoning – Old Mornings Dawn
11. Beastmilk – Beastmilk
12. Carcass – Surgical Steel
13. Coheed and Cambria – The Afterman: Descension*
14. Lychgate – Lychgate
15. Sahg – Delusions of Grandeur
16. Gorguts – Colored Sands
17. Alpha Tiger – Beneath the Surface
18. Power Trip – Manifest Decimation
19. Ihsahn – Das Seelenbrechen
20. Kvelertak – Meir

*I’m aware I lose metal points for this one.

Spotify playlist

Bonus albums:

Antigama – Meteor
ASG – Blood Drive
Author & Punisher – Women & Children
Autopsy – The Headless Ritual
Batillus – Concrete Sustain
Coliseum – Sister Faith
Convulse – Evil Prevails
Domovoyd – Oh Sensibility
Doomriders – Grand Blood
Hell – Curse and Chapter
Hybris – Heavy Machinery
Ghost – Infestissumam
KEN Mode – Entrench
Kylesa – Ultraviolet
Magister Templi – Lucifer Leviathan Logos
Monolithe – IV
Nails – Abandon All Life
Orchid – The Mouths of Madness
Portal – Vexovoid
Primitive Man – Scorn
Shining (NOR) – One One One
Sodom – Epitome of Torture
SubRosa – More Constant Than The Gods
Sulphur Aeon – Swallowed By The Ocean’s Tide
The Dillinger Escape Plan – One Of Us Is The Killer
The Meads of Asphodel – Sonderkommando
Toxic Holocaust – Chemistry of Consciousness
Twilight of the Gods – Fire On The Mountain
Uncle Acid & The Deadbeats – Mind Control
Voivod – Target Earth
Woe – Withdrawal

The Japanese have the best language logic

July 10th, 2014

In Japan, they call a buffet バイキング – ba i ki n gu, which is as close as you can get to the word Viking using the sounds available. This is because at a buffet you can eat as much as you like. You can be a glutton. Just like the Vikings.

Japanese language logic is fabulous.

Are replication efforts pointless?

July 7th, 2014

A couple of people have tweeted/blogged (EDIT: additional posts from Neuroskeptic, Drugmonkey, Jan Moren, Chris Said, Micah Allen EDIT 2: more, by Sanjay Srivastava, Pete Etchells, Neuropolarbear EDIT 3: more, by Sean Mackinnon) about a recent essay by Jason Mitchell, Professor of Psychology at Harvard, titled On the emptiness of failed replications. Its primary thesis is that efforts to replicate experimental results are pointless, “because unsuccessful experiments have no meaningful scientific value”. This is, of course, counter to the recent replication drive in social psychology – and to how I understand experimental science should be done (caveat: I am not an experimental scientist).

I disagree with the above quotation, and thought I would counter a couple of his arguments that stuck out to me as wrong or misleading:

…A replication attempt starts with good reasons to run an experiment: some theory predicts positive findings, and such findings have been reported in the literature, often more than once. Nevertheless, the experiment fails. In the normal course of science, the presumption would be that the researcher flubbed something important (perhaps something quite subtle) in carrying out the experiment, because that is far-and-away the most likely cause of a scientific failure.

In the case of a very well established result, the most likely cause of scientific failure would certainly be experimental error. But for most hypotheses and theories this is surely not true. The likelihood of each possible cause of a negative result depends on the likelihood of the hypothesis itself (consider homeopathy: any number of “positive” results are much better explained by experimental error/bad design/bad analysis than by the hypothesis that homeopathy is effective at curing X – indeed, later in the essay Mitchell acknowledges that spurious positive results can and do come about through bad statistical practices) and on potentially unidentified variables. In much “frontier science” the likelihood of the theory is not well known (or even roughly known), and the unidentified variables can be many because the theory is incomplete. We’re getting into experimenter’s regress territory.
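
To put rough numbers on this (my own toy figures, not anything from Mitchell’s essay or the replication literature), here is a minimal sketch of how the prior plausibility of a hypothesis changes what a single “positive” result is actually worth:

```python
# Toy Bayesian sketch: P(hypothesis true | positive result) for different priors.
# The power and false-positive rates below are illustrative assumptions only.

def p_true_given_positive(prior, power=0.8, false_positive=0.05):
    """Bayes' theorem: how believable is the hypothesis after one positive result?"""
    p_positive = power * prior + false_positive * (1 - prior)
    return power * prior / p_positive

# A plausible effect, a speculative one, and a homeopathy-like long shot:
for prior in (0.5, 0.1, 0.001):
    print(f"prior = {prior:<5} -> P(true | positive) = {p_true_given_positive(prior):.3f}")
```

With a prior of 0.5, one positive result is strong evidence (posterior ≈ 0.94); with a prior of 0.001, the same “positive” result leaves the hypothesis almost certainly false (posterior ≈ 0.02), so experimental error remains by far the better explanation.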

… if a replication effort were to be capable of identifying empirically questionable results, it would have to employ flawless experimenters. Otherwise, how do we identify replications that fail simply because of undetected experimenter error?

Adversarial collaboration; alas this happens infrequently and apparently not in many current replication efforts. This is a legitimate criticism of the replication movement: collaboration is essential to avoid experimenter’s regress.

…And here is the rub: if the most likely explanation for a failed experiment is simply a mundane slip-up, and the replicators are themselves not immune to making such mistakes, then the replication efforts have no meaningful evidentiary value outside of the very local (and uninteresting) fact that Professor So-and-So’s lab was incapable of producing an effect.

*Why* they weren’t able to produce the result should be very interesting, but can only really be investigated with collaboration, data sharing etc.

There are three standard rejoinders to these points. The first is to argue that because the replicator is closely copying the method set out in an earlier experiment, the original description must in some way be insufficient or otherwise defective…

…there is more to being a successful experimenter than merely following what’s printed in a method section…Collecting meaningful neuroimaging data, for example, requires that participants remain near-motionless during scanning, and thus in my lab, we go through great lengths to encourage participants to keep still. We whine about how we will have spent a lot of money for nothing if they move, we plead with them not to sneeze or cough or wiggle their foot while in the scanner, and we deliver frequent pep talks and reminders throughout the session. These experimental events, and countless more like them, go unreported in our method section for the simple fact that they are part of the shared, tacit know-how of competent researchers in my field; we also fail to report that the experimenters wore clothes and refrained from smoking throughout the session…

But I can conceive of scenarios where all this pleading and pressure on the participant may in fact cause them to move differently in the scanner than they would under other labs’ approaches to dealing with participant movement, or to perform differently on tasks because they are so distracted by trying not to move. However, wearing clothes and not smoking indoors is common in many societies. If the participants were naked, that should definitely be reported, as nakedness in front of strangers is often considered socially uncomfortable (perhaps the participants at Harvard have transcended cultural norms around nudity).

A second common rejoinder is to argue that if other professional scientists cannot reproduce an effect, then it is unlikely to be “real.”…

This is a slightly more seductive argument, but it, too, falls short. Many of the most robust and central phenomena in psychology started life as flimsy and capricious effects, their importance only emerging after researchers developed more powerful methods with which to study them.

I agree with this, but I would again suggest that if competent scientists are producing contradicting results, they should collaborate and run experiments together using protocols they both agree on.

A third rejoinder argues that the replication effort ought to be considered a counterweight to our publication bias in favor of positive results… if an effect has been reported twice, but hundreds of other studies have failed to obtain it, isn’t it important to publicize that fact?

No, it isn’t.

Eh?

Although the notion that negative findings deserve equal treatment may hold intuitive appeal, the very foundation of science rests on a profound asymmetry between positive and negative claims. Suppose I assert the existence of some phenomenon, and you deny it; for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample…Thus, negative findings—such as failed replications—cannot bear against positive evidence for a phenomenon…Positive scientific assertion cannot be reversed solely on the basis of null observations.

But most experiments do not give us a “positive” result in this sense – a p-value tells us the probability of obtaining data at least as extreme as ours if they had been generated by a null distribution, not the probability that our hypothesis is true. “Positive” experimental studies cannot be reasoned about in the same way as this illustration of the limits of induction.
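
As a quick illustration of that point (an assumed two-group comparison, nothing to do with any particular study): when the null hypothesis is true, roughly 5% of experiments will still come out “significant” at p < 0.05, because the p-value only describes how surprising the data would be under the null.

```python
# Simulate many experiments in which the null hypothesis is true (both groups come
# from the same distribution) and count how often p < 0.05 turns up anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_values = []
for _ in range(10_000):
    a = rng.normal(size=30)   # group 1
    b = rng.normal(size=30)   # group 2, same distribution: there is no real effect
    p_values.append(stats.ttest_ind(a, b).pvalue)

print("fraction 'significant':", np.mean(np.array(p_values) < 0.05))  # ~0.05
```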

Replications are not futile, but they are perhaps being conducted sub-optimally (and certainly ruffling some feathers). Adversarial collaboration and data sharing would maximise the benefit of replication experiments.

Says the non-experimentalist.

PhD done, moved to Okinawa

April 1st, 2014

Hello everybody!

I realise I haven’t done a post since last September, which I believe is long enough to declare this blog legally dead. Fortunately I am trained in internet CPR and am able to kickstart the heart of this here blogging enterprise using my natural guile, expert medical training, and the WordPress “add new post” button. In my defense, I have been finishing off my thesis, which has now been submitted, scrutinised, corrected, resubmitted, re-scrutinised, and finally deemed worthy by the Powers That Be, which means I am now officially Dr. Richard Tömsett, PhD.

In more interesting developments, I have moved away from the City of Dreams to the wonderful island of Okinawa to start a one-year postdoctoral research thingy at OIST (many thanks to the Japan Society for the Promotion of Science for moneys). I’m lucky enough to have been to Okinawa before, in 2011 when I did the Okinawa Computational Neuroscience Course. They have a beer vending machine. Insanity. Anyway, a big attraction of Okinawa is the beaches and sunshine, but of course it’s been raining pretty much non-stop since I arrived. The university itself is pretty sexy though.

There are already plenty of pictures of how sexy and nice Okinawa is so I thought I’d mainly post pictures of things that tickled me about Japan. Behold, the chewing gum that, when you put it in your mouth, gives you “special breath”:

Special Breath Chewing Gum

Your breath, it will be special

As I haven’t posted for ages, you get the bonus treat of the magical toilet that has a sink on top of it, so when you flush, you can wash your hands and not waste any water! Ingenious.

Wrong studies: is this how science progresses?

September 18th, 2013

An article by Sylvia McLain in the Guardian’s Science Blogs section yesterday argued against John Ioannidis’ provocative view that “most scientific studies are wrong, and they are wrong because scientists are interested in funding and careers rather than truth.” The comments on the Guardian article are good; I thought I might add a little example of why I think Sylvia is wrong in saying that prevailing trends in published research (that most studies turn out to be wrong) just reflect scientific progress as usual.

There is a debate in the neuroscience literature at the moment regarding the electrical properties of brain tissue. When analysing the frequencies of electrical potential recordings from the brain, it is apparent that higher frequencies are attenuated more than lower frequencies – slower events show up with more power than faster events. The electrical properties of brain tissue affect the measured potentials, so it is important to know what these properties are so that the recordings can be properly interpreted. Currently, two theories can explain the observed data: the high-frequency reduction is a result of the properties of the space around neurons (made up mostly of glial cells), which result in a varying impedance that attenuates higher frequencies; or it is a result of passive neuronal membrane properties and the physics of current flow through neurons’ dendrites, and the space around neurons doesn’t have an effect. Both of these explanations are plausible, both are supported by theoretical models, and both have some experimental data supporting them. This is a good case of scientific disagreement, which will be resolved by further, more refined models and experiments (I’ll put some links below). It could be that aspects of both theories become accepted, or that one is rejected outright. In that case, the studies will have been shown to be “wrong”, but that is beside the point. They will have advanced scientific knowledge by providing alternative plausible and testable theories to explore.
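
For a rough feel of what the low-pass filtering looks like (a generic first-order filter applied to white noise – not any of the published tissue or dendrite models), the sketch below compares a signal’s power spectrum before and after filtering: the higher the frequency, the more power is lost.

```python
# Pass white noise through a simple first-order low-pass filter and compare power
# spectra at a few frequencies. The 100 Hz cutoff is an arbitrary choice for illustration.
import numpy as np
from scipy import signal

fs = 1000.0                              # sampling rate in Hz
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)             # white noise: roughly flat power spectrum

b, a = signal.butter(1, 100, btype="low", fs=fs)
y = signal.lfilter(b, a, x)              # the filtered signal

freqs, p_in = signal.welch(x, fs=fs)
_, p_out = signal.welch(y, fs=fs)
for f in (10, 100, 400):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:>4} Hz: power ratio out/in = {p_out[i] / p_in[i]:.3f}")
```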

The kind of “wrong” study that Ioannidis describes is quite different. His hypothesis is that many positive findings are the result of publication bias. High-profile journals want to publish exciting results, and exciting results are usually positive findings (“we found no effect” is rarely exciting). Scientists are under pressure to publish in high-profile journals in order to progress in their careers (in some cases even just to graduate), so are incentivised to fudge statistics, fish for p-values, or just not publish their negative results (not to mention the problems inherent in null hypothesis testing, which are often ignored by, or simply unknown to, many study designers). Pharmaceutical companies have further obvious incentives only to publish positive results from trials (visit www.alltrials.net!). This doesn’t lead to a healthy environment for scientific debate between theories; it distorts the literature and hinders scientific progress by allowing scientists and doctors to become distracted by spurious results. It is not – or should not be – “business as usual”, but is a result of the incentive structure scientists currently face.
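
A toy simulation (my own made-up numbers, not Ioannidis’ actual model) shows how quickly this bias can distort things: if journals mostly publish significant results and most tested hypotheses turn out to be false, a large share of the published literature ends up consisting of false positives.

```python
# Simulate a literature with publication bias: only "significant" results get published.
# prior_true, power and alpha are illustrative assumptions, not estimates for any field.
import numpy as np

rng = np.random.default_rng(1)
n_studies = 100_000
prior_true = 0.1    # assume 10% of tested hypotheses are actually true
power = 0.8         # chance of detecting a true effect
alpha = 0.05        # chance of a false positive when there is no effect

is_true = rng.random(n_studies) < prior_true
significant = np.where(is_true,
                       rng.random(n_studies) < power,   # true effects detected
                       rng.random(n_studies) < alpha)   # null effects "detected" by chance

published = significant                                 # the journals' filter
print("share of published findings that are false:",
      round(float(np.mean(~is_true[published])), 2))    # ~0.36
```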

Hopefully it’s clear why the second kind of wrong is much more damaging than the first kind (the first is healthy), and that’s why I think Sylvia’s Guardian piece is a bit wrong. Changing the incentives is a tricky matter that I won’t go into now, but as an early career researcher it’s something I don’t feel I have a lot of power over.

REFERENCES
Note: this is far from comprehensive and mostly focuses on the work of two groups

References in support of the variable impedance of brain tissue causing the low-pass filtering of brain recordings:
Modeling Extracellular Field Potentials and the Frequency-Filtering Properties of Extracellular Space
Model of low-pass filtering of local field potentials in brain tissue
Evidence for frequency-dependent extracellular impedance from the transfer function between extracellular and intracellular potentials
Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

References in support of intrinsic dendritic filtering properties causing the low-pass filtering of brain recordings:
Amplitude Variability and Extracellular Low-Pass Filtering of Neuronal Spikes
Intrinsic dendritic filtering gives low-pass power spectra of local field potentials
Frequency Dependence of Signal Power and Spatial Reach of the Local Field Potential
In Vivo Measurement of Cortical Impedance Spectrum in Monkeys: Implications for Signal Propagation (this is, as far as I know, the most recent direct experimental study measuring the impedance of brain tissue, finding that the impedance is frequency-independent)

Bright Club audio

September 17th, 2013

A little while ago I posted about doing Cambridge Bright Club – well here’s the podcast from that event, which includes excellent pieces from the other performers. I’m on first.

Anti-optogenetics 2

September 7th, 2013

This is a response to John Horgan’s response to the responses to his original anti-optogenetics-hype article what I blogged about. The comments section is worth reading, but I thought I’d respond to a couple of points here, too.

Neuroscientist Richard Tomsett says one of my examples of hype—a TED talk by Ed Boyden, another leader of optogenetics—doesn’t count because “the whole point of such talks is hype and speculation.” Really? So scientists shouldn’t be criticized for hyping their research in mass-media venues like TED—which reaches gigantic audiences–because no one is taking them seriously? Surely that can’t be right.

I perhaps wasn’t clear enough here – my point was that it seemed silly to refer to a TED talk as an example of hype when all TED talks hype their particular topics. Scientists certainly should be criticised for hyping research, but this is a problem with the TED format rather than optogenetics.

…the abysmal state of health care in the U.S. should have a bearing on discussions about biomedical research. I’m not saying that journalists, every time they report on a biomedical advance, need to analyze its potential impact on our health-care problems. But knowledge of these woes should inform coverage of biomedical advances, especially since technological innovation is arguably contributing to our high health care costs.

I agree, but again this is not a problem with optogenetics, or even the scientists that try to hype it.

John’s posts touch on an issue with the way that science is funded, which (in the UK at least, and I assume elsewhere) requires an “impact” assessment to try to ensure that research spending isn’t a waste of money. This is a big problem because it can be very difficult to predict what impact most research will have in the short term, let alone the long term. The most obvious way to demonstrate “impact” in neuroscience is to refer to potential treatments for brain disorders, though such treatments might be years or decades away. The brain is so complex that it’s impossible to predict how a particular piece of research might impact medical practice, but you are compelled to spin your case because of this demand for “impact” – hence why all neuroscience press-releases will refer to potential treatments, no matter how relevant the research is to medicine. I completely agree that if scientists want to justify receiving public money then they need to justify their research to the public, but the current incentives promote hype – particularly medical hype. Note that I don’t offer a solution to this problem…

As I said in the previous post, there are good points to be made about the hype surrounding optogenetics (as in this post), it’s just unfortunate that John instead went for criticisms that could be leveled at any hyped science. Rather than attacking a particular field with some quite shaky points, it would have been much more interesting to address why scientists feel the need to hype their work in the first place.

Anti-optogenetics

September 1st, 2013

I read an article that annoyed me a bit. It’s a rant by John Horgan against optogenetics and why the author is vexed by breathless reports of manipulating brain functions using light (optogenetics is where you genetically modify brain cells to enable you to manipulate their behaviour – stimulating or suppressing their firing – using light. This is particularly cool because it allows much better targeted control of brain cells than using implanted electrodes or injecting drugs, the other most precise methods of controlling the activity of many cells). I love me a good rant, and here is a nicely considered article about the limits and hype over optogenetics [NB: I am not an expert in optogenetics], but this was neither good nor considered.

The first half of the article raises a complaint about the hype, which might have been legitimate if it had not misrepresented said hype. It grumbles that articles about optogenetics tout its therapeutic potential for human patients, but we don’t know enough about the mechanisms underlying mental illnesses to treat them with optogenetics. While this latter point is certainly true, it’s a straw man: read the articles linked to in the first half and see which ones you think are about human therapeutic potential (I’ve included the links at the bottom*). They all clearly report on animal studies, though they of course make reference to the potential for helping to treat human illnesses (not necessarily using optogenetics directly, but by better understanding the brain through optogenetics). Indeed, this point was made to John on Twitter, so his article now includes a clarification at the end admitting as much, but still making some unconvincing points, which we’ll come to later.

The second part of the article addresses John’s “meta-problem” with optogenetics: he “can’t get excited about an extremely high-tech, blue-sky, biomedical ‘breakthrough’-involving complex and hence costly gene therapy and brain surgery-when tens of millions of people in this country [USA] still can’t afford decent health care.” Surely this is a problem with all medical (and, indeed, basic) research that doesn’t address the very largest problems in the health system? I agree totally that this is a massive problem, but it is entirely socio-political, not scientific. Moaning that optogenetic treatments will be expensive is like criticising NASA because only a few lucky astronauts get to go into space.**

John has been good enough to add some “examples of researchers discussing therapeutic applications” to his post. Briefly looking through these, we have a 2011 article in the Journal of Neuroscience, which uses optogenetics to study the role of a particular brain area in depression (it doesn’t mention therapeutic optogenetics in the abstract, only as a potential avenue for further research in the conclusion); a 2011 TED talk (the whole point of such talks is hype and speculation); this press release from the University of Oxford (which alludes to possible therapeutic use “in the more distant future” in one paragraph of a sixteen-paragraph article); a 2011 article in Medical Hypotheses (a non-peer-reviewed journal whose entire point is to publish speculative articles that propose potentially fanciful hypotheses); and this article in the New York Times (I can’t argue with this – there is a fair bit on therapies for humans; John’s main gripe here, from his comments about this article, appears to be with the military funding that one of the several mentioned projects is receiving).

In the second amendment to the article – labeled “clarification” – John admits that he “overstated the degree to which coverage of optogenetics has focused on its potential as a treatment rather than research tool”, which is nice, but then criticises the potential insights from optogenetics research, saying:

But the insights-into-mental-illness angle has also been over-hyped, for the following reasons: First, optogenetics is so invasive that it is unlikely to be tested for research purposes on even the most disabled, desperate human patients any time soon, if ever. Second, research on mice, monkeys and other animals provides limited insights–at best–into complex human illnesses such as depression, bipolar disorder and schizophrenia (or our knowledge of these disorders wouldn’t still be so appallingly primitive). Finally, optogenetics alters the cells and circuits it seeks to study so much that experimental results might not apply to unaltered tissue.

Regarding point one: this is still about therapeutic, not research, uses of optogenetics; it also ignores that many patients undergo invasive surgery for epilepsy (which involves actually cutting bits of brain out – surely optogenetics could be a bit better here?) as well as for deep brain stimulation to treat severe depression and Parkinson’s symptoms. Regarding point two: this is a criticism of using animal models in any kind of research rather than optogenetic research in particular – it is valid, but totally beside the point. Regarding point three: if we’re looking to modify the cells in therapies anyway, why does this matter? Stimulating cells with electrodes or drugs changes the way they behave compared to “unaltered” tissue, too!

TL;DR – read this article instead, and don’t pay much attention to this one. It could have made some good points about optogenetics-hype, but didn’t.

*Links from the original article:

OCD and Optogenetics (Scicurious blog)

Implanting false memories in mice (MIT technology review)

Breaking habits with a flash of light (Not Exactly Rocket Science blog)

Optogenetics relieves depression in mouse trial (Neuron Culture blog)

How to ‘take over’ a brain (CNN)

A laser light show in the brain (The New Yorker)

** yeah I know, tenuous analogy – but let’s face it, all analogies are pretty shite

Policy and Bright Club Cambridge

June 13th, 2013

Alas my neglect for this web-site has been wanton and unmerciful, sorry little autapses. I am currently in Cambridge (UK) doing a policy placement at the wonderful Centre for Science and Policy (CSaP). Part of what they do is help civil servants network with academics to provide government departments with direct access to the best available research. Before I joined I did wonder why they couldn’t just use Google Scholar, but I quickly learned this was a fabulously naive point of view. Civil servants are often very busy, inexperienced with research, unable to devote the time to finding the best and most relevant information in the mountains of muck that populate the literature, and as a result of moving between departments have great breadth but not so much depth of knowledge. The direct links to relevant academics that CSaP provides are really important for getting good research knowledge into government.

Anyway, evangelism section is now complete, advertising commences: I will be “doing a bit” at Cambridge Bright Club at the Portland Arms tomorrow (Friday 14th June) evening, I think there are still some tickets left (just checked, yes there are, BUT NOT MANY). Professional funny people will be there to make you laugh, and six researchers will be there to try to make you laugh. At the very least, you may learn something.

More soon…

Complexity

March 29th, 2013

The other day I finished reading Complexity: The Emerging Science at the Edge of Order and Chaos by M. Mitchell Waldrop (hilarious back-page plug: “if you liked chaos, you’ll love complexity!”). I’m not much of a bookworm usually, but very occasionally I will relentlessly read something from cover to cover in a short space of time, which I managed to do in this instance (this also happened recently with Robin Ince’s Bad Book Club [which is hilarious, and you should buy it immediately], so it’s not just a science book phenomenon). Complexity was written in the early 90s, shortly after the Santa Fe Institute for complexity research was founded. It serves partly as a documentary of the founding of the institute and its early years, partly as a biography of the key figures involved in its development, and partly as an introduction to the ideas of the new kind of science that the institute was trying to pursue. This works remarkably well – you get a good overview of the relevant scientific ideas, but almost feel like you’re reading a novel. The constant breathless excitement in the tone does begin to grate a little, but it manages to convey the passion of the researchers and the thrill of scientific discovery. It certainly rekindled my excitement with some of the ideas that made me interested in computational modelling and biological systems in the first place.

As the book was written at the height of enthusiasm of the scientists involved, it unfortunately lacks a critical view – no stories of the less successful avenues of research that were pursued, and little mention of criticisms by “mainstream science”. Interestingly, the Santa Fe Institute’s own history page mentions this kind of criticism in passing:

As the Institute’s research interests and reputation grew, so did its list of detractors. Exploring new scientific territory meant that some lines of inquiry failed to live up to expectations. Some researchers in mainstream science felt that complexity science was long on promise but short on results. The criticism culminated in a June 1995 Scientific American article by senior writer John Horgan that openly mocked not only the science of complexity but the scientists doing it. The article, today regarded at the Institute as a wakeup call, caused many in the complexity community to do some soul searching about their field.

I am definitely sympathetic to the approaches pioneered by the Santa Fe Institute (amongst other places, of course), though. I particularly love the emergence of complexity in the behaviour of cellular automata – such simple rules can lead to such interesting behaviours. You can’t prove truths about the universe from such simulations, but I think they can give you very deep insights into problems that you just can’t get from other approaches. But that’s just, like, my opinion, man. Go and have a play with a Game of Life simulator, like this one, to see what I mean.
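
If you fancy rolling your own rather than poking at a web simulator, here is roughly how little code the Game of Life needs (a minimal numpy sketch with wrap-around edges – not the simulator linked above):

```python
# Conway's Game of Life in a few lines: very simple local rules, rich global behaviour.
import numpy as np

def step(grid):
    """Advance a 2D array of 0s and 1s by one generation (toroidal edges)."""
    # Count the eight neighbours of every cell by shifting the grid around.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Alive next step: exactly 3 neighbours, or 2 neighbours and currently alive.
    return ((neighbours == 3) | ((neighbours == 2) & (grid == 1))).astype(int)

# Start with a "glider", which crawls diagonally across the grid forever.
grid = np.zeros((10, 10), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0],
                  [0, 0, 1],
                  [1, 1, 1]]
for _ in range(4):
    grid = step(grid)
print(grid)   # the same glider, shifted one cell down and one cell right
```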

Anyway, I recommend it, but bear in mind it is quite uncritical, and quite out of date. If you want to have a hands-on play with some of the ideas of complexity theory and know a bit of programming, you could do a lot worse than pick up Allen Downey’s book “Think Complexity” (available for free from his website).

I’m now onto an old software classic, The Cathedral and The Bazaar, which actually has some surprising parallels – complex emergent behaviour in software development processes, defying standard theory through the collective behaviour of individual programmers…