Science writing update, May 2023 edition
Politicising Long COVID; Biden's cognitive abilities; the Hitler hoax; missing the point about the AI apocalypse; and how evil are the rich, really? Plus a collection of cool science links
Hello and welcome to your monthly update of my science writing! This has been quite the varied month at the i, as you can likely tell from the subtitle above. I’ve selected some of my favourite pieces below; you can read everything I’ve written at this link.
And as usual, at the end I’ve left a collection of interesting links to science-related stories I didn’t write, but that are still very much worth your while (click here to skip directly to that part).
Do remember to subscribe, if you haven’t already: just type your email below and you’ll get monthly updates just like this one.
In a Piff of smoke
You might remember the 2012 PNAS paper by Piff et al. that claimed that people from higher social classes were more likely to act unethically, across a whole range of different domains.
For obvious reasons this study became a massive blockbuster, reported in the media and tweeted regularly to this day - but I remember discussing it with colleagues right at the start and it all seemed a bit too good to be true.
Well, there have now been several replication attempts, and perhaps predictably none of them look particularly good for the original claim. I wrote about the latest one here.
Isn’t that the plot of The Terminator?
I regret to inform you that recent months have made me much more of an AI doomer. That is, I’ve raised my estimate of the likelihood of the worst-case scenario, where a super-intelligent AI becomes a genuine threat to the continuation of the human race. I still don’t think the chance is high, by any means, but I’m now actively worried about it, in the same way I am about nuclear weapons or pandemics.
That’s quite a common view given the incredibly rapid AI progress in recent months. Indeed, as you’ll have seen, famous AI researchers like Geoffrey Hinton are coming out and saying it too.
Oddly though, many news outlets wrote up the story of Hinton leaving Google and warning the world about AI as one that was mainly about the risks of “misinformation” and job losses - not about the total extirpation of humanity by a rogue, un-aligned AI. Seems like quite a big thing to miss.
I wrote about the weird media coverage here. And I also went on TV for 5 minutes to talk about this last week.
Yong COVID
Here’s an article (£) where I took issue with Ed Yong’s recent Atlantic piece on Long COVID. I really think his transition into some sort of patient advocate over the past couple of years hasn’t been helpful for his science writing, and this article illustrates that perfectly.
Don’t get me wrong - it’s admirable to want to help people who are struggling with a horrible illness. But as I say in the piece:
Simply agreeing with people when they tell you what they think caused their symptoms, and smoothing over all the complications and uncertainties about the condition, is not necessarily the best way to help them – and might be actively counterproductive.
We need a lot more research on Long COVID to properly understand it - and I worry that articles like Yong’s inadvertently stand in the way of that research being done.
The Mad King(s)
Through no particular design, I ended up writing several articles about heads of state this month. The first is one about Joe Biden, his age, and his cognitive abilities. Hopefully it serves as a useful little primer on the phenomenon of cognitive ageing, which was probably the subject I published the most papers on back when I was a scientist.
The second one is about King Charles III. If you look back at the history of the British Royal Family, you’ll find that many of its members had a deep interest in science. But our current King has spent decades pushing homeopathy and “alternative medicine”, which is a bit of a bummer. Let’s hope we see less of that, and more support for real science, now that he’s King.
And here’s one about a somewhat different head of state: the other week I noticed that it had been exactly 40 years since the fraudulent “Hitler Diaries” were first “discovered” - so I wrote an article about it. It’s a great little story if you don’t happen to know it, and I also included a brief detour into the relevant (dodgy) science of handwriting analysis.
Shooting Pisces in a barrel
There’s nothing wrong with doing a study that’s just on zebrafish - really! You don’t need to give in to the solipsistic desire (or, at least, the scientific-publishing incentive) to make everything about humans. But here’s a story about a perfectly interesting zebrafish study that was press-released as if it were relevant to the effects of intermittent fasting in humans.
And finally… after Emma Watson made a long, rambling comment about her “Saturn Return”, I wrote a piece asking why so many people seem to be into astrology now.
Things I didn’t write but that you might like anyway
My dear friend Saloni continues to write info-rich science Substacks, and her recent piece on snakebites and what they tell us about missing data was particularly good.
Scientists behaving badly, Episode No. 41923957: A Japanese researcher has been busted by his university for faking data in a 2019 Nature Neuroscience paper. When asked for the raw data, he said that—oops!—the hard disks and paper copies had all been destroyed in an earthquake. Which is the best scientific dog-ate-my-homework excuse I’ve heard since the one where the data were apparently lost due to the 2016 military coup in Turkey.
By the way, at the time of writing, the Nature Neuroscience paper still doesn’t have a retraction, correction, or even an editorial “expression of concern”. It’s now been well over a month since the university provided unambiguous evidence of a great deal of scientific fraud. Oh well!
Yet another “Native American” academic has been revealed to be a white person just pretending to be a Native American. The reason it’s relevant here is that it’s a kind of research fraud I hadn’t thought of before: misrepresenting your identity to colleagues, students, grant funders, research participants… all of whom would likely have treated you rather differently if they’d known the truth.
Waaaaay back at the very start of the pandemic, I wrote an article called “Don’t Trust the Psychologists on Coronavirus”. It was about how various psychologists and behavioural economists had blundered into the media to tell people to just calm down about this whole “virus” thing. We know how that turned out. Anyway, there’s now a new paper that takes statements made by psychologists and members of the general public near the start of the pandemic about how society would change, then rates them on how accurate they turned out to be. You guessed it: the “experts on human behaviour” are no better than the average participant at predicting this stuff. Maybe I should’ve just called my article “don’t trust psychologists”, full stop.
NeuroImage is a scientific journal that I’ve read a lot and submitted papers to in the past - it’s pretty much the staple journal if you do any kind of MRI or fMRI neuroscience. In April the entire editorial board resigned because the publisher, Elsevier, ramped up the “article processing charge”—the amount you’re forced to pay to publish there, since it’s an open-access journal—to $3,450 (£2,730). This is obviously an absurd amount and I was very glad to see the editors standing up to a publisher rinsing academics (and ultimately the taxpayer) in this way.
By the way, I don’t think you have to go “full commie” and object to the very idea of a profit-making motive in scientific publishing. That’s still an important spur to make things better. But pretty clearly what’s happening with Elsevier and many other publishers is rent-seeking, where someone demands more money without actually providing more worthwhile services. That’s what we should be objecting to very strongly.
To end on a depressing note: you know how the evolution-vs-creationism wars were pretty decisively won by the “evolution” side in the West, to the point that basically nobody ever talks about it any more? Well, in India the anti-evolution side is making big strides, literally rewriting school textbooks and gaining political support. It’s easy to see quite a grim future for Indian biology (and science in general) if this stuff really sticks.
"you know how the evolution-vs-creationism wars were pretty decisively won by the “evolution” side in the West, to the point that basically nobody ever talks about it any more?"
The Creationism that says evolution had no impact above the human neck - far more practically damaging than the 7 day variety - is as strong as ever in the West.
Feel like I've gone through the same thing re: AI.