Science writing update, October 2023 edition
Ineffective drugs, amnesia about genetics, Dolly the sheep, and lies about vaping - plus a huge list of interesting links about bad science
Hello! Another month, another selection of science pieces I’ve written recently. And not only that, but as usual it’s followed by a veritable smörgåsbord - nay, a cornucopia! - nay, a plethora! - of interesting scientific links from the last month. Skip straight to those links by clicking here.
Let’s get on with it, shall we?
The Studies Show podcast
The Studies Show, my podcast with Tom Chivers, is now the place to listen to me talk about science for an hour every week (literally: there aren’t any other places. So it’s the place). We’ve covered all sorts of interesting topics recently, like nuclear power, football and dementia, and cash transfers. You might also be interested in our first paid-subscriber-only episode, where we review the science of diversity training, covering trigger warnings, unconscious bias, microaggressions, and stereotype threat. If any of that sounds interesting, consider taking out a subscription on the podcast’s Substack page!
Really gets up my nose
I wrote about phenylephrine, the “nasal decongestant” which, at least in its oral form, doesn’t work to decongest your nose - despite being in tons of medicines (in the UK it’s in Lemsip, Beechams, and several others; in the US it’s in Benadryl).
Phenylephrine is “Generally Recognised as Safe and Effective” by the FDA - that’s the classification they give to medicines that are widely used even if the evidence base comes from old, low-quality studies. The reason I wrote about it was that the FDA had decided to review that evidence, and put out a devastating report ahead of an expert panel meeting to re-assess the classification.
The update since then is that the panel unanimously agreed that phenylephrine doesn’t work. The FDA will now have to decide what to do, policy-wise. Once they’ve decided what to do with this one, maybe they could go after some of the other ineffective drugs on the market to which they’ve nonetheless given approval (ahem, Aducanumab, ahem).
Genetic amnesia
There’s a funny phenomenon whereby on the one hand we all know that a parent passes on their genetics to their child, and yet we somehow forget this basic fact when discussing (or doing!) science. I’ve written about a couple of instances of this recently.
The first article was about a big new study claiming that a father’s involvement can make a difference to his child’s educational outcomes. Maybe it can - but this study, lacking any kind of genetic control, certainly can’t show that. It’s a complete failure on the part of the people who set up (and funded) the study: it can’t possibly address the question it’s supposed to address. You might as well have thrown the money spent on the study (£243,000) into a big furnace.
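To see why the lack of a genetic control matters, here’s a purely illustrative toy simulation in Python (invented variable names and effect sizes, nothing to do with the actual study’s data): a parental trait drives both how involved the father is and, via inheritance, the child’s outcome - so a naive correlational analysis finds an “effect” of involvement even when involvement has no causal effect at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Toy simulation (illustrative only): a parental trait influences both the
# father's involvement and, via inheritance, the child's outcome.
# Involvement itself has ZERO causal effect on the outcome here.
parent_g = rng.normal(size=n)                        # parental genetic propensity
involvement = 0.8 * parent_g + rng.normal(size=n)    # involvement tracks the parental trait
child_g = 0.5 * parent_g + rng.normal(size=n)        # child inherits part of that propensity
outcome = 0.7 * child_g + rng.normal(size=n)         # outcome driven by the child's genetics only

# A naive analysis with no genetic control still finds an association
r = np.corrcoef(involvement, outcome)[0, 1]
print(f"involvement-outcome correlation: {r:.2f}")   # reliably positive, despite no causal effect
```

Without something like a sibling, adoption, or within-family genetic design, there’s no way to tell this scenario apart from one where involvement genuinely matters.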
The second time was about the flurry of headlines relaying the shocking information that vegetarianism is partly genetic. But… of course it is, because all human behaviours are partly genetic. This was not news - and yet the authors saw fit to put out a press release on their study anyway.
I swear to god, every day I get closer and closer to writing an article called “Abolish Scientific Press Releases”…
The noble lie about vaping
There’s a weird anti-vape panic gripping the UK media (and the UK in general) at present. One example was a paediatrics professor interviewed by the BBC who said that the message that vapes are “95% safer than cigarettes”—which he doesn’t deny is true—is a bad message because it’s made lots of children take up vaping.
This doesn’t seem right to me either empirically or as a matter of policy, as I argued in this piece (£). A “noble lie” is always going to backfire in the end when people find out they’ve been lied to - however nobly.
(You can also listen to our Studies Show episode on vaping).
Hello Goodbye Dolly (’s creator)
Sir Ian Wilmut, the scientist most associated with Dolly the Sheep, died last month. As well as being near-synonymous with a massive scientific advance, he was also a controversial figure, and there were acrimonious disputes among his colleagues about the extent to which he deserved the credit for cloning Dolly. I wrote a piece about his career, why human cloning has never really become “a thing”, and why giving scientists proper credit is such a tricky issue.
Debunking as positive science
You don’t hear much about the “sceptics movement” these days. It still exists, to be sure - but it’s nowhere near as prominent as it was when I were a lad (i.e. in the early 2000s). And thus, I feel like I’m doing my bit to keep the tradition going when I publish sceptical takes on near-death experiences (£) or on whether you should be endlessly glugging gallons of water all day long.
Stephen Jay Gould, non-overlapping magisteria rest his soul, once wrote about “debunking as positive science” - you can learn a lot by working out where things have gone wrong. Anyway, in writing that latter article I learned that there are some proper RCTs—though still just preliminary ones—implying that drinking more water might actually help with some specific medical conditions. Huh!
Stuff I didn’t write but that you might like anyway
If you’re a scientist who worries that you’re doing research nobody cares about, this is for you: a new database made by the UK Government’s Office for Science where you can type in a relevant keyword and see what questions the Government/Civil Service want scientists to tackle on that topic to help them develop policies. Such a simple idea—basically a bulletin board to link up scientists and policy-makers—that it’s a wonder it took until now to make it. More of this, please!
Cool survey on the factors that relate to US liberals and conservatives supporting government funding for science. As we’ve discussed before on this Substack, aspects like the appearance of impartiality really seem to matter (and you might say “duh!”, but a lot of scientists these days will tell you that impartiality is just a naïve pipe dream…).
Talking about government funding: the US National Science Foundation have announced (in partnership with my pals from the Institute for Progress) that they’re going to be running formal experiments on their grant-giving process. Not too many details yet, but this is an extremely promising move.
A long, detailed, useful article from bioRxiv’s Richard Sever on the history and future of the scientific publishing process.
And relatedly, a new article on “replacing academic journals” (published in an academic journal! Ha ha, Alanis Morissette, etc etc). How do we get over the inertia that stops scientists trying new ways of publishing their research?
Pre-registration is one of the many fixes that have been suggested for research, and I’m a big fan. But here’s evidence that, in around half of all cases, the hypotheses in the pre-registration differed from those tested in the eventual published study. That’s bad, and it undermines the point of pre-registration. Then again, in a world where everything is pre-registered we can at least see this happening - without pre-registration, hypotheses get changed all the time and we’ve no idea.
I dread to think how many hours of my life I’ve thrown away by reformatting study manuscripts that I’ve had rejected from journals, so they can be sent on to a different journal that has different formatting guidelines (I can guarantee that any scientist reading this is grimly nodding along). Now there’s an estimate of how much all this pointless effort costs, and it’s a lot.
Scientists, rise up! Stop doing this! Just submit the manuscript formatted however you like - if the journal wants to publish it, they can darn well format it themselves to whatever silly, arbitrary rules they prefer.
Mendelian Randomisation is an amazing study design when it works: you can use genetic information to magic causation from correlation. But it seems to have become something of a Frankenstein’s monster for one of its biggest advocates, who now says that “The vast majority [of studies using the technique] are at best non-contributory or at worst simply ludicrous”. Ouch!
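For readers unfamiliar with how Mendelian Randomisation works, here’s a minimal sketch in Python with entirely simulated data and made-up effect sizes, just to illustrate the logic: if a genetic variant affects an exposure but influences the outcome only through that exposure, the ratio of its association with the outcome to its association with the exposure recovers the causal effect, even when a hidden confounder biases the naive regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated example: a genetic variant (G) affects an exposure (X),
# a hidden confounder (U) affects both X and the outcome (Y),
# and X has a true causal effect of 0.5 on Y.
G = rng.binomial(2, 0.3, n)           # genotype: 0, 1, or 2 copies of an allele
U = rng.normal(size=n)                # unmeasured confounder
X = 0.4 * G + 1.0 * U + rng.normal(size=n)
Y = 0.5 * X + 1.0 * U + rng.normal(size=n)

# Naive regression of Y on X is biased upwards by the confounder
naive = np.cov(X, Y)[0, 1] / np.var(X)

# Wald ratio: because G only influences Y through X (and is independent of U),
# the ratio of the G-Y association to the G-X association recovers the causal effect.
beta_gx = np.cov(G, X)[0, 1] / np.var(G)
beta_gy = np.cov(G, Y)[0, 1] / np.var(G)
mr_estimate = beta_gy / beta_gx

print(f"naive estimate: {naive:.2f}")                 # substantially larger than 0.5
print(f"MR (Wald ratio) estimate: {mr_estimate:.2f}") # close to the true 0.5
```

The catch, and presumably part of what’s behind that quote, is that the “only influences the outcome through the exposure” assumption often fails in real genetic data, at which point the method stops being magic.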
A new (to me, at least) method called Z-curve that seems to predict almost spookily accurately which psychology studies will replicate.
Here’s a demonstration of a new tool that searches through thousands of scientific papers and extracts relevant information about their statistics and methodology. In this case, it’s used to see if anything substantive has changed in psychology studies since the “replication crisis” began over a decade ago (I wrote about this here, before this new tool appeared - it’s exciting that we’ll soon see more empirical analyses on whether the science-reform movement has been successful).
This is absolutely no surprise whatsoever, but with new generative AIs, you can produce entirely fake, but plausible-looking, scientific papers. Great - it’s not like we already had an issue with worthless studies polluting the literature.
I’ve been doing this (writing about dodgy scientists’ antics) for a long time, and yet still I’m surprised at the lengths to which some people go to game the publication system. The latest discovery is people manipulating the metadata of papers so they contain secret citations to a target paper. These get picked up by the systems that count references, inflating the target’s citation count. Cheeky rascals.
Sex differences in scientific fraud: women are underrepresented (in this case, the good kind of “underrepresented”).
Good to see someone calling out the genetic confound issue. One of my bugbears with Emily Oster is that she ignores this point when it suits her.
Stuart, you should mention this blog in the Studies Show. I only found out about it by accident, and it’s great reading.
Pseudoephedrine - which I'm pretty sure is effective - is available behind the counter at UK pharmacies (Sudafed is the usual brand name). In case anyone wants an effective alternative.
I've heard it's behind the counter because it can be used to make meth (hopefully true because it's funny).
And has nobody made a tool to reformat papers according to the different requirements of journals? Could be a use for LLMs? Although a depressing number of uses for LLMs seem to be "acting as an intermediary between me and these dumb, kludgey systems some other people have built".