Science writing update, September 2023 edition
Should we fear ultra-processed foods? Is BMI useless? Do strip clubs reduce sex crimes? And tons of links about bad science
Welcome to the monthly science writing update! In terms of my own stuff, it’s been a quieter month than usual because I was on holiday for a while. But don’t worry - I made the “interesting links to things I didn’t write” section at the end longer than usual to make up for it - click here to go right to it.
Before we get started, just another little plug for The Studies Show, my new podcast with Tom Chivers. We talk about a controversial scientific question for an hour every week. In recent episodes we’ve covered the LK-99 superconductor story and the recent fight about growth mindset; the next one (coming on Tuesday) is about cash transfers to alleviate poverty. Hope you’ll have a listen!
I’m so pleased to say that, as of this week, 10,000 people(!!) subscribe to this Science Fictions newsletter. If you aren’t already one of them and you want a monthly update on all things bad-science, enter your email address below:
A fare question
The “ultra-processed food” panic, scare, thing, whatever it is, continues. You can’t avoid hearing about it, at least here in the UK.
I heard someone on the radio the other day saying that learning about ultra-processed foods (and the “lies” told by food companies) had left him feeling extreme rage every time he went food shopping, obsessively checking labels for E-numbers and insisting on bringing a banana with him everywhere so he wouldn’t have to buy anything processed while out. It doesn’t strike me as a particularly healthy attitude towards eating, and it would be a shame if this is what the UPF discussion encourages.
Anyway, in response to a couple of new studies on this topic, I wrote a(nother) piece summarising the evidence, which has been described variously as “the best and most reasonably minded thing I’ve read on this topic” and “balanced” (just like a good diet). You can also listen to our podcast episode on UPFs here.
You might remember the study from 2021 that claimed that opening a strip club reduces sex crime in the local area, by a remarkable 13%. It all seemed a bit too good to be true.
And according to a pretty devastating critique of the study, it was. The researchers had used the registration date, not the opening date, of the strip club as their measure of when it opened, despite these often being many weeks apart. And the data on sex crimes were hopelessly flawed, too. I wrote about the whole thing here (£).
Since I wrote that, the original authors of the strip-club study have posted a rebuttal. But I don’t find it very convincing: for example, even if they’re right in saying that a lot of strip clubs don’t have alcohol licences, I don’t get why this makes it okay to use the registration date as the opening date. I’ll stick with my previous position that this was a great example of how bad data, even with a perfect analysis, leads to researchers making spurious claims.
BMI has been debunked! It’s misleading! It’s trash! It’s so common to hear people say this or something like it. Discussions of BMI echo a lot of the things people say about, for example, IQ scores in psychology: “I know someone who got a very high IQ score but never did anything worthwhile with their life - so there!”. “You can have a very high BMI and still be perfectly healthy!”.
Well, yes, but that’s quite a basic misunderstanding of correlations - if a correlation isn’t exactly 1.00 or -1.00, you’re gonna get people who are off the diagonal and don’t perfectly conform to the prediction! It doesn’t mean the measure isn’t useful if you understand the limitations. Here’s my article on why BMI is (like every measurement) flawed, but still useful.
The IQ scale
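The “off the diagonal” point can be sketched numerically. Here’s a minimal simulation with made-up numbers (not real BMI or health data - the 0.7 correlation is just an assumption for illustration): even with a strong correlation, a decent chunk of high scorers on one variable land below average on the other.

```python
import random

random.seed(42)

# Toy illustration with made-up numbers, not real BMI/health data:
# simulate two variables with a strong but imperfect correlation and
# count how many points land "off the diagonal".
n = 10_000
r = 0.7  # assumed correlation strength, purely for illustration

xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    # y shares variance with x in proportion to r; the rest is noise
    y = r * x + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

# "Exceptions": x in the top quartile (a high BMI, say) while y is
# below average (good health, say)
x_threshold = sorted(xs)[int(0.75 * n)]
exceptions = sum(1 for x, y in zip(xs, ys) if x > x_threshold and y < 0)
print(f"{exceptions / (n // 4):.0%} of top-quartile x-values have below-average y")
```

So plenty of individual counterexamples exist, exactly as the correlation predicts - which is why “I know someone who…” doesn’t debunk the measure.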
Do music lessons make kids smarter? Here’s another area where there are duelling meta-analyses: one side says no, there’s no “far transfer” from one skill to another; the other can point to their own review that says there might be something going on. I rather doubt it, given what we know about all the previous interventions that are supposed to have made kids more intelligent.
But guess what? Although it would be cool if music lessons made your kids smarter, it doesn’t ultimately matter: you should do them anyway, because music lessons are great! Like so many questions where science overlaps with morality, politics, or (as here) aesthetics, there are good, entirely non-scientific arguments for the proposition, and we shouldn’t forget them.
Things I didn’t write but that you might like anyway
Perhaps predictably given I’m always talking about the replication crisis in science, I loved Anton Howes’s piece on the replication crisis in history.
The specific example Anton gives, about the “Cort process” and some highly questionable claims made about it recently, was also covered by Ian Leslie in his piece on how “stories are bad for your intelligence”…
…which itself fits nicely with an old post of mine from this Substack: “Science isn’t storytelling”.
A long but very worthwhile piece on the history of the malaria vaccine, and how we might speed up our vaccine-making efforts in future.
“Encouraging kids to drink more water at school helps prevent unhealthy weight gain! Randomised trial! Published in a prestigious journal!” Except the study didn’t show that, the reporting was highly misleading, and you can see that they changed their preregistration almost two years after the data were collected (!?). Great debunking thread here by Jon Baron.
And it’s not the only study that Jon Baron has “rekt” this month!
A while ago I wrote an article that, among other things, critiqued that well-known “Facebook rolling out to universities in the early 2000s caused a decline in student mental health” study. Now there’s a more in-depth critique of the study by Dean Eckles. He argues that baseline differences across the universities violate the assumptions of the model, and mean we can no longer be confident in the causal effects.
Dorothy Bishop looks into a crap review study on polyunsaturated fatty acids and children’s development and discovers “a huge edifice of evidence based on extremely shaky foundations”.
Should we abolish the Discussion section in scientific papers? Having thought about this proposal, I reckon ultimately it wouldn’t help that much. But it’s fun to consider, and useful to be reminded of how much bullshit scientists add to their research papers.
Just because your study is a simulation (maybe to check how a new statistical technique works) doesn’t mean it’s immune to all the usual tricks that scientists use to make their studies say basically whatever they want. Here’s a new paper on “questionable research practices” in simulation studies.
And just because your study doesn’t focus on p-values doesn’t make it immune to the usual tricks used in p-hacking. Here’s an article showing how “area under the curve” (important in many prediction models) can be hacked just like a p-value.
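To see the sort of thing that article is getting at, here’s a minimal sketch (my own toy example, not the article’s method): if you try many noise-only “predictors” and report only the one with the best area under the curve, it can look well above chance even though every predictor has a true AUC of exactly 0.5.

```python
import random

random.seed(0)

def auc(scores_pos, scores_neg):
    """AUC = probability a random positive case outranks a random
    negative case (the Mann-Whitney U statistic, normalised)."""
    wins = sum(1 for p in scores_pos for q in scores_neg if p > q)
    ties = sum(1 for p in scores_pos for q in scores_neg if p == q)
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# 50 cases and 50 controls; every "predictor" below is pure noise,
# so the true AUC is 0.5 for all of them.
n_pos = n_neg = 50
n_predictors = 40  # try many noise predictors...

best = 0.0
for _ in range(n_predictors):
    pos = [random.gauss(0, 1) for _ in range(n_pos)]
    neg = [random.gauss(0, 1) for _ in range(n_neg)]
    best = max(best, auc(pos, neg))

# ...then report only the winner - it looks far better than chance
print(f"best AUC among {n_predictors} null predictors: {best:.2f}")
```

Selective reporting inflates the AUC for the same reason it inflates significance when you run many tests and report the smallest p-value.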
(Another) worrying study on publication bias in research on antidepressants.
Can you tell how strong a correlation is just from looking at a graph? A lot of people think they can, as I’ve previously documented here. A fun new study actually tests this, and looks at what other factors influence people’s judgements.
You know how you get “special issues” at scientific journals? Maybe a journal will have one or two a year, perhaps curated by a guest editor with all the papers focused on a specific question. Well, apparently the journals run by the publisher MDPI have had more than sixty-five thousand special issues just in 2023 so far! The International Journal of Molecular Sciences has by itself had 4,216 this year. Does this heavily imply that MDPI is a “predatory publisher”, and that the quality of all the articles in these journals must be at absolute rock-bottom? I couldn’t possibly say.
There we have it. If you haven’t already, please do subscribe below and you’ll get one of these update emails each month. I’ll see you in the next one!
Image credit: Getty.