Science Fictions links for May 2024
A delicious blend of honey, lithium, and magic mushrooms. And really quite a lot of scientific fraud
You know how there’s loads of stuff on the internet about how science is cool, and great, and awesome? Well, this is the opposite of that. Here’s your monthly collection of links about scientific fraud, retracted research, and erroneous studies. Hope you “enjoy” it!
The links
Fraudulent anaesthesiologist Joachim Boldt is not only the most retracted researcher of all time, but has now hit the milestone of two hundred papers retracted. In fact, he’s exceeded it - he’s up to 210! Congratulations…? Imagine faking that many studies! Just imagine. You almost have to respect it.
Speaking of retractions: one particular Egyptian researcher in obstetrics and gynaecology has already had seven papers retracted, so some data-integrity researchers looked at all 263 papers he’s published (many within a very short period of time). They concluded that 43 had “impossible” results, 67 had “unlikely” results, and 11 had mistakes. Oh dear.
One amusing example: “…the authors claim that the mean age of the physicians surveyed was 42.6, and their mean number of years in practice was 26.4. Using these numbers, the average physician surveyed must have started practising at 16.2 years old.”
There was a TikTok influencer called “Lab Shenanigans” who did videos about being a neuroscientist and who had 525,000 followers. Turns out he falsified loads of data in his research. I mean… are we surprised by this? The clue was in the name…
I didn’t know there was a journal called Trial and Error that exists to publish null results. As with previous such efforts, I expect it won’t solve the underlying problem (that people are only interested in positive findings), but it’s a well-meaning idea. It’s covered in this Nature News article on negative results.
To err is human. To set up an error-checking service for science, which I mentioned here a few months ago but which now has its first report, is divine.
You might’ve heard that there’s been a huge increase in maternal mortality in the US over the past couple of decades. Horrible! In such a rich country, too! Except, the whole thing is due to changes in how deaths are recorded, and maternal mortality hasn’t actually increased at all.
A meta-analysis on the effects of psilocybin (from magic mushrooms) on depression, published in the BMJ (formerly the British Medical Journal). Sounds good, right? Well, it looks like the authors mixed up the standard error and the standard deviation when coding some of the studies, leading them to grossly overestimate (like, by 500%) the effect of psilocybin. How can you not notice that you’re including effects that are bigger than basically anything else in the whole field of medicine?
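To see why that coding slip is so consequential: the standard error of a mean is the standard deviation divided by √n, so plugging the SE into a standardised effect size like Cohen’s d inflates it by a factor of √n. A minimal sketch with made-up numbers (none of these figures come from the BMJ paper):

```python
import math

# Cohen's d divides the mean difference by the standard deviation (SD).
# The standard error (SE) of a mean is SD / sqrt(n), so entering the SE
# where the SD belongs inflates the computed effect by a factor of sqrt(n).

def cohens_d(mean_diff, sd):
    """Standardised mean difference."""
    return mean_diff / sd

n = 25                    # participants in a hypothetical trial arm
mean_diff = 4.0           # made-up improvement on a depression scale
sd = 10.0                 # true standard deviation
se = sd / math.sqrt(n)    # standard error = 2.0

correct = cohens_d(mean_diff, sd)   # 0.4 -- a plausible effect size
inflated = cohens_d(mean_diff, se)  # 2.0 -- implausibly enormous

print(correct, inflated)
print(inflated / correct)  # inflation factor of sqrt(25) = 5, i.e. 500%
```

With n = 25 per study, the error inflates the effect fivefold, which is exactly the sort of overestimate described above.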
You know I love sharing articles about how “hard” sciences like physics are just as subject to the sorts of replication-crisis problems that are known in the social sciences. Well, here’s another one, on the increasing awareness of dodgy research in condensed matter physics.
(Just to be clear: I don’t “love” this, really. It’s very bad and depressing. But it’s always good to be reminded that it doesn’t just happen in psychology).
A useful Science “Policy Forum” article on the uncertainties surrounding the effects of preschool on longer-term outcomes. We need better research on this, ASAP.
See the letter at the bottom for an alternative perspective.
Critics of antidepressants publish a review that questions the biological basis of depression. They are reprimanded by psychiatrists, including for not citing a meta-analysis of depression and tryptophan. Someone re-analyses that meta-analysis. It’s crap.
Paper in the Journal of Environmental Psychology claims that the best place to relax is near water. A general finding about all humanity? Well… it turns out all the participants (and there were only 32 of them) were members of a swim team. Come on, man!
Adam Grant’s tweet where he promoted the study has, at the time of writing, 4.1m views, sadly not just because everyone is ridiculing it. Something something “truth gets its boots on” - you know the quote.
Authors of a review on lithium and suicide make an objective, study-ruining error where they mix up rates and ratios. There’s no argument about this: you can’t combine those. The results are complete nonsense. The paper needs to be corrected or (more likely) retracted. But the journal editor doesn’t seem to realise, politely thanks the critics for their letter pointing out the problem, and… leaves the study in place and doesn’t do anything about it. Jolly good!
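The incompatibility is easiest to see in terms of units. A suicide rate carries units (events per person-year), whereas a rate ratio is a dimensionless comparison between two rates; averaging the two kinds of number yields a figure with no interpretation. A toy sketch, with invented numbers that have nothing to do with the actual review:

```python
# Hypothetical numbers for illustration only (not from the lithium review).
# A *rate* has units; a *ratio* is dimensionless. Pooling them as if they
# were the same kind of quantity produces a meaningless result.

study_a_rate = 12.0    # suicides per 100,000 person-years in one study
study_b_ratio = 0.5    # lithium-vs-control rate ratio in another study

# The arithmetic runs fine, but the answer is 6.25 of... what, exactly?
# It has no units and no interpretation -- pure nonsense.
nonsense_pooled = (study_a_rate + study_b_ratio) / 2

print(nonsense_pooled)
```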
The publisher Wiley just closed down nineteen journals (having already closed down four) because they’d become flooded with fake “paper mill” papers.
And, closer to home (for me), the Scottish Medical Journal retracted 13 papers at once because they were also the products of paper mills.
You know “cupping”? The weird alternative-medicine treatment that leaves people covered in round suction bruises? A recent paper claimed that your psychological state could help heal those bruises quicker. But the whole thing is a nice lesson in unreliable statistics and flawed readings of the scientific literature, according to Andrew Gelman and Nick Brown.
As if we wouldn’t already be highly sceptical of a paper that claims that a specific kind of Iranian honey is an “amazing preventive and therapeutic agent” against Alzheimer’s disease (that’s a quote from the title BTW), the paper had some strange-looking images and has now been retracted. Please enjoy this very funny exchange with the authors who describe the findings as “completely real” and thank critics for their “kindness and love”. Aww.
P.S. The Studies Show
Calling all podcast fans: this month you can hear me and Tom talking about Vitamin D, as well as the effects of lead paint/pipes/petrol on your brain and on crime rates. And our paid episode this month was on the scientific writings of Johann Hari (you might be able to guess the conclusions we draw there).
Image credit: Getty
The ERROR folks describe what they're doing as a bug bounty programme for science. I work in tech and am quite familiar with such schemes (never won a bounty though). The analogy seems kind of dubious and unhelpful.
Tech companies use bug bounty schemes because it's a relatively cheap way to scale up skilled security labor, and for various reasons some security researchers like working that way. It's very culturally specific to security, and originates in the somewhat dubious practice of paying obsessive hackers so they don't sell their exploits on the black market instead. You don't find bug bounties for other kinds of bugs.
One thing that's really important is that tech firms obviously don't like paying out bounties, and so invest heavily in hiring full-timers and doing their own audits. For years Apple didn't have a bug bounty scheme. When they eventually launched one, they explained why they'd lagged behind: their own internal audits were producing so many findings that they didn't feel any need for outside help. Firms also put in place systems and rules designed to stop bugs happening at all.
This doesn't map well to what ERROR is doing. A real bug bounty scheme for science would look like universities all staffing up scientific-integrity departments that systematically audit their own professors' work, and then, once those audits finally start coming back clean, beginning payouts to third parties who find fraud/dishonesty/incompetence in that work. They would explicitly compete on how much integrity their staff had.
A very small programme that finds flaws in work by other institutions isn't going to help. There are already volunteers who do this, and universities ignore them. Bounties work because the companies really do want to be secure, and the hacker types mostly just do it for the thrill; they don't much care who gets their exploits as long as they get some recognition, so all the incentives are aligned.
The podcasts are enjoyable and informative. Thanks.