Science Fictions links for January 2024
Good and bad AIs in research; a scandal in Boston; science gangsters; loads of data on the screwedness of scientific publishing
It feels a little late to be saying “Happy New Year”, but - welcome to a new year of monthly bad science links!
The unscrupulous, sloppy, and low-quality researchers out there haven’t turned over a new leaf for the new year. In fact, not only do they continue to publish rubbish science at an alarming rate, but there have been some major high-profile screwups uncovered in the past month. So there’s plenty for us to talk about.
BTW, if you aren’t already subscribed to this newsletter, now’s your chance:
January’s best bad science links
If you follow stories on scientific fraud at all, you’ll already have seen the big scandal at the Dana-Farber Cancer Institute in Boston. It’s, shall we say, rather bad PR for them: as I write this, 6 papers published by their researchers are being retracted and 31 corrected after a data sleuth found tons of evidence of image manipulation. It’s mostly the photoshopping of images of western blots, and while some of it might be honest error… lots of it is obviously fraud. There might be more to come. It is, to quote Derek Lowe, “a disgrace” - and that’s at one of the top research institutions in the US!
The latest on the long-running Dan Ariely “did-he-didn’t-he” story, and a profile of the man. You’ll likely have to sign up for a (free) account to read this article, but it’s worth it.
I think that AI holds big promise for scientific research (and fraud-spotting: it was used to help identify some of the faked images I just mentioned above). But as with every new tool, someone’s going to come along and fumble it. In this Twitter thread, a claim that AI had helped “revolutionise inorganic materials discovery” (the words of the scientists in question) by discovering many new compounds in just a few days is brutally taken apart. Doubtless we’ll see a lot more of this kind of AI-overclaiming (overclAIming?) in 2024.
Relatedly, here’s a new study showing how clinical prediction models can be amazingly predictive within the dataset they were trained on… but are then totally useless at predicting out-of-sample. It’s overfitting all the way down.
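(A toy illustration of what that looks like - this is my own sketch, not anything from the study: fit a flexible model to pure noise and it looks like a crystal ball on its training data, then collapses to coin-flipping on new data.)

```python
# Toy overfitting sketch (mine, not the study's): a flexible model fit to
# pure noise scores near-perfectly in-sample, then performs at chance out-of-sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))     # 200 "patients", 50 predictors of pure noise
y = rng.integers(0, 2, size=200)   # the outcome is also pure noise

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("In-sample accuracy:    ", model.score(X_train, y_train))  # ~1.0
print("Out-of-sample accuracy:", model.score(X_test, y_test))    # ~0.5, i.e. chance
```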
Now we have evidence that “paper mills” (shady groups who fill scientific journals with fake papers, presumably selling the authorship to scientists who want to boost their CVs with zero effort) are bribing journal editors to allow them access. As the guy (almost) put it in Season 1 of The Sopranos: “Bugging, bribes. I don’t know. Sometimes I think the only thing separating [scientific publishing] from the mobs is fuckin’ whackin’ somebody”.
Happily, there’s now a coalition of publishers and others who are going to campaign against the paper mills and help identify their shoddy products. The coalition is called “United2Act”, which is a bit cringe and makes me think of “Fired4Truth” (remember that guy?). But otherwise, this is a very good thing and I wish them the best of luck.
Just a study (from last year, but new to me) where 53.7% of medical residents surveyed in Southwest China said they’d committed some kind of scientific fraud. Seems… extremely bad? Even if it’s off by a factor of ten it’s still grim as hell.
Cool: the Institute for Replication teams up with the journal Nature Human Behaviour to systematically replicate/reproduce their papers published from 2023 onwards.
As a psychologist, it’s very amusing to see physicists—from the acme of exacting, precise, hard science!—grappling with uncertainty and measurement error.
And just as an interesting example of psychologists finally getting around to proper measurement validation on one of their common tests: the “Reading the Mind in the Eyes Test”, often used to assess autism-related traits, might not actually be very good.
A sad story of the reputational damage (to their co-authors) and general mess that scientific fraudsters can leave in their wake.
You might think that it would be really easy to replicate simulation studies. After all, it’s all just computer code, right? It’s all in silico! You don’t have to fiddle around with a pipette or, god forbid, deal with living human beings! Well - often it’s not that easy, according to this study. It includes some tips on how to make your simulation study better.
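(For what it’s worth, here’s a minimal sketch - my own, not code or specific advice from the paper - of the basic habits that make a simulation study easier to rerun: fix and report the random seed, spell out every design parameter, and log the software versions.)

```python
# A minimal reproducibility sketch for a simulation study (illustrative only):
# fix and report the seed, make every parameter explicit, record the environment.
import sys
import numpy as np

SEED = 20240131        # report this; "we used random numbers" isn't enough
N_SIMS = 1_000         # spell out every design parameter
SAMPLE_SIZE = 50
TRUE_EFFECT = 0.3

rng = np.random.default_rng(SEED)
estimates = []
for _ in range(N_SIMS):
    control = rng.normal(0.0, 1.0, SAMPLE_SIZE)
    treated = rng.normal(TRUE_EFFECT, 1.0, SAMPLE_SIZE)
    estimates.append(treated.mean() - control.mean())

print(f"Mean estimated effect: {np.mean(estimates):.3f}")
print(f"Python {sys.version.split()[0]}, NumPy {np.__version__}")  # log the environment
```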
Great preprint with reams of useful data on “the strain on scientific publishing”. Ultra-crappy publisher MDPI takes a well-deserved kicking.
Huge review of decades of interventions in criminology concludes that basically none of them work, in the sense of having a long-lasting beneficial effect. “It suggests that a dominant perspective on social change—one that forms a pervasive background for academic research and policymaking—is at least partially a myth.” Gulp.
P.S. The Studies Show
If you’re a podcast listener and you haven’t yet checked out The Studies Show… where have you been? It’s me and Tom Chivers chatting every week about all kinds of scientific controversies - just this year we’ve covered personality tests, male vs. female brains, and the idea of statistical significance. We’re about to release one on “Is it the phones?” - that is, what’s the evidence that phones (and social media) are causing a mental health crisis? You can subscribe right here:
P.P.S. Science Fictions (the book) in Japanese!
My book, 「サイエンス・フィクションズ」 (Science Fictions), comes out in Japanese on Wednesday! You can pre-order it here.
P.P.P.S. New job!
You might’ve seen that I have a very exciting new job at the AI startup Anthropic. I’m working on research communications, helping to share their (really excellent, genuinely world-leading) science with the world. I’m sure you’ll hear more from me on that in future, but to those of you who’ve asked, never fear: I’ve no plans to change the Science Fictions newsletter (except, I guess, to note, in case it wasn’t obvious, that everything published on this Substack, past and future, is my own opinion, not that of my new employer).
Image credit: Getty
Ironically enough, the huge review of criminology research appears to misunderstand p-values in the section where it's talking about common flaws in scientific research.
> With standard hypothesis testing methods, there will be one false claim of a relationship between a cause and purported effect for every nineteen times that it fails to find support
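For reference, a 0.05 threshold bounds the false-positive rate conditional on the null hypothesis being true; it says nothing about what share of positive findings will turn out to be false. Roughly:

```latex
% What the 5% threshold actually controls (given the null is true):
P(\text{reject } H_0 \mid H_0 \text{ true}) \le 0.05
% ...which is not the same quantity as the share of rejections that are false:
P(H_0 \text{ true} \mid \text{reject } H_0)
```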
Congrats on the new job! Should I hold out hope you might still write a book on how to stay skeptical without spiraling into conspiracism? I’d sure love to teach with that book. I’m around if you ever want draft comments.