Science Fictions links for July 2024
The Marshmallow Test ROASTED; sneaky referencers CAUGHT RED-HANDED; plagiarists TOTALLY UNPUNISHED; and much more
Look, I know you’re not interested in reading this first paragraph. You’re here for the links, right? You know it, I know it. So let’s just get to it: July’s best bad science links are below.
The links
“A new type of fraud”! Sneaking citations to your own previous papers into a new paper’s metadata - they don’t appear in the text but still cause your citation count to go up. You’ve almost got to respect the level of ingenuity here.
“Can names shape facial appearance?”. I don’t wish to be dismissive, readers, but this is a result that just can’t possibly be true, and I knew that before reading beyond the title (it turns out to be full of totally unconvincing stats, so I was right to be sceptical). Still - it got into PNAS, so, uh, good on them…?
We’ve had evidence of this before, but here’s a new paper that appears to close the book on the whole thing: “Marshmallow Test performance does not reliably predict adult outcomes”. Quite embarrassing for psychology, one might suggest!
I can’t believe I’ve only just discovered the Twitter account of Mu Yang, another researcher with a great eye for spotting dodgy graphs in published scientific papers. I chuckled aloud at the impossible “back bend” in this one — and Yang has tweeted many many more examples.
As more information appears about the Francesca Gino case, data sleuths like the guys at Data Colada can be even more forensic with their fraud investigations. Remember that the authors of this post are—absurdly—being sued for defamation by Gino.
Why zombie theories stick around: a nice summary/recap of the problems in the scientific system.
Another glaringly obvious AI-generated scientific diagram in a published paper, raising questions about what on Earth the reviewers and editors at this seemingly respectable journal were doing. Looking out the window?
And remember: this stuff is bad, but the real problem begins when AI-generated pictures are good enough to be indistinguishable from real ones. It won’t be long!
Bizarre scenario where a scientist is sent a paper to review, and upon reading it notices that it’s “100% plagiarised”… from his own work. Several months pass, and then a respected journal in the field… publishes the plagiarised paper.
Worse, several other researchers pop up on Twitter to say the same thing happened to them. What on Earth is going on here?!
Here’s a “monkey’s paw” scenario. You want scientists to calculate statistical power for their experiments? Well, lots of them do! But [monkey’s paw curls] they calculate it using a totally shite piece of software called G*Power, which is based on clicking buttons rather than writing code - and thus the power calculations can only rarely be reproduced (and the default settings make them prone to errors).
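For contrast, here’s a minimal sketch of what a scripted power calculation looks like - this isn’t from the linked paper, just an illustration using Python’s statsmodels library, with made-up inputs. The point is that every assumption is written down in the script, so anyone can re-run it and get the same answer:

```python
# A scripted power calculation: all inputs are explicit, so the result is
# reproducible (unlike clicking through a GUI with unstated defaults).
# Illustrative numbers only - effect size, alpha and power are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()  # power analysis for a two-sample t-test
n_per_group = analysis.solve_power(
    effect_size=0.5,          # expected Cohen's d (assumed here)
    alpha=0.05,               # significance threshold
    power=0.80,               # desired statistical power
    ratio=1.0,                # equal group sizes
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```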
Paper claiming to do a “reversal of autism symptoms” in two kids goes viral. It’s in two kids. It doesn’t have a control group. It’s crap. It should be ignored.
A controversy over a well-known set of experiments on honeybees and their little “waggle dance”. According to this Science article, the papers look to be riddled with strange-looking data and what are euphemistically called “irregularities”. The original author denies it all, of course.
As always, you’ve got to wonder how much more stuff like this is, ahem, buzzing around in the scientific literature.
There’s a policy in the US (and elsewhere) called “ban the box”, where ex-convicts are helped to get jobs by stopping employers from requiring a “do you have a criminal record” checkbox on job applications. A prominent 2020 paper said this policy backfires and actually worsens the prospects of young black men (because employers just blanket-discriminate against them). A new reanalysis says no: actually, once you correct some errors in the 2020 paper, there’s no such backfire.
And speaking of reanalysis: here’s a detailed critique of one of my own papers from 10 years ago. As I’ve said many times, I agree with the “stay in school” message, I believe that education raises IQ (indeed, I wrote the meta-analysis confirming this), and I think the author might just misunderstand my views. Still, there are some very valid technical criticisms here.
In genetics there’s the idea of “balancing selection” - genes for certain traits stick around, despite being harmful in some situations, because they’re advantageous in others (e.g. genes that raise schizophrenia risk don’t disappear, despite their obvious disadvantages, because they might sometimes contribute to creativity). Here’s a new and very strong critique of that idea as applied to psychological traits.
And to come back, full-circle, to manipulated citation counts: here’s an article about the world’s highest-cited cat, called Larry.
P.S. The Studies Show
Our angriest podcast episode yet: on “misinformation”, and the shocking level of nonsense spoken about it (by people who should know better). That’s for paying subscribers; free listeners can hear some apocalyptic stuff on asteroids, nuclear winter, and air pollution.
I am interested in the part about education raising IQ. It sounds like you have extensively researched the subject and I have not, so I'm not looking for a debate. However, I found Bryan Caplan's position in The Case Against Education to be convincing and it fit with my intuitions.
Here is my position, which is basically the same as his:
1. In the early grades we learn important things like reading, writing, and basic math. However, by the time students get to high school and beyond they will not remember much of what they are taught, if they even learn it to begin with. Also, most of it is not very important from a practical standpoint.
2. The value of higher education is mainly signaling - a college degree looks good to employers, and also grants social status. However, I certainly don't remember anything I learned in college (I'm 35 now) and I doubt it would be very useful even if I did.
As for arguments in favor of education (in and of itself, not as signaling), it seems like they can be either direct or indirect. An example of a direct benefit would be taking French in high school and later being fluent in French as a result of the classes. This is almost never the case for any subject. An indirect benefit would be something like a long-term increase in IQ, which seems to be the position taken here.
I would be interested in hearing counter-arguments against the position I summarized.
Wondering if The Studies Show might take a look at (what seems to me) the amazing “make your own meta-analyses” tool at https://consensus.app?
It recently got some serious funding! I've been using it for a few years now.