Science Fictions links for November 2023
The most interesting, amusing, and depressing bad-science stuff on the internet
Hello! For a change this month, I’m just going to get right to the cool bad-science links from around the internet, rather than reposting all my own stuff.
I have some job news incoming that might mean that the Science Fictions Substack returns to its former not-just-linkposts glory very soon…
…but for the time being: a linkpost! If you like getting links to all the most interesting happenings in the world of scientific reform and scientific integrity in your inbox, do sign up below:
But first… The Studies Show
I’m really enjoying this podcasting lark. Our special spooky Halloween episode on parapsychology seemed to go down particularly well, and you might also be interested in our discussions on scientific fraud, attention spans, and (for paying subscribers only), long COVID.
We’ve been building up subscribers, both free and paid, very nicely and we’d love it if you signed up too. You can do so in the box below:
Your monthly bad science links
A Science Fictions Substack update: you might remember that last year I wrote about a really bad meta-analysis that claimed that homeopathy worked to help people with ADHD. Well, the meta-analysis has now been retracted, and the retraction note mentions some of the flaws I talked about in my post. Sometimes scientific journals do the right thing, and lots of credit is due to Pediatric Research in this case.
Chris Said brings together four recent stories of prominent Alzheimer’s researchers who have been credibly accused of scientific fraud. Is it just random that fraud seems to proliferate in some scientific fields like this one? Or does the sheer desperation for Alzheimer’s advances cause people to overlook dodgy data - or to produce it? Whatever the reason: it’s grim news. As Matt Patton puts it: “these ‘scientists’ squandered tax dollars that were invested to spare you and your family years of horrible suffering”.
While we’re talking about allegations of misconduct, that University of Rochester physicist who claimed he’d found a room-temperature superconductor (no, not that “LK-99” one. A different one) has just had a second paper retracted from Nature - his third overall.
“Psychological constructs and measures suffer from the toothbrush problem: no self-respecting psychologist wants to use anyone else’s”. There are a lot of psychological scales, measures, and questionnaires out there, and most of them are used only once or twice. This is quietly a disaster, because it makes it very hard to compare studies and adds a whole bunch of noise into the literature. And I bet it’s similar in other parts of science, too.
The same researchers, in another paper, find a verrrrry suspicious “bump” in the values of Cronbach’s alpha (a statistic used to assess the reliability of measures) that are just at the threshold that’s considered to be “acceptable”. It’s a bit like those school datasets that show an unexpectedly high number of kids who are just above the pass mark in some teacher-rated test: are researchers (teachers) juking the stats to make their measures (pupils) appear good enough? Almost certainly!
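For the curious: Cronbach’s alpha is easy to compute yourself, which makes the suspicious clustering at the “acceptable” threshold all the more striking. Here’s a minimal sketch in Python (the function name and example data are mine, not from the paper):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: three identical items -> perfect internal consistency
perfect = np.tile(np.array([[1.0], [2.0], [3.0], [4.0]]), (1, 3))
print(cronbach_alpha(perfect))  # 1.0
```

The conventional rule of thumb treats alpha ≥ 0.7 as “acceptable”, which is exactly where the bump in reported values appears.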
Remember Brian Wansink? He’s the tragic Cornell food psychologist who exploded his own career by inadvertently admitting to serious p-hacking. After that, 18 of his papers were retracted, and he resigned from his job. Now, though, his most famous paper—the one with the self-refilling soup bowls, which had won an Ig Nobel Prize—has been successfully replicated! Funny old world.
Scientists are just like magpies who enjoy shiny things: it seems they pay more attention to studying “pretty” bird species than “drab” ones. Maybe that’s rational and there’s more to learn about sexual selection from the pretty ones - but the whole thing is a great metaphor for science in general, where shiny hypotheses get disproportionate attention.
Prof George Davey Smith, having done more than anyone else to popularise the method of “Mendelian Randomisation” (where you can use genetic variation to make causal estimates from observational data), is sort of like Dr. Frankenstein: his creation has come back to haunt him, in the form of a flood of crap MR studies. As previously mentioned on this newsletter, he’s doing his best to stem the flow - and now you can watch him give a talk about it, too.
I keep seeing examples of scientists using legal threats against other researchers who’ve criticised them, and against journals that are taking action to flag or correct potential misconduct. It’s already the case that scientists find accusing others of bad behaviour incredibly aversive - and journals are often very reluctant to act when they do. This just makes the whole thing much harder. Anyone who uses vexatious legal threats to suppress scientific criticism of their work should be drummed out of science entirely.
Let’s end on a positive note for once. This new study shows that if you follow lots of the “Open Science” advice and do things like pre-registering your study, increasing its sample size, and being open and transparent with your data, you can get a very impressive replication rate. This happens to be good news, but regardless of the specific result, we need more meta-scientific tests like this - please keep them coming!
…and I’ll keep the interesting bad-science links coming, right to your inbox, if you join more than 10,000 others by signing up below:
Image credit: Getty