Science Fictions links for August 2024
"Honey, I spent three hundred million dollars on a fake scientific publisher" - and so much more
WHAT? It’s a list of links about bad science—fraud, error, poorly-designed studies—that I’ve collected over the past month.
WHY? If we don’t understand how bad things can get, we won’t be able to find realistic ways to make them better.
WHERE? It’s right below. Just scroll down a bit.
WHEN? In your own time. You’ve got T-minus one month before the next one.
WHO? I don’t really understand the question, but if you want to tell other people about this newsletter, I certainly won’t stop you. Look, there’s a special “share” button and everything. Go wild:
The links
A meta-analysis found that in those experiments where people are made to stay off social media, there’s not much evidence of improvements in mental health. Jonathan Haidt then did a re-analysis, claiming that the experiments do in fact show a causal (negative) effect of social media on wellbeing. But, as Matt Jané convincingly shows here, Haidt messed up the stats, and his re-analysis shows nothing of the sort.
On that same topic, here’s another critique of a bad “social media effects” study.
Richard Van Noorden’s stuff is always worth reading. This article, on the bizarre papers where up to 60% of the citations are to retracted studies, is no exception.
Another useful survey study where scientists are asked if they’ve ever used any “questionable research practices” like coming up with their hypothesis after analysing the data or selectively reporting data. Predictably, large numbers say they have.
Here’s a strange one. Turns out that a quarter of studies that use a scanning electron microscope misidentify the microscope—that is, they include the wrong manufacturer or model name (the study authors liken it to having done all your analysis in R, but writing in your paper “we did all our analysis in Python”). Why would this happen? Well, one possibility is that the studies were never done in the first place, so all the information in them is just garbled nonsense.
Another example where—perhaps for political reasons—peer reviewers criticise a study because of its results, not its methods. Doing work on racial discrimination is a total minefield, and it seems to me that basically every study in the area should be a Registered Report (where the journal commits to publishing the study on the basis of its methods, before any data are collected).
TBH, I think basically every study in every field should be a Registered Report, so I’m not making an exception here really.
Part 2 of an ongoing series from James Heathers describing “the biggest screwup in the entire history of academic publishing”: how a big, “respected” publisher (Wiley) acquired a smaller publisher (Hindawi) for $298m… and didn’t notice for nearly two years that their journals were stuffed full of entirely fake, fraudulent papers.
They never learn: here’s a story of pretty much the exact same thing (though on a smaller scale) happening at Sage.
More good stuff from Chris Said, this time a pithy little post on compensating scientific whistleblowers.
In Vox, Kelsey Piper describes a case of scientific fraud from 2014 (which, I have to admit, I hadn’t heard of!) that might’ve killed many thousands of patients.
A nice reminder that it’s not just an epistemic insult when someone fakes research data: if those data start getting included in medical meta-analyses, the consequences become very real.
Nice to see even more pushback against what might be called the “naive view” of misinformation (held by many highly-credentialed researchers!).
A biology professor who was the head of a department at the University of Maryland was found to have faked data in 13 papers and 2 grant applications. His punishment: he’s not allowed to work for the government or apply for grants for 8 years, and he has to retract the papers that haven’t already been retracted. Look, I don’t want to get into “cancel culture” or anything, but… fire this guy ASAP!
Causal estimation methods are taking something of a beating this month: a paper on the many issues with using rainfall, that classic instrumental variable; and another on the many pitfalls of difference-in-differences analysis.
And while this one isn’t formally a regression discontinuity analysis, it’s something close to one, and it’s still fun to read Andrew Gelman’s post where he calls it “absolute crap”.
A paper complains that PubPeer comments about scientific misconduct are leaving universities with a “paralytic burden” of investigations. How about criticising the scientists, and not the people pointing out their fraudulent work?
This is kind of a positive retraction story: some quantum physicists had their own paper retracted from Science because they found that they’d inadvertently made errors in the analysis, and were honest enough to admit it and correct the record. Big respect for that—but the retraction note explaining the errors is extremely terse, and I feel readers deserve a much more detailed explanation of what went wrong.
And finally… if, like me, you enjoy the absurd things scientists write when their result doesn’t quite reach statistical significance, you’ll love this one.
P.S. The Studies Show
Look, if you don’t subscribe to my podcast with Tom Chivers, you’re missing out. There have been multiple reports that a certain moment in our recent episode on the marshmallow test caused listeners to literally “LOL”. Loads more good stuff on the site, too.
Image credit: Getty