Science isn't storytelling
An article with dreadful advice for scientists reminds us that not everyone has learned the lessons from the replication crisis
And in conclusion, that’s why you should agree that science isn’t storytelling. Thanks for reading the Science Fictions Substack.
Oh, sorry - did you find that a little jarring? I started this article with its conclusion! That’s because I’ve just read an editorial—first published last year but currently getting some attention on Twitter—from the journal Marine Life Science & Technology, which offers exactly this advice to scientists: they should write their scientific papers backwards.
The editorial—entitled “Finding Your Scientific Story By Writing Backwards”—argues that a scientific paper needs to have “take-home messages”. These are the big points that conclude the “scientific story” told by the paper - a story which, developed correctly, will “increase the impact of your work and the likelihood of it being accepted in highly rated journals”.
The editorial goes on to draw an analogy between a scientific paper and a joke - they both need a punchline. “In fact”, the editorial’s authors argue,
many comedians start writing their jokes with a punchline in mind—or at least a rough version of it—and then craft the setup… In other words, the joke is constructed backwards from the punchline, even though that’s not how you tell it. A scientific story is no different.
Having suggested that their ideal scientific paper is “no different” from a joke, the authors provide a diagram of how you should go about writing your scientific paper. The left-hand column is the standard way of structuring a paper, and the middle column is their suggestion for how you should do it:
“Write up only the Results that relate to your conclusions”; “write up only the Methods that relate to your results”; and write the Conclusion first and the Introduction section last. This, as was noted on Twitter, is a recipe for bad studies: it encourages the cherry-picking of results that went your way and the hiding of those that didn’t. It encourages Hypothesising After the Results are Known (HARKing), the kind of miniature historical revisionism where you rewrite your plan for the study to fit its outcome, rather than saying whether the results supported or went against your original hypothesis. We know how easily these practices can distort the scientific record and give us false-positive, nonsense results.
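That claim about false positives is a statistical one, and a quick simulation makes it concrete. The sketch below is mine, not the editorial’s: every “outcome” is pure noise, yet a study that quietly reports whichever of its 20 measured outcomes happened to cross p < 0.05 will “find” an effect most of the time. The group sizes, outcome count, and z-approximation are all arbitrary choices for illustration.

```python
import math
import random

random.seed(0)

def two_sample_p(n=15):
    """One null comparison: two groups of n values drawn from the SAME
    N(0,1) distribution, two-sided p-value via a z approximation
    (variance of each group is known to be 1)."""
    g1 = [random.gauss(0, 1) for _ in range(n)]
    g2 = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(g1) / n - sum(g2) / n
    z = diff / math.sqrt(2 / n)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided tail probability:
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

STUDIES, OUTCOMES = 1000, 20
honest = cherry = 0
for _ in range(STUDIES):
    ps = [two_sample_p() for _ in range(OUTCOMES)]
    honest += ps[0] < 0.05    # report the single pre-specified outcome
    cherry += min(ps) < 0.05  # report whichever outcome "worked"

print(f"false-positive rate, pre-specified outcome: {honest / STUDIES:.2f}")
print(f"false-positive rate, cherry-picked outcome: {cherry / STUDIES:.2f}")
```

With these (made-up) settings, the pre-specified outcome is “significant” about 5% of the time, exactly as the threshold promises - but the cherry-picked outcome crosses it in roughly two-thirds of studies (1 − 0.95²⁰ ≈ 0.64), despite every effect being zero.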
Having a pre-existing conclusion—a “take-home message”—in mind is one of the most reality-distorting influences on our thinking. Just look at how it affects political and social debates, where people defend their previous position (say, “the Conservative/Labour Party is correct”; “my favourite online guru can’t have meant that”) against even the most obvious disconfirming evidence.
Scientists are far from immune to this. Consider:
1. Reporting results accurately is the entire point of science;
2. It’s incredibly easy to analyse data or write a paper in such a way as to report the results inaccurately;
3. Statement 2 is especially true when you have a pre-determined conclusion in mind.
These mean that the mere idea of putting the conclusions up-front, or focusing on take-home messages, should be considered completely radioactive in science. Scientists need to be Odysseus, lashed to the mast to resist the Sirens’ call. They need to be Gandalf, being offered the One Ring and exclaiming “DON’T TEMPT ME, FRODO” (and both the Odyssey and The Lord of the Rings really are stories, by the way, unlike scientific studies).
The idea of “telling a story” in a scientific study puts a premium on the aesthetics of results, as noted in one of my favourite ever papers, by Roger Giner-Sorolla:
The way in which we talk about data being “beautiful” and “neat” as opposed to “ugly” and “messy” shows that their content and presentation carry aesthetic value. Reality, however, should limit the influence of aesthetics on science… If empirical results consistently speak against it, it is the theory, not the results, that must be rejected or revised.
Explicitly arguing that scientists should pay attention to the aesthetics of their “story” just reinforces what’s already there implicitly in our broken scientific journal system. Giner-Sorolla again:
Admitting that you are wrong is part of science. But somehow the belief has taken hold that making such admissions in a research paper is a sign of weakness that muddies the story, eats up journal pages, and confuses the reader. Even being honest about an initial lack of theory or reporting a midcourse correction on the basis of a pilot study can be taken as a fatal flaw.
A scientific paper isn’t supposed to be a story. It’s not an opportunity for you to flex your creative muscles, or to craft a tale (it’s also not an opportunity for irritating attempts at comedy or jokes, as has been advocated on Substack recently and demonstrated in a tiresome “humorous” paper). It’s a sober, clear, accurate description of results whose content and context can be understood by as many people from as many scientific fields and backgrounds as possible. If it’s boring or doesn’t have a “narrative” - well, that’s sometimes just how it is, because in reality scientific studies often career off in unexpected directions that defy attempts to condense them into a clear, linear account.
Science isn’t a story - and it isn’t even a scientific paper. The mere act of squeezing a complex process into a few thousand words—limited by a journal-imposed word count—is itself a distortion of reality. Every time scientists make a decision about “framing” or “emphasis” or “take-home messages”, they risk distorting reality even further, chipping away at the reliability of what they’re reporting. We all know that many science news articles and science books are over-simplified, poorly-framed, and dumbed-down. Why push scientific papers in the same direction?
Indeed, the fact that scientific papers make it so easy to report results with bias has made many researchers advocate for major changes to the publication system. See, for example, the idea of pre-registration or Registered Reports. Some (including me) have even suggested we should do away with scientific papers entirely.
There is, of course, a steelman version of the editorial. It is indisputably true that scientific papers are, for the most part, terribly written: filled with jargon, clunky phrases, endless sentences, and indecipherable acronyms. Writing more clearly, with less chance of misunderstanding, would make science better, and would perhaps encourage collaboration across different fields. Also, if you look at the right-hand column of the figure I reproduced above, there is good advice about how the Results section of a scientific paper should parallel its Method section to make it easier to read. There’s a happy medium between papers that nobody understands and those that tell a story that’s detached from what actually happened.
Now, the authors of the editorial haven’t been living in a cave (or, since they’re marine biologists, in some kind of submarine) for the last replication-crisis decade. They do know these criticisms - at least to some extent. Towards the end of the editorial there’s a “cautionary note” that reads:
Our advice above will be useful only if your underlying data are sound. The guidance we provide here is for writing up a study, not conducting a study. The advice must not be mistaken for guidance on experimental design or data analysis. We have assumed that your experimental design was sensible, your experiments were conducted correctly, your analysis was appropriate to address the questions you were asking, and you have arrived at logical take-home messages. In other words, we assume that there is an appropriate level of academic integrity and academic proficiency underlying your study.
It’s admirable that this was included, but it still displays a terrible naïveté. That’s because the writing of the paper is where much of the bad stuff comes in. For so many scientific projects, the “doing the research” part and the “writing up the research” part are simultaneous - for example, running your statistical analysis as you write your Results section. If you’re doing that having already written your conclusions, it takes a strong will not to lean—consciously or unconsciously—in your pre-determined direction. The idea that all the low-integrity, low-proficiency stuff happens in the study before any writing takes place is just not an accurate description of how normal, everyday science is done.
Although it’s dreadful writing advice, this editorial is a useful reminder in another sense. Those of us (perhaps especially psychologists) who’ve been immersed in this stuff for years might think it’s a bit passé to keep going on about “HARKing” and “researcher degrees of freedom” and “p-hacking” and “publication bias” and “publish-or-perish” and all the rest, but the word still hasn’t gotten out to many scientists. At best, they’re vaguely aware that these problems can ruin their research, but don’t take them anywhere near seriously enough.
So one answer is simply: continue talking about this stuff.
And that’s just what I intend to do. See you in 2023.
Exciting personal news: As you may have seen me mention on Twitter, I’m quitting academia for a Science Writer job at the i newspaper. I’ll update you in early January about this newsletter, which will definitely continue, though likely not on Substack. Watch this space!
Image credits: Flowchart figure: From Montagnes et al. (2022), CC-BY licence. Lord of the Rings photo: Getty.