Everything you need to know about psychedelics and mental illness
The science of psychedelics is everywhere – but we should treat it with serious scepticism
Do you keep seeing mentions of psychedelics everywhere you look? Does the entire world suddenly seem alive with studies of magic mushrooms, LSD, and MDMA? Does it seem like huge numbers of scientists are telling you the cure for anxiety, depression, PTSD and much else might lie in the same drugs that helped The Beatles write Sgt. Pepper?
You’re not hallucinating. It’s real. At essentially any time, you can put “psychedelics” into Google and you’ll see the little News box pop up with new, exciting stories. Here’s just a sample from when I started writing this article – all published within 48 hours of each other:
“Psychedelics may ease cancer patients’ depression, anxiety” (Washington Post)
“How pro surfer Koa Smith overcame depression and trauma with psychedelic mushrooms” (The Independent)
“Psychedelic drug LSD may be effective as anxiety treatment” (Medical News Today)
Combine this constant drumbeat of news stories with more lasting contributions like the ultra-popular Michael Pollan book How To Change Your Mind (soon to be turned into a Netflix documentary), and this stuff is impossible to ignore. How should we think about it? That’s what this post is all about.
Why this gets people so excited
First things first. Why is the psychedelics-for-mental-illness field so often in the news, and why are people so eager to hear about it? Here are my best guesses:
We’re not very good at treating mental illnesses like anxiety, depression, and PTSD. Even with drugs that are better than placebo, the side effects are unpleasant. Discovering that a whole different class of drugs—drugs that people often take for fun!—relieves mental illness would be quite the breakthrough;
Psychedelics have a certain mystique: you can use them to “go on a journey” and “look deep inside” and “find yourself”. They reveal new, creative truths about your psychology and about the world (remember when Tony Soprano goes on a peyote trip in Las Vegas and ends up screaming “I GET IT!!” to the rising sun in the desert?). It seems plausible that drugs with this kind of power could disrupt people’s damaging thought patterns and get them on the road to recovery;
Quite a few psychedelic drugs are naturally occurring, and have been used for hundreds, if not thousands, of years (this doesn’t include synthetic ones like LSD, obviously). We all know there’s a temptation to think “it’s natural and traditional, so it must be good”. This is clearly part of the attraction of the online “wellness” and “spirituality” movements, and it’s no less true for psychedelics – which are increasingly a part of those “alternative” spheres;
There’s something attractively “counterintuitive”, or at least “countercultural” about psychedelics – those stodgy old folks who told you drugs were bad, mm’kay? They’re wrong! And in fact they’re wrong in the most embarrassing sense: they’ve been proved to be wrong by clinical trials!
For my part, I have no personal interest—in either the curiosity or the financial sense—in psychedelic drugs; I’ve never taken them and I don’t plan to. I’ve nothing against them, either: I’d think about taking them if I had a relevant condition and the evidence they worked was convincing. Oh, and politically, I’m sympathetic to the idea of legalising almost all currently-illegal drugs. Consider that my “conflict of interest” statement for this article. Which leads us nicely to the first big problem with the science of psychedelics.
Research is me-search
A few years ago, John Ioannidis published an article on conflicts of interest in nutrition research. The conflicts you get in this field, he said, are different from those in, say, Big Pharma-funded trials of new drugs. Not only are there those usual kinds of financial conflicts—some research is paid for by the food industry; nutrition scientists have diet books to sell—but there are “nonfinancial” conflicts, too. If you’re a strong adherent to the particular kind of diet you’re researching (vegan, Atkins, gluten-free, etc.), Ioannidis argued, you should disclose this at the end of the paper, so readers can be fully informed about how the research was produced.
I’ve previously argued that scientists should consider doing this for membership of political parties, too, where it’s relevant. For instance, this paper argues that Margaret Thatcher was bad – but discloses that the lead author is a member of the Scottish Socialist Party, so, like, he would say that.
It’s the same for psychedelics. This is just an anecdotal account, but there’s an interview with the psychedelics researcher Manoj Doss, who says that he “only know[s] one psychedelic researcher who’s never done psychedelics”, and notes (in an encouragingly self-critical way) that this is a conflict of interest. He’s right! Just as you’d feel extra-sceptical if all the research showing that pork is unhealthy was written by Muslims who’d already decided for religious reasons not to eat pork, you should be worried about the sheer number of studies by psychedelic researchers who are themselves aficionados of the drugs.
(You might wonder if they’re into psychedelics precisely because the research shows such impressive benefits, switching around cause and effect. But as we’ll see below, that evidence doesn’t exist yet. That particular horse is coming way after the cart).
This isn’t just the view of one researcher. Reading the literature on psychedelics, you continually encounter concerns about the “over-exuberance” of some scientific advocates of the drugs. There are also discussions of conflicts of the financial kind: we know that there are a lot of psychedelic-drug companies springing up all over the place, and we know they pay consultation fees to psychedelics researchers because these are disclosed in research papers (see e.g. the section near the end of this paper, which we’ll discuss in more detail below).
There are all kinds of problems—from outright publication bias to more subtle “questionable research practices”—that can creep in when researchers have a conscious or unconscious bias in one particular direction. It doesn’t take much to push the statistical results in a field towards unreliability and false-positivedom. One 2020 paper suggests methods, many of them from Open Science, that psychedelics researchers should consider using to try and reduce these biases. I doubt many papers on psychedelics use them already.
So we could add this to our “why are people so excited?” list above: some scientists are into the idea that psychedelics might work for mental illness because they use psychedelics themselves. Psychedelic users are their ingroup. They want them to succeed, and they want the drugs they use to be more than just recreational. Planning studies, analysing their data, and writing them up while holding this kind of bias—even leaving aside any financial conflicts—is a recipe for producing misleading results.
Some psychedelic researchers don’t even try to hide their bias. During a podcast interview several years ago, the Imperial College London-affiliated psychiatrist Ben Sessa had this to say:
“All the major papers that've been published in the last 5 to 8 years, I’ve reviewed - all of them… I approved them all. I mean, I suppose maybe I should be less biased. But I approved them all... I think they're all great papers…”
Yikes. He even lists the papers he’s reviewed on his website, so we can have some idea of which ones we might wish to be extra-sceptical about (Sessa has also been criticised on social media for a graph he made that, without clearly informing the reader, combines the results of two studies as if they were one).
But even if the specific scientist running a psychedelics trial isn’t themselves a “psychonaut”, there’s still the more mundane kind of bias, a bias that almost all scientists have: towards finding positive, cool, encouraging, exciting results (as opposed to null, shrug-inducing, disappointing, boring ones). Obviously we all want to be able to help people who are suffering; sometimes that urge can overrule our more sober, scientific desire to run a completely fair test of our favoured treatment. Stack that more general bias on top of the specific ones we’ve already discussed, and you can see how things could go very wrong.
And as we’re about to see, it’s pretty clear how strongly some prominent psychedelics researchers want these drugs to work.
Telling on themselves
We all have a good laugh at scientists who find statistically non-significant results but write them up as if to imply they’re still significant anyway. They might use phrases like “trending towards significance”, or “approaching borderline significance”, or “verging-on significant” or even “well-nigh significant”.
The reason this is funny is that, even though the scientist has agreed to play the game one way (“the results are either statistically significant or not”), upon seeing their results they move the goalposts in a very transparent way (“actually results that aren’t significant still favour my theory!”). I don’t mean “transparent” in the good, commendable way. These researchers are inadvertently revealing their true thoughts to the reader – thoughts that are something like: “I know this result is true! Actual data be damned!”. In other words, they’re telling on themselves.
Another common form of scientific telling-on-yourself is to write about your research in public differently to how you write about it in scientific journals. Scientific journals, unreliable as they are, normally have some level of quality control: peer-reviewers will see your paper, as will an editor. On the other hand, if you’re writing in public your editor might not be scientifically trained; if you’re writing for your own blog or newsletter you can say whatever you like.
Here’s an example – and it happens to concern one of the most important psychedelic trials yet performed. In April 2021 the psychedelics researcher Robin Carhart-Harris (now at UC San Francisco; back then at Imperial College London where he still has an affiliation) wrote an article in The Guardian entitled “Psychedelics are transforming the way we understand depression and its treatment”. It was part of the publicity for a new randomised controlled trial (RCT) he’d co-authored, which had just been published in the world’s top medical journal, the New England Journal of Medicine.
The psychedelic in question was psilocybin, the main active ingredient from magic mushrooms. Before we get to the 2021 trial, it’s worth backtracking slightly to look at what evidence on psilocybin and depression existed beforehand. Here are the trials:
A “pilot study” from 2011 of psilocybin in advanced-stage cancer, including 12 patients, which found no statistically significant results on depression measures;
A “feasibility study” in 2016. This had 12 patients and no control group, so can only really be used to, as the authors put it, “motivate further trials”;
A double-blind RCT from 2016 in 51 patients with life-threatening cancer which found that psilocybin had very positive effects on mood that lasted for up to 6 months;
Another double-blind RCT, also from 2016, in 29 patients with life-threatening cancer that again found beneficial effects on depression symptoms over 6 months;
A final RCT from 2020, in 27 patients with major depression (of whom 24 actually finished the study) that found “large, rapid, and sustained antidepressant effects” over 4 weeks.
So, pretty tiny studies; some of them in very specific populations. The 2021 NEJM study was the first—and remains the only—study to compare psilocybin to an established, commonly-prescribed antidepressant drug – in this case the SSRI drug escitalopram. As with all the above, it was very small (59 people), but the comparison it looked at was a big deal. That’s because it’s not enough to show the psychedelic drug is better than placebo – after all, we already have treatments for depression. We need to know how it compares to those pre-existing treatments.
Here’s how Carhart-Harris described the results in his Guardian piece:
“Across four different measures of depressive symptoms, the average response rate to escitalopram at the end of the trial was 33%. In comparison, psilocybin worked more rapidly, decreasing depression scores as early as one day after the first dosing session. At the end of the trial, the average response rate to psilocybin therapy was more than 70%.
While we suspected that psilocybin might perform well compared to the SSRI, we had not expected it to perform as well as it did.”
Just a second. This rosy image isn’t what you’ll see if you read the study itself. Let’s look at the Abstract, which mentions the “QIDS-SR-16” (the Quick Inventory of Depressive Symptomatology), a depression questionnaire that patients use to report their own symptoms using 16 questions, and which the researchers selected as the primary, pre-registered outcome of the trial. Here’s what the Abstract says:
“The mean (±SE) changes in the [QIDS-SR-16] scores from baseline to week 6 were −8.0±1.0 points in the psilocybin group and −6.0±1.0 in the escitalopram group, for a between-group difference of 2.0 points (95% confidence interval [CI], −5.0 to 0.9) (P=0.17).
A QIDS-SR-16 response occurred in 70% of the patients in the psilocybin group and in 48% of those in the escitalopram group, for a between-group difference of 22 percentage points (95% CI, −3 to 48).
…
Other secondary outcomes generally favored psilocybin over escitalopram, but the analyses were not corrected for multiple comparisons.”
On the main outcome, the results weren’t statistically significant at the standard p < 0.05 criterion (the difference in baseline-to-week-6 change has p = 0.17; the p-value for the response-rate difference isn’t reported, but given the confidence interval it would be something like p = 0.09). And the escitalopram response rate on this primary measure of depression (48%) was a fair chunk higher than the 33% figure Carhart-Harris quoted in his Guardian article, which was averaged across four different tests. Nowhere in the paper is that averaged figure calculated, and in fact the paper downplays the other tests, which were “secondary analyses”.
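If you want to check that back-of-the-envelope figure yourself, a standard two-proportion z-test on the published response rates gets you most of the way there. (The responder counts below are inferred from the reported percentages and group sizes, so treat this as a rough sketch, not the trial’s own analysis.)

```python
import math

# Two-proportion z-test on the QIDS-SR-16 response rates.
# Counts inferred from the published figures: 70% of 30 and 48% of 29.
x1, n1 = 21, 30   # psilocybin responders
x2, n2 = 14, 29   # escitalopram responders

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal approximation

print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")
```

Run it and you get a p-value of roughly 0.09: suggestive, but not significant at the conventional 0.05 level.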
So in the Guardian article, it sounds like the study was a huge victory for psilocybin; in the study itself, it looks like it produced null results on its primary endpoint. How do we reconcile these descriptions? Another article by Carhart-Harris, on the blog of the psychedelics app MyDelica, might help us understand. There, he describes how he felt pressured by one of the editors at the NEJM into writing up the findings in a very conservative way. He argued that because each of the secondary measures of depression and wellbeing showed more beneficial results for psilocybin versus escitalopram, the primary result (on the QIDS) might be a “false negative”, and he’s justified in saying that psilocybin might beat escitalopram. In frustration, he writes:
“...we predicted psilocybin’s superiority on well-being [as opposed to depression] ahead of the trial. In actual fact, although we selected the QIDS [depression] as our main outcome, we had not predicted psilocybin’s superiority on this measure. Psilocybin actually outperformed our pre-trial expectations!”
This is odd. Why would you select a measure as the primary outcome of your trial if you didn’t predict a difference on it? Why would you plan to do a test (as they do in their study protocol) between psilocybin and escitalopram if you didn’t expect psilocybin to be superior? Just like the “trending towards significance” language we saw above, surely it isn’t fair to write down a prediction about depression, see it not come true, and then move the goalposts to talk about wellbeing instead?
If you go beyond the Abstract, the paper itself is even more blunt about those other, “secondary” outcomes:
“Because of the absence of a prespecified plan for adjustment of confidence intervals for multiple comparisons of secondary outcomes, P values are not reported and no clinical conclusions can be drawn from these data.”
It’s a fair point: they measured an awful lot of outcomes, at several different points in time. Adjusting these for multiple comparisons (another method that avoids false-positives) is not straightforward, mainly because they’re all correlated with each other, making the statistics way more complicated. Isn’t it a bit “not on” to say in the paper that no clinical conclusions can be drawn… and then go ahead and draw conclusions in the popular media anyway?
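To see why the journal insisted on that caveat, here’s how quickly false positives pile up when many outcomes are tested without correction. (Illustrative numbers only; this assumes independent outcomes, the simplest case, whereas the trial’s correlated outcomes are messier still.)

```python
# Family-wise error rate: the chance of at least one false positive when
# testing several truly-null outcomes, each at alpha = 0.05.
# Assumes independence between outcomes (a simplification).
alpha = 0.05
for n_outcomes in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** n_outcomes
    print(f"{n_outcomes:2d} outcomes -> P(at least one false positive) = {fwer:.2f}")
```

With 20 null outcomes you’d expect at least one “significant” result about 64% of the time, which is exactly why uncorrected secondary outcomes can’t support clinical conclusions.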
And that’s the main issue: none of the ambiguities made it to the Guardian article. It described uniformly encouraging results in favour of psilocybin, and didn’t say anything about the main outcome showing no statistically-significant difference or the fact they decided not to interpret the other measures due to statistical complications. To me, that’s revealing. It’s also needless, because a result where magic mushrooms do just as well as—even if not better than—a standard antidepressant is actually quite impressive in and of itself (or at least it would be, if it was confirmed in further, bigger studies).
Should there even have been a Guardian article or a press release for this paper in the first place? The fact there’s so much hand-wringing over what the results mean is a pretty good indicator that the answer is “no”. At the very least, anyone looking up the scientific paper after reading the glowing media article will be confused by how circumspect the paper is. You could even argue that in some sense it’s better to keep the speculation for the scientific journals—where it’s aimed at other scientists who are more likely to understand the uncertainty—and be over-conservative when communicating results to the public.
I got into “the weeds” a little with this specific study, but with good reason: it illustrates a broader point about the enthusiasm and hype surrounding psychedelics. Even if researchers and press officers are careful not to say anything outright incorrect about the studies they’re describing, the overall slant the public is getting is a very positive, very exciting one, rather than the much less certain one you find in the scientific literature.
It’s worse than that, though. Even if the results of this study had been far less ambiguous, I’d still have nixed the press release and the publicity. The next section explains why.
LSD (Lots of Scientific Difficulties)
Let’s imagine the “shrooms vs. SSRIs” study had come out with totally clear-cut results: a significant difference on the QIDS as well as everything else. I’d still urge caution. We noted above that it’s a small trial: 30 people in the treatment group, 29 in the control. As the statistician Kevin McConway notes in his long comment on the trial (which is well worth a read), this is a Phase II trial – it’s not supposed to be definitive, and much bigger Phase III trials would have to be done to pin down the effect of the drug.
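To put rough numbers on “not supposed to be definitive”, here’s a quick normal-approximation power calculation for a trial of this size. (This is a generic sketch with a conventional “medium” effect size plugged in, not the trial’s own power analysis.)

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Approximate power of a two-arm trial, ~30 per group, to detect a "medium"
# standardised effect (d = 0.5) with a two-sided test at alpha = 0.05.
n_per_group = 30
d = 0.5
z_crit = 1.96
z_effect = d * math.sqrt(n_per_group / 2)
power = normal_cdf(z_effect - z_crit)

print(f"power = {power:.2f}")
```

The answer is about 0.49: a coin flip’s chance of detecting even a medium-sized effect, which is why Phase II results like these need Phase III confirmation.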
But there’s an even bigger reason to take psychedelic drug trials with a pinch of salt: they’re really hard to run – harder than trials of many other kinds of medical interventions. This is nicely illustrated in a new paper called “Great Expectations: recommendations for improving the methodological rigor of psychedelic clinical trials” (given the subject, it is something approaching a crime that they didn’t go with “High Expectations”, but never mind). It’s a nice summary of the methodological issues surrounding psychedelic trials, which include many of the usual issues such as regression to the mean, getting the control group right, and the “observer effect” (or “Hawthorne effect”).
Chief among these issues—as you can tell from the paper’s title—is the problem of participant expectancy. Remember when your friend was in one of the COVID vaccine trials, and they told you they knew whether they’d gotten the placebo or the real thing because “I had a really sore arm” or “you wouldn’t get these kinds of side effects with a placebo injection”? Imagine that, but times a hundred: psychedelic drug trials are really hard to blind. That is, it’s very hard to hide from a participant in a trial that you’ve given them a psychoactive substance, especially when it’s being compared to a sugar-pill placebo – because the drug is going to have an obvious psychoactive effect.
This kind of “unmasking” (or “unblinding”) can do serious damage to a study. If you know (or strongly suspect) that you’re in the experimental group, you might be motivated to report stronger results on the questionnaires. You might feel very lucky that you’ve been included in a trial at all, so you might want to try and make the experimenters happy by letting them know their drug has an effect. Conversely, if you know you’re in the control group, your mood might slump, since you know that you’re taking a useless inert pill.
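A toy model makes the worry concrete. Every number here is invented for illustration; the point is only that broken blinding plus self-report can manufacture an apparent effect out of nothing:

```python
# Toy expectancy model: the drug has NO true effect, but participants who
# believe they got the drug report a few extra points of improvement.
# All numbers below are invented for illustration.
true_effect = 0.0
reporting_bias = 3.0          # extra self-reported improvement if you think "I got the drug"
p_think_drug_treatment = 0.9  # treatment arm: obvious psychoactive effects give it away
p_think_drug_control = 0.2    # control arm: inert placebo, so few guess "drug"

observed_difference = true_effect + reporting_bias * (
    p_think_drug_treatment - p_think_drug_control
)
print(f"apparent benefit = {observed_difference:.1f} points")
```

This “trial” shows a 2.1-point benefit for a drug that, by construction, does nothing at all.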
This is compounded because of self-selection: participants in a clinical trial aren’t a random cross-section of the population, and the kind of person who signs up to a trial of psychedelics is much more likely than average to be someone who’s experimented with these kinds of drugs before. They’re more likely to know what being high feels like, again threatening the blinding of the trial.
Not only that, but if a participant is already a part of psychedelics culture—or has had their expectations raised by reading hyped-up descriptions of the effects of the drugs in the media—it could make these effects even worse. One of the “Great Expectations” co-authors, who ran a study on the psychedelic drug ayahuasca, provides an anecdote:
“…one of the participants asked… if they should stop participating in the study because they did not have a mystical experience and did not want to ‘ruin the research.’”
This really isn’t like other kinds of medical research. Nobody expects a trippy, mind-altering journey in a trial of a statin or a hair-loss treatment or a vaccine. So what can be done to lessen these kinds of expectancies? The authors have a bunch of ideas. A better control in this kind of study is a drug that provides some of the same psychoactive effects, but is otherwise unrelated to the drug that’s at issue. That’s easier said than done, and very few studies include this kind of “active” placebo. Perhaps one of the most intriguing ideas is telling participants that they might receive one among several different psychedelic drugs (or a placebo), so even if they feel the effects, they’re less likely to know it’s the specific one that’s of interest in the trial.
Alas, in practice this doesn’t always work – the vast majority of participants in a couple of trials where this has been tried have still correctly worked out what the drug is. Selection effects happen here, too: it takes a particular, perhaps not very typical, kind of person to sign up to be given one of 4 or 5 different drugs. Nevertheless, the point is that with a bit of thought (and perhaps with the use of some of the more complicated modern randomised controlled trial designs), a lot more could be done to try and lessen—or at least understand—the effects of expectancies on psychedelic drug trials.
One response from psychedelics researchers to the worries about expectancy effects is that, well, doesn’t every kind of medical intervention involve expectancy to some degree? Shouldn’t we, to quote Carhart-Harris in the blog post I mentioned above, “view it [expectancy] as a potentially exploitable ally rather than a scientifically confounding foe?”.
But there are two problems here. First, at present the question is not whether psychedelics work really well and there might be some expectancy effects on top. The question is whether they only appear to work because of expectancy effects. That is, the positive effects in the trials could, potentially, be due entirely to broken blinding and participant expectations influencing their responses. Until we’ve definitively ruled that out (with, for example, a lot more studies that ask detailed questions about which condition participants thought they were in), we can’t treat expectancy as a bonus to stack on top of a real effect.
Second, the argument is eerily similar to that made by alternative medicine proponents—or rather, people who sympathise with alternative medicine even if they don’t use it or believe in it themselves—who say “yes, it might all or mostly be due to placebo. But why not just let people believe, if they believe it’ll help? What’s the harm?”. Well, there’s at least debate about how strong placebo effects are, in reality: a classic 2001 review recommended against relying on them for anything outside of the design of a randomised controlled trial (and a 2010 follow-up agreed).
And what’s the harm? Let’s see.
The harm
Anyone who’s interested in psychedelics and mental illness should listen to the New York Magazine podcast series “Cover Story: Power Trip”. Among several other stories, the podcast reports investigative work around clinical trials of MDMA (ecstasy) therapy for Post-Traumatic Stress Disorder, run by an organisation called the Multidisciplinary Association for Psychedelic Studies, or MAPS.
The crucial thing here is that it’s not just MDMA: it’s MDMA plus therapy (by the way, MDMA is only sometimes classified as a “psychedelic drug”, depending on who you talk to; but MDMA therapy is almost always classed as “psychedelic therapy”, since some of the effects overlap with drugs that are definitely psychedelics). And as described in the podcast, that therapy can get very weird.
The danger is that psychedelics can make users pliant and suggestible, which is obviously risky if a patient is left in the hands of anyone with less-than-noble intentions (there’s a reason so many cults over the years have used psychedelics, including MDMA, to help keep their members compliant). The podcast relates the story of Meaghan Buisson, a PTSD sufferer who underwent intense MDMA therapy with two therapists in a MAPS trial. The therapy was filmed in its entirety – and it’s grim. If you can stomach it, you should watch the video, which was uncovered in the NY Magazine investigation. But if you can’t, here’s a description:
“The therapists, Richard Yensen and Donna Dryer, guide Buisson through three long sessions with follow-ups in between. They give her the drugs and, as she recalls, coax her to relive her sexual assaults. They ask her to spread her legs, and at several points, they lie on top of her and pin her down, sometimes holding her wrists. The two then comfort Buisson by stroking her face and climbing into bed with her. There are periods in the video when Yensen is in constant physical contact with her.”
At other points, Yensen and Dryer blindfold Buisson, gag her with a towel, and ignore her as she screams at them to get off her. Yensen also “admitted to having sex with Buisson after the experimental sessions ended but while she was still enrolled in the clinical trial”. This is a far cry from the friendly, comfortable, hand-holding sessions you see in photos from other MDMA (and psilocybin) therapy sessions. MAPS has said that Yensen and Dryer did not stick to their therapy protocol and that they won’t be working with them again.
Rick Doblin, Executive Director of MAPS—who, incidentally, has argued that psychedelic therapy could have prevented the War in Iraq and will produce a “spiritualised humanity” by 2070, and who encourages his employees to smoke marijuana while doing certain tasks at work—was asked about the potential for sexual abuse by therapists in a recent Q&A session. He had this to say:
“We’re trying to make it so that the source of the healing is inside the patient… that will hopefully make them stronger when there is, um, y’know, pressure perhaps from therapists to, y’know, engage in a sexual relationship.”
Perhaps not ideal.
Let’s be fair: abuse can and does happen in many (perhaps all) different kinds of therapy. But the addition of MDMA to this kind of therapy clearly makes the patient more vulnerable. It’s worth asking whether the people running the trials—who, if their Executive Director is at all representative, are very invested in finding positive results—are paying enough attention to the potential for abuse, even if it only occurs in a small minority of cases.
While we’re here, it’s worth talking about the MAPS MDMA-therapy trial itself. In May 2021, to much media fanfare, they published their Phase III clinical trial in Nature Medicine. The trial reported enormous, nearly unbelievable effect sizes: 0.91 of a standard deviation difference in score on a PTSD questionnaire between an MDMA and a placebo group (both groups got therapy). That’s the kind of effect medical researchers fantasise about.
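To get a feel for how large 0.91 of a standard deviation is, you can convert it to a “probability of superiority”: the chance that a randomly chosen patient from the MDMA arm improved more than a randomly chosen patient from the placebo arm. (This conversion assumes normally distributed scores, which is a simplification.)

```python
import math

# Common-language effect size: P(random treated patient beats random control
# patient), given Cohen's d and assuming normal distributions.
d = 0.91
prob_superiority = 0.5 * (1 + math.erf(d / 2))  # equals Phi(d / sqrt(2))

print(f"probability of superiority = {prob_superiority:.2f}")
```

That works out to roughly 0.74 – nearly three-to-one odds – which is the sort of result psychiatric drug trials almost never produce, and exactly why the expectancy worry matters.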
Are these effect sizes plausible? Nature Medicine also published two critical commentaries on the paper. The first one points out the issue we discussed above: that the blinding was almost certainly compromised in the study, since the control group got a completely inert placebo and the treatment group surely could tell whether they’d been given ecstasy. Expectations could have played a major role, and might explain at least some of the huge difference between groups (the commentary also makes the interesting point that researchers used to be required to collect data on which condition patients thought they were in, but since 2010 this is no longer the case. It might be time to bring back this rule). The second commentary asked for studies with more relevant control groups and longer follow-up periods to check the safety of the treatment. The MAPS researchers also responded, somewhat limply.
The journal also published an entirely uncritical commentary co-authored by Imperial College London’s David Nutt. Remember him? He was Fired4Truth as a drugs adviser by the UK Labour government in 2009 for criticising government policy on drug classification; he’s now a major psychedelics researcher who co-authored the psilocybin trial we encountered above. The commentary calls MDMA “remarkable” three separate times in about 1,000 words (it’s true the effect sizes in the study are remarkable, but what if they’re due to expectancy bias?). It also says it’s likely that MDMA “will be an approved medication within a few years”.
So there’s a final potential harm that stems from the breathless hyping and cheerleading of psychedelic treatments way beyond the evidence: it could mean that therapies—which come with their own rare but real dangers—are approved and rolled out to large numbers of patients when we need a lot more evidence that they really are beneficial in the long term.
At the very least, I’d hope commentators should see some serious warning signs about hype—and, let’s be honest, general weirdness—here (the Canadian government’s health department is now looking into the complaints about MAPS; maybe other countries will follow suit). For instance, I wouldn’t mind seeing some replications of these studies by researchers who aren’t quite so convinced that what they’re doing is the next big health—and indeed global spiritual—revolution.
The end
This very long article could’ve been much longer. There’s a whole world of psychedelic research out there, and much of it looks, er, not very good. Overall, it’s hard to avoid the impression that this entire research field is aimed less at a dispassionate, hard-headed analysis of the effects of these drugs, and more at building up an image of psychedelics as friendly, safe, cuddly, and life-enhancing.
There’s also a lot of policy advocacy. A classic example is a 2015 paper in the Journal of Psychopharmacology which looks at an observational sample and finds no correlation between lifetime use of psychedelics and psychological stress over the previous year. At the end of the paper’s Abstract, the authors write:
“Psychedelics are not known to harm the brain or other body organs or to cause addiction or compulsive use; serious adverse events involving psychedelics are extremely rare. Overall, it is difficult to see how prohibition of psychedelics can be justified as a public health measure” [my italics].
This is activism, not science. As I said above, I actually support drug legalisation – but it’s far from appropriate to include this kind of thing so prominently in what’s supposed to be a neutral, factual scientific analysis.
Which leads me to the obligatory point that shouldn’t need stating, but which I’ll state anyway: I’m not saying psychedelics couldn’t possibly work for mental illness! This isn’t like ivermectin or vitamin D for COVID-19, where you could make a pretty good bet that the proponents—who also got way ahead of the evidence in both cases—would end up with their hopes dashed (there never was a plausible rationale for how these drugs would work against a respiratory virus). There is definitely a possibility that one or more of these psychedelic treatments, with or without the therapy, will end up promoting better mental health.
Perhaps the most obviously similar field is that of the microbiome: it’s completely plausible to me that there’s a kernel of truth to many of the claims about how, say, the gut influences the brain – but the hype has vastly outstripped what the studies have actually shown.
Hopefully this article has convinced you that there are a lot of red flags in the psychedelic scientific literature, and that you should perhaps set your standards higher than usual when reading about this area. To summarise everything I’ve said, we should bear the following in mind:
The reasons people are so excited about psychedelics for mental illness are not necessarily related to how much evidence there is for the treatment;
The conflicts of interest in the psychedelic research field go very deep. It’s completely plausible that these conflicts help bias the studies toward reporting more positive results than are actually true;
The discussion of psychedelic research in the media often bears only a passing resemblance to what’s actually in the trials – or at least it sounds a lot more certain and optimistic than is warranted;
Even the best-quality trials are flawed in important ways because—mainly due to problems of expectations and blinding—this is an incredibly difficult thing to study;
We shouldn’t allow our excitement for psychedelic drug trials to run roughshod over safety concerns, and we shouldn’t be so desperate for a breakthrough mental health treatment that we roll out these drugs before we’ve tested whether they work in high-quality, smartly-designed studies.
We’re nowhere near reaching peak interest in psychedelics. With all the scientific research programmes, and all the companies vying with each other to create the bestselling psychedelic product, maybe there’s a chance some of the above uncertainties will be ironed out in the coming years.
On the other hand, researchers might just continue doing low-quality, hard-to-interpret studies that back up their pre-existing beliefs, giving false hope to sufferers of mental illness. You don’t exactly have to be tripping to imagine that.
—
Acknowledgements: I’m extremely grateful to Ed Prideaux for pointing me towards this area and providing me with a great deal of background information, and to Alex Riley for extra discussion. Saloni Dattani, Anne Scheel, Robbie Bowman, and Sam Dumitriu all gave very helpful feedback on a draft. None of these people necessarily agree with anything I’ve said about psychedelics in this article.
Image credits: Getty