The NIH's misguided genetics data policy
Banning scientists from using data to research certain topics is a bad move for all sorts of reasons
Last week the behaviour geneticist James Lee wrote an article in City Journal about how the National Institutes of Health (NIH; the US government’s major funder of biomedical research) is blocking scientists from accessing certain kinds of data.
The NIH holds a lot of genetic data in its Database of Genotypes and Phenotypes, or dbGaP for short. To access it for research purposes, you have to send in an application (that is, you can’t just click to download the data). But according to Lee, if you’re doing research on the genetics of specific traits, you might be unlucky. He writes:
My colleagues at other universities and I have run into problems involving applications to study the relationships among intelligence, education, and health outcomes. Sometimes, NIH denies access to some of the attributes that I have just mentioned, on the grounds that studying their genetic basis is “stigmatizing.” Sometimes, it demands updates about ongoing research, with the implied threat that it could withdraw usage if it doesn’t receive satisfactory answers. In some cases, NIH has retroactively withdrawn access for research it had previously approved.
And it’s not like these are studies on the ultra-controversial parts of intelligence research:
Note that none of the studies I am referring to include inquiries into race or sex differences. Apparently, NIH is clamping down on a broad range of attempts to explore the relationship between genetics and intelligence.
So, to summarise: the NIH allows researchers to use the genetic data they host to do research… but not research that might offend people. As the title of Lee’s article put it: “don’t even go there”.
By pure coincidence, I ran into something similar in my own research just two days before Lee’s article appeared.
But first: not everyone who reads this Substack knows about statistical genetics or its technical terms, so I’ve put the relevant ones in a little glossary thing below. If you already know this stuff, feel free to skip on past it.
Little glossary thing
Genome-Wide Association Study (GWAS): A study where researchers recruit a (usually very large) number of people, measure some trait they have, or whether they have a particular disease, and link that trait or disease to variation in each person's DNA. It's used to find the specific points on the genome that are associated (hence the name) with the outcome of interest. From a GWAS you derive...
GWAS Summary Statistics: These are what you get when you do a GWAS: basically a list of every measured genetic variant (often hundreds of thousands of them) and how strongly each one was linked to a particular trait or disease. You can download GWAS Summary Statistics and apply them to entirely new samples of people who weren't included in the original GWAS, such as when you calculate...
Polygenic Scores: Take the GWAS Summary Statistics, and take your new sample of people, all of whom have also had their DNA measured. Then match each individual person's DNA to the Summary Statistics: do they carry a lot of the genetic variants that are linked to the disease or trait you're interested in? Maybe the variants they carry are linked to a lower risk of the disease, or a higher one. Summing up the variants each person carries, weighted by the effect sizes from the Summary Statistics, gives you a single number: their Polygenic Score for the trait or disease in question (see the little code sketch below).
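To make that last definition concrete, here’s a minimal sketch in Python of how a polygenic score is just a weighted sum. Everything in it - the effect sizes, the genotypes - is invented toy data, not any real dataset or anyone’s actual pipeline:

```python
import numpy as np

# Toy GWAS summary statistics: an effect size (beta) for each of five
# variants, estimated in some earlier, independent GWAS.
betas = np.array([0.12, -0.05, 0.30, 0.08, -0.20])

# Genotypes for three new people: 0, 1, or 2 copies of the
# trait-associated allele at each of those same five variants.
genotypes = np.array([
    [0, 1, 2, 0, 1],
    [2, 2, 1, 1, 0],
    [1, 0, 0, 2, 2],
])

# Each person's polygenic score is the beta-weighted sum of their
# allele counts across all the variants.
scores = genotypes @ betas

# Standardise, so each score is expressed relative to this sample's mean.
scores_z = (scores - scores.mean()) / scores.std()
print(scores_z)
```

Real analyses add steps this sketch skips - quality control, dealing with correlated variants, choosing which variants to include - but the core calculation really is just that weighted sum.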
The details of our research project aren’t especially relevant to the overall issue here, but the plan was to see how a variety of polygenic scores predicted a variety of traits between and within families. Normally when you use polygenic scores you look between families - that is, comparing the scores of unrelated people to one another. But you can also look within families: comparing the polygenic score of one sibling (in our case, one non-identical twin) to another, to see if genetic differences relate to traits or diseases.
This controls for a whole bunch of non-genetic confounding factors, and the polygenic scores often end up predicting people’s traits much more weakly when you do it this way. It’s very interesting, if you like that sort of thing: within-family genetic studies are all the rage at the moment precisely because of their ability to rule out that additional confounding (this is following up on a previous paper of mine).
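If you’d like to see what “within families” means in statistical terms, here’s a hedged little simulation in Python - entirely made-up numbers, not our study’s design or data. A confound shared within families (think ancestry, or the family environment) inflates the between-family estimate, but cancels out when you regress twin differences on twin differences:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs = 1000

# Simulate non-identical twin pairs. "family_effect" is a toy confound
# shared by both twins that boosts both their polygenic scores (PGS)
# and their trait values.
family_effect = rng.normal(0, 1, n_pairs)
pgs1 = family_effect + rng.normal(0, 1, n_pairs)
pgs2 = family_effect + rng.normal(0, 1, n_pairs)
trait1 = 0.3 * pgs1 + 0.5 * family_effect + rng.normal(0, 1, n_pairs)
trait2 = 0.3 * pgs2 + 0.5 * family_effect + rng.normal(0, 1, n_pairs)

# Between-family estimate: treat twin 1 from each pair as one big
# sample of unrelated people. The shared confound inflates the slope.
b_between = np.polyfit(pgs1, trait1, 1)[0]

# Within-family estimate: regress twin differences on PGS differences.
# Anything shared by the pair (the confound) subtracts away.
b_within = np.polyfit(pgs1 - pgs2, trait1 - trait2, 1)[0]

print(f"between-family slope: {b_between:.2f}")  # well above the true 0.3
print(f"within-family slope:  {b_within:.2f}")   # close to the true 0.3
```

The gap between those two slopes is, roughly speaking, the confounding that within-family designs are prized for removing.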
We thought we’d look at predicting people’s intelligence test scores (among many other outcomes) from a selection of polygenic scores: for intelligence, for educational attainment, and—since we were including older folks—for Alzheimer’s disease. Do people who have more of the high-risk genetic variants linked to dementia tend to have lower IQs even well before they ever get a dementia diagnosis? Is this correlation confounded in some way? All very relevant scientific questions, you’ll hopefully agree.
Except, the NIH don’t agree. When we tried to access GWAS summary statistics from a 2019 GWAS on Alzheimer’s disease which was stored on another NIH site, called NIAGADS (the National Institute on Aging Genetics of Alzheimer's Disease Data Storage Site), we were stopped in our tracks by the following rule:
Please note that these summary data should not be used for research into the genetics of intelligence, education, social outcomes such as income, or potentially sensitive behavioral traits such as alcohol or drug addictions.
Huh. I should note that there are many downloadable datasets on the same website that don’t have this restriction: it seems to apply only to this one and a couple of others I could immediately find (one that’s also about Alzheimer’s, and one that’s about brain MRI scan data).
No justification of this rule is given on the page, and I couldn’t find any more specifics anywhere. I emailed NIAGADS to ask what the rationale was, and they replied using similar language to that quoted in the James Lee article above:
…the association of genetic data with any of these parameters can be stigmatizing to the individuals or groups of individuals in a particular study. Any type of stigmatization that could be associated with genetic data is contrary to NIH policy.
This isn’t “here are a few things to bear in mind about the way you do the research with these data”, or “here are some ways we’ve found that help communicate this type of study to the general public”. It’s “you’re not allowed to do this research, full stop”.
(Well, okay, it’s an American organisation, so it’s actually “you’re not allowed to do this research, period”.)
And, y’know, rules are rules, so we’ll find another set of GWAS summary statistics to use—ones that aren’t held by the NIH—and use them to do the study. But I do think this rule is myopic and misconceived. Here are four reasons why.
1. The kind of research that’s banned by this rule has scientific merit.
Forget my specific study and just think about this type of behaviour-genetics research in general. A lot of people wonder why anyone would be interested in the genetics of intelligence if you aren’t some kind of evil eugenicist freak. Others have explained this at length (if you want a book-length treatment of the argument, see Paige Harden’s The Genetic Lottery, or if you want something briefer see this New Yorker profile of Paige), but here’s a quick three-part summary of why it’s scientifically interesting to study intelligence in particular:
Intelligence as measured by scores on an IQ test is predictive of lots of important life outcomes: how well you do in education and at work, how physically and mentally healthy you are, how long you’ll live, among other things. The evidence for this comes from many enormous, high-quality studies. A lot of the correlation is surely due to confounding, but not all of it, and there’s evidence of causal effects of intelligence differences;
Intelligence, like all human traits, is heritable - that is, variation in intelligence is linked to genetic variation. Again, not all of the variation in intelligence is related to genetics: a good chunk comes from non-genetic sources too. But the heritability is beyond dispute;
Understanding the extent of those intelligence-linked genetic differences, how they differ across contexts—age, location, social class, and so on—is thus an important part of fully understanding why some people do well and others don’t in our society.
Sure, there are lots of pitfalls in interpreting the data in these kinds of studies (though where in science aren’t there?), but given that we have all this evidence, sticking our heads in the sand and ignoring genetic links to intelligence seems like an unscientific course of action. And by placing specific restrictions on using datasets for this purpose, the NIH is encouraging exactly that.
But in this case it’s even worse than that. The fact that the genetic data in question specifically concern Alzheimer’s disease makes the restriction on intelligence research particularly absurd. Alzheimer’s is “officially” diagnosed post-mortem: you do an autopsy and look for plaques and tangles in the person’s brain - if they’re there, you can confirm that they had the disease. But if the participants in your study are still alive, you tentatively diagnose Alzheimer’s using cognitive tests - normally the kind of cognitive test you might recall being given to Donald Trump during the 2020 election campaign. Many of the datasets behind the NIA data in question relied on this kind of clinical, cognitive-test-based diagnosis.
Now, dementia screening tests are somewhat different from the IQ tests you’d give to people you didn’t suspect of cognitive impairment - the main difference being that the dementia tests are much easier (which is why Trump’s crowing about his high score was so pathetic). But they’re all cognitive tests - the line between the easier and harder ones is blurry, and in any case, some of the people in the NIA dataset were given both dementia screening tests and standard IQ tests, to give their doctors more information to help with diagnosis.
So the genetic data that can’t be used to study the genetics of intelligence are, at least in part, already data about the genetics of intelligence. It doesn’t make any sense.
More worrying, though, is that this rule could stymie potentially important research on ageing. Alzheimer’s is all about your cognitive abilities - your memory, but more broadly your intelligence - declining to the point where they cause serious problems in your everyday life, cost you your independence, and so on. The ageing of our cognitive abilities - whether or not it turns into diagnosable dementia - is going to become a bigger and bigger issue in ageing societies. The NIH preventing scientists from using genetic data on Alzheimer’s to learn about the genetics of intelligence could easily slow down research into cognitive ageing - and into treatments for it.
And for what? If you think my points above are a bit vague (“there might be benefits in future”), that’s partly because the benefits of science are often hard to predict - which is why we give scientists freedom of inquiry. But few people ever question the precision of the language on the other side of the argument. Should we just accept (a) that this research really is “stigmatising”; (b) that this stigmatisation causes actual, quantifiable harm in the real world; and (c) that these harms outweigh the potential benefits of the research going ahead? If there’s a case for any of these points, it’s never laid out with any accompanying evidence.
2. The rule is an overreaction
There’s a lot of very bad research on intelligence and genetics out there. There’s a small coterie of researchers who churn out low-quality studies on the most controversial questions - race differences, sex differences, and so on - either because they’re ideologically committed to certain results, or because they enjoy trolling and “owning the libs” (or both). They don’t take the research seriously, nor do they try to anticipate potential misunderstandings or misinterpretations by writing “FAQ” documents to attach to their papers (as serious genetics researchers often do).
This kind of NIH rule is clearly, at least in part, a reaction to that kind of research. But it’s not clear that it’s needed: to access the NIAGADS data, you have to submit a letter from your university’s ethics board saying they approve of the research, as well as a biographical sketch showing that the lead researcher has the experience necessary to handle the data, along with other documents. Only scholars from bona fide universities are even going to attempt to access these data - many of the internet-troll researchers don’t have a university affiliation in the first place.
Also, as James Lee noted in his article, the rule doesn’t specifically ban research on race or sex differences. It doesn’t specifically ban research that’s poorly designed, or confounded. It bans all research using these data that touches on the genetics of intelligence, education, or income. Even a study that used these data to show that “actually, it’s not possible to learn anything useful about race or sex differences from these data, due to XYZ statistical and methodological problems” would be blocked. That’s obviously an overreaction - an overcorrection to the problem of low-quality research in the field.
If your response to a few crank researchers using particular data is to stop all researchers from using it, you’re creating very bad incentives. As I noted in my little 2015 book on intelligence, it’s already the case that the loudest voices on this issue are those from the extremes - researchers who massively overplay the role of intelligence in explaining society, or those who deny it’s a measurable or useful quantity in the first place. If you forbid people from doing the research, not only will it attract the attention of controversialist cranks—many of whom are right now taking to Twitter to say that “the NIH knows these data show we’re right, and are covering this up”—but it’ll make mainstream, non-crank researchers want to avoid it even more.
3. The rule is a clear case of mission creep
For the sake of argument, let’s grant that research on the genetics of intelligence - and even on education and income - is too controversial and might cause people to be stigmatised. The NIH’s list doesn’t end there. They also restrict research on “potentially sensitive behavioral traits such as alcohol or drug addictions”. These are certainly “sensitive” traits, in the sense that they’re really serious problems both for individual people and for society - but doesn’t that mean we should be leaving no stone unturned in trying to understand and explain them? Doesn’t their seriousness mean we need more research on them, not less?
Above, I made the argument that it’s scientifically worthwhile to study intelligence and its genetic underpinnings. It’s even easier to make that case for drug and alcohol addiction: these are disorders that we might be able to predict or treat more effectively in future, and genetic testing might be part of that. So it is, if anything, even more senseless to stop researchers from using genetic data to look into them.
When I chatted to some colleagues about this issue, some suggested that the NIH rule might be due to consent forms: maybe the participants in the original GWAS filled in a form that said their data would only be used for “medical” research in future, and so the NIH is just following that rule by restricting research on stuff like intelligence, which isn’t a “medical” outcome. I disagree that intelligence isn’t a medical outcome, for the reasons discussed above, but even if you grant this, the idea that research on drug and alcohol addiction wouldn’t count as “medical research” is obviously untenable.
And take a look at the phrasing again: it’s written vaguely enough - “…potentially sensitive behavioral traits such as…” [my italics] - that it might not extend just to alcohol or drug addictions either. Who knows which trait, one we consider perfectly reasonable to investigate today, will become forbidden to research tomorrow? Which leads me to…
4. Restricting data because the topic of research might upset people is a bad precedent
I shouldn’t have to state this, but sometimes things can be true and also upsetting. A lot of people argue, for example, that doctors advising obese patients to lose weight is “stigmatising” - but obesity is a very well-established risk factor for all sorts of health problems. If you’re going to ban research that might be “stigmatising”, shouldn’t you also ban research on obesity - let alone on its genetics?
And why stop there? People often object to specific kinds of research for political or other partisan reasons. I often see Scientologists protesting outside the psychiatric hospital near where I work. They are - or they profess to be - very upset and offended by psychiatry, both its practice and its research, and by its “human rights abuses”. Obviously it would be a terrible idea to ban researchers from using certain datasets to look at psychiatric outcomes - even if that kind of research really riled up the followers of L. Ron Hubbard.
This would be akin to the “heckler’s veto” that’s often mentioned in arguments about free speech - stopping someone from exercising their right to free expression because someone else reacts (or might react) loudly or violently to what they’re expressing. Are we really comfortable with saying that any time anyone feels stigmatised by research - or just might feel that way - we should scrap the research, rather than making it clear that, following David Hume, we must rigorously separate the findings of research (the “is”) from the way we treat people or organise society (the “ought”)?
Of course, I say “precedent”, but actually the precedent has already been set. You might have seen that the journal Nature Human Behaviour recently published an editorial which stated that they’d be consulting “advocacy groups” about whether papers on controversial topics should be accepted - or even retroactively corrected or retracted. As Jesse Singal wrote at the time, this was worryingly vague, in exactly the same way as the “mission creep” I mentioned above. Could perfectly sound research be nonetheless pulled from the journal because one of the advocacy groups advocated particularly strongly against it? From the way the editorial was written, it seemed perfectly possible (the journal has more recently published some examples in an attempt to clarify that vagueness).
It’s not the end of the world (yet)
I’m not going to go all culture-war and say that this is going to lead to the collapse of science or anything like that. But this is a bad trend of overreaction that needs to be nipped in the bud now, before it becomes more common - before more groups follow the lead of the NIH and Nature Human Behaviour and start restricting research that might, potentially, be controversial to someone, somewhere.
I think that, with a bit of back-and-forth, we can get to a point where most people would agree with certain conditions on the use of data (we already all agree, for example, that we shouldn’t just post genetic data online if there’s a chance it could be used to identify individual people), but also feel that no specific research is being clamped down upon unnecessarily.
For example, I quite liked some of the suggestions in a recent piece co-authored by a geneticist who’d had her work cited in the racist manifesto of the Buffalo mass shooter. Making sure that, for example, graphs and infographics provide enough context to make them robust to misinterpretation or misrepresentation is something every researcher should be thinking about - especially in controversial areas of research, but more generally too. What the piece didn’t argue for was banning or restricting research on human genetics due to the way it might be misrepresented by political extremists (sadly I can’t say this for everyone).
When I criticised that Nature Human Behaviour editorial on Twitter, I said that I thought their guidelines, while bad, were also well-meaning. As well as the criticism from those who were defending the guidelines, the “well-meaning” thing garnered a lot of pushback: “are you so naïve? This is a co-ordinated attempt to shut down research that displeases one side of the political spectrum…”.
It’s certainly true that there’s a hard core of political-activist scientists and others who’d be delighted if all behaviour-genetics research disappeared tomorrow (it’s always the same five or six mostly-anon accounts on Twitter who obsessively reply to every tweet on this kind of thing - watch them tweet this Substack piece and say that I must be an evil eugenicist or something. Yawn). But I think that most of the people who write or support policies like the NIH’s, or like Nature Human Behaviour’s, are genuinely concerned about stigmatisation, racism, and other kinds of prejudice, and simply haven’t thought through the adverse consequences of their policies for academic freedom.
That’s not to say this isn’t dangerous - the unthinking promotion of a bad rule still promotes a bad rule. It’s just that I don’t think that, on the whole, we’re dealing with malign forces or anything like that.
Regardless, instead of passively watching as creeping regulations restrict their ability to do their jobs, now is a good time for geneticists and other scientists to make the case for why academic freedom is important, and to argue back.