11 Comments

Jané's post was pedantic. Jon didn't represent his post as a new meta-analysis. He just criticized problems in Ferguson's meta-analysis. I thought it was a valuable contribution that will improve the kinds of experiments people design next.

Stat bloggers seem less and less constructive lately. I guess people go viral for big take-downs more than for improving the research process.

No, he didn't. You know we can read, right? He presented a re-analysis of a meta-analysis. If he thought the data from Ferguson were unreliable, then why would he reanalyse them to support his conclusions?

“These studies are unreliable and shouldn’t be used to draw any firm conclusions. Unless your conclusion is that social media causes mental health problems, in which case these studies are certainly reliable.”

I get the critique, I just didn’t think it decimated Jon’s argument. He eyeballed some effect sizes in a blog post. It's sloppy, but it's reasonable to argue over what kinds of experiments should be meta-analyzed to answer the research question.

The issue is that he did not just argue over "what kinds of experiments should be meta-analyzed to answer the research question", he then proceeded to meta-analyze them incorrectly, and claimed the statistical evidence from his re-analysis supported his hypothesis. This is the part that Jané responded to. I don't see what is pedantic about this.

The exchange did seem to get more constructive to me in the replies. But Jané's critique starts with "if you conduct a re-analysis of any study you should be certain that the re-analysis is of higher methodological quality than the original study." I felt like this made a statistical straw man of Jon's original blog post and then burned it, which is why I read it as pedantic. It seemed like a teacher finding (real) grammatical mistakes in a student's paper while ignoring the substance of the student's argument.

Whether I'm right or not, there's a deeper question here about how effective we want methodological criticism to be. If critics can't situate their feedback in the context of their colleagues' research goals, meritorious criticisms are going to be dismissed as pedantic.

Do you think it makes sense that a re-analysis is of lower quality than the original analysis? What would be the point?

To suggest a fruitful direction for future research.

The misinformation paper is typical for the genre and doesn't say much. The core problem with all academic misinformation research is that it takes as axiomatic that misinformation academics are capable of figuring out what's true or false at scale across many different questions, and of selecting those questions in a way that's representative of the full span of people's beliefs.

Neither assumption is true; arguably, both are obviously foolish. A typical failure mode of such papers is to compile a list of "untrue beliefs" that are simply things the researchers found by reading right-wing media. No justification for the beliefs being untrue is ever provided, and if any left-wing beliefs are included at all, they are deliberately chosen to be as obscure as possible. And that's all it takes to conclude the problem of misinformation is "partisan bias" (read: non-leftists).

You can't draw any conclusions from a research foundation that weak. All such researchers deserve to be fired and banned from receiving money from the government ever again.

Interesting collection of links. I especially appreciated reading about the scrutiny of misinformation and the effects of social media. It's so easy for assumptions or hypotheses to suddenly become "well everyone knows..." statements.
