The Problem of Bad Articles

The following tweet complains about a very common problem: an article looks suspicious, yet it attracts countless citations.

It’s hard to believe, perhaps, but even retracted articles get cited, sometimes hundreds of times.

The citing scientist might be at fault (not assessing the article carefully enough, not looking for reviews, and so on), but I believe the bigger problem is that it is very difficult to know whether, among the many citations, anyone has expressed doubts about the validity of the paper. This is the sort of issue that is vulnerable to reputational cascades.

In law, we have sort of solved this problem. To start, we care a lot about authority. We want to rely on past cases that are “authoritative,” which means several things, among them that the precedent has been widely accepted by courts. But reading many cases is extremely labor-intensive. So private companies came up with a nice solution: citator products such as Shepard’s and KeyCite. Here’s an illustration:

These products show the valence of the various citations: how many are supportive, how many critical, and how many balanced. The lawyer or judge can then make an informed decision about whether the precedent is as strong as they thought it was. A side benefit is that such a system gives people some incentive to express doubts about past articles, since doing so is a good way to attract citations (see Bob, 1980; but cf. Arbel, 2020).
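To make the idea concrete, here is a minimal sketch in Python of the kind of tally a Shepard’s- or KeyCite-style tool produces for a cited work. The `Citation` record, the treatment labels, and the example entries are my own illustrative assumptions, not the actual data model of either product.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Citation:
    """One citing document's treatment of the cited work (hypothetical schema)."""
    citing_work: str
    treatment: str  # assumed labels: "supportive", "critical", or "neutral"

def summarize_treatment(citations: list[Citation]) -> Counter:
    """Tally how many citing works support, criticize, or merely mention the cited work."""
    return Counter(c.treatment for c in citations)

# Illustrative citation history: mostly uncritical citations plus one expressed doubt.
history = [
    Citation("Smith 2015", "supportive"),
    Citation("Jones 2016", "supportive"),
    Citation("Lee 2017", "neutral"),
    Citation("Doe 2020", "critical"),
]

print(summarize_treatment(history))
# Counter({'supportive': 2, 'neutral': 1, 'critical': 1})
```

The point is not the code itself but the aggregation: once each citation carries a valence label, a reader can see at a glance whether doubts have been raised, instead of reading every citing work.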
