The Avalanche of Low-Quality Research?

Do we scientists publish too much? I ran across a recent plea by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble to stop the avalanche of low-quality scientific research papers, and it suggests we do.

The article takes aim at 'redundant, inconsequential, and outright poor research' papers:

… Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. … In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

The article goes on to cite the costs to the system:

  • increased referee workload
  • increased time to keep current
  • increased use of salami-slicing
  • increased financial burden on libraries
  • increased use of natural resources (energy, space, paper)

Errors of Omission

I disagree with both the analysis and conclusions of this article on a number of levels.

I'll first take issue with the lack of references in the piece - an irony that should be lost on no one. The quote above refers to two papers, and I'm interested in verifying the authors' claims about them. How do I do this? In the case of the Science paper, I'd need to leaf through back issues of Science from 'two decades ago' - is that 1990? Maybe 1991, or 1989? - looking for a title that could be a match. At least the year of publication of the Jacsó article is given. But Jacsó published multiple papers in Online Information Review in 2009 - which one contains the work cited in the piece?

The works referred to in the piece are both guarded by paywalls, forcing me to buy an article outright before I can determine whether it's the one Bauerlein et al. are citing.

The Bauerlein piece also never clearly states what constitutes a 'low-quality research' paper. The implication is that rarely-cited research is low-quality research, but the authors never actually support that claim. Anyone who has published a solid paper that turned out to be cited less often than expected would take issue with this characterization.

Citation metrics serve a useful purpose, but as a measure of research quality they fall flat in my view. Just because a research area is out of fashion at the moment does not mean that the science being done there is 'low-quality'.

Ironically, some of the least important science is done by opportunists chasing a wave. Citation counts stay healthy during the fad, but when the wave passes we're left with work that will hardly ever be cited again.

Likewise, just because work is published by an outsider in a journal with a different audience from the one s/he normally writes for (and so is less visible to those who might cite it) doesn't mean the research is 'low-quality'. If anything, this kind of cross-disciplinary publication should be encouraged.

At the risk of sounding obvious, I'll offer this definition of low-quality research: it makes few (or no) falsifiable claims and is overly difficult (or impossible) to independently verify. The fact that I cite a low-quality paper should not be interpreted as an endorsement.

The Long Tail

Finally, there's something else the authors of the piece fail to take into account: most scientific papers are pretty boring. This says nothing about the quality of the research contained in those papers, but it does say a lot about what it means to be a scientist.

Chemistry is a long-tail activity. We're trained to go deep during our PhD work and stay deep.

Being deep means having many problems that you alone can investigate - your 'turf'. This discourages wasteful competition, increasing the value of taxpayer dollars. But it also means that few of your peers will ever actually understand or care about what you're doing. And they certainly won't bother to read (or cite) your published work.

Does this mean you're doing 'low-quality' research? Of course not. It just means that you're a specialist - an expert in a very narrow topic.

The Real Problem

The real problem, which the authors of the piece completely ignore, is that the existing system of scientific publication was created when dead trees were the medium of communication. Scarcity was enforced by resource constraints - shelf space, ink, paper, journals, and editors. These constraints no longer exist, yet we as individuals and institutions continue to act as if they do.

Far too many useful research results never get communicated because the publication system (i.e., its game mechanics) can't cope with them. Fire has been discovered, yet we huddle like helpless animals in the cold and dark.

The problem is not too many scientific papers. The problem is that scientific papers and the journals that publish them have become anachronisms.

Credit: Thanks to In the Pipeline