I’ve been the Editor-in-Chief of the American Naturalist for four years, and so impact factors have been a small but not insignificant consideration in my life for a while.  For 150 years, scientific journals lived by the reputations they garnered within the general scientific community.  Everyone knew the pecking order of journals in their field.  Given that funding decisions at granting agencies were made by the leading scientists in these disciplines, external justification for the quality of publications was unnecessary.  Scientists could also articulate that rank order to their colleagues and to administrators when evaluating junior colleagues for promotion and tenure.

However, in the last 10-15 years scientists and science administrators have abdicated these responsibilities to non-scientists and thus to “quantitative” metrics, such as the impact factor.  (At the individual scale, see the h-index.)  On its face, this seems like a much more rational, reasonable, and responsible way to evaluate the quality of a scientific journal than the opinions of people. We’re scientists. Why wouldn’t we value numbers over opinion?

The reason is that once such metrics are in place and enormous weight is given to their values, the natural incentives kick in, and the games begin.  Some games are legitimate but make the metric irrelevant, some are questionable, some are just plain silly, and some are simply unethical.  All are changing the fundamental nature of science and scientific publishing because of the perverse incentives these quantitative metrics create (see here and here for just two examples).  Here I want to enumerate some of the more obvious ways that many journals inflate their impact factors, and comment on each.
