Mind Games 2.0

Bloggin' 'bout science and life

Want to increase your Impact Factor?

I’ve been the Editor-in-Chief of the American Naturalist for four years, and so impact factors have been a small but not insignificant consideration in my life for a while.  For 150 years, scientific journals lived by the reputations they garnered with the general scientific community.  Everyone knew the pecking order of journals in their field.  Given that the panels making funding decisions at granting agencies were run by the leading scientists in these disciplines, external justification for the quality of publications was unnecessary.  Scientists could also articulate that rank order to their colleagues and administrators when evaluating junior colleagues for promotion and tenure.

However, in the last 10-15 years scientists and science administrators have abdicated these responsibilities to non-scientists and thus to the use of “quantitative” metrics, such as the impact factor.  (On the individual scale, see the H-index).  On its face, this seems like a much more rational, reasonable and responsible way to evaluate the quality of a scientific journal than the opinions of people. We’re scientists. Why wouldn’t we value numbers over opinion?

The reason is that once such metrics are in place and enormous weight is given to their values, the natural incentives kick in, and the games begin.  Some games are legitimate but make the metric irrelevant, some games are questionable, some are just plain silly, and some are simply unethical.  All are changing the fundamental nature of science and scientific publishing because of the perverse incentives these quantitative metrics create (see here and here for just two examples).  Here I want to enumerate some of the more obvious ways that many journals inflate their impact factors, and comment on each. 

Such lists have been made before (one of the funniest is here).  See how many you think are important to determining the quality of papers published in the journal!  (Please tell us which of these are important or not to measuring scientific quality by leaving a comment.)

1. Publish review papers.  Review papers are always much more heavily cited than original research papers.  Open up the list of impact factors for your own field and scan it: journals with “Review” in the title will be at or near the top.  In my own field of Ecology, Evolution and Behavior the top journals on the impact factor list are the Annual Review of Ecology, Evolution & Systematics and various journals in the Trends series (e.g., Trends in Ecology & Evolution and Trends in Genetics). These are excellent journals that deserve to be at the top of the list, and they serve an essential purpose to the scientific community.  However, many journals that traditionally published only original research papers have added 2-3 review articles per issue, mostly because this is the surest way to increase an impact factor in a completely legitimate manner.  The consequence, though, is that we are now flooded with review papers, and reviews of review papers, and commentaries on reviews of review papers.  Isn’t the original research paper what the impact factor is supposed to be measuring?

2. Publish methods papers.  Methods papers are the second most effective way to legitimately increase a journal’s impact factor. If a paper presents a method that many people use, everyone who uses that method must cite that paper.  These papers are also central to the scientific process, but is publishing methods what makes the science in a journal stellar?

3. Distribute papers unevenly among issues across a year. The 2011 impact factor for a journal is the number of citations received during 2011 by papers published in 2009 and 2010, divided by the number of papers the journal published in 2009 and 2010.  Consequently, papers published in the later issues of 2010 add to the denominator of this ratio but have had little time to accumulate citations, so they contribute almost nothing to the numerator.  Journals therefore have a perverse incentive to make the issues at the beginning of the year larger than those at the end.  Every managing editor thinks about this when putting together issues at the end of the year (delay a few papers until the January issue and your impact factor will go up; the sketch below walks through the arithmetic).  This one is fairly innocuous, but it can have a surprisingly large effect on impact factors.  Is this really a measure of the impact of a journal?
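To make that arithmetic concrete, here is a minimal sketch in Python using entirely made-up numbers (no real journal’s figures): it computes the two-year ratio as defined above, then shows how shifting a handful of late-December papers into the January issue nudges the ratio upward.

```python
# Toy illustration of the two-year impact factor (all numbers hypothetical):
# IF_2011 = citations received in 2011 to items published in 2009-2010,
#           divided by the number of items published in 2009-2010.

def impact_factor(citations_in_census_year, items_published_prior_two_years):
    """Two-year impact factor: citations divided by citable items."""
    return citations_in_census_year / items_published_prior_two_years

# Hypothetical journal: 200 papers per year, so 400 papers in 2009-2010.
# Assume the 20 papers in the December 2010 issue earn essentially no 2011
# citations, while the other 380 papers average 2 citations each in 2011.
papers_2009_2010 = 400
december_2010_papers = 20
citations_2011 = (papers_2009_2010 - december_2010_papers) * 2  # 760

# December papers kept in 2010: they pad the denominator.
print(impact_factor(citations_2011, papers_2009_2010))          # 1.90

# Same papers delayed to the January 2011 issue: they leave the
# 2009-2010 denominator entirely, and the ratio rises.
print(impact_factor(citations_2011, papers_2009_2010 - december_2010_papers))  # 2.00
```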

4. Force authors to cite recent papers in the journal.  Now we’re getting into the nefarious and immoral.  Examples of this type of behavior never fail to shock – see here for the notorious case of the World Journal of Gastroenterology.  Luckily, such wanton behavior is uncommon.  However, more subtle forms of inflating self-citation are more common, nearly impossible to detect, and known only to a few.  For example, I have received the infamous letter from an Editor-in-Chief of a quite prominent ecology journal with a very high impact factor (you know who you are) saying that “we haven’t quite made a final decision about your paper, but I noticed that you haven’t cited these three recent papers”, and all those papers just happen to have been published in the Editor’s journal within the last year. (See here for another such story.)  As an author, the fate of your paper still hangs in the balance, and what are you going to do but submit to this extortion?  This has happened to me only twice in 25 years of publishing papers, but both times it happened with the same journal, and I now refuse to review for that journal or have any other dealings with it if at all possible, including citing as few of its papers as I can in my own work.  Most members of the editorial board of this journal don’t even know this happens, because it occurs only in the very final stages of acceptance.  Most Editors do not stoop to such reprehensible behavior, but it is common practice at a few journals.  Combine this with publishing review papers, and you’re getting pretty close to the World Journal of Gastroenterology.

5. Publish perspectives and summary papers that only highlight papers in the journal.  Did you ever wonder why a journal would ask somebody to write a paper describing another paper in the same issue of the journal?  Or why a journal would waste valuable pages having the Editor-in-Chief describe the papers in that issue?  This always seemed crazy to me.  But think about it.  They’re all free self-citations.  And if the advertisement paper also gets cited, all the better!  Sometimes these are written by invited authors (doesn’t hurt their CV either) or by the Editor-in-Chief.  I also just heard of an up-and-coming (in terms of impact factor) journal in evolution whose Editor writes a summary paper at the end of the volume describing what great things had been published in that issue.  The only purpose of this paper is to ensure that every paper that year is cited.

6. Publish papers with mistakes.  Let me preface this by saying that I have never seen this in practice – or even noticed the whiff of it.  However, I have heard this as a joke many times, and I’ve repeated the joke myself.  And it’s an old joke.  Scientists will always catch mistakes in papers, and write rebuttals or corrections.  I hope you can see the black hole to which this would lead.  Nevertheless, it is exactly the kind of perverse incentive that the impact factor creates.

Does any of this sound like it is measuring the quality and importance of the science being published in a journal?  Some of this is incidental, some is innocuous, and some is immoral.  Impact factors are now playing scientists for fools, and we seem to be willing participants in this fool’s game.

This is by no means an exhaustive list. What other ways have you seen or heard of impact factors being gamed?  Have you ever been pressured by a journal to cite their most recent papers just to crank up their self-citation rates?

And more importantly to the general issue of measuring scientific quality, which of these do you see as being the most important to determining which journals publish the most important original research papers?  Any?  If the answer is none, why do you pay attention to impact factors?

It’s quickly becoming a big and meaningless game.  And science is the loser.

5 Comments

  1. Very instructive!

    #3 might explain one of the great mysteries of nature. Molecular Biology & Evolution issues are ~200 pages. It’s very constant… with three exceptions. The January 2011 issue is an incredible 871 pages long, and the first two 2012 issues are about 450 pages each.

  2. Ahmed Badar

    Only an editor (who has worked in both these ‘eras’ mentioned by you) can understand the broken heart with which you wrote this beautiful article. It somehow escaped my eyes for 2.5 years. Today, when someone asked me to comment on some of these smart (read illegal!) means, I found your article. I think in this era of media, marketing, pseudo-standardization and profit making we have become (or will soon become) irrelevant.

  3. Indeed a very educative and informative article. Scientists should take note of this objective analysis and change this fools’ game of Impact Factor!

  4. Thank you very much indeed.

  5. James

    #5 – if the article is cited by the commentary in the same year it is published, that citation does not count toward the impact factor, so I don’t see why this should be considered a practice to increase it.
