Thomson Reuters recently announced that the newest Impact Factors are available. Soon after, I received emails from journal publishers bragging that their Impact Factors had gone up this year compared to last year. My response to these emails was to roll my eyes. According to the documentation, "The journal Impact Factor is the average number of times articles from the journal published in the past two years have been cited in the JCR year." It isn't exactly that, because it only counts citations from other sources in Thomson Reuters' database, but that's not my problem with bragging about an Impact Factor going up. My problem is that Impact Factors in general have been going up because reference lists have been getting longer. I would be much happier if journals advertised that they'd risen in the Impact Factor rankings for their subject areas than that the raw number had gone up, which by itself is nearly meaningless. A rank improvement wouldn't solve all of the problems with interpreting a boost in Impact Factor, but it would at least mean something.
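To see why a raw increase says so little, here is a minimal sketch of the two-year calculation in Python; the counts are made up for illustration and don't come from the JCR:

```python
# A minimal sketch of the two-year Impact Factor calculation; the
# citation and article counts here are hypothetical, not JCR data.

def impact_factor(citations_in_jcr_year: int, citable_items: int) -> float:
    """Citations received in the JCR year to articles published in the
    previous two years, divided by the number of citable items the
    journal published in those two years."""
    return citations_in_jcr_year / citable_items

# Hypothetical journal: 200 citable articles over the prior two years,
# cited 500 times in the JCR year.
print(impact_factor(500, 200))  # 2.5

# If reference lists lengthen field-wide, the same journal collects more
# citations without publishing better work, say 600 instead of 500:
print(impact_factor(600, 200))  # 3.0, a "rise" with no change in standing
```

The second call is the whole point: because longer reference lists inflate the numerator for everyone, the raw number drifts upward across the board, while a journal's rank within its subject area is at least insulated from that shared inflation.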
By the way, if you heard somewhere that citations were going down, you may have gotten it from the article in Science by James Evans. Its abstract certainly suggests as much, stating, "I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles." The article itself makes it clear, though, that the number of citations went up over time. It says, "In each subsequent year from 1965 to 2005, more distinct articles were cited from journals and subfields. The pool of published science is growing, and more of it is archived in the CI each year. Online availability, however, has not driven this trend." In essence, Evans is saying that online availability slowed the growth in citations and in citation diversity rather than driving it, not that it reversed the growth.