


Critics line up to pour scorn on impact factor

Rinckside 2010; 21,3: 9-11.


The Guinness Book of Records is published annually and contains a collection of world records of human achievements and extremes in nature. Interestingly, the record for the most consecutive catches while juggling two boomerangs, keeping at least one aloft at all times, is 555; the record holder was a Frenchman. The largest vegetable stew was cooked in Carmagnola in Italy this summer and weighed 1190 kg.

There are records not listed in the book, and one record holder in the category of radiological journals is the RSNA’s Radiology. Its impact factor was 6.3 in 2009, while European Radiology follows further down the scale at 3.6.

The impact factor is calculated by dividing the number of citations a journal’s articles from the two preceding years receive in a given year by the total number of articles the journal published in those two years. The factor was devised by Eugene Garfield (not the cartoon cat), founder of the Institute for Scientific Information (ISI), which is now part of Thomson Reuters. Impact factors are calculated yearly for those journals indexed in Thomson Reuters’ Journal Citation Reports – except that only English-language publications are included in the counting; thus it is an Anglo-American index that is not necessarily representative of countries in continental Europe, Latin America, and Asia. By its own account, Thomson Reuters is “the world’s leading source of intelligent information for businesses and professionals [1].” However, I doubt whether impact factors can be described as “intelligent” information, since they are based only on counting articles and citations – not on evaluating the quality of contents.
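To make the arithmetic concrete – with invented numbers, not Radiology’s actual figures – suppose a journal published 500 articles in 2007 and 2008 combined, and those articles were cited 3,150 times during 2009. Its 2009 impact factor would then be 3,150 ÷ 500 = 6.3.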

Another popular index, the H-number (or H-index), attempts to measure both the scientific productivity and the apparent scientific impact of a scientist. This index is based on the set of the scientist’s most cited papers and the number of citations received. It is named after its inventor, Jorge E. Hirsch.
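An invented example may illustrate the principle: a scientist whose five papers have been cited 25, 12, 8, 5, and 3 times has an H-index of 4, because four of the papers have been cited at least four times each, but there are not five papers cited at least five times.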

In 1994, I wrote a Rinckside column entitled “Publish and you might perish anyway.” Today I would rephrase it: “Have yourself indexed and you will perish even sooner.”

The impact factor and the H-number, as well as the five-year impact factor, the Article Influence Score, and the Eigenfactor Score, all count among these indexes. Eigenfactor Score sounds very similar to Narcissism Score, i.e., the author’s self-love. Don’t try to correct me; I know that the actual meaning is different.

The general idea of finding out the impact, perhaps even the influence or popularity, of a paper is good. All the advantages are itemized in a recent 10-page article in Radiology [2]. However, even in this review, the last sentence makes the reader ponder:

“Finally, it is essential to simultaneously consider multiple indicators when evaluating the quality of any scientific output and to refer to expert opinion to interpret them.”

In many instances, impact factors have grown beyond all control and are abused and misused to draw inappropriate conclusions about individual scientists or institutions.

Although they are considered objective by many people, they are easily manipulated, because what is considered “citable” is largely a matter of negotiation between journals and Thomson Reuters. Editors and publishers of many journals plan and implement strategies to massage their impact factors, often in collaboration with the company producing them [2]. Offering a university position or allocating a research grant based on publications in high-impact journals is like playing roulette; it is plain gambling. It is similar to the attitude of some people who admire paintings not because they like them, but because they are expensive.


"Since everyone has learned to ‘play the system,’ bibliometrics is discredited as a measure of influence."



Indexing promotes mass production of mediocre papers that cite each other. Good articles published 10 or 20 years earlier are neglected, and preference is given to the latest publications. This goes hand-in-hand with the decline in quality of scientific journals. Most publishers have cut down on copy editors. This does not seem to matter much because, as a publisher once confided to me:

“We are not really producing journals for possible readers anymore but to offer authors a platform to publish their papers to bolster their CVs.”

There is no scientific proof that impact factors have any impact on the quality of science. The impact factor is just a mathematical ranking, as is the H-index, serving the self-admiration and infatuation of some researchers and easing the distribution of money by granting agencies and government or EU offices with limited scientific competence.

These numbers are a measure for university presidents and department heads who play the factor game to quantify and prove the elite character of their institutions to politicians and the media. Citation indexes are a kind of tabloid press of scientific life – not reflecting the reality of daily research, but rather providing a picture of who sleeps with whom.

2010 seems to be the year of massive attacks on scientific bibliometrics – the statistical analysis of citations and contents, especially of the impact of published scientific literature. One major accusation against the bureaucratic and commercial use of bibliometrics is its threat to basic research and to scientific education and teaching. Basic research and educational papers are hardly ever cited and do not contribute to increasing the impact factor of a journal. In other words, the impact factor mostly mirrors quantity, not quality. Moreover, it says nothing about the quality of an author. Good review papers are the most cited, but they do not necessarily present scientific novelties.

Earlier this year, Chimia, the journal of the Swiss Chemical Society, published two articles. One, written by Antoinette Molinié and Geoffrey Bodenhausen, bore the title “Bibliometrics as Weapons of Mass Citation [3],” which says it all. The author of the second, a brief commentary entitled “The Follies of Citation Indices and Academic Ranking Lists [4],” was Richard Ernst, who received the Nobel Prize in Chemistry in 1991 for his contributions to NMR spectroscopy.

Ernst’s comments on the topic are extremely combative and to the point. He describes bibliometry as a “pestilence.” It’s a one-page article, and even civil servants and politicians could read and understand the contents. Here are two extracts:

“Today, an erroneous conviction prevails that institutions and individuals of ‘value’ can be measured ultimately in terms of a single number that may form part of a competitive ‘ranking list’! Only nobodies and nameless institutions never ever appear in a ranking!

“Let us discredit specifically rating agencies and their managers that have established and regularly publish science citation indices and university ranking lists; agencies that enrich themselves on the account of science quality, and cause more harm than good.”

No doubt Thomson Reuters’ Citation Index is a purely commercial enterprise, collaborating with scientific publishers and editors in ways that create fear and competition. Among other books, I have written a handbook on magnetic resonance in medicine that has been translated into seven languages, and I am writing all these Rinckside columns that are read, discussed, and of course criticized extensively. They have quite an impact on the radiological community and in political circles. The impact factor, however, is zero. It doesn’t bother me. The book sold at least 10,000 copies, perhaps 20,000. No, I didn’t make a lot of money. However, every year the publisher sends me a Christmas card.


"When I have to evaluate and recommend somebody for a position, I don’t care about the impact factors of the journals he or she has published."



When I have to evaluate and recommend somebody for a position, I don’t care about the impact factors of the journals he or she has published in or how many papers I count in the list of publications, and I definitely don’t check the list of citations. I read some of the publications, from introduction to conclusion, and I check whether there is any teaching material in the list of publications. I don’t count publications in mass-circulation journals such as Science, Nature, Nature Biotechnology, Nature for Barbie Dolls, Time Magazine, The Economist, Der Spiegel, or Paris Match, all of which have or could have an impact factor higher than 30.

Three years ago, in a protest against the absurd use of impact factors, Folia Phoniatrica et Logopaedica cited all its articles from 2005 and 2006 in a very critical editorial that would have more than doubled its impact factor for 2008. In reaction, Thomson Reuters excluded the journal from the list of those counted for impact factor [5,6].

In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines for evaluating scientific articles. It stated that no bibliometric information on candidates would be evaluated in decisions concerning “...performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, where increasing importance has been given to numerical indicators such as the H-index and the Impact Factor [7].”

Richard Ernst ended his commentary:

“There is indeed an alterative: Very simply, start reading papers instead of merely rating them by counting citations!”

I apologize for the many word-for-word citations in this column. I couldn’t have expressed them better – and, don't forget: What counts can't be counted.



References

1. Thomson Reuters. thomsonreuters.com/about/. Accessed 25 October 2010.
2. Durieux V, Gevenois PA. Bibliometric indicators: quality measurements of scientific publication. Radiology 2010; 255 (2): 342-351.
3. Molinié A, Bodenhausen G. Bibliometrics as weapons of mass citation. Chimia 2010; 64 (1-2): 78-89.
4. Ernst RR. The follies of citation indices and academic ranking lists. A brief commentary to ‘Bibliometrics as weapons of mass citation.’ Chimia 2010; 64 (1-2): 90.
5. Schutte HK, Švec JG. Reaction of Folia Phoniatrica et Logopaedica on the current trend of impact factor measures. Folia Phoniatr Logop 2007; 59 (6): 281-285.
6. Journal Citation Reports notices; Title suppression: Folia Phoniatrica et Logopaedica. admin-apps.isiknowledge.com/JCR/static_html/notices/notices.htm. Accessed 26 October 2010.
7. Deutsche Forschungsgemeinschaft (German Research Foundation). “Quality not quantity” – DFG adopts rules to counter the flood of publications in research. Press release 7; Feb. 23, 2010.



Citation: Rinck PA. Critics line up to pour scorn on impact factor. Rinckside 2010; 21,3: 9-11.

A digest version of this column was published as:
Rinck PA. Critics line up to pour scorn on impact factor.
Diagnostic Imaging Europe. 2010; 26,10: 9-10.

