[Insight-users] Article-Level Metrics and the Evolution of Scientific Impact

Luis Ibanez luis.ibanez at kitware.com
Tue Nov 17 18:09:49 EST 2009


http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000242

"Article-Level Metrics and the Evolution of Scientific Impact"

<quote>

Formally published papers that have been through a traditional
prepublication peer review process remain the most important means of
communicating science today. Researchers depend on them to learn about
the latest advances in their fields and to report their own findings.
The intentions of traditional peer review are certainly noble: to
ensure methodological integrity and to comment on potential
significance of experimental studies through examination by a panel of
objective, expert colleagues. In principle, this system enables
science to move forward on the collective confidence of previously
published work. Unfortunately, the traditional system has inspired
methods of measuring impact that are suboptimal for their intended
uses.

...

Measuring Impact

Peer-reviewed journals have served an important purpose in evaluating
submitted papers and readying them for publication. In theory, one
could browse the pages of the most relevant journals to stay current
with research on a particular topic. But as the scientific community
has grown, so has the number of journals—to the point where over
800,000 new articles appeared in PubMed in 2008
(http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&term=2008:2008
[dp], archived at http://www.webcitation.org/5k1cbn1WX on 24 September
2009) and the total is now over 19 million
(http://www.ncbi.nlm.nih.gov/sites/entrez?Db=pubmed&term=1800:2009
[dp], archived at http://www.webcitation.org/5k1crb7Pi on 24 September
2009). The sheer number makes it impossible for any scientist to read
every paper relevant to their research, and a difficult choice has to
be made about which papers to read. Journals help by categorizing
papers by subject, but there remain in most fields far too many
journals and papers to follow.

As a result, we need good filters for quality, importance, and
relevance to apply to scientific literature. There are many we could
use but the majority of scientists filter by preferentially reading
articles from specific journals—those they view as the highest
quality and the most important. These selections are highly subjective
but the authors' personal experience is that most scientists, when
pressed, will point to the Thomson ISI Journal Impact Factor [1] as an
external and “objective” measure for ranking the impact of specific
journals and the individual articles within them.

Yet the impact factor, which averages the number of citations per
eligible article in each journal, is deeply flawed both in principle
and in practice as a tool for filtering the literature. It is
mathematically problematic [2]–[4], with around 80% of a journal
impact factor attributable to around 20% of the papers, even for
journals like Nature [5]. It is very sensitive to the categorisation
of papers as “citeable” (e.g., research-based) or “front-matter”
(e.g., editorials and commentary) [6], and it is controlled by a
private company that does not have any obligation to make the
underlying data or processes of analysis available. Attempts to
replicate or to predict the reported values have generally failed
[5]–[8].

Though the impact factor is flawed, it may be useful for evaluating
journals in some contexts, and other more sophisticated metrics for
journals are emerging [3],[4],[9],[10]. But for the job of assessing
the importance of specific papers, the impact factor—or any other
journal-based metric for that matter—cannot escape an even more
fundamental problem: it is simply not designed to capture qualities of
individual papers.

</quote>
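The quoted passage defines the impact factor as the average number of
citations per eligible ("citeable") article, and notes that roughly 80%
of a journal's impact factor can come from roughly 20% of its papers.
A minimal sketch of that arithmetic, using entirely invented citation
counts (the function name and all numbers below are illustrative, not
actual JCR data):

```python
# Hypothetical illustration of the impact factor definition quoted above:
# total citations to a journal's recent items divided by the number of
# "citeable" articles. All citation counts here are invented.

def impact_factor(citations, n_citeable):
    """Citations per eligible article: sum(citations) / n_citeable."""
    return sum(citations) / n_citeable

# An invented, skewed distribution for 50 papers: a handful of highly
# cited papers dominate, while most receive few or no citations.
citations = [50, 40, 30, 25, 20, 15, 12, 10, 8, 6] + [3] * 20 + [0] * 20
n_citeable = len(citations)

jif = impact_factor(citations, n_citeable)

# Share of all citations contributed by the top 20% of papers.
top = sorted(citations, reverse=True)[: n_citeable // 5]
share = sum(top) / sum(citations)

print(f"impact factor: {jif:.2f}")    # prints "impact factor: 5.52"
print(f"top 20% share: {share:.0%}")  # prints "top 20% share: 78%"
```

The point of the sketch is that the journal-level average (5.52 here)
says little about a typical article: in this toy distribution, 40 of
the 50 papers sit well below the mean, which is pulled up by a few
outliers.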

Full Article at:
http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000242


More information about the Insight-users mailing list