Complex Sciences. First International Conference, Complex 2009, Shanghai, China, February 23-25, 2009, Revised Papers, Part 2

Research Article

Exploring and Understanding Scientific Metrics in Citation Networks

@INPROCEEDINGS{10.1007/978-3-642-02469-6_35,
    author={Mikalai Krapivin and Maurizio Marchese and Fabio Casati},
    title={Exploring and Understanding Scientific Metrics in Citation Networks},
    proceedings={Complex Sciences. First International Conference, Complex 2009, Shanghai, China, February 23-25, 2009, Revised Papers, Part 2},
    proceedings_a={COMPLEX PART 2},
    publisher={Springer},
    year={2012},
    month={5},
    keywords={Scientific metrics; Scientometrics; PageRank algorithm; PaperRank; H-index; Divergence metric in ranking results},
    doi={10.1007/978-3-642-02469-6_35}
}
    
Mikalai Krapivin, Maurizio Marchese, Fabio Casati (University of Trento)
Contact emails: krapivin@disi.unitn.it, marchese@disi.unitn.it, casati@disi.unitn.it

Abstract

This paper explores scientific metrics in citation networks within scientific communities, how they differ in ranking papers and authors, and why. In particular, we focus on network effects in scientific metrics and explore their meaning and impact. We initially take as examples three metrics that we believe significant: the standard citation count, the increasingly popular h-index, and a variation of PageRank that we propose for papers (called PaperRank), which is appealing because it mirrors proven and successful algorithms for ranking web pages and captures information present in the whole citation network. As part of analyzing them, we develop generally applicable techniques and metrics for qualitatively and quantitatively analyzing such network-based indexes that evaluate content and people, as well as for understanding the causes of their different behaviors. We put the techniques to work on a dataset of over 260K ACM papers and found that the differences in ranking results are indeed very significant, even when restricting to citation-based indexes: half of the top-ranked papers differ in a typical 20-element search result page for papers on a given topic, and the top-ranked researcher differs more than half of the time in an average job posting with 100 applicants.
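
This page does not reproduce the paper's exact PaperRank formulation or its divergence metric. As a rough illustration of the idea, the sketch below applies the standard PageRank recurrence to a paper citation graph (an edge A -> B meaning A cites B) and compares the resulting top-k list against a plain citation-count ranking. The damping factor d=0.85, the iteration and tolerance settings, the toy graph, and the top_k_overlap helper are illustrative assumptions, not values or definitions taken from the paper.

    from collections import Counter

    def paper_rank(citations, d=0.85, iters=100, tol=1e-9):
        # citations: dict mapping each paper id to the list of papers it cites.
        # Rank flows from citing papers to cited papers, as in PageRank.
        papers = set(citations) | {q for refs in citations.values() for q in refs}
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iters):
            new = {p: (1.0 - d) / n for p in papers}
            dangling = 0.0  # mass held by papers citing nothing in the dataset
            for p in papers:
                refs = citations.get(p, [])
                if refs:
                    share = rank[p] / len(refs)
                    for q in refs:
                        new[q] += d * share
                else:
                    dangling += rank[p]
            for p in papers:
                new[p] += d * dangling / n  # redistribute dangling mass uniformly
            if sum(abs(new[p] - rank[p]) for p in papers) < tol:
                return new
            rank = new
        return rank

    def top_k_overlap(rank_a, rank_b, k=20):
        # Fraction of papers shared by the two top-k lists.
        top = lambda r: set(sorted(r, key=r.get, reverse=True)[:k])
        return len(top(rank_a) & top(rank_b)) / k

    # Toy citation graph (hypothetical paper ids, for illustration only).
    cites = {"a": ["c"], "b": ["c"], "c": ["d"], "d": [], "e": ["a", "c"]}
    pr = paper_rank(cites)
    cc = Counter(q for refs in cites.values() for q in refs)  # citation counts
    cc = {p: cc.get(p, 0) for p in pr}
    print(top_k_overlap(pr, cc, k=3))

The abstract's finding that half of the papers on a typical 20-element result page differ across indexes corresponds, in these terms, to a top-k overlap of roughly 0.5 at k=20 on the 260K-paper ACM dataset.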