In a recently published paper, Justin Flatt and his two co-authors proposed the creation of the Self-Citation Index, or s-index. The purpose of the s-index would be to measure how often a scientist cites their own work. This is desirable, the authors believe, because current incentive systems tend to encourage researchers to cite their own work excessively.
In other words, since the number of citations a researcher’s works receive enhances their reputation, there is a temptation to add superfluous self-citations to articles. Doing so boosts the author’s h-index – the author-level metric now widely used as a measure of researcher productivity, under which a researcher scores h if h of their papers have each been cited at least h times.
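To see the mechanics, here is a minimal sketch of the standard h-index calculation; the citation counts are invented for illustration:

```python
def h_index(citation_counts):
    """Standard h-index: the largest h such that h papers
    have each been cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: citation counts per paper for one researcher.
papers = [12, 9, 7, 5, 4, 4, 2, 1]
print(h_index(papers))  # -> 4 (four papers with at least 4 citations; no fifth paper reaches 5)

# Padding borderline papers with self-citations lifts the index:
padded = [12, 9, 7, 5, 5, 5, 2, 1]  # two papers nudged from 4 to 5 citations
print(h_index(padded))  # -> 5
```

Because the index turns on papers sitting at the threshold, a handful of well-placed self-citations on borderline papers is enough to move it.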
Amongst other things, excessive self-citation gives those who engage in it an unfair advantage over more principled researchers – an advantage, moreover, that grows over time: a 2007 paper estimated that every self-citation increases the number of citations from others by about one after one year, and by about three after five years. This creates unjustified differences in researcher profiles.
Since women self-cite less frequently than men, they are put at a particular disadvantage. A 2006 paper found that men are between 50 and 70 per cent more likely than women to cite their own work.
In addition to unfairly enhancing less principled researchers’ reputations, say the paper’s authors, excessive self-citation is likely to degrade the scholarly record, since it has the effect of “diminishing the connectivity and usefulness of scientific communications, especially in the face of publication overload”.
None of this should surprise us. In an academic environment now saturated with what Lisa McKenzie has called “metrics, scores and a false prestige”, Campbell’s Law inevitably comes into play. This states that “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Or as Goodhart’s Law more succinctly puts it, “When a measure becomes a target, it ceases to be a good measure.”
However, academia’s obsession with metrics, measures, and monitoring is not going to go away anytime soon. Consequently, the challenge is to try to prevent or mitigate the inevitable gaming that takes place – which is what the s-index would attempt to do. In fact, there have been previous suggestions of ways to detect possible manipulation of the h-index – a 2011 paper, for instance, mooted a “q-index”.
It is also known that journals will try to game the Impact Factor. Editors may insist, for instance, that authors include superfluous citations to other papers in the same journal. This is a different type of self-citation – journal-level rather than author-level – and it sometimes leads to journals being suspended from the Journal Citation Reports (JCR).
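For context, the Impact Factor is essentially a two-year ratio: citations received in year Y to a journal’s items published in years Y-1 and Y-2, divided by the number of citable items from those two years. A minimal sketch with invented numbers shows why coerced in-journal citations move it:

```python
def impact_factor(citations_to_recent_items, citable_items):
    """Two-year Impact-Factor-style ratio: citations in year Y to items
    from years Y-1 and Y-2, over citable items from those years."""
    return citations_to_recent_items / citable_items

# Invented numbers: 200 citable items over the two-year window.
external, coerced, items = 300, 0, 200
print(impact_factor(external + coerced, items))  # -> 1.5

# 100 coerced in-journal citations to recent items lift the figure:
coerced = 100
print(impact_factor(external + coerced, items))  # -> 2.0
```

Because the denominator is fixed by what the journal publishes, every extra in-journal citation feeds straight into the numerator.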
But we need to note that, while the s-index is an interesting idea, it would not be able to prevent self-citation. Nor would it distinguish between legitimate and illegitimate self-citations. Rather, says Flatt, it would make excessive self-citation more transparent (some self-citing is, of course, both appropriate and necessary). This, he believes, would shame researchers into restraining inappropriate self-citing urges and help the research community develop norms of acceptable behaviour.
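If the s-index is to be reported alongside the h-index, the natural reading of the proposal is that it mirrors the h-index calculation but counts only self-citations. The sketch below assumes that definition, with invented self-citation counts:

```python
def s_index(self_citation_counts):
    """Assumed definition, mirroring the h-index: the largest s such
    that s papers have each received at least s self-citations."""
    counts = sorted(self_citation_counts, reverse=True)
    s = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            s = rank
        else:
            break
    return s

# Invented example: self-citations per paper for one researcher.
self_cites = [6, 4, 3, 3, 1, 0, 0, 0]
print(s_index(self_cites))  # -> 3

# Reported next to the h-index, a high s-index flags heavy
# self-citing without forbidding it.
```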
Openness and transparency
However, any plans to create and manage a researcher-led s-index face a practical challenge: much of the data that would be needed to do so are currently imprisoned behind paywalls – notably those of the Web of Science and Scopus.