The Impact Problem
The Flaws of ‘Publish or Perish’: Chasing Grants Over Groundbreaking Science
I’ve been thinking about the concept of impact a lot lately – specifically, how do we quantify the impact of scientific work and, by extension, of scientists themselves? Currently, the scientific community relies on a few classical metrics to determine the worth of a scientist’s contribution, despite a growing awareness of their deep flaws.
The “publish or perish” paradigm has become the law of the land, and perceived ability to secure grant money is too often the driving force behind scientists’ decision-making. This attitude becomes especially problematic when compelling science is neglected because it doesn’t yield enough financial benefit to the sponsoring institution.
Katalin Karikó, a brilliant scientist, was demoted several times during her career at UPenn – allegedly for failing to secure sufficient funding for her research. That same research would be recognized with the 2023 Nobel Prize and led directly to the mRNA technology behind the COVID-19 vaccines.
The Hirsch Index – Is There a Better Way to Measure Success?
Unfortunately, the metrics commonly used to quantify impact regularly discount truly influential and groundbreaking scientific research. The Hirsch index (h-index), a citation-based measure of influence that is widely used to judge the quality of a scientist’s contribution, is particularly problematic.
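For readers unfamiliar with the mechanics: the h-index is the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch in Python makes the definition concrete:

```python
def h_index(citations):
    """Compute the h-index: the largest h such that the author
    has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# A researcher whose papers are cited [10, 8, 5, 4, 3] times has an
# h-index of 4: four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the metric rewards breadth of citation rather than depth: a single paradigm-shifting paper with thousands of citations contributes exactly 1 to the score.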
A Nature article proposing an alternative metric highlighted the limits of the h-index – most notably, that as the h-index has grown in importance, so have the incentives for scientists to manipulate their scores.
Beyond Citation Metrics: The Search for a True Measure of Scientific Value
In his 2015 article “The Slavery of the h-index—Measuring the Unmeasurable,” Grzegorz Kreiner rails against the prevalence of the h-index and points out that a surprising number of Nobel Prize winners have relatively low h-indices. He writes, “it could be surmised that in many cases the moderate h-indices at this stage (e.g. Nobel Prizes) are simply correlated to the persistence in pursuing the hypothesis that ultimately was proven, thereby yielding a scientific breakthrough.” This seems particularly prescient in light of the recent news about Dr. Karikó’s persistence.
There are a number of reasons to believe that all citation-based metrics for determining scientific value may be fatally flawed. Foremost among them is the pressure these metrics place on both scientists and journals to produce more content, regardless of quality – content designed to artificially boost scores through circular or coercive citation practices. Scientific progress then suffers under the weight of the publication Ouroboros, with incremental breakthroughs being restated repeatedly, adding minimal value or, worse yet, drowned out by the noise of previous ‘discoveries’.
Scientific progress then suffers under the weight of the publication Ouroboros, with incremental breakthroughs being restated repeatedly, adding minimal value or, worse yet, drowned out by the noise of previous ‘discoveries’.
– Benjamin Verdoorn, Senior Data Scientist
Properly quantifying scientific impact matters, especially given the current climate of paper inflation. Getting up to speed in a new field can be overwhelming, and a reliable way to surface the salient papers within the mass of publication debris would be extremely valuable.
Scientific Integrity in New Metrics
At IQuant we are working to develop a new metric for scientific impact that is independent of citation metrics.
This is critical, both for the practical reason of paywalled citation information (see my previous post about why we only use publicly available information) and for the far more important reason of championing scientific integrity. By tracking the uptake of ideas and keywords through time, we hope to accurately identify papers and scientists that profoundly impact the development of a particular field, shift paradigms and foster new directions.
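As an illustrative sketch only – the toy corpus, naive substring matching, and per-year counting below are simplifying assumptions, not our production method – tracking keyword uptake through time could start with something as simple as counting, per year, how many papers mention a term:

```python
from collections import Counter

def keyword_uptake(papers, keyword):
    """Count, per year, how many papers mention a given keyword.

    `papers` is a list of (year, text) tuples standing in for a real
    corpus of titles or abstracts; a production system would need
    proper tokenization and term disambiguation.
    """
    counts = Counter()
    for year, text in papers:
        if keyword.lower() in text.lower():
            counts[year] += 1
    return dict(sorted(counts.items()))

# Hypothetical titles, for illustration only.
papers = [
    (2005, "Suppression of RNA recognition by modified nucleosides"),
    (2015, "Modified nucleosides in mRNA therapeutics"),
    (2020, "mRNA vaccine platforms against emerging pathogens"),
    (2021, "Safety of mRNA vaccines in large cohorts"),
]
print(keyword_uptake(papers, "mRNA"))  # {2015: 1, 2020: 1, 2021: 1}
```

A rising curve in such counts suggests an idea being taken up by a field – independent of who cites whom, which is exactly the property a citation-free metric needs.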
With this new meme-based metric, we hope to measure scientific impact more accurately and to provide a more enlightening evaluation of the ideas, papers, and scientists that truly influence the progress of science.
Check back here soon for examples of the new metric in action, and comment below if you have experience with coercive publishing practices or are frustrated with how citation metrics are affecting the scientific publishing landscape.
Let us know how we can help enhance your research.
We work with scientists, drug discovery professionals, pharmaceutical companies and researchers to create custom reports and precision analytics to fit your project's needs – with more transparency, on tighter timelines, and prices that make sense.