Benjamin Verdoorn

A Novel and Objective Measure of Scientific Impact

Introduction

Two months ago, I wrote that Intuitive Quantitation was hard at work building a new metric for measuring scientific impact – one that works independently of traditional citation metrics. I am pleased to report that those efforts have been largely successful. This post presents some of our results when applying our new methodology to several disease topic areas.

Previously, I teased the idea of tracking the uptake of ideas and keywords as a potential method for calculating impact. We were inspired to develop a system based on memes, hearkening back to the work of Richard Dawkins, who posited that ideas propagate through cultures much as genes do through biological systems. Through quantitative tracking of idea uptake within a topic, we could theoretically identify and track specific instances where new, and critically successful, concepts originated and began to spread.

From Title, Abstract, and Keywords to Score

Our analysis relies on tracking concepts in publications through unique words and phrases (n-grams) appearing in the title, abstract, and keyword list. Our hypothesis was that the frequency of an impactful publication’s n-grams would measurably increase in subsequently published papers. To prepare the data for analysis, each title and abstract was put through a tokenization process in which trivial words were discarded and duplicate words removed. This generated sets of single words (unigrams), two-word phrases (bigrams), and three-word phrases (trigrams). For any given paper, each n-gram was compared against all other papers in the search set to determine the frequency with which the term appears prior to publication versus the frequency with which it appears after publication, as given by the equation:

$$ \text{TermScore} = \left( \frac{\text{OccurrencesFuture}}{\text{\#Papers in Future}} \right) - \left( \frac{\text{OccurrencesPast}}{\text{\#Papers in Past}} \right) $$

A paper’s overall score was then calculated by averaging the individual term scores.
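
To make the scoring concrete, here is a minimal sketch in Python of how a term score and an overall paper score could be computed along these lines. The stop-word list, the year-based split into "past" and "future" papers, and the function names are illustrative assumptions of mine, not the IQuant Engine's actual implementation.

```python
import re

# Illustrative stop-word subset; the real set of "trivial words" is not specified in the post.
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "for", "with", "on", "to", "is", "are", "was", "were"}

def ngrams(text, n_max=3):
    """Tokenize a title/abstract, drop trivial words, and return the de-duplicated set of 1- to 3-grams."""
    words = [w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOP_WORDS]
    grams = set()
    for n in range(1, n_max + 1):
        grams.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return grams

def term_score(term, pub_year, corpus):
    """TermScore = (occurrences after publication / # later papers) - (occurrences before / # earlier papers).

    `corpus` is a list of (year, ngram_set) pairs, one per paper in the search set.
    """
    past = [grams for year, grams in corpus if year < pub_year]
    future = [grams for year, grams in corpus if year > pub_year]
    if not past or not future:
        return 0.0  # avoid dividing by zero for the very oldest or very newest papers
    return (sum(term in g for g in future) / len(future)) - (sum(term in g for g in past) / len(past))

def paper_score(title_and_abstract, pub_year, corpus):
    """Average the term scores of every n-gram appearing in the paper's title and abstract."""
    terms = ngrams(title_and_abstract)
    if not terms:
        return 0.0
    return sum(term_score(t, pub_year, corpus) for t in terms) / len(terms)
```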

We also applied a concept-based impact analysis to measure the uptake of MeSH Keywords. Following similar logic, we hypothesized that the first paper to use a highly successful keyword within a topic area should be considered more impactful than a paper whose keywords are not well adopted by future publications.

Combining these data using statistical tools and analysis, we generated a score for each paper based entirely on the usage of n-grams and the uptake of keywords.
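
Following similar logic for the keyword side, a companion sketch might credit the earliest paper to introduce a MeSH keyword with that keyword's later adoption rate, and then fold the two signals together. The crediting rule and the z-score standardization used for combining are assumptions on my part; the post does not specify which statistical tools were used.

```python
from statistics import mean, pstdev

def keyword_scores(papers):
    """papers: list of dicts with 'id', 'year', and 'mesh' (a set of MeSH keywords).

    The earliest paper in the set to use a keyword is credited with the fraction of
    later papers that adopt it. The crediting rule is an illustrative assumption.
    """
    papers = sorted(papers, key=lambda p: p["year"])
    first_use = {}
    for p in papers:
        for kw in p["mesh"]:
            first_use.setdefault(kw, p)  # earliest user of each keyword wins
    scores = {p["id"]: 0.0 for p in papers}
    for kw, origin in first_use.items():
        later = [p for p in papers if p["year"] > origin["year"]]
        if later:
            scores[origin["id"]] += sum(kw in p["mesh"] for p in later) / len(later)
    return scores

def combined_score(ngram_scores, kw_scores):
    """Standardize each signal (z-score) and average them - one plausible way to combine,
    since the post does not specify the statistical tools used."""
    def z(d):
        vals = list(d.values())
        mu, sd = mean(vals), pstdev(vals) or 1.0
        return {k: (v - mu) / sd for k, v in d.items()}
    zn, zk = z(ngram_scores), z(kw_scores)
    return {pid: (zn[pid] + zk[pid]) / 2 for pid in zn}  # assumes both dicts cover the same papers
```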

We were inspired to develop a system based on memes, hearkening back to the work of Richard Dawkins, who posited that ideas propagate through cultures much as genes do through biological systems.

– Benjamin Verdoorn, Senior Data Scientist

Data

Our initial data suggests that these new metrics add critical information to the evaluation of paper impact.  We performed IQuant Engine searches on three specific disease areas and analyzed the results.

Search | Number of Results | Range of Scores | Average Score
Parkinson’s and Dementia [Title/Abstract] | 9903 | -3.89 to 11.22 | -0.004
Restless Leg or Restless legs | 6358 | -3.01 to 16.25 | 0.016
Charcot Marie Tooth [Title/Abstract] | 5479 | -5.11 to 7.78 | 0.031

The following papers from the search topics scored highly using our new metrics.

The new metrics appear to generate results that are independent of citation metrics while still highlighting papers that could reasonably be seen as genuinely impactful. While we recognize that no metric can fully capture a concept as complex as scientific impact, we posit that this novel analysis not only identifies papers that may otherwise have been overlooked, but can also flag papers whose citation metrics alone may have exaggerated their true impact.

While the results so far have been informative and robust, we have identified certain situations in which this score may be less accurate. The most glaring of these involves papers published within the last one to two years. While it can be interesting to look at the data for these papers, our language-based metrics can be skewed by the small number of post-publication papers available for analysis, sometimes producing artifactual results. Additionally, we have found that some of the oldest papers, with very few MeSH keywords, n-grams, or preceding papers, can generate anomalous scores.

Caveats aside, these new metrics provide critical additional data that will help us deliver useful insights to our partners and collaborators, and they have already prompted several new ideas for our own research. Our current focus is on developing a comprehensive impact score that combines all of the available metrics, both language-based and traditional citation-based. We believe this will provide the best coverage and the most complete picture possible. Check back next time for more data as I explore these exciting possibilities.

The Impact Problem

The Flaws of ‘Publish or Perish’: Chasing Grants Over Groundbreaking Science

I’ve been thinking about the concept of impact a lot lately – specifically, how do we quantify the impact of scientific work and, by extension, of scientists themselves? Currently, the scientific community uses a few classical metrics to determine the worth of a scientist’s contribution, despite a growing awareness of their deep flaws.

The “publish or perish” paradigm has become the law of the land, and perceived ability to secure grant money is too often the driving force behind scientists’ decision-making. This attitude becomes especially problematic when compelling science is neglected because it doesn’t yield enough financial benefit to the sponsoring institution.

The brilliant scientist Katalin Karikó faced demotion several times during her career at UPenn – allegedly for failing to secure appropriate funding for her research; the same research that would earn her the 2023 Nobel Prize and that led directly to the technology behind the COVID-19 vaccines.

The Hirsch Index – Is There a Better Way to Measure Success?

Unfortunately, the metrics commonly used to quantify impact regularly discount truly influential and groundbreaking scientific research. The Hirsch index (h-index), a citation-based measure of influence that is widely used to judge the quality of a scientist’s contribution, is particularly problematic.
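
For readers who have not computed one directly, the h-index is defined simply: a scientist has an h-index of h if h of their papers have each been cited at least h times. A minimal sketch, not tied to any particular data source:

```python
def h_index(citation_counts):
    """Return the largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited [25, 8, 5, 3, 0] times give an h-index of 3,
# regardless of how influential the most-cited paper actually was.
assert h_index([25, 8, 5, 3, 0]) == 3
```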

A Nature article proposing an alternative metric highlighted the limits of the h-index – most notably the fact that, as the importance of the h-index has grown, so have the incentives for a scientist to manipulate their score.

Beyond Citation Metrics: The Search for a True Measure of Scientific Value

In his 2015 article The Slavery of the h-index—Measuring the Unmeasurable, Grzegorz Kreiner rails against the prevalence of the h-index and points out that a surprising number of Nobel Prize winners have relatively low h-indices. He writes, “it could be surmised that in many cases the moderate h-indices at this stage (e.g. Nobel Prizes) are simply correlated to the persistence in pursuing the hypothesis that ultimately was proven, thereby yielding a scientific breakthrough.” This seems particularly prescient in light of the recent news about Dr. Kariko’s persistence.

There are a number of reasons to believe that all citation-based metrics for determining scientific value may be fatally flawed. Foremost among them is the pressure that these metrics place on both scientists and journals to produce more content, regardless of quality – content designed to artificially boost metrics through circular or coercive citation practices. Scientific progress then suffers under the weight of the publication Ouroboros, with incremental breakthroughs being restated repeatedly, adding minimal value or, worse yet, drowned out by the noise of previous ‘discoveries’.

Scientific progress then suffers under the weight of the publication Ouroboros, with incremental breakthroughs being restated repeatedly, adding minimal value or, worse yet, drowned out by the noise of previous ‘discoveries’.

– Benjamin Verdoorn, Senior Data Scientist

Properly quantifying scientific impact matters, especially given the current climate of paper inflation. Getting up to speed in a new field can be an overwhelming task, and finding the salient papers within the mass of publication debris would be extremely valuable.

Scientific Integrity in New Metrics

At IQuant we are working to develop a new metric for scientific impact that is independent from citation metrics.

This is critical, both for the practical reason that citation information is often paywalled (see my previous post about why we only use publicly available information) and for the far more important reason of championing scientific integrity. By tracking the uptake of ideas and keywords through time, we hope to accurately identify papers and scientists that profoundly impact the development of a particular field, shift paradigms, and foster new directions.

With this new meme-based metric we will be able to more accurately measure scientific impact and hopefully provide more enlightening evaluation of ideas, papers and scientists that truly influence the progress of science.

Check back here soon for examples of the new metric in action, and comment below if you have experience with coercive publishing practices or are frustrated with how citation metrics are affecting the scientific publishing landscape.

Challenges in Authorship Attribution: The “Name Problem”

Illustration of many colorful faces coalescing into a single face.

I’ve encountered many challenges during the development of the IQuant Engine, but none has been more frustrating than what I’ve dubbed the “Name Problem”. In programming, it is often the simplest errors that are the most difficult to track down, and a misplaced comma has devoured my entire day more than once. However, I wasn’t expecting the same to be true on the data analysis side as well.

Simply put – how do I ensure that the bibliography of each author is both fully correct and exhaustive? Relatedly, how do I differentiate authors with often very similar names? 

Early in development I foresaw aspects of this problem and included checks to catch authors whose publications appear under multiple or similar names – for example, combining an author who publishes some papers with a middle initial and some without. These initial rules worked for relatively small sets of publications, in which there were fewer authors and consequently fewer possible matches. Unfortunately, I started encountering more errors when confronting larger topic areas, as there were a surprising number of examples that defeated even my best algorithms. Undaunted, I set to work developing more algorithms – confident that I could develop a set of functions that would permanently solve the problem for me.

I can only plead ignorance to excuse my hubris.

Affiliation comparison and co-author analysis reduced the problem somewhat, and techniques like algorithmic string comparison helped further. Yet no matter what I did, I always found exceptions to my carefully constructed and increasingly complicated system. After many months of trying new approaches, I had to accept that perfection was not achievable. As frustrating as it was, some level of error had to be accepted in the author accounting process.
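
To give a flavor of what these checks look like, here is a minimal sketch of name normalization and string comparison using Python's standard difflib. The normalization rules, the similarity threshold, and the example names are illustrative assumptions, not the rules actually used in the IQuant Engine, and, as described above, no rule set of this kind is ever complete.

```python
from difflib import SequenceMatcher

def normalize(author):
    """Reduce an author string such as 'Verdoorn, Benjamin J.' to ('verdoorn', 'b')."""
    last, _, rest = author.partition(",")
    first_initial = rest.strip()[:1].lower()
    return last.strip().lower(), first_initial

def probably_same(author_a, author_b, threshold=0.9):
    """Heuristic: matching first initials and highly similar last names.

    Catches 'Verdoorn, Benjamin' vs 'Verdoorn, B. J.' and minor spelling variants,
    but rules like this inevitably mis-merge some authors and miss others.
    """
    last_a, init_a = normalize(author_a)
    last_b, init_b = normalize(author_b)
    if init_a and init_b and init_a != init_b:
        return False
    return SequenceMatcher(None, last_a, last_b).ratio() >= threshold

print(probably_same("Verdoorn, Benjamin J.", "Verdoorn, B."))  # True
print(probably_same("Verdoorn, Benjamin J.", "Anderson, B."))  # False
```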

I eventually learned that I was far from the first person to encounter this problem. In fact, this appears to be a widespread problem and one that doesn’t have a clear solution even given significantly more resources than I was able to devote. In 2010 the National Library of Medicine announced a project to assign authors a unique ID, but four years later the project was scrapped (PubMed press release) and it was announced that PubMed would instead rely on third parties like ORCID.

Third-party solutions will never be the answer, simply due to lack of uptake. The Name Problem will persist unless a universal system with full adoption is instituted. Until then, if you are publishing, it is critical to put careful thought into how your name is displayed on papers. Unfortunately, differing conventions in how journals display author names, and in what information is accessible to databases like PubMed, will trip up even the most diligent of scientists, especially if you have a relatively common last name.

The Name Problem will persist unless a universal system with full adoption is instituted. Until then, if you are publishing, it is critical to put careful thought into how your name is displayed on papers.

– Ben Verdoorn, Senior Data Scientist

The Name Problem has real consequences beyond my data-nerd frustration. Notably, it fosters inequality in our publication-based merit systems, at least partly because of culture-specific naming conventions. Without a unique author identifier, many scientists cannot be properly credited or correctly sought out when their expertise is required. For now, we will account for the Name Problem in our analysis and accept the error it injects into our data. We will not be rid of it without concerted universal effort, which is likely out of reach for our shambolic scientific publishing landscape. Therefore, I must adjust to this reality despite the errors it propagates in my data and the sleepless nights spent worrying away at different possible solutions – like an ever-present popcorn kernel I just can’t get out of my teeth.

The Publisher Problem

Why do we rely solely on publicly available data at IQuant? As IQuant’s senior data scientist, I’ve spent a lot of time reflecting on the sources of the information that’s available to us. The choice to use only publicly available data isn’t one that we took lightly. The high cost of premium data services might be the most obvious reason, but on a deeper level we are fundamentally opposed to the excessive, profit-driven model employed by both scientific publishers and other data services companies.

These complaints are by no means revolutionary. For upwards of 20 years there has been a growing movement toward open access to publications, along with louder criticism of pay-per-view science and institutionally priced subscription models. Open access could be a solution, but unfortunately open access publishers continue to collect their toll: they have simply shifted the costs from consumers to those submitting their work, through increasingly exorbitant processing fees. In the quest for greater scientific understanding, this becomes an unethical roadblock.

These publishing business models have unfortunately created a scientific information landscape that resembles the rest of our society: one where the ‘haves’ maintain their status without impediment, while the ‘have-nots’ continue to struggle for the recognition and impact they clearly deserve. This inequality within the scientific community poses all the same problems that it does within society as a whole. New, exciting thinking could be lost – whether merely by circumstance and location, or more nefariously by publishers enforcing their own bias. Can we trust the impartiality of a system when the driving motivation seems not to be advancement of knowledge, but rather, ever-increasing profits?

At IQuant, we’re simply not able to justify supporting such a system – not as some grand gesture (our size precludes anything using the word grand) – but because we view ourselves first and foremost as scientists who fundamentally believe in the principles of transparency and diversity of thought.

This inequality within the scientific community poses all the same problems that it does within society as a whole.

– Ben Verdoorn, Senior Data Scientist

Publishers would have you believe that all the answers are waiting to be accessed behind their paywalls, but data means very little without careful analysis. I believe there exists a vast trove of knowledge and insight to be found in public, open-access sources. While the information is freely accessible, at IQuant we have the skills necessary to synthesize it into valuable results. By extracting and carefully analyzing all the information available, we are in a position to create novel insights and draw compelling conclusions from freely available data. This is why we built the IQuant Engine: to help other scientists reach their goals without contributing to a system that we profoundly disagree with.

Let us know how we can help enhance your research.

We work with scientists, drug discovery professionals, pharmaceutical companies and researchers to create custom reports and precision analytics to fit your project's needs – with more transparency, on tighter timelines, and prices that make sense.
