Introduction to research metrics

If you’re not sure what you, as a researcher, can do to promote or use responsible metrics, a good place to start is the Leiden Manifesto, first presented in Nature in 2015*. It brings together accepted but disparate principles of good practice in research evaluation, and represents the “distillation of best practice in metrics-based research assessment so that researchers can hold evaluators to account and evaluators can hold their indicators to account”:

1) Quantitative evaluation should support qualitative, expert assessment

Numeric indicators may be used in a variety of processes, but such indicators should not supplant expert assessment (peer review) of both research outputs and their environment.

2) Measure performance against the research missions of the institution, group or researcher

Many research outputs are aimed at a specific, and often non-academic, audience. We can measure the value of research outputs in terms of public engagement, creating impact, and disseminating research findings to research users. Aim to consider research outputs within the context of the centre, school, faculty, and University research environment and the original goals of the researcher(s).

3) Protect excellence in locally relevant research

We should celebrate the diversity of our researchers, and the international nature of much of our research. Encouraging publication in the appropriate language for the research users is an excellent way to decolonise research, as well as helping to make metrics fairer.

4) Keep data collection and analytical processes open, transparent and simple

Where quantitative research quality indicators are used, aim for a balance between simplicity and accuracy, using tools with published calculation methods and rules.

5) Allow those evaluated to verify data and analysis

Encourage colleagues and students to question the indicators used in relation to research outputs and to be empowered, both with the necessary understanding and appropriate processes to offer alternative positions.

6) Account for variation by field in publication and citation practices

Researchers should not promote the use of one measure over another, and the availability (or otherwise) of bibliometric or other data should not drive decision-making about research activities and priorities.

7) Base assessment of individual researchers on a qualitative judgement of their portfolio

When assessing the performance of individuals, we should consider as wide a view of their expertise, experience, activities and influence as possible.

8) Avoid misplaced concreteness and false precision

Quantitative research metrics should be used as guides, not as decisive measures of the quality of a research output. Multiple sources should be used to provide a broader and more robust picture. Researchers should establish the context for the metrics used, highlighting the factors considered, and avoid using over-precise numbers that give an illusion of accuracy.

9) Recognise the systemic effects of assessment and indicators

Because measuring any aspect of research creates incentives, a range of indicators should be used, and researchers should be transparent about the biases associated with particular sources.

10) Scrutinize indicators regularly and update them

As the range and appropriateness of quantitative research indicators evolve, we should revisit and revise the ones we use.


Thanks to Sarah Slowe (University of Kent) for her help with this section of the module. This section is based on her blog post Responsible Metrics at Kent – So What?

* Hicks, D., Wouters, P., Waltman, L., de Rijcke, S. and Rafols, I. (2015) ‘Bibliometrics: the Leiden Manifesto for research metrics’, Nature, 520(7548), pp. 429–431. doi: https://doi.org/10.1038/520429a