Use these ten principles to guide research evaluation, urge Diana Hicks, Paul Wouters and colleagues.
1. Quantitative evaluation should support qualitative, expert assessment.
2. Measure performance against the research missions of the institution, group or researcher.
3. Protect excellence in locally relevant research.
4. Keep data collection and analytical processes open, transparent and simple.
5. Allow those evaluated to verify data and analysis.
6. Account for variation by field in publication and citation practices.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
8. Avoid misplaced concreteness and false precision. Science and technology indicators are prone to conceptual ambiguity and uncertainty, and they require strong assumptions that are not universally accepted.
9. Recognize the systemic effects of assessment and indicators.
10. Scrutinize indicators regularly and update them.
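The principle on field variation is often operationalized with field-normalized indicators, which divide a paper's citation count by the average for its field rather than comparing raw counts across fields. The sketch below is purely illustrative (the `papers` data and the function name are invented for this example, and real indicators also normalize by publication year and document type):

```python
from statistics import mean

def field_normalized_scores(papers):
    """Illustrative field normalization: each paper's citation count
    divided by the mean citation count of papers in the same field."""
    # Group citation counts by field.
    by_field = {}
    for p in papers:
        by_field.setdefault(p["field"], []).append(p["citations"])
    field_mean = {f: mean(counts) for f, counts in by_field.items()}
    # Attach a normalized score to each paper (0.0 if the field mean is 0).
    return [
        {**p, "normalized": (p["citations"] / field_mean[p["field"]]
                             if field_mean[p["field"]] else 0.0)}
        for p in papers
    ]

# Hypothetical data: biology papers accumulate far more citations than
# mathematics papers, so raw counts are not comparable across the two fields.
papers = [
    {"id": "A", "field": "math", "citations": 4},
    {"id": "B", "field": "math", "citations": 8},
    {"id": "C", "field": "biology", "citations": 40},
    {"id": "D", "field": "biology", "citations": 80},
]
scores = field_normalized_scores(papers)
# Papers A and C differ tenfold in raw citations but receive the same
# normalized score, because each sits at the same position within its field.
```

The point of the sketch is the principle itself: once citation counts are set against field baselines, a mathematics paper and a biology paper can be compared on equal footing, which raw counts do not allow.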
If these ten principles are abided by, research evaluation can play an important part in the development of science and its interactions with society. Research metrics can provide crucial information that would be difficult to gather or understand by means of individual expertise. But this quantitative information must not be allowed to morph from an instrument into the goal.
The best decisions are taken by combining robust statistics with sensitivity to the aim and nature of the research that is evaluated. Both quantitative and qualitative evidence are needed; each is objective in its own way. Decision-making about science must be based on high-quality processes that are informed by the highest-quality data.