Research Impact Assessment: An Overview
Introduction: What are the best metrics or benchmarks for evaluating the impact and relevance of academic research? Our analysis of metrics – particularly bibliometrics – has revealed weaknesses in prominent quantitative indicators meant to capture the impact of academic research.
Moreover, because many prominent evaluative indicators are quantitative, qualitative assessment of important consequences of academic research, such as the informal influence of research on policy decisions (through white papers and other grey literature) or public engagement and education activities, is often neglected. As a result, the impacts that count tend to be narrower impacts within the academy rather than broader societal impacts.
In March 2011, CSID participated in the Health Economics Research Group (HERG) workshop on research impact assessment at Brunel University (UK). Participants debated the relative merits of several models and conceptual frameworks for assessing the broader societal impacts of federally funded academic research. Discussions focused primarily on two such frameworks: the Payback model, developed by Martin Buxton and Stephen Hanney of HERG; and Productive Interactions, represented by researchers at the international Social Impact Assessment Methods: Productive Interactions (SIAMPI) consortium.
Both approaches to impact assessment are detailed below, along with several conceptually related projects and models. These approaches represent distinctive responses to pressures for research accountability in academia and to the desire to make academic research relevant to broader societal issues.
Payback Framework: The ‘Payback Framework’ can be characterized both as a conceptual tool for understanding and generating hypotheses about the research process and as a methodology for evaluating the impact(s) of research at multiple stages of a project. Evaluating the impact of research using this methodology is essentially a twofold process:
- Multidimensional Categorization of Benefits – five categories of benefits are intended to capture the potential paybacks from academic research at each stage of the research process (inputs, outputs, outcomes, and impacts).
- Logic Model – expected or identified benefits are mapped onto a model depicting the research process itself and the stages at which each category of benefits would be realized. This model provides the context for how and when research benefits should be measured.
Below is an illustration of the Payback Framework, first published in Hanney, Packwood, and Buxton (2000) and revised in Hanney et al. (2004). See here for more details on the Payback Framework, including recent publications of case studies implementing the framework for research evaluation.
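To make the twofold process concrete, the following is a minimal Python sketch (not HERG's own tooling) of the logic-model mapping, assuming benefits are recorded as (benefit, stage) pairs. The stage names follow the four stages mentioned above; the benefit entries are hypothetical.

```python
# A minimal sketch of the Payback Framework's second step: identified
# benefits are grouped by the logic-model stage at which each would be
# realized. Stage names follow the four stages mentioned in the text.

PAYBACK_STAGES = ["inputs", "outputs", "outcomes", "impacts"]

def map_benefits(observed):
    """Group (benefit, stage) records by logic-model stage."""
    by_stage = {stage: [] for stage in PAYBACK_STAGES}
    for benefit, stage in observed:
        by_stage[stage].append(benefit)
    return by_stage

# Hypothetical benefits recorded for a single health-research project.
benefits = [
    ("peer-reviewed publications", "outputs"),
    ("citation in a clinical guideline", "outcomes"),
    ("reduced hospital readmissions", "impacts"),
]
print(map_benefits(benefits))
```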
Productive Interactions (PI): “Productive interactions” are defined as “exchanges of knowledge and expertise between sciences and society that result in behavioral change.” Social impact, then, is construed as the consequences of the behavioral changes that result from social interactions between academics and non-academic stakeholders. Social impacts can be positive or negative, but in order to be accounted for, they must be productive – some behavioral change must actually be effected. These interactions fall into three categories:
- Direct, ‘face-to-face’ encounters
- Indirect interactions through physical media, such as journal articles, conference papers, or posters
- Financial or material support, such as funding or equipment sharing.
Interactions can be, and often are, reciprocal – impact does not necessarily flow only from the academic to the stakeholder – and case studies by the SIAMPI consortium focus on contextualizing these interactions as a means of illustrating the complexity of tracing the consequences of reciprocal knowledge exchanges. See here for the latest summaries and preliminary results of SIAMPI case studies.
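As a rough illustration of how the three categories could be used in practice, here is a minimal Python sketch (our own illustration, not SIAMPI's method) of tallying a case study's interactions, counting only those that were productive. The interaction records are hypothetical.

```python
# Tally a case study's interactions by the three SIAMPI categories,
# counting only those marked productive (i.e., where some behavioral
# change was actually effected). All records here are hypothetical.

from collections import Counter

def tally_productive(interactions):
    """Count productive interactions per category."""
    counts = Counter()
    for category, was_productive in interactions:
        if was_productive:  # only productive interactions register as impact
            counts[category] += 1
    return counts

case_study = [
    ("direct", True),     # workshop with policymakers led to a pilot program
    ("indirect", False),  # journal article with no traced uptake
    ("financial", True),  # shared equipment enabled a firm's product test
]
print(tally_productive(case_study))  # Counter({'direct': 1, 'financial': 1})
```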
Impact of Social Sciences (ISS) Project: Researchers at the London School of Economics’ Public Policy Group direct this network of social scientists, which seeks to measure the knowledge contributions of social science research to sectors outside the academy. The project’s primary objectives are to account for impact and to provide other academics with recommendations for maximizing impact (a form of reflexive learning). Research impact in the ISS project is defined as “a recorded or otherwise auditable occasion of influence from academic research on another actor or organization,” and it has two components:
- Academic impact – the actor or organization being influenced lies within the university sector
- External impact – the actor or organization being influenced is part of the broader public or private sectors.
ISS views external impact through a relatively narrow lens, focusing most of its attention on ‘primary’ indicators that research has been disseminated, referenced, or used rather than on broad social movements or trends (secondary impacts) potentially spurred by academic research. ISS maintains a blog for material and discussions related to the project, and its major publication, Maximizing the Impacts of Your Research: A Handbook for Social Scientists, is available for free download.
Societal Quality indicator: The development of this indicator was instigated by a 2001 statement from the Royal Netherlands Academy of Arts and Sciences (KNAW) proposing an evaluation indicator parallel to scientific quality – ‘societal quality’ – which would represent the Academy’s commitment to rewarding socially relevant research. Mostert et al. (2010) proposed a quantitative methodology for calculating ‘societal quality’ based on a set of 23 indicators covering three relevant social sectors – public, professional, and private – across four stages of the research process:
- Knowledge production – indirect communication through physical documents
- Knowledge exchange – direct, face-to-face communication
- Knowledge use – demonstrated uptake of research in other sectors
- Earning capacity – ability to attract external funding based on perceived quality of research products.
These indicators are then weighted according to the relative objectives of the research group being evaluated (e.g. commercialization, public education and outreach) and aggregated into a final value, as sketched below. The paper covers the results of a case study of 19 participating research groups at the Leiden University Medical Center (Netherlands).
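The aggregation step can be illustrated with a short Python sketch. The indicator names, scores, and weights below are hypothetical (Mostert et al. define 23 indicators), and indicator scores are assumed to be normalized to the interval [0, 1].

```python
# A minimal sketch of the weighted aggregation described above,
# assuming normalized indicator scores and mission-specific weights.
# Indicator names, scores, and weights are hypothetical.

def societal_quality(scores, weights):
    """Weighted aggregate of indicator scores; weights reflect the
    group's own objectives and are normalized to sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] / total_weight
               for name in scores)

scores  = {"guidelines_cited": 0.6, "media_mentions": 0.3, "patents": 0.1}
weights = {"guidelines_cited": 3.0, "media_mentions": 1.0, "patents": 2.0}
print(round(societal_quality(scores, weights), 3))  # 0.383
```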
But while use of the term ‘quality’ in this context is conveniently symmetrical, it is a misnomer. Metrics alone cannot account for the relative excellence of research dissemination activities (quality is a normative term) or for the societal impacts that result from them.
Sci_Quest Method: The goal of Sci_Quest is to design a ‘fourth-generation’ evaluative framework specifically for multi-, inter-, and transdisciplinary (MIT) research that avoids major errors characteristic of previous generations of evaluative schemes, such as managerialism and value monism.
The Sci_Quest method is a bottom-up, mission-oriented approach that recognizes the highly iterative and co-constituted nature of interactions between researchers and stakeholders. A four-step process was devised to construct the social context in which individual research groups are embedded, and in which they are subsequently evaluated:
- Mission and self-image of the group, constructed through interviews and surveys of group members
- Research Embedment and Performance Profile (REPP), which gauges the connectedness of the group to its stakeholders/social context and how well the group serves the interests of these stakeholders.
- Stakeholder analysis, or how the group cooperates with intended users of their research
- Feedback phase, which asks whether or not the group’s mission is being fulfilled.
Thus, as detailed in the most recent Sci_Quest publication, this approach yields an account of activities and performance rather than a numbers-based longitudinal tracking of causal relationships. This research is supported by the European project group Evaluating Research in Context (ERiC).
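One way to picture this account-based (rather than score-based) output is as structured qualitative data. Here is a minimal Python sketch of recording the four steps for a single research group; all field contents are hypothetical, and the real REPP is a richer profile than the simple dictionary used here.

```python
# A minimal sketch of the four Sci_Quest steps recorded as structured
# qualitative data rather than a single score. Contents are hypothetical.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SciQuestProfile:
    mission: str                                      # step 1: mission and self-image
    repp: dict = field(default_factory=dict)          # step 2: embedment/performance per context
    stakeholders: dict = field(default_factory=dict)  # step 3: cooperation with intended users
    mission_fulfilled: Optional[bool] = None          # step 4: feedback on the mission

group = SciQuestProfile(
    mission="translate hydrology research into regional flood policy",
    repp={"policy": "strong", "industry": "weak", "public": "moderate"},
    stakeholders={"water boards": "co-designed monitoring protocols"},
    mission_fulfilled=True,
)
print(group)
```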
Altmetrics – short for “alternative metrics” – began in 2010 as a challenge to the research community concerning the means for assessing the importance of academic scholarship. Authors of The Altmetrics Manifesto – Jason Priem (University of North Carolina-Chapel Hill), Dario Taraborelli (Wikimedia Foundation), Paul Groth (VU University Amsterdam), and Cameron Neylon (Public Library of Science) – define altmetrics as “the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship.” Traditional measures for filtering what scholars should read from what they need not, such as peer review, citation metrics, and the journal impact factor, are slow and narrow in breadth. Research on the use and prevalence of these measures, most notably the journal impact factor, also shows that they are often used incorrectly or inappropriately. By contrast, altmetrics purport to widen the scope of academic reward systems to account more accurately for the diverse ways in which scholars actually communicate. Blog discussions, social media mentions (e.g. Twitter citations or Facebook re-posts), and open dataset repositories are just a few examples.
Since the manifesto’s publication in 2010, altmetricians have focused on designing or refining new alternative metrics that attempt to capture the dissemination of scholarly work beyond citation counts – the current ‘gold standard’ for determining the impact of academic research. Others have worked on developing digital infrastructure to support harvesting or curating relevant data and on creating visualization programs to interpret, for example, bookmarks, download counts, and usage statistics. New altmetrics created by the community have largely been article-level metrics. These measures follow the traces of particular articles or essays as they are disseminated through online and web-accessible media, including digital pre-print or working paper repositories as well as customary academic journals.
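The idea of an article-level metric can be sketched in a few lines of Python: raw event counts from several web sources are assembled into one per-article profile. The source names, counts, and DOI below are hypothetical; real services each define their own sources and aggregation rules.

```python
# A minimal sketch of an article-level metrics record. All inputs are
# hypothetical; this is an illustration, not any service's actual API.

def article_profile(doi, traces):
    """Aggregate raw event counts per source for a single article."""
    profile = {"doi": doi, "total": 0, "by_source": {}}
    for source, count in traces:
        profile["by_source"][source] = profile["by_source"].get(source, 0) + count
        profile["total"] += count
    return profile

traces = [("tweets", 42), ("blog_posts", 3), ("bookmarks", 17), ("downloads", 280)]
print(article_profile("10.1234/example-doi", traces))
```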
While there is little consensus among those working on altmetrics as to how they ought to be used, altmetrics potentially offer a better means of assessing the scope of many different types of scholarly communication. Certain desirable outcomes of scholarly work, such as public discussions that reference academic research, or the incorporation of research into governmental white papers and policy documents, are far more likely to be captured by altmetrics than by citations. Additionally, altmetrics promise a quicker turnaround between publication and the assessment of impact or influence, as well as offering a more complete picture of the diversity of ways in which scholarship is communicated and, therefore, ways in which it can be influential.
A recent special bulletin of the Association for Information Science and Technology (ASIS&T) devoted to altmetrics highlights current developments in the altmetrics community, including work to incorporate altmetrics into academic CVs and into the activities of institutional repositories. Open access advocates are typically well-disposed to altmetrics as a means of informing their arguments for the superiority of open access publishing for the dissemination of scholarly research, especially for developing countries and those without access to institutional libraries. Online resources utilizing altmetrics to track the dissemination of academic scholarship are currently offered by ImpactStory, figshare, and Altmetric.
Public Value Mapping (PVM): Under the direction of Barry Bozeman and Daniel Sarewitz, this project, based at Arizona State University’s Consortium for Science, Policy & Outcomes (CSPO), seeks to:
- Formulate a new theory of public value, one which incorporates societal qualities (public goods such as government-funded research and the resulting knowledge) that economic theory, the dominant mode of evaluating such qualities, strains to encapsulate
- Devise a new framework based on this theory with which to evaluate the contribution of academic knowledge in bringing about these public goods.
Science policy, in this context, is conceived as the conduit through which societal impacts or public goods are realized through publicly funded research, and so rather than developing a definitive methodology, Bozeman and Sarewitz constructed a set of guiding assumptions and questions for case study evaluations of science and innovation projects, with achieved public value (instead of economic value) as the measure of success. Thus, “the most suitable object of evaluation is not discrete knowledge units (if such things can be said to exist), or their market value, but the social [including policy] configurations that enable the production of scientific knowledge” (Bozeman and Rogers, 2002). Outcomes and impacts are evaluated with respect to specific implicit or explicit policy goals, rather than simply evaluating the resulting research and its uptake (or lack thereof).
In other words, PVM seeks to answer whether science and innovation policy is appropriately formulated or socially situated not just to reward quality or innovative research, but also to achieve the public value results for which the research receives funding.
The project began around 2002 with Bozeman’s work at the Center for Science, Policy, and Outcomes, then a partnership between Columbia and Georgia Tech’s School of Public Policy. Soon after, the Center (renamed the ‘Consortium’) moved to Arizona State University, where Sarewitz joined the project.