Volume 114, Issue 3, pp 1227–1250

Assessing the health research’s social impact: a systematic review

  • Matteo Pedrini
  • Valentina Langella
  • Mario Alberto Battaglia
  • Paola Zaratin


In recent years, growing attention has been dedicated to the assessment of research's social impact. While prior research has often dealt with the results of research, the last decade has begun to generate knowledge on the assessment of health research's social impact. However, this knowledge is scattered across different disciplines, research communities, and journals. Therefore, this paper analyses the heterogeneous picture research has drawn in the past years, with a focus on health research's social impact on different stakeholders, through an interdisciplinary, systematic review. By consulting major research databases, we analysed 53 key journal articles bibliographically and thematically. We argue that the adoption of a multi-stakeholder perspective could be an evolution of the existing methods used to assess the impact of research. After presenting a model to assess health research's social impact from a multi-stakeholder perspective, we suggest the implementation of three practices in the research process: a multi-stakeholder workshop on the research agenda; a multi-stakeholder supervisory board; and a multi-stakeholder review process.


Keywords: Impact assessment · Medical research · Multi-stakeholder


The most recent OECD report (2016) revealed that, following the global financial crisis, total government spending on research in the member countries has been in decline since 2009. The reduction in available public resources has consequently increased competition for grant funding (Moed and Halevi 2015), which in turn has resulted in a greater insistence on results-based accountability and on the assessment of research impact (Evans et al. 2009). The growing expectation is that research holds promise for influencing and improving future policies and practices (Ernø-Kjølhede and Hansson 2011; Grimshaw et al. 2012), and it is expected to have a positive impact on society (Denholm and Martin 2008; Martin 2011). The growing focus on impact is evident in the increasing number of public and private grants that require applicants to assess the social benefits that might flow from the studies (Henshall 2011; Holbrook 2012; Healthcare Industries Task Force 2004; Cooksey 2006). Thus, the need to improve the assessment of research's social impact is a shared priority among all actors and communities, and the availability of reliable frameworks is a driver of effective future research (Moed 2007; Nallamothu and Lüscher 2012).

Taking these considerations as a starting point, studies on research assessment were initiated that aimed not only at measuring research outputs but also at assessing social outcomes (Hanney et al. 2000; van der Meulen and Rip 2000; Mostert et al. 2010; Holbrook and Frodeman 2010; United States Government Accountability Office 2012). For example, many studies have explored methodologies to assess single research outcomes (including, among others, Luukkonen 1998; Maredia and Byerlee 2000; Furman et al. 2006; Bozeman and Sarewitz 2011; Czarnitzki and Lopes-Bento 2013; Guthrie et al. 2013; Morgan and Grant 2013; Bloch et al. 2014), whereas others have presented frameworks to assess the impact of research in general (Bornmann 2013a, b; Council of Canadian Academies. Expert Panel on Science Performance and Research Funding 2012) or in specific fields of research, such as agricultural research (Horton et al. 2007), environmental research (Boaz et al. 2009) and health research (Hanney et al. 2000, 2003; Holbrook and Frodeman 2010; Banzi et al. 2011; United States Government Accountability Office 2012).

One of the fields of research with growing attention to the assessment of social impact is health research, in which investigation should be highly focused on the generation of benefits for patients. To achieve this goal, patients' organisations have started adopting strategies characterised by increasing involvement in projects' governance, development and advancement (so-called project management) and by proactive decision making about the projects that are funded (so-called portfolio management) (Zaratin et al. 2014).

Despite earlier research, there is a general call for new research to clarify the existing literature (Bell et al. 2011, p. 227) and address the current problems in the assessment of health research's social impact (Bensing et al. 2003; Niederkrotenthaler et al. 2011). In particular, the assessment of social impact still suffers from several problems (e.g. Leduc 1994; Martin 2007; de Jong et al. 2011; Molas-Gallart and Tang 2011; Spaapen and van Drooge 2011; Bornmann 2013a, b). Martin (2007), for instance, distinguishes major difficulties inherent in assessing research's social impact: ambiguity issues, as it is not always clear which impact can be attributed to which causes; attribution issues, as it is not always clear what portion of an effect should be allocated to a specific research project or to other inputs (van der Meulen and Rip 2000; Nightingale and Scott 2007); and context specificity issues, as the impact assessment must be adapted to the institution's specific features (Göransson et al. 2009; Molas-Gallart et al. 2002; Rymer 2011; van der Meulen and Rip 2000). Among the various challenges in assessing research's social impact, the spotlight has recently been placed on the fact that the existing systems limit the evaluation process to the consideration of just one group of stakeholders (Milat, Bauman and Redman 2015). In fact, health researchers often assume that other scientists' perspective is the only consistent criterion for assessing research, and they rely on blind peer review as the sole method to assess research's social impact (Stein et al. 1999). Following this assumption, patients and governments are generally not directly involved in the evaluation of research for publication.

Adopting Freeman's (1984) stakeholder theory perspective, we suggest that considering the view of only a particular group of stakeholders is not sufficient, and that the assessment of health research's social impact would be significantly improved by the involvement and collaboration of multiple groups of stakeholders. In this sense, acknowledging the role of stakeholders in the assessment of health research's social impact is one of the most important future challenges (New Philanthropy Capital 2010), and specific multi-stakeholder assessment tools must be developed. To start addressing this strategic objective, we carried out an interdisciplinary and systematic review of existing models used to measure the impact of research.

Therefore, in this paper, we analyse health research's social impact assessment, focusing on different steps of the research process. The central questions of our research are as follows: What are the main steps of health research's social impact assessment? Which stakeholders are involved in each of these steps? Furthermore, we explain the most widely used analytical and methodological assessment approaches, highlight well-explored topics, describe the relevant stakeholders and propose avenues for future research. Ultimately, this research intends to propose a framework for extending the evaluation of health research's social impact to include stakeholders, and it suggests practical implications.

To answer these central questions, we carried out an interdisciplinary and systematic review of journal articles published between 2000 and 2016 on the assessment of health research's social impact. To the authors' knowledge, no systematic review focusing on this topic exists. Studies on the topic are strongly scattered across disciplines, research communities and journals, and very few have attempted to aggregate this knowledge systematically. While some earlier literature reviews cover many studies (OECD 2008; Banzi et al. 2011; Boyd et al. 2013; Morgan and Grant 2013; Penfield et al. 2014), none uses the systematic review methodology, which aggregates knowledge with clearly defined processes and criteria (Tranfield et al. 2003). Starting from the results of the literature review on the assessment of social impact and the involvement of different stakeholders, we develop an integrated framework for assessing health research's social impact, through which we delineate how to expand the stakeholder groups involved in the process and move toward a multi-stakeholder evaluation process.

The remainder of this article is structured as follows. The "Background and terminology" section explains our understanding of health research's social impact assessment and the present study's research context; this serves as the basis for the inclusion criteria of the systematic review. The "Methodology" section explains the working methodology used for systematic literature reviews. In the "Search process: steps 1 through 4" and "Descriptive and thematic analysis: steps 5 to 6" sections, we show the results of the systematic review. We conclude with a discussion of the results, the study's limitations and suggestions for future research.

Background and terminology

Earlier literature on research evaluation has introduced specific terms to define concepts related to the assessment of research's social impact, such as third-stream activities (Molas-Gallart et al. 2002), societal benefits, societal quality (van der Meulen and Rip 2000), usefulness (Department of Education, Science and Training 2005), public values (Bozeman and Sarewitz 2011), knowledge transfer (van Vught and Ziegele 2011) and societal relevance (Eric 2010; Holbrook and Frodeman 2011). Thus, to conduct a systematic literature review, it was first necessary to specify the language used, in order to deduce relevant keywords for the review. Discussing the assessment of research's social impact means considering two different concepts: first, social impact in general, and then research's social impact.

Social impact

The concept of social impact refers to "the social consequences that are likely to follow from specific policy, action or project development, particularly in the context of appropriate national, state or provincial environmental policy legislation" (Burdge and Vanclay 1995, p. 31). However, there is diversity in the conceptualisation and definition of social impact, and this is evidenced by the debate on the different meanings given to the terms "social" and "societal" impact. On the one hand, social impact is used to describe individual-level effects, such as benefits to traits or behaviours (e.g. Godin and Dore 2005); on the other hand, societal impact refers to broader community-based phenomena, such as demographic changes, human rights, social cohesion, economic cohesion, employment, human capital formation, public health and safety, social protection and social services (e.g. European Commission 2011; Technopolis 2009). These terms are, however, sometimes used interchangeably (Bornmann 2013a, b, p. 218).

However, this wide variety of terminology is reflected in the lack of commonly accepted domains included in the concept of social impact (e.g. van der Meulen and Rip 2000, p. 11; Bornmann 2013a, b, p. 220). For instance, Helming et al. (2011) suggested an extensive list of domains to be considered in social impact investigations, such as employment and labour markets; standards and rights related to job quality; social inclusion and the protection of groups; and gender equality. Other authors limited the domain of social impact to only a few items pertaining to people's living conditions: welfare, well-being, quality of life, customs and life habits, including, for example, consumption, work, sexuality, sports and food (e.g. Godin and Dore 2005). For the scope of this article, however, we use the term social impact to indicate all the potential changes (negative or positive) that an action or project has on other parties, without any specific limitation in terms of domains.

Assessment of research’s social impact

Even if different definitions of the domain of social impact exist, social impact assessment is commonly defined as the process of analysing and monitoring the intended and unintended social changes, both positive and adverse, of planned interventions, including policies, programmes, research and projects (Vanclay 2003). In contrast to the abundant scientific literature on the impact of technological and economic factors, few investigations have focused on health research's social impact (Brewer 2011). This gap stands in sharp contrast to the widespread interest in the practice of and documentation on health research's impact on academic and scientific knowledge (e.g. Leduc 1994; Penfield et al. 2014). Nevertheless, in recent years, health research funding agencies have put a substantial amount of effort into creating tools for identifying and measuring the social impact.

Bozeman and Sarewitz (2011, p. 8) describe the assessment of research's social impact as "any systematic, data-based (including qualitative data) analysis that seeks as its objective to determine or to forecast the social (or economic) impact of research and attendant technical activity". Hence, the research's social impact is identified in the contributions of research activities to the advancement of scientific and scholarly knowledge, as well as societal benefits (e.g. improving quality of life; stimulating new approaches to social issues; changing community attitudes; influencing societal developments or ideas), cultural benefits (e.g. supporting greater understanding of where we come from and who and what we are as a nation and society), environmental benefits (e.g. improving the environment and lifestyles; reducing waste and pollution; improving natural resource management; reducing fossil fuel consumption; adopting recycling techniques; reducing environmental risk; preservation initiatives; biodiversity conservation; enhancement of ecosystem services; improving plant and animal varieties; and adapting to climate change) or economic benefits (e.g. adding to economic growth and wealth creation; enhancing the skills base; increasing employment; reducing costs; increasing innovation capabilities and global competitiveness; improvements in service delivery; and un-quantified economic returns resulting from social and public policy adjustments) (Donovan 2008; Moed and Halevi 2015).

Most of the studies that refer to the assessment of health research's social impact are concerned with the measurement of economic changes (impacts) that stem from health research (Donovan 2011; European Commission 2010; Lähteenmäki-Smith et al. 2006). These changes are not a short-term phenomenon, and they mostly concern intermediate (e.g. improved clinical practices) or ultimate (e.g. health benefits and economic benefits) outcomes (Lähteenmäki-Smith et al. 2006; United States Government Accountability Office 2012).

Previous studies have demonstrated that from an academic perspective, there are many ways to assess the health research’s social impact, for example, by recording the ongoing consultation, consideration, citation, discussion, referencing or use of a piece of the investigation. Over time, studies have developed different tools that “now allow us to very promptly trace outflows of ideas and expertise in detail, down to the level of an individual researcher or her portfolio of works” (Harzing 2010, p. 2). In this sense, the term assessment of health research’s social impact refers to the identification of changes in individual outcomes because of the existence of a health research project and of the reporting of the outcomes for participants in an activity (e.g. monitoring patient performance through a clinical survey after treatment) (Inglesi-Lotz and Pouris 2011).

When it comes to evaluating health research's social impact, ex-ante evaluation examines the potential social impact, whereas ex-post evaluation monitors the actual social impact of research that has already been completed (Holbrook and Frodeman 2011; Potì and Cerulli 2011; Social Sciences and Humanities Scientific Committees 2013; Bornmann 2013a, b). Furthermore, there are two major groups of assessment methods: qualitative methods, including peer review, case studies and surveys; and quantitative methods, including the development and use of statistical indicators, in addition to advanced mathematical models (Bornmann 2013a, b; Gibbons et al. 1994; Newby 1994; Buxton et al. 2000; Hessels and Van Lente 2010; Holbrook and Frodeman 2010; de Jong et al. 2011; United States Government Accountability Office 2012).

One method to run an ex-ante assessment of health research's social impact is the use of journal impact factors (JIFs), which measure how many academics cite a journal's output of papers on average, or (even worse) subjective lists of 'good' and 'bad' journals (or 'good' and 'bad' book publishers) to evaluate ex ante the impact of contributions made by researchers. As Harzing (2010, p. 3) points out, using JIFs or such lists is an attempt to apply a 'proxy indicator' for quality by assuming that a health researcher's results are as good as the average of the other health research published in the journal in which she or he publishes. However, all journals publish rather varied work, and while some of it has a social impact, much of it does not. Notwithstanding the fact that peer review of societal impact is often perceived as more complex than peer review of scientific quality, and that this kind of peer review has met resistance within the scientific community, Holbrook and Frodeman (2011, p. 240) argue that "there is little evidence to suggest that peer review is any less effective at ex-ante assessments of societal impact than it is at ex-ante assessments of scientific, technical or intellectual merit". There are, of course, several other criticisms of peer review; for example, its subjective and contingent nature, its time-consuming procedures and its high costs (Pontille and Torny 2010; van den Besselaar and Leydesdorff 2009). However, overall, peer review remains one of the major cornerstones of a comprehensive and integrated assessment process, especially in the domain of social impact (Barker 2007; Ernø-Kjølhede and Hansson 2011).
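As an aside, the JIF 'proxy indicator' criticised above is simply an average: citations received by a journal's recent papers divided by the number of those papers. A minimal sketch, using the common two-year citation window and purely hypothetical figures (the function name and numbers are ours, not taken from the review):

```python
def impact_factor(citations_to_recent_items: int, citable_items: int) -> float:
    """Citations received in the census year to items published in the two
    preceding years, divided by the number of those citable items."""
    if citable_items == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_to_recent_items / citable_items

# Hypothetical journal: 480 citations to 120 items published in the window.
jif = impact_factor(480, 120)
print(jif)  # 4.0
```

The single number then stands in for every paper the journal publishes, which is exactly the averaging problem Harzing points out: individual contributions vary widely around the journal mean.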

Qualitative methods are also useful for understanding health research's social impact. Several authors have discussed the advantages of case studies (e.g. Martin 2011; Bornmann 2013a, b), whereas others have criticised their lack of objectivity and quantification, as well as their labour-intensive nature. The unique advantage of qualitative approaches lies in their ability to accommodate complexity, especially for a phenomenon such as societal impact. Case studies and surveys are not strictly or purely qualitative, and for this reason, they are particularly valuable in assessing the interactions between scientists and stakeholders (Bornmann 2013a, b, p. 226). Surveys, although they usually combine qualitative procedures (e.g. questionnaire construction) and quantitative procedures (e.g. statistical analyses of survey results), are particularly valuable for measuring perceptions among various stakeholder groups. The consensus in the social impact literature holds that neither qualitative nor quantitative methods alone suffice to meet the social impact assessment goals related to research. David Roessner (2000) calls the choice of quantitative versus qualitative measures in research evaluation a false one, 'especially for evaluators isolated from the real world'. Hence, in recent years, a shift in assessment methods can be observed, moving from the application of simple to combined methods (e.g. Penfield et al. 2014; Donovan 2007, pp. 592–593). Several authors have also addressed the methodological problems of involving various groups of stakeholders in the process of analysing health research's social impact (e.g. Spaapen et al. 2007; de Jong et al. 2011; Bornmann 2013a, b). For example, it has been argued that scientists alone should not conduct qualitative assessments of social impact, as they often appear to have difficulties doing so. Interactions with non-academic stakeholders are needed to transfer knowledge between science and society, and the involvement of carefully selected stakeholders in this process can be valuable in evaluating societal impact. For this reason, we would like to understand the level of inclusion of different stakeholders in the assessment of health research's social impact.


The present research is based on a systematic review that aims to structure studies related to the assessment of health research's social impact and thus contribute to theory development. A systematic review includes both quantitative bibliographical analyses and more qualitative thematic analyses (Tranfield et al. 2003), and it is characterised as transparent, focused, equal and accessible, as it provides clarity, allows for the unification of research and practitioner communities and, in the end, leads to synthesis (Fink 1998; Thorpe et al. 2005). Drawing upon the process used by Seuring et al. (2005), which was based on Mayring's (2003) work, our literature review followed six procedural steps, which are illustrated in Table 1. Each step is described below.
Table 1

Steps of the systematic literature review

Search process
Step 1: Identification of keywords (13 keywords). Resulting analysis: previous research and reviews
Step 2: Development of exclusion and inclusion criteria
Step 3: Specification of relevant search engines and execution of search (4 engines). Resulting analysis: titles and abstracts (automated, based on keywords)
Step 4: Development of A, B and C lists. Resulting analysis: titles and abstracts (manual); full texts; narrative inclusions (e.g. Anderson 1998; Moore and Manring 2009; Pastakia 1998; Schaltegger 2002; Hall and Wagner 2012)

Descriptive and thematic analysis
Step 5: Descriptive categories (e.g. journals covered, methodologies applied)
Step 6: Deductive and inductive categories for identifying central themes and interpreting results

Search process: steps 1 through 4

The first step of the analysis is the identification of the keywords to be used in the search, on the basis of the literature review. The keywords (see Table 2) were deduced from the previously discussed definitions related to health research's social impact. The domain was operationalised through three clouds of keywords, covering impact (e.g. impact, outcome, change, quality), measurement (e.g. assessment, evaluation, measurement, effectiveness) and health research (e.g. health, medical, clinical). In the end, 13 keywords were used. Target articles needed to match at least one keyword in each cloud. These clouds reflect our objective of covering articles dealing with measurement practices in health research that include social impact assessment. By no means does this review claim to include all publications dealing with the analysis of the assessment of health research's social impact.
Table 2

Keywords operationalised for research

Search clouds: Research(es); Impact(s), Outcome(s), Change(s), Quality; Assessment(s), Evaluation(s), Measurement(s), Effectiveness; Health, Medical, Clinical, Basic

Sample search strings:

TI(Research*) AND TI(Impact OR Outcome OR Change OR Impacts OR Outcomes OR Changes OR Quality) AND TI(Assessment OR Evaluation OR Measurement OR Assessments OR Evaluations OR Measurements OR Effectiveness) AND (Health OR Medical OR Clinical OR Basic)

TI=(Research*) AND TI=(Impact OR Outcome OR Change OR Impacts OR Outcomes OR Changes OR Quality) AND TI=(Assessment OR Evaluation OR Measurement OR Assessments OR Evaluations OR Measurements OR Effectiveness) AND TS=(Health OR Medical OR Clinical OR Basic) AND PY=(2000 OR 2001 OR 2002 OR 2003 OR 2004 OR 2005 OR 2006 OR 2007 OR 2008 OR 2009 OR 2010 OR 2011 OR 2012 OR 2013 OR 2014 OR 2015 OR 2016)

TITLE(Research*) AND TITLE(Impact OR Outcome OR Change OR Impacts OR Outcomes OR Changes OR Quality) AND TITLE(Assessment OR Evaluation OR Measurement OR Assessments OR Evaluations OR Measurements OR Effectiveness) AND ALL(Health OR Medical OR Clinical OR Basic) AND PUBYEAR AFT 2000
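The inclusion rule behind these strings (match at least one keyword from each cloud) can be illustrated with a small sketch. This is a simplification: the actual queries ran against database search engines with field tags and stemming (e.g. Research*), whereas here plain word matching on a title string is assumed, and the function name and sample titles are hypothetical.

```python
# Three keyword clouds, following the examples given in the text.
CLOUDS = {
    "impact":      {"impact", "outcome", "change", "quality"},
    "measurement": {"assessment", "evaluation", "measurement", "effectiveness"},
    "health":      {"health", "medical", "clinical"},
}

def matches_all_clouds(text: str) -> bool:
    """A record qualifies only if it contains a keyword from EACH cloud."""
    words = set(text.lower().split())
    return all(cloud & words for cloud in CLOUDS.values())

print(matches_all_clouds("Assessment of the social impact of medical research"))  # True
print(matches_all_clouds("Impact of funding on publication counts"))              # False
```

The conjunction of clouds is what keeps the result set focused: a title matching only one or two clouds (e.g. impact but not measurement) is automatically excluded.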

Though systematic reviews can also include other types of publications, to guarantee quality and reduce the sample to a manageable number of articles, following Seuring and Müller (2008), the analysis concentrated on peer-reviewed academic journal articles published in English. Regarding the timeframe covered, the stakeholder perspective has received prominent attention in the last decade. Therefore, this review includes academic papers published between 2000 and 2016. This timeframe was embedded in the search strings.

This study searched the following major research databases: ProQuest ABI/Inform, ISI Web of Science Core Collection, Scopus and Wiley Online Library. Each database and related search engine uses a different syntax, and thus adapted search strings were necessary in many cases. We used variations of the search strings, but all variations remained within the range of the key terms. Using our search strings, we initially identified 793 articles related to the assessment of health research's social impact. These articles were progressively categorised into three lists. The C-list, which included all 793 articles, was first reduced through title and abstract analysis to 90 related articles (the B-list); 703 articles were excluded because they were not relevant to the research domain. The B-list was analysed in depth by title, abstract and full text in an iterative process, through which we eliminated a further 34 publications. The most common reason for excluding articles from the B-list was that they focused on the impact of trials and were therefore concerned with care rather than research. The resulting A-list included 56 articles that were considered relevant to the analysis of the theme; these were used in both the descriptive (quantitative) and thematic (qualitative) analyses.
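The C-list to A-list reduction described above can be sketched as a two-stage screening pipeline. The predicate functions below are placeholders standing in for the manual title/abstract and full-text judgements; only the counts (793, 90, 56) come from the text.

```python
def screen(articles, keep):
    """Retain only the articles that pass the given screening judgement."""
    return [a for a in articles if keep(a)]

c_list = list(range(793))                    # stand-in for all retrieved records
b_list = screen(c_list, lambda a: a < 90)    # title/abstract screening (703 excluded)
a_list = screen(b_list, lambda a: a < 56)    # full-text screening (34 more excluded)

print(len(c_list), len(b_list), len(a_list))  # 793 90 56
```

Structuring the screening as successive filters with recorded counts is what makes the reduction auditable, in the spirit of the explicit process Tranfield et al. (2003) call for.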

Descriptive and thematic analysis: steps 5 to 6

For the study, we selected categories that describe the articles, such as the journal of publication or the methods used (Seuring and Müller 2008); other classifications were developed based on the research questions. For example, we classified journal types to assess the different diffusion between medical journals and journals that we classified as 'performance measurement' because they focus more generally on accountability and performance assessment. Another classification was the year of publication, used to show the growing interest in the issue over time. We also classified the stakeholders considered in the measurement process. The thematic analysis used deductive categories that we borrowed from prior literature reviews and inductive categories that emerged during the evaluation (Reichertz 2010). The aim of this review was to systematically categorise the papers' content and identify relationships (Lane et al. 2006). This synthesis process was inductive and interpretative, but as Thorpe et al. (2005, p. 261) point out, "the adoption of an explicit and rigorous approach to reviewing allows others to understand how and why studies were selected and themes built up".

The results and analysis will be structured in two parts: the first part provides a descriptive (quantitative) analysis that presents an overview of the research agenda, and the second presents a qualitative thematic analysis that provides an in-depth evaluation towards a new framework.

Results of the descriptive analysis

The publishing of papers on the impact of health research appears widespread. The most relevant journals (in the sense that they published more than one article on this topic) were those with a medical focus, with the journal Scientometrics leading the field (five articles). Figure 1 shows that there are also articles in journals with a management and performance measurement focus, but these are fewer overall than those in medically oriented journals.
Fig. 1

Articles per journal type

The studies focused on diverse countries. Most studies concentrated on a single country, and only six took a cross-country focus (‘Multiple’). Figure 2 shows that the best-represented countries were individual Anglo-Saxon countries, including the United Kingdom with 8 articles and the United States with 23 articles.
Fig. 2

Articles per country of focus

Our analysis shows that the field of research on health research's social impact is still young. As Fig. 3 shows, it started to evolve at the beginning of this century, and the number of publications surged between 2010 and the present. The trend accelerated from the end of 2012, with nine publications in 2015 alone.
Fig. 3

Articles per publication year

Looking more specifically at the content of individual articles, as Fig. 4 demonstrates, the sample included 45 papers published in widely discussed journals in the medical research context. A total of 29 articles in our sample explicitly addressed macro-level research evaluation, and two emphasised the assessment of research's social impact.
Fig. 4

Articles per research context

Results of the thematic analysis

The purpose of the thematic analysis was to identify the elements that should be included in the assessment of health research's social impact in order to develop a multi-stakeholder perspective. We employed both deductive codes and inductive (sub)codes that emerged from the data.

Figure 5 demonstrates that the most widely adopted research methodology (12 articles) was the position paper, followed by the analysis of case studies (a combination of single, illustrative and multi-case studies). The next most widely adopted methodologies were literature reviews (nine articles) and papers that proposed new models and frameworks (seven articles). These works represent the effort to develop a standard set of measures to inform decision-making in health research, practice and policy; they discuss the need for better instruments within their disciplines and describe current or future initiatives for exploring the benefits of such measures. Together, these perspectives underscore the importance of developing new multidimensional metrics to measure research's social impact. Initiatives like the Patient-Reported Outcomes Measurement Information System, which creates health-related quality of life item banks, represent an effort to develop a standard set of measures for informing decision-making in clinical research, practice and health policy. Additionally, five papers in the sample adopted a multi-method approach, four used surveys, three used bibliometric analyses and three presented the results of workshops. In some cases, the papers were not concerned with research but with single-trial applications.
Fig. 5

Articles per methodology

Fig. 6

Articles per involved stakeholders

Figure 6 shows the results of the thematic analysis of the involved stakeholders. This analysis indicates that most of the papers included in our sample addressed researchers (including academic, biotech and pharma researchers) as stakeholders, both as subjects of analysis in review studies and as the audience of papers offering approaches and methodologies. This observation shows that there is still a need for further studies to identify the literature's most salient findings. There is a strong emphasis on researchers and on the beneficiaries, namely the patients, but there is still a dearth of research conducted from a multi-stakeholder point of view that involves all subjects in all research phases. Many studies concern comparative effectiveness research, which analyses groups of patients and looks for associations between medical treatments and patient outcomes. Together, these findings underscore the importance of developing valid, precise and efficient measures to capture the logical links between diseases, researchers, patient treatments and impact.


Based on the articles analysed in the systematic literature review presented above, it was possible to develop an integrated framework that describes the steps useful for evaluating the social impact of health research, organised as two interrelated loops. This model is presented in Fig. 7 and discussed in the next sections.
Fig. 7

The double loops of research

An integrated framework for assessing the impact of health research

Using an inductive method to identify the elements that should be included in the assessment of health research’s social impact, the existing heterogeneous studies can be organised in an extended taxonomy grouped into two loops: the knowledge development loop and the knowledge exploitation loop. The final social impact of health research is determined by the different dimensions within each loop and by the interaction between them. Both loops are based on an input–output model, with the peculiarity that the output of knowledge development (the first loop) is the input of knowledge exploitation (the second loop). In other words, the integrated model assumes that health research can have a considerable social impact only if the outputs of the knowledge development loop effectively feed into the knowledge exploitation loop as inputs and only if the two loops are managed in an integrated way. The articles we considered in our review, starting from different perspectives, focus on one or more steps of this integrated framework (see Table 3), but none of them draws a clear picture of the overall process. Most of the considered articles discussed the assessment of health research’s social impact focusing on outputs of the knowledge development loop, such as publications, patents, the diffusion of knowledge and capacity-building.
Table 3

The double loops of health research’s social impact


No. of articles

Exemplary authors

Knowledge development loop


Research funding


Inglesi-Lotz and Pouris (2011); Tremblay et al. (2010); Drew et al. (2016); Ekboir (2003); Liebow et al. (2009); Cohen et al. (2015)

Research capacities


Manion et al. (2012)


Basic research


Franceschini et al. (2015); Cousins et al. (2015); Morton (2015); Taylor and Bradbury-Jones (2011)

Applied research


Stryer et al. (2000); LaKind et al. (2015); Fusco et al. (2012); Kovacs et al. (2016); O’Connor and Brinker (2013); Yiend et al. (2011); Ahmed et al. (2012); Bridges and Buttorff (2010); Punt et al. (2011)

 Primary output


Wu (2015); Moed and Halevi (2015); Inglesi-Lotz and Pouris (2011); Jammer et al. (2015); Castelnuovo et al. (2010); Bornmann and Marx (2014); Franceschini et al. (2015); Sombatsompop et al. (2005-1); Sombatsompop et al. (2005-2); Drew et al. (2016); Kryl et al. (2012)


Moed and Halevi (2015); Colugnati et al. (2014); Bornmann (2013a, b); Cyril and Phil (2009); Zaratin et al. (2014); Castelnuovo et al. (2010); Bornmann and Marx (2014); Cousins et al. (2015); O’Connor and Brinker (2013); Yiend et al. (2011); Ahmed et al. (2012); Gibson et al. (2014); Gershon et al. (2010); Zelefsky et al. (2013); Proctor et al. (2011); Sarli et al. (2010); Tremblay et al. (2010); Drew et al. (2016); Liebow et al. (2009); Dannenberg et al. (2006)

Follow-up research


Colugnati et al. (2014); Jammer et al. (2015); Morton (2015); LaKind et al. (2015); Fusco et al. (2012); Kovacs et al. (2016); Figueredo and Sechrest (2001); Cohen et al. (2015); Milat et al. (2015)

 Secondary output



Fox et al. (2012); Ippoliti and Falavigna (2014); Westrich et al. (2016)

Knowledge exchange


Wu (2015); Bornmann (2013a, b); Cyril and Phil (2009); Fox et al. (2012); Zaratin et al. (2014); Morton (2015); Taylor and Bradbury-Jones (2011); Stryer et al. (2000); Schulz et al. (1995); Bridges and Buttorff (2010); Mullins et al. (2010); Proctor et al. (2011); Punt et al. (2011); Sarli et al. (2010); Cohen et al. (2015); Adam et al. (2012); Milat et al. (2015); Zaratin et al. (2016)

Media engagement


Moed and Halevi (2015); Bridges and Buttorff (2010)

Knowledge exploitation loop

 Informed decision-making

Policymakers’ behaviour


Nightingale and Scott (2007); Guinea et al. (2015); Adam et al. (2012); Haigh et al. (2012); Westrich et al. (2016); Willis et al. (2016); Reeve et al. (2007)



Guinea et al. (2015); Haigh et al. (2012); Ippoliti and Falavigna (2014); Reeve et al. (2007)



Ekboir (2003); Kryl et al. (2012); Cohen et al. (2015); Guinea et al. (2015); Ippoliti and Falavigna (2014); Willis et al. (2016); Reeve et al. (2007)

 First outcomes

Clinical practices


Jette and Keysor (2002); LaKind et al. (2015); Fusco et al. (2012); Kovacs et al. (2016); O’Connor and Brinker (2013); Yiend et al. (2011); Ahmed et al. (2012); Gibson et al. (2014); Gershon et al. (2010); Zelefsky et al. (2013); Figueredo and Sechrest (2001); Schulz et al. (1995); Bridges and Buttorff (2010); Mullins et al. (2010)

 Secondary outcomes

Health benefits


Perrin (2002); Jette and Keysor (2002); Brody et al. (2015); Dannenberg et al. (2006)

Economic benefits


Moed and Halevi (2015); Liebow et al. (2009)

The starting point in the knowledge development loop is the availability of the financial resources (Inglesi-Lotz and Pouris 2011; Tremblay et al. 2010; Drew et al. 2016; Ekboir 2003; Liebow et al. 2009; Cohen et al. 2015) and skilled employees (Manion et al. 2012) needed to conduct basic (Franceschini et al. 2015; Cousins et al. 2015; Morton 2015; Taylor and Bradbury-Jones 2011) or applied health research (Stryer et al. 2000; LaKind et al. 2015; Fusco et al. 2012; Kovacs et al. 2016; O’Connor and Brinker 2013; Yiend et al. 2011; Ahmed et al. 2012; Bridges and Buttorff 2010; Punt et al. 2011).

Typical inputs include the employed staff and the competitive project funding needed to conduct research. The primary goal is to carry out activities that generate new scientific knowledge that can contribute to social progress (Martin and Irvine 1983). These activities include basic health research (improving human understanding of phenomena) and applied health research (producing useful information that satisfies societal needs) (Frey and Rost 2010; Lee 2007; van Raan 2005). The outputs of these activities are tangible and easily measured because they are defined as the products of health research studies and are disseminated by investigators who discuss or interpret the study’s findings (for example, through publications, supplemental materials, conference materials and patents) (Sarli et al. 2010).

The knowledge development loop’s outputs can serve two purposes: on the one hand, they can reinforce the resources available for future health research development; on the other hand, they can be an input for the knowledge exploitation loop. The two reinforcing effects of the knowledge development loop are health research capacity-building and opportunities for future studies. Capacity-building informs other investigators, clinical trial participants, grant funding agencies and the public about the health research efforts and the findings generated by the project; it also refers to the measurement of individual and group changes in knowledge, abilities and skills. Follow-up health research, in turn, occurs when the results of an earlier study allow for the expansion of health research into related areas.

The emerging integrated model assumes that health research’s social impact does not end with the publication of scientific reports or papers. This model supports the idea that “research that is highly cited or published in top journals may be useful for the academic discipline but not for society” (Nightingale and Scott 2007, p. 547). Naturally, in a university medical centre, scientific quality prevails and is a prerequisite that cannot be replaced by aiming instead for high social quality (Mostert et al. 2010). However, health research of high scientific quality is not necessarily communicated to society, and additional activities are needed to increase its social impact. Therefore, social impact is not necessarily a consequence of high scientific quality, as the primary outputs of research, such as publications, the acquisition of new health research funding and health research capacity-building, cannot be considered a direct social impact (Wooding et al. 2011; Cohen et al. 2015).

In a broad sense, the knowledge exploitation loop starts with readers’ feedback, which may transform self-referential judgements of a work’s merit into judgements based on people’s use of health research outputs, as reflected in their behaviour, such as knowledge exchange through translation (Bollen et al. 2008) and readings (Darmoni et al. 2002). Translation is defined as an activity that occurs beyond health research publication and is designed to facilitate the uptake of study findings in real-world settings (Cohen et al. 2015) by disseminating health research outputs through, for example, lobbying, knowledge exchange and social media. These activities may be undertaken by researchers, their affiliated institutions or government programmes in place to promote knowledge uptake through the implementation of protocols, training workshops and information exchange meetings. They may also be part of a general dissemination strategy conducted after health research.

Depending on whether these activities are taken up by policy makers, they may or may not lead to policy or clinical practice impact. The shift from primary outputs to applications and relevant outcomes depends on lobbying and dissemination efforts, which trigger a series of standardization and regulation mechanisms managed by a greater mix of policymakers, “the market and the state, private and public sectors, science and values, producers and users of knowledge” (Barré 2005, p. 117). Moreover, the effect of health research on policies, regulations and clinical practices may take place alongside the influence of independent factors whose impact may take years and may manifest in unexpected ways (Amara et al. 2004; Davies 2004; Nutley et al. 2007).

Policymakers at the intermediary or government level manage the adaptation of new policies and guidelines to support their platforms and facilitate the transfer of knowledge from science to society (Spaapen et al. 2007). Policy and practice impact are demonstrable changes, which often take the form of benefits to products, processes, policies or practices, that occur after a health research project has concluded. These impacts are concrete, and they include stopping or changing existing interventions following the demonstration of ineffectiveness.

Society can reap the benefits of successful health research studies only if the results are converted into commercial and consumable clinical practices (e.g. medicaments, diagnostic tools, machines and devices) or services (Lamm 2006). Clinical implementation refers to the application or adoption of health research outputs in clinical practices. Health research findings can affect change in the understanding of a disease, disorder or condition, which can result in more effective clinical outcomes (Sarli et al. 2010).

Finally, the final outcomes manifest as changes in health behaviours and in health outcomes, such as disease incidence and prevalence or other health indicators, or as economic benefits. Such changes rarely occur quickly, and they may be difficult to attribute to health intervention research (Milat et al. 2013; Banzi et al. 2011). For this reason, it is important to involve stakeholders—including the health industry, other industries, governments, researchers, decision-makers and the public or public groups—and follow them through the various stages of the research loops to determine the final outcomes in health, well-being, and social and economic prosperity.

Inclusion of stakeholders in the evaluation process

Following best practices in impact assessment (New Philanthropy Capital 2010), we argue that the assessment of health research’s social impact must contemplate aspects related to the different stakeholders that have expectations of health research. We scrutinised the articles to understand the groups of stakeholders involved in the integrated health research process described above. This analysis showed that health research processes currently consider the stakeholders of the knowledge development and knowledge exploitation loops separately. As Table 4 shows, in the knowledge development loop the health researchers are highly involved and research funding organisations and regulatory agencies are partially involved, but patients and health workers are commonly excluded. In this sense, the actual health research process is not structured to consider the expectations of patients and health workers in the input, activity and generation of either primary or secondary knowledge development outputs, which are based on a referee process that also includes review by academic peers. Conversely, the knowledge exploitation loop relies on regulators, who oversee decision-making as a step towards enhancing regulation; on health operators, who are involved in the changes of clinical practice that follow from regulatory developments; and on patients, who are the final beneficiaries of the overall health research process.
Table 4

Potential practices to promote multi-stakeholder health research


Columns (involved stakeholders): researchers; research funding organisations; regulators; patients; health operators

Rows (knowledge development loop): research funding; research capacities; basic research; applied research; primary output; secondary output; knowledge exchange; media engagement

Rows (knowledge exploitation loop): informed decision-making; policymakers’ behaviour; first outcomes; clinical practices; secondary outcomes; health benefits; economic benefits
✓ Current practices; ■ Possible practice 1: multi-stakeholder workshop on research agenda

♦ Possible practice 2: multi-stakeholder supervisory board; ▬ Possible practice 3: multi-stakeholder review process

As a result of this analysis, we suggest that the adoption of a multi-stakeholder model and of new metrics that enable true alignment of efforts and accountability from the different stakeholders’ perspectives could have positive implications for promoting health research’s social impact on patients and society (Zaratin et al. 2016). Assuming the positive impact of an integrated consideration of stakeholders’ expectations (Fig. 7), we suggest that, to increase health research’s social impact, the following three practices should be considered in each step of the two loops: (1) a multi-stakeholder workshop on the health research agenda; (2) a multi-stakeholder supervisory board; and (3) a multi-stakeholder review process, along with the adoption of relevant metrics.

To expand the number of stakeholders participating in the knowledge development loop, we suggest developing regular multi-stakeholder workshops, whereby the various stakeholders can benefit from an open discussion and together define the health research priorities. Researchers, funders and policymakers can then collect information on the expectations and priorities perceived by both patients and health workers and focus their efforts on the streams of health research that have greater potential outcomes from the perspective of patients and health workers.

Additionally, to encourage a multi-stakeholder health research process, research teams can adopt a multi-stakeholder steering committee for conducting health research activities. With the establishment of such a committee, health researchers would be able to benefit from the periodical exchange of information and opinions with patients and health operators regarding how to conduct health research that could maximise the potential final outcomes regarding health and economic benefits. This allows for the continuous fine-tuning of the health research to embed stakeholder interests and, in this way, increase the potential health research’s social impact.

Within this frame, in 2012 the International Progressive MS Alliance (Fox et al. 2012) was established as a collaboration among leaders from MS societies around the globe (including the Italian MS Society), volunteer experts, academics and industry to expedite the development of disease-modifying and symptom-management therapies for people living with progressive MS. In an unprecedented manner, through the advocacy organisations that represent them worldwide, people with MS were demanding a renewed focus on progressive MS, with the goal that relevant stakeholders work together to maximise their collective impact (Zaratin et al. 2016) on developing new treatments for this form of MS. Through a series of scientific and strategic planning meetings, the Alliance identified and developed a strategic health research agenda and launched health research funding initiatives and a multi-stakeholder review process. To secure future success, multi-stakeholder engagement will have to be further sustained by shared measurements of impact (new metrics) and by supporting infrastructures that enable true alignment of efforts and accountability for results and fully enable the proposed framework.

A third possible practice to be assessed is the redefinition of how publications (the primary output of knowledge development) are evaluated. Currently, this evaluation is grounded in blind peer assessment. We suggest redefining the academic review process by including the opinions of patients and health operators in the evaluation of scientific publications. In this way, the focus of the evaluation process will shift from the original self-referential academic assessment towards a preliminary assessment of social and economic impact. Overall, the assessment of health research’s social impact rests on the science of patient input (Anderson and McCleary 2016), which also deserves attention and resources.


  1. Adam, P., Solans-Domènech, M., Pons, J. M. V., Aymerich, M., Berra, S., Guillamon, I., et al. (2012). Assessment of the impact of a clinical and health services research call in Catalonia. Research Evaluation, 21(4), 319–328.CrossRefGoogle Scholar
  2. Ahmed, S., Berzon, R. A., Revicki, D. A., Lenderking, W. R., Moinpour, C. M., Basch, E., et al. (2012). The use of patient-reported outcomes (PRO) within comparative effectiveness research implications for clinical practice and health care policy. Medical Care, 50(12), 60–70.CrossRefGoogle Scholar
  3. Amara, N., Ouimet, M., & Landry, R. (2004). New evidence on the instrumental, conceptual, and symbolic utilization of university research in government agencies. Science Communication, 26(1), 75–106.CrossRefGoogle Scholar
  4. Anderson, A. R. (1998). Cultivating the Garden of Eden: environmental entrepreneuring. Journal of Organizational Change Management, 11(2), 135–144.CrossRefGoogle Scholar
  5. Anderson, M., & McCleary, K. K. (2016). On the path to a science of patient input. Science Translational Medicine, 8(336), 336.CrossRefGoogle Scholar
  6. Banzi, R., Moja, L., Pistotti, V., Facchini, A., & Liberati, A. (2011). Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Research Policy and Systems, 9, 26.CrossRefGoogle Scholar
  7. Barker, K. (2007). The UK Research Assessment Exercise: the evolution of a national research evaluation system. Research Evaluation, 16(1), 3–12.CrossRefGoogle Scholar
  8. Barré, R. (2005). S&T indicators for policy making in a changing science–society relationship. In H. Moed, W. Glänzel & U. Schmoch (Eds.), Handbook of quantitative science and technology research (pp. 115–131). Dordrecht: Springer.CrossRefGoogle Scholar
  9. Bell, S., Shaw, B., & Boaz, A. (2011). Real-world approaches to assessing the impact of environmental research on policy. Research evaluation, 20(3), 227–237.CrossRefGoogle Scholar
  10. Bensing, J. M., Caris-Verhallen, W. M., Dekker, J., Delnoij, D. M., & Groenewegen, P. P. (2003). Doing the right thing and doing it right: toward a framework for assessing the policy relevance of health services research. International Journal of Technology Assessment in Health Care, 19(04), 604–612.CrossRefGoogle Scholar
  11. Bloch, C., Sørensen, M. P., Graversen, E. K., Schneider, J. W., Schmidt, E. K., Aagaard, K., et al. (2014). Developing a methodology to assess the impact of research grant funding: A mixed methods approach. Evaluation and program planning, 43, 105–117.CrossRefGoogle Scholar
  12. Boaz, A., Fitzpatrick, S., & Shaw, B. (2009). Assessing the impact of research on policy: a literature review. Science & Public Policy (SPP), 36(4), 255–270.CrossRefGoogle Scholar
  13. Bollen, J., Van de Sompel, H., & Rodriguez, M. A. (2008, June). Towards usage-based impact metrics: first results from the mesur project. In Proceedings of the 8th ACM/IEEE-CS Joint Conference on Digital Libraries (pp. 231–240). ACM.Google Scholar
  14. Bornmann, L. (2013a). Measuring the societal impact of research: research is less and less assessed on scientific impact alone—We should aim to quantify the increasingly important contributions of science to society. EMBO Reports, 13(8), 673–676.CrossRefGoogle Scholar
  15. Bornmann, L. (2013b). What is societal impact of research and how can it be assessed? a literature survey. Journal of the American Society for Information Science and Technology, 64(2), 217–233.CrossRefGoogle Scholar
  16. Bornmann, L., & Marx, W. (2014). How should the societal impact of research be generated and measured? A proposal for a simple and practicable approach to allow interdisciplinary comparisons. Scientometrics, 98(1), 211–219.CrossRefGoogle Scholar
  17. Boyd, A., Cole, D. C., Cho, D. B., Aslanyan, G., & Bates, I. (2013). Frameworks for evaluating health research capacity strengthening: a qualitative study. Health Research Policy and Systems, 11(1), 46.CrossRefGoogle Scholar
  18. Bozeman, B., & Sarewitz, D. (2011). Public value mapping and science policy evaluation. Minerva, 49(1), 1–23.CrossRefGoogle Scholar
  19. Brewer, J. D. (2011). The impact of impact. Research Evaluation, 20(3), 255–256.CrossRefGoogle Scholar
  20. Bridges, J. F., & Buttorff, C. (2010). What outcomes should US policy makers compare in comparative effectiveness research? Expert Review of Pharmacoeconomics & Outcomes Research, 10(3), 217–220.CrossRefGoogle Scholar
  21. Brody, H., Croisant, S. A., Crowder, J. W., & Banda, J. P. (2015). Ethical issues in patient-centered outcomes research and comparative effectiveness research: A Pilot study of community dialogue. Journal of Empirical Research on Human Research Ethics, 10(1), 22–30.CrossRefGoogle Scholar
  22. Burdge, R. J., & Vanclay, F. (1995). Social impact assessment. In F. Vanclay & D. A. Bronstein (Eds.), Environmental and social impact assessment (pp. 31–65). Chichester, UK: Wiley.Google Scholar
  23. Buxton, M., Hanney, S., Packwood, T., Roberts, S., & Youll, P. (2000). Getting research into practice: Assessing benefits from department of health and national health service research & development. Public Money and Management, 20(4), 29–34.CrossRefGoogle Scholar
  24. Castelnuovo, G., Limonta, D., Sarmiento, L., & Molinari, E. (2010). A more comprehensive index in the evaluation of scientific research: the single researcher impact factor proposal. Clinical practice and epidemiology in mental health: CP & EMH, 6, 109.CrossRefGoogle Scholar
  25. Cohen, G., Schroeder, J., Newson, R., King, L., Rychetnik, L., Milat, A. J., et al. (2015). Does health intervention research have real world policy and practice impacts: Testing a new impact assessment tool. Health Research Policy and Systems, 13(1), 3.CrossRefGoogle Scholar
  26. Colugnati, F. A., Firpo, S., de Castro, P. F. D., Sepulveda, J. E., & Salles-Filho, S. L. (2014). A propensity score approach in the impact evaluation on scientific production in Brazilian biodiversity research: The BIOTA Program. Scientometrics, 101(1), 85–107.CrossRefGoogle Scholar
  27. Cooksey, D. A. (2006). Review of UK health research funding. Norwich: HM Treasury.Google Scholar
  28. Council of Canadian Academies (2012). Expert Panel on Science Performance and Research Funding.Google Scholar
  29. Cousins, J. B., Svensson, K., Szijarto, B., Pinsent, C., Andrew, C., & Sylvestre, J. (2015). Assessing the practice impact of research on evaluation. New Directions for Evaluation, 2015(148), 73–88.CrossRefGoogle Scholar
  30. Cyril, F. M. D., & Phil, M. (2009). Health research: Measuring the social, health and economic benefits. Canadian Medical Association Journal, 180(5), 528–534.CrossRefGoogle Scholar
  31. Czarnitzki, D., & Lopes-Bento, C. (2013). Value for money? New Microeconometric Evidence on Public R&D Grants in Flanders. Research Policy, 42, 76–89.CrossRefGoogle Scholar
  32. Dannenberg, A. L., Bhatia, R., Cole, B. L., Dora, C., Fielding, J. E., Kraft, K., et al. (2006). Growing the field of health impact assessment in the United States: an agenda for research and practice. American Journal of Public Health, 96(2), 262–270.CrossRefGoogle Scholar
  33. Darmoni, S. J., Roussel, F., Benichou, J., Thirion, B., & Pinhas, N. (2002). Reading factor: A new bibliometric criterion for managing digital libraries. Journal-Medical Library Association, 90, 323–326.Google Scholar
  34. Davies, P. (2004). Is evidence-based government possible? Jerry Lee lecture to Campbell Collaboration Colloquium, Washington DC 19 February.Google Scholar
  35. Davies, P., Walker, A. E., & Grimshaw, J. M. (2010). A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science, 5(1), 5–14.CrossRefGoogle Scholar
  36. De Jong, S. P., Van Arensbergen, P., Daemen, F., Van Der Meulen, B., & Van Den Besselaar, P. (2011). Evaluation of research in context: An approach and two cases. Research Evaluation, 20(1), 61–72.CrossRefGoogle Scholar
  37. Denholm, E. M., & Martin, W. J. (2008). Translational research in environmental health sciences. Translational research: The journal of laboratory and clinical medicine, 151(2), 57.CrossRefGoogle Scholar
  38. Department of Education, Science and Training. (2005). Research quality framework: Assessing the quality and impact of research in Australia (Issue paper). Canberra: Commonwealth of Australia.Google Scholar
  39. Donovan, C. (2007). The qualitative future of research evaluation. Science and Public Policy, 34(8), 585–597.CrossRefGoogle Scholar
  40. Donovan, C. (2008). The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. New Directions for Evaluation, 2008(118), 47–60.CrossRefGoogle Scholar
  41. Donovan, C. (2011). State of the art in assessing research impact: introduction to a special issue. Research Evaluation, 20(3), 175–179.CrossRefGoogle Scholar
  42. Drew, C. H., Pettibone, K. G., Finch III, F. O., Giles, D., & Jordan, P. (2016). Automated Research Impact Assessment: A new bibliometrics approach. Scientometrics, 106(3), 987–1005.CrossRefGoogle Scholar
  43. Ekboir, J. (2003). Why impact analysis should not be used for research evaluation and what the alternatives are. Agricultural Systems, 78(2), 166–184.CrossRefGoogle Scholar
  44. ERiC. (2010). Evaluating the societal relevance of academic research: A guide. The Hague: Rathenau Institute.Google Scholar
  45. Ernø-Kjølhede, E., & Hansson, F. (2011). Measuring research performance during a changing relationship between science and society. Research Evaluation, 20(2), 130–142.CrossRefGoogle Scholar
  46. European Commission. (2010). Assessing Europe’s university-based research. Expert group on assessment of university-based research. Brussels, Belgium: Publications Office of the European Union. Google Scholar
  47. European Commission. (2011). Assessing Europe’s university-based research. Expert group on assessment of university-based research. Brussels: Publications Office of the European Union.Google Scholar
  48. Evans, A., Strezov, V., & Evans, T. J. (2009). Assessment of sustainability indicators for renewable energy technologies. Renewable and Sustainable Energy Reviews, 13(5), 1082–1088.CrossRefGoogle Scholar
  49. Figueredo, A. J., & Sechrest, L. (2001). Approaches used in conducting health outcomes and effectiveness research. Evaluation and Program Planning, 24(1), 41–59.CrossRefGoogle Scholar
  50. Fink, A. (1998). Conducting research literature review: from paper to internet. Thousand Oaks: SagePublications.Google Scholar
  51. Fox, R. J., Thompson, A., Baker, D., Baneke, P., Brown, D., Browne, P., et al. (2012). Setting a research agenda for progressive multiple sclerosis: The International Collaborative on Progressive MS. Multiple Sclerosis Journal, 18(11), 1534–1540.CrossRefGoogle Scholar
  52. Franceschini, F., Maisano, D., & Mastrogiacomo, L. (2015). Research quality evaluation: Comparing citation counts considering bibliometric database errors. Quality & Quantity, 49(1), 155–165.CrossRefGoogle Scholar
  53. Freeman, R. E. (1984). Strategic management, a stakeholder approach. Boston: Pitman.Google Scholar
  54. Frey, B. S., & Rost, K. (2010). Do rankings reflect research quality? Journal of Applied Economics, 13(1), 1–38.CrossRefGoogle Scholar
  55. Furman, E., Kivimaa, P., Kuuppo, P., Nykänen, M., Väänänen, P., Mela, H., & Korpinen, P. (2006). Experiences in the management of research funding programmes for environmental protection. Including recommendations for best practice. Finnish Environment Institute.Google Scholar
  56. Fusco, D., Barone, A. P., Sorge, C., D’Ovidio, M., Stafoggia, M., Lallo, A., et al. (2012). P. Re. Val. E.: Outcome research program for the evaluation of health care quality in Lazio, Italy. BMC Health Services Research, 12(1), 25.CrossRefGoogle Scholar
  57. Gershon, R., Rothrock, N. E., Hanrahan, R. T., Jansky, L. J., Harniss, M., & Riley, W. (2010). The development of a clinical outcomes survey research application: Assessment CenterSM. Quality of Life Research, 19(5), 677–685.CrossRefGoogle Scholar
  58. Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. Thousand Oaks: Sage.Google Scholar
  59. Gibson, T. B., Ehrlich, E. D., Graff, J., Dubois, R., Farr, A. M., Chernew, M., et al. (2014). Real-world impact of comparative effectiveness research findings on clinical practice. The American journal of managed care, 20(6), e208–e220.Google Scholar
  60. Godin, B., & Dore, C. (2005). Measuring the impacts of science: Beyond the economic dimension. INRS Urbanisation, Culture et Société. HIST Lecture, Helsinki Institute for Science and Technology Studies, Helsinki, Finland. Available at:
  61. Göransson, B., Maharajh, R., & Schmoch, U. (2009). New activities of universities in transfer and extension: Multiple requirements and manifold solutions. Science and Public Policy, 36(2), 157–164.CrossRefGoogle Scholar
  62. Grimshaw, J. M., Eccles, M. P., Lavis, J. N., Hill, S. J., & Squires, J. E. (2012). Knowledge translation of research findings. Implementation science, 7(1), 50.CrossRefGoogle Scholar
  63. Guinea, J., Sela, E., Gómez-Núñez, A. J., Mangwende, T., Ambali, A., Ngum, N., et al. (2015). Impact oriented monitoring: A new methodology for monitoring and evaluation of international public health research projects. Research Evaluation, 24(2), 131–145.CrossRefGoogle Scholar
  64. Guthrie, S., Wamae, W., Diepeveen, S., Wooding, S., & Grant, J. (2013). Measuring research: A guide to research evaluation frameworks and tools. Santa Monica: RAND.Google Scholar
  65. Haigh, F., Harris, P., & Haigh, N. (2012). Health impact assessment research and practice: A place for paradigm positioning? Environmental Impact Assessment Review, 33(1), 66–72.CrossRefGoogle Scholar
  66. Hall, J., & Wagner, M. (2012). Editorial: The challenges and opportunities of sustainable development for entrepreneurship and small business. Journal of Small Business & Entrepreneurship, 25(4), 409–416.CrossRefGoogle Scholar
  67. Hanney, S. R., Gonzalez-Block, M. A., Buxton, M. J., & Kogan, M. (2003). The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health research policy and systems, 1(1), 2.CrossRefGoogle Scholar
  68. Hanney, S., Packwood, T., & Buxton, M. (2000). Evaluating the benefits from health research and development centres: a categorization, a model and examples of application. Evaluation, 6(2), 137–160.CrossRefGoogle Scholar
  69. Harzing, A. W. (2010). The publish or perish book. Melbourne: Tarma Software Research Pty Ltd.
  70. Healthcare Industries Task Force. (2004). Better health through partnership: A programme for action (Final report). London: Author.
  71. Helming, K., Diehl, K., Kuhlman, T., Jansson, T., Verburg, P., Bakker, M., & Morris, J. (2011). Ex ante impact assessment of policies affecting land use, part B: Application of the analytical framework. Ecology and Society, 16(1), 1–29.
  72. Henshall, C. (2011). The impact of payback research: Developing and using evidence in policy. Research Evaluation, 20(3), 257–258.
  73. Hessels, L. K., & Van Lente, H. (2010). The mixed blessing of Mode 2 knowledge production. Science Technology and Innovation Studies, 6(1), 65–69.
  74. Holbrook, J. B. (2012). Re-assessing the science-society relation: The case of the US National Science Foundation's broader impacts merit review criterion (1997–2011). Technology in Society, 27(4), 437–451.
  75. Holbrook, J. B., & Frodeman, R. (2010, April). Comparative Assessment of Peer Review (CAPR). In EU/US workshop on peer review: Assessing "broader impact" in research grant applications. Brussels: European Commission, Directorate-General for Research and Innovation.
  76. Holbrook, J. B., & Frodeman, R. (2011). Peer review and the ex ante assessment of societal impacts. Research Evaluation, 20(3), 239–246.
  77. Horton, K., Tschudin, V., & Forget, A. (2007). The value of nursing: A literature review. Nursing Ethics, 14(6), 716–740.
  78. Inglesi-Lotz, R., & Pouris, A. (2011). Scientometric impact assessment of a research policy instrument: The case of rating researchers on scientific outputs in South Africa. Scientometrics, 88(3), 747–760.
  79. Ippoliti, R., & Falavigna, G. (2014). Public health institutions, clinical research and protection system of patients' rights: An impact evaluation of public policy. Public Organization Review, 14(2), 109–125.
  80. Jammer, I., Wickboldt, N., Sander, M., Smith, A., Schultz, M. J., Pelosi, P., et al. (2015). Standards for definitions and use of outcome measures for clinical effectiveness research in perioperative medicine: European Perioperative Clinical Outcome (EPCO) definitions: A statement from the ESA-ESICM joint taskforce on perioperative outcome measures. European Journal of Anaesthesiology (EJA), 32(2), 88–105.
  81. Jette, A. M., & Keysor, J. J. (2002). Uses of evidence in disability outcomes and effectiveness research. Milbank Quarterly, 80(2), 325–345.
  82. Kovacs, S. M., Turner-Bowker, D. M., Calarco, G., Mulberg, A. E., & Paty, J. (2016). Practical considerations for the use of clinical outcome assessments (COAs) in pediatric clinical research: Examples from pediatric gastroenterology. Therapeutic Innovation & Regulatory Science, 50(1), 37–43.
  83. Kryl, D., Allen, L., Dolby, K., Sherbon, B., & Viney, I. (2012). Tracking the impact of research on policy and practice: Investigating the feasibility of using citations in clinical guidelines for research evaluation. BMJ Open, 2(2), e000897.
  84. Lähteenmäki-Smith, K., Hyytinen, K., Kutinlahti, P., & Konttinen, J. (2006). Research with an impact: Evaluation practises in public research organisations. VTT Research Notes 2336.
  85. LaKind, J. S., Goodman, M., Barr, D. B., Weisel, C. P., & Schoeters, G. (2015). Lessons learned from the application of BEES-C: Systematic assessment of study quality of epidemiologic research on BPA, neurodevelopment, and respiratory health. Environment International, 80, 41–71.
  86. Lamm, G. M. (2006). Innovation works: A case study of an integrated pan-European technology transfer model. BIF Futura, 21(2), 86–90.
  87. Lane, P. J., Koka, B. R., & Pathak, S. (2006). The reification of absorptive capacity: A critical review and rejuvenation of the construct. Academy of Management Review, 31(4), 833–863.
  88. Leduc, P. (1994). Evaluation in the social sciences: The strategic context. Research Evaluation, 4(1), 2–5.
  89. Lee, F. S. (2007). The Research Assessment Exercise, the state and the dominance of mainstream economics in British universities. Cambridge Journal of Economics, 31(2), 309–325.
  90. Liebow, E., Phelps, J., Van Houten, B., Rose, S., Orians, C., Cohen, J., et al. (2009). Toward the assessment of scientific and public health impacts of the National Institute of Environmental Health Sciences Extramural Asthma Research Program using available data. Environmental Health Perspectives, 117(7), 1147.
  91. Luukkonen, T. (1998). The difficulties in assessing the impact of EU framework programmes. Research Policy, 27(6), 599–610.
  92. Manion, F. J., Harris, M. R., Buyuktur, A. G., Clark, P. M., An, L. C., & Hanauer, D. A. (2012). Leveraging EHR data for outcomes and comparative effectiveness research in oncology. Current Oncology Reports, 14(6), 494–501.
  93. Maredia, M. K., & Byerlee, D. (2000). Efficiency of research investments in the presence of international spillovers: Wheat research in developing countries. Agricultural Economics, 22(1), 1–16.
  94. Martin, B. R. (2007). Assessing the impact of basic research on society and the economy. Paper presented at Rethinking the Impact of Basic Research on Society and the Economy (WF-EST International Conference, 11 May 2007), Vienna, Austria.
  95. Martin, B. R. (2011). The Research Excellence Framework and the 'impact agenda': Are we creating a Frankenstein monster? Research Evaluation, 20(3), 247–254.
  96. Martin, B. R., & Irvine, J. (1983). Assessing basic research: The case of the Isaac Newton telescope. Social Studies of Science, 13, 49–86.
  97. Mayring, P. (2003). Qualitative Inhaltsanalyse [Qualitative content analysis]. Qualitative Forschung, 3, 468–475.
  98. Milat, A. J., Bauman, A. E., & Redman, S. (2015). A narrative review of research impact assessment models and methods. Health Research Policy and Systems, 13(1), 18.
  99. Milat, A. J., Laws, R., King, L., Newson, R., Rychetnik, L., Rissel, C., et al. (2013). Policy and practice impacts of applied research: A case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000–2006. Health Research Policy and Systems, 11(1), 5.
  100. Moed, H. F. (2007). The effect of "open access" on citation impact: An analysis of ArXiv's condensed matter section. Journal of the American Society for Information Science and Technology, 58(13), 2047–2054.
  101. Moed, H. F., & Halevi, G. (2015). Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology, 66(10), 1988–2002.
  102. Molas-Gallart, J., & Tang, P. (2011). Tracing "productive interactions" to identify social impacts: An example for the social sciences. Research Evaluation, 20(3), 219–226.
  103. Molas-Gallart, J., Salter, A., Patel, P., Scott, A., & Duran, X. (2002). Measuring third stream activities: Final report to the Russell Group of Universities. Brighton: SPRU, University of Sussex.
  104. Moore, S. B., & Manring, S. L. (2009). Strategy development in small and medium sized enterprises for sustainability and increased value creation. Journal of Cleaner Production, 17(2), 276–282.
  105. Morgan, M. M., & Grant, J. (2013). Making the grade: Methodologies for assessing and evidencing research impacts. In A. Dean, M. Wykes, & H. Stevens (Eds.), 7, 25–43.
  106. Morton, S. (2015). Progressing research impact assessment: A 'contributions' approach. Research Evaluation, rvv016.
  107. Mostert, S. P., Ellenbroek, S. P., Meijer, I., Van Ark, G., & Klasen, E. C. (2010). Societal output and use of research performed by health research groups. Health Research Policy and Systems, 8(1), 30.
  108. Mullins, C. D., Onukwugha, E., Cooke, J. L., Hussain, A., & Baquet, C. R. (2010). The potential impact of comparative effectiveness research on the health of minority populations. Health Affairs, 29(11).
  109. Nallamothu, B. K., & Lüscher, T. F. (2012). Moving from impact to influence: Measurement and the changing role of medical journals. European Heart Journal, 33(23), 2892–2896.
  110. New Philanthropy Capital. (2010). Social return on investment: Position paper. London: New Philanthropy Capital.
  111. Newby, H. (1994). The challenge for social science: A new role in public policy-making. Research Evaluation, 4(1), 6–11.
  112. Niederkrotenthaler, T., Dorner, T. E., & Maier, M. (2011). Development of a practical tool to measure the impact of publications on the society based on focus group discussions with scientists. BMC Public Health, 11(1), 588.
  113. Nightingale, P., & Scott, A. (2007). Peer review and the relevance gap: Ten suggestions for policy-makers. Science & Public Policy, 34(8), 543–553.
  114. Nutley, S. M., Walter, I., & Davies, H. T. (2007). Using evidence: How research can inform public services. Bristol: Policy Press.
  115. O'Connor, D. P., & Brinker, M. R. (2013). Challenges in outcome measurement: Clinical research perspective. Clinical Orthopaedics and Related Research, 471(11), 3496–3503.
  116. OECD. (2008). OECD science, technology and industry outlook. Paris: OECD.
  117. OECD. (2016). OECD science, technology and industry outlook. Paris: OECD.
  118. Penfield, T., Baker, M. J., Scoble, R., & Wykes, M. C. (2014). Assessment, evaluations, and definitions of research impact: A review. Research Evaluation, 23(1), 21–32.
  119. Perrin, E. B. (2002). Some thoughts on outcomes research, quality improvement, and performance measurement. Medical Care, 40(6), 89–91.
  120. Pontille, D., & Torny, D. (2010). The controversial policies of journal ratings: Evaluating social sciences and humanities. Research Evaluation, 19(5), 347–360.
  121. Potì, B., & Cerulli, G. (2011). Evaluation of firm R&D and innovation support: New indicators and the ex-ante prediction of ex-post additionality-potential. Research Evaluation, 20(1), 19–29.
  122. Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., et al. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76.
  123. Punt, A., Schiffelers, M. J. W., Horbach, G. J., van de Sandt, J. J., Groothuis, G. M., Rietjens, I. M., et al. (2011). Evaluation of research activities and research needs to increase the impact and applicability of alternative testing strategies in risk assessment practice. Regulatory Toxicology and Pharmacology, 61(1), 105–114.
  124. Reeve, B. B., Burke, L. B., Chiang, Y. P., Clauser, S. B., Colpe, L. J., Elias, J. W., et al. (2007). Enhancing measurement in health outcomes research supported by agencies within the US Department of Health and Human Services. Quality of Life Research, 16(1), 175–186.
  125. Reichertz, J. (2010). Abduction: The logic of discovery of grounded theory. Forum Qualitative Social Research, 11, 1–12.
  126. Roessner, D. (2000). Quantitative and qualitative methods and measures in the evaluation of research. Research Evaluation, 9(2), 125–132.
  127. Rymer, L. (2011). Measuring the impact of research: The context for metric development. Turner, Australia: The Group of Eight.
  128. Sarli, C. C., Dubinsky, E. K., & Holmes, K. L. (2010). Beyond citation analysis: A model for assessment of research impact. Journal of the Medical Library Association, 98(1), 17.
  129. Schaltegger, S. (2002). A framework for ecopreneurship: Leading bioneers and environmental managers to ecopreneurship. Greener Management International, 38, 45–58.
  130. Schulz, K. F., Chalmers, I., Hayes, R. J., & Altman, D. G. (1995). Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA, 273(5), 408–412.
  131. Seuring, S., & Müller, M. (2008). From a literature review to a conceptual framework for sustainable supply chain management. Journal of Cleaner Production, 16(15), 1699–1710.
  132. Seuring, S., Müller, M., Westhaus, M., & Morana, R. (2005). Conducting a literature review: The example of sustainability in supply chains. In H. Kotzab, S. Seuring, M. Müller, & G. Reiner (Eds.), Research methodologies in supply chain management (pp. 91–106). Heidelberg: Physica-Verlag.
  133. Social Sciences and Humanities Scientific Committees. (2013). Humanities and social sciences in Horizon 2020 societal challenges: Implementation and monitoring.
  134. Sombatsompop, N., Markpin, T., Yochai, W., & Saechiew, M. (2005). An evaluation of research performance for different subject categories using Impact Factor Point Average (IFPA) index: Thailand case study. Scientometrics, 65(3), 293–305.
  135. Spaapen, J., & van Drooge, L. (2011). Introducing "productive interactions" in social impact assessment. Research Evaluation, 20(3), 211–218.
  136. Spaapen, J., Dijstelbloem, H., & Wamelink, F. (2007). Evaluating research in context: A method for comprehensive assessment (2nd ed.). The Hague: COS.
  137. Stein, T. V., Anderson, D. H., & Kelly, T. (1999). Using stakeholders' values to apply ecosystem management in an upper Midwest landscape. Environmental Management, 24(3), 399–413.
  138. Stryer, D., Tunis, S., Hubbard, H., & Clancy, C. (2000). The outcomes of outcomes and effectiveness research: Impacts and lessons from the first decade. Health Services Research, 35(5 Pt 1), 977.
  139. Taylor, J., & Bradbury-Jones, C. (2011). International principles of social impact assessment: Lessons for research? Journal of Research in Nursing, 16(2), 133–145.
  140. Technopolis. (2009). Impact Europese Kaderprogramma's in Nederland [Impact of the European Framework Programmes in the Netherlands]. Woluwe-Saint-Pierre: Technopolis Group.
  141. Thorpe, R., Holt, R., Macpherson, A., & Pittaway, L. (2005). Using knowledge within small and medium-sized firms: A systematic review of the evidence. International Journal of Management Reviews, 7(4), 257–281.
  142. Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207–222.
  143. Tremblay, G., Zohar, S., Bravo, J., Potsepp, P., & Barker, M. (2010). The Canada Foundation for Innovation's outcome measurement study: A pioneering approach to research evaluation. Research Evaluation, 19(5), 333–345.
  144. United States Government Accountability Office. (2012). Designing evaluations. Washington, DC: Author.
  145. Van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18(4), 273–288.
  146. Van der Meulen, B., & Rip, A. (2000). Evaluation of societal quality of public sector research in the Netherlands. Research Evaluation, 9(1), 11–25.
  147. Van Raan, A. F. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143.
  148. Van Vught, F., & Ziegele, F. (2011). Design and testing the feasibility of a multidimensional global university ranking: Final report. European Community: Consortium for Higher Education and Research Performance Assessment (CHERPA Network).
  149. Vanclay, F. (2003). International principles for social impact assessment. Impact Assessment and Project Appraisal, 21(1), 5–12.
  150. Westrich, K. D., Wilhelm, J. A., & Schur, C. L. (2016). Comparative effectiveness research in the U.S.A.: When will there be an impact on healthcare decision-making? Journal of Comparative Effectiveness Research, 5(2), 207–216.
  151. Willis, T. A., Hartley, S., Glidewell, L., Farrin, A. J., Lawton, R., McEachan, R. R., et al. (2016). Action to Support Practices Implement Research Evidence (ASPIRE): Protocol for a cluster-randomised evaluation of adaptable implementation packages targeting 'high impact' clinical practice recommendations in general practice. Implementation Science, 11(1), 25.
  152. Wooding, S., Hanney, S., Pollitt, A., Buxton, M., & Grant, J. (2011). Project Retrosight. Understanding the returns from cardiovascular and stroke research: Policy report. Cambridge: RAND Europe.
  153. Wu, Z. (2015). Average evaluation intensity: A quality-oriented indicator for the evaluation of research performance. Library & Information Science Research, 37(1), 51–60.
  154. Pastakia, C. M. R. (1998). The rapid impact assessment matrix (RIAM): A new tool for environmental impact assessment. In K. Jensen (Ed.), Environmental impact assessment using the rapid impact assessment matrix (RIAM). Fredensborg, Denmark: Olsen & Olsen.
  155. Yiend, J., Chambers, J. C., Burns, T., Doll, H., Fazel, S., Kaur, A., et al. (2011). Outcome measurement in forensic mental health research: An evaluation. Psychology, Crime & Law, 17(3), 277–292.
  156. Zaratin, P., Battaglia, M. A., & Abbracchio, M. P. (2014). Nonprofit foundations spur translational research. Trends in Pharmacological Sciences, 35(11), 552–555.
  157. Zaratin, P., Comi, G., Coetzee, T., Ramsey, K., Smith, K., Thompson, A., et al. (2016). Progressive MS Alliance Industry Forum: Maximizing collective impact to enable drug development. Trends in Pharmacological Sciences, 37(10), 808–810.
  158. Zelefsky, M. J., Lee, W. R., Zietman, A., Khalid, N., Crozier, C., Owen, J., et al. (2013). Evaluation of adherence to quality measures for prostate cancer radiotherapy in the United States: Results from the Quality Research in Radiation Oncology (QRRO) survey. Practical Radiation Oncology, 3(1), 2–8.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2017

Authors and Affiliations

  1. ALTIS-Alta Scuola Impresa e Società, Università Cattolica del Sacro Cuore, Milan, Italy
  2. Dipartimento di Scienza della Vita, Università di Siena, Siena, Italy
  3. Fondazione Italiana Sclerosi Multipla, Genoa, Italy