Innovation and innovator assessment in R&I ecosystems: the case of the EU Framework Programme

Abstract

The EU Framework Programme (FP) has evolved from supporting pre-competitive research to covering the entire innovation value chain, becoming the world’s largest research and innovation (R&I) ecosystem. It facilitates the creation of R&I networks among organisations from around the world. To oversee and manage the innovation activities of complex collaborative R&I projects, new data, indicators and tools were needed. We present the Innovation Radar (IR), an initiative of the European Commission to identify and manage innovations and innovators in the FP R&I ecosystem. The IR is used as an intelligence platform providing insights into innovation activities in large collaborative R&I projects. The internal IR tools allow policy officers to monitor R&I projects and provide custom support to facilitate the commercialisation of their results. External actors use the public IR data platform to search for collaboration partners or investment opportunities.

Introduction

Launched in the early 1980s, the Framework Programme (FP) aimed at bringing together expertise from across Europe to make it more competitive in key technologies (EC 1981). The first FPs focussed primarily on supporting pre-competitive research with the intention of closing the research gap between Europe and other world regions (EC 2015a). As the global technological and economic landscape evolved, so did the European Union (EU) research policy orientation. Recognising knowledge and innovation as key drivers of economic development, support for collaborative innovation activities was put on an equal footing with funding exploratory research. The EU FP has evolved into the world’s largest collaborative research and innovation (R&I) ecosystem. These changes created the need for new data and indicators that would allow monitoring and managing the interactions between the stakeholders and the ecosystem’s outcomes. In 2013, the European Commission (EC) launched the Innovation Radar (IR) initiative (EC 2014c). Its main objective is to identify innovations and innovators in EU-funded R&I projects. In 2016, the IR was scaled up across all FP-funded projects as a monitoring and innovation management tool (EC 2016a).

We describe the IR methodology, consisting of the Innovation Radar Survey (IRS) and two assessment frameworks to evaluate the potential of innovations and the capacity of innovators. In addition, we present some key findings on innovations and innovators in EU-funded projects. The objective is to illustrate how an initiative such as the IR for data collection and analysis can be used in the context of managing innovation activities in the world’s largest R&I programme.

The remainder of this article is structured as follows: Sect. 2 presents the policy context and the rationale behind the IR initiative. Section 3 reviews the key ingredients of innovation and innovator assessment as discussed in the innovation management literature. Section 4 presents the IR survey and the data collection process. Section 5 describes the IR Innovation Potential and Innovator Capacity Assessment Frameworks. Section 6 provides a descriptive analysis of the innovations and innovators in EU-funded R&I projects identified and assessed with the IR methodology. Section 7 concludes with policy implications and recommendations for the use of the IR as an innovation management tool for collaborative research projects and presents potential future research directions enabled by the IR data. Finally, Sect. 8 includes technical appendices.

Policy context and purpose

Formulating a common research strategy in the early 1980s, the EC was addressing an increasing research gap between Europe and other world regions (EC 1981). This strategy would take the form of a framework programme embracing all Community research initiatives. The 1st FP for research was launched in 1984 to bring together expertise from across Europe to make it more competitive in key technologies (EC 2015a). While at the beginning FPs focussed primarily on pre-competitive research, their scope has gradually widened (Reillon 2017). Reflecting the conclusions of the Aho report (EC 2006) to put R&I at the centre of European policy, in 2007, the EC adopted the proposal for FP7 with the objective of “building an ERA of knowledge for growth” (EC 2005). It placed the support of innovation activities next to the funding of exploratory research. The adoption of the Europe 2020 strategy for smart, sustainable and inclusive growth and the introduction of the ‘Innovation Union’ initiative further confirmed that innovation had become ‘the overarching policy objective’ of the EU and the Member States (EC 2010b, c). They considered knowledge and innovation as key drivers of economic development and proposed measures to improve framework conditions and access to finance for R&I. All this shaped the scope and objectives of the FPs, which were gradually expanded to include support to innovation activities. Changing the name of FP8 from Framework Programme for Research and Technological Development to Framework Programme for Research and Innovation reflects this transition towards a large collaborative innovation ecosystem (EC 2011).Footnote 1 In this context, there was a need for new types of data allowing the evaluation and monitoring of the results of FPs.
To address this, in 2013, the EC’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT), responsible for the Information and Communication Technology (ICT) part of the FP, launched the Innovation Radar (IR) initiative (EC 2014c). The IR has three main objectives. First, it aims at identifying and assessing innovations and innovators in EU-funded R&I projects.Footnote 2 Second, based on the collected information on the innovation stage, process and needs, it provides guidance to innovators on the most appropriate steps to reach the market. Finally, it supports innovators through EC-sponsored initiatives covering such needs as, among others, networking, access to finance or managing Intellectual Property Rights (IPR).

The reasons behind the introduction of the IR can be compared to those behind other innovation surveys, such as the Community Innovation Survey (CIS) implementing the Oslo Manual guidelines for collecting and interpreting innovation data (OECD 2005). Until the early 1990s, statistical offices collected mainly R&D-related data and indicators (Gault 2013; Godin 2009). Having only limited information on innovation inputs, e.g. R&D expenditures, considerably limits the scope of policy intervention, essentially to R&D subsidies. With the increasing complexity of innovation processes and the pressure to maintain competitiveness in the global economy, there was an urgent need for innovation output measures to learn more about the results of the innovation process (Arundel 2007; Godin 2009). Understanding the output side of the innovation process expanded the range of potential support mechanisms. For example, an analysis of potential bottlenecks to innovation commercialisation allows identifying areas that need policy support.

The IR rests on a formal methodology to identify and assess innovation potential and innovator capacity. The main element of this methodology is the Innovation Radar Survey (IRS) (see Sect. 8). It collects information on innovations developed by collaborative consortia in EU-funded R&I projects, their types, commercialisation plans and needs. The IRS also identifies the key organisations behind these innovations.

Innovation surveys, such as the IRS, suffer from an abundance of scattered information based on responses to individual questions. Simple indicators do not capture the complex reality of dynamic innovation processes and the linkages between actors, so their practical application for policy-making purposes is limited. One way of addressing this limitation is to develop complex indicators (Arundel 2007; Arundel and Hollanders 2005). Such indicators can reveal significantly more about innovation activities, models and strategies than simple indicators relying on the frequency of responses to a single question (OECD 2009). Therefore, the IR methodology also includes the Innovation Potential Assessment Framework (IPAF) and the Innovator Capacity Assessment Framework (ICAF) (see Sect. 3). Whereas the first makes use of complex indicators to capture the complexity of the innovation development and commercialisation process, the second profiles the innovators behind these innovations.

Although the main purpose of innovation surveys is to provide decision makers with information on the innovation performance of firms and/or countries, their practical relevance and use remain low (Arundel 2007; Mytelka et al. 2008). This is not changed by the intensive use of survey data in scientific studies. Policy analysts claim that the information provided by innovation surveys, and the studies using data derived from them, is not focused on their needs (Arundel 2007). The reasons for the low uptake of the information provided by innovation surveys include both flaws in questionnaire design and a lack of awareness within the policy community that such data exist and are relevant to policy makers (Mytelka et al. 2008).

The IR initiative was started by Policy Officers at DG CONNECT of the European Commission in collaboration with the DG Joint Research Centre, an in-house science and research service of the EC (EC 2018c). From the very beginning, the IR and its methodology were conceived to serve the specific purposes of policy makers. The key question was how to bring the results of EU-funded projects to the market. The launch of the IR survey and assessment frameworks created an intelligence platform providing policy-relevant insights into innovation activity in EU-funded R&I projects. These insights are expected to improve the understanding of the innovation aspects and strategies of a project and to help individual partners define the best innovation path to follow (EC 2014b).

The information collected by the IR focuses on three elements: innovations and innovators within EU-funded R&I projects, and their specific go-to-market needs. The information on the go-to-market needs of innovators is further used to improve the matchmaking between the demand for and supply of public support to R&I. In parallel to the creation of the IR intelligence platform, a comprehensive inventory of all EC support actions and initiatives targeting innovators and start-ups is compiled (EC 2014c). Innovators identified by the IR are matched with the support action(s) best placed to help them address their go-to-market needs and fulfil their market potential (EC 2016b, 2017).

In conclusion, besides collecting information on innovation output in EU-funded R&I projects, the IR initiative supports innovators by suggesting a range of targeted actions to assist them in fulfilling their potential on the market. In this way, it can be considered an innovation management tool for the EU R&I ecosystem. It is expected to help improve the management and performance of these projects and increase their innovative and economic outcomes.

Innovation and innovator assessment: literature review

In general terms, one can differentiate between two types of assessment of innovations and technology projects: one is process-based and the other culturally based (R.G. Cooper and Kleinschmidt 1997; Khurana and Rosenthal 1998). The process-based assessment uses established procedures for assessing proposals for funding. It is mainly used by, for example, banks granting loans to small, technology-based enterprises, or large research organisations, e.g. NASA, when choosing new products to develop from various technological projects. The process-based assessment tends to be a regular process, with proposals arriving and being reviewed on a regular basis. In contrast, the culturally based approach does not assess all projects against a formal methodology. Instead, assessment is based on the assessors’ experiences, both individual and collective. Business angels and venture capitalists are the most common users of the culturally based approach. The assessment is usually done on a case-by-case basis by a team of experts with different backgrounds.

Considering the above, the IR methodology for assessing innovations and innovators in EU FP projects can be seen as process-based. It uses a structured approach based on a set of predefined criteria and a scoring system. This type of activity is commonly performed by large research organisations, technology-based companies or universities screening companies or projects with respect to new product development, technological readiness and the market potential of new products (De Coster and Butler 2005; Lafuente and Berbegal-Mirabent 2019; Liao and Witsil 2008; M’Chirgui et al. 2018). The principles of the IR assessment frameworks are grounded in the ideas of innovation and new technology venture assessment. The following sections describe in detail the theoretical underpinnings of the IR assessment frameworks.

Innovation potential assessment

The IR framework for the assessment of innovations in EU-funded projects builds on the large body of empirical studies aiming at unravelling the factors determining the commercialisation success of innovations (for extended surveys see Calantone et al. 2002; Evanschitzky et al. 2012; Fernandes et al. 2013; Griffin 1997; Montoya-Weiss and Calantone 1994). Although this literature lacks uniformity in the measures of success, there seems to be some consensus on the overall classes or groups of key factors (Astebro 2004; Balachandra and Friar 1997; Galbraith et al. 2006). For example, based on a meta-analysis of 60 peer-reviewed publications, Balachandra and Friar (1997) propose four major categories of market-, technology-, environment- and organisation-related characteristics. These categories have been widely recognised and adopted by many scholars in the field of technology commercialisation of R&D projects (Astebro 2004; Linton et al. 2002). Alternatively, Heslop et al. (2001) use factor analyses to group more than fifty variables related to the technology commercialisation process into four comprehensive dimensions: market readiness, technology readiness, commercial readiness, and management readiness.

Taking stock of the above considerations, the innovation assessment approach taken by the IR is built along the following dimensions: innovation readiness, innovation management and market potential of the innovation. Below, the relevance of these concepts is reviewed based on the existing literature on the success factors of innovations.
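To illustrate how dimensions like these can be turned into a complex indicator, the sketch below aggregates survey-derived sub-indicators into a single composite score. This is not the official IR scoring formula: the sub-indicators, value ranges, min-max normalisation and dimension weights are all hypothetical choices made for illustration only.

```python
# Illustrative composite-indicator aggregation (NOT the official IR formula).
# Dimension names follow the IR (readiness, management, market potential);
# all sub-indicators, ranges and weights below are assumptions.
from typing import Dict

# Hypothetical raw sub-indicator values for one innovation (survey-derived).
raw = {
    "readiness":  {"development_stage": 6, "time_to_market": 2},      # 1-9 stage, years
    "management": {"resources_secured": 3, "partner_commitment": 4},  # 1-5 scales
    "market":     {"market_size_class": 2, "ipr_steps_taken": 1},     # ordinal codes
}

# Hypothetical value ranges used for min-max normalisation to [0, 1].
ranges = {
    "development_stage": (1, 9), "time_to_market": (0, 10),
    "resources_secured": (1, 5), "partner_commitment": (1, 5),
    "market_size_class": (0, 3), "ipr_steps_taken": (0, 2),
}

# A shorter time to market is better, so its scale is inverted.
inverted = {"time_to_market"}

# Hypothetical dimension weights (sum to 1).
weights = {"readiness": 0.4, "management": 0.3, "market": 0.3}

def normalise(name: str, value: float) -> float:
    """Min-max normalise a sub-indicator to [0, 1], inverting where needed."""
    lo, hi = ranges[name]
    x = (value - lo) / (hi - lo)
    return 1.0 - x if name in inverted else x

def composite_score(raw: Dict[str, Dict[str, float]]) -> float:
    """Weighted average of dimension scores; each dimension score is the
    mean of its normalised sub-indicators."""
    total = 0.0
    for dim, subs in raw.items():
        dim_score = sum(normalise(n, v) for n, v in subs.items()) / len(subs)
        total += weights[dim] * dim_score
    return total

score = composite_score(raw)  # a value in [0, 1]
```

The design point is the one made above: a single composite score summarises many scattered survey responses, at the cost of choices (normalisation, weights) that must be documented and justified.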

Innovation readiness

A successful launch of innovative products or services begins with the identification of technologies that are ready for commercialisation (Galbraith et al. 2006; Heslop et al. 2001). In this respect, innovation readiness is closely related to the notion of “technology readiness levels” (TRLs), which aim to provide a common understanding of the status and development stage of new technologies (Mankins 2009). Originally developed by NASA in the mid-1970s, the use of TRLs rapidly increased as they enable consistent and uniform assessments of technological maturity across different types of technologies and disciplines. Being discipline-independent, they facilitate more effective communication of the maturity of evolving innovations among diverse organisation types.
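As a point of reference, the nine-level TRL scale as worded in EU funding programmes (Horizon 2020) can be encoded as a simple lookup table. The grouping of TRLs into coarse development phases below is our own illustrative assumption, not an official IR mapping.

```python
# The nine TRLs, using the short descriptions from the EU Horizon 2020
# general annexes. The phase grouping in development_phase() is an
# illustrative assumption, not an official mapping.
TRL_DESCRIPTIONS = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in lab",
    5: "technology validated in relevant environment",
    6: "technology demonstrated in relevant environment",
    7: "system prototype demonstration in operational environment",
    8: "system complete and qualified",
    9: "actual system proven in operational environment",
}

def development_phase(trl: int) -> str:
    """Map a TRL to a coarse development phase (assumed grouping)."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if trl <= 3:
        return "conceptualisation"
    if trl <= 6:
        return "experimentation"
    return "commercialisation"

print(development_phase(5))  # → experimentation
```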

Similar to TRLs, the innovation readiness criterion used in the Innovation Potential Assessment Framework aims to capture the various technological steps that a product or service development process comprises prior to commercialisation. These steps include, among others, idea generation and product or service definition, concept screening and prototype development, concept testing and diagnostic evaluation, and final development (Cooper 2017; Dorf and Worthington 1987; Heslop et al. 2001). Hence, innovation readiness aims to define the development phase of the innovation, e.g. conceptualisation, experimentation or commercialisation. It also takes into account the steps taken to secure the necessary technological resources, e.g. skills, to bring the innovation to the market, the development stage of an innovation and the time to its commercialisation.

The process of developing new technologies and innovations in EU-funded R&I projects relies on joint efforts of participants. Technology transfer between participants is thus an essential step in the innovation commercialisation process. Technology transfer can occur through different channels and can be obtained through both formal and informal methods. Formal methods include licensing, cooperative alliances and spin-offs (Lane and Lubatkin 1998; Meoli et al. 2019; Miranda et al. 2018), while informal methods include labour mobility and networking activities through personal contacts (Grimpe and Fier 2010; Pouder and John 1996). Both methods are considered as effective means to foster the development of innovations in government funded laboratories, universities or private research organisations (Autio and Laamanen 1995; Bercovitz and Feldman 2006).

Innovation management

Innovation management is often seen as an important success indicator of a technology venture (Kirchberger and Pohl 2016; Meseri and Maital 2001). In broad terms, it is related to the various managerial capabilities in terms of risk management, absorptive capacity, knowledge management, project management and milestone setting. It involves measures such as securing human and financial resources and organising the innovation process. This is particularly important at the early stages of the innovation process where employees must be provided enough time and resources by their management to generate new ideas.

In general, an effective and efficient innovation process requires commitment and direct involvement from the top management and all the research partners (Davenport et al. 1998; Nevens 1990). Two meta-analyses exploring the results of a wide range of papers assessing the factors predicting the success of new product innovations support this view and highlight the significant importance of dedicated human resources and senior management support (Evanschitzky et al. 2012; Henard and Szymanski 2001). In a similar type of study, Kirchberger and Pohl (2016) identified key success factors related to management techniques, such as the capability to create product concepts, build integrated roadmaps, and conduct market research. In this respect, innovation management is essential in providing the necessary conditions for idea generation and business propositions as elaborated in business plans and market studies. Although effective managerial capabilities and resource allocation are critical at the early stages of an innovation process, they remain important throughout its entire duration (Astebro 2004).

Besides the provision of sufficient internal resources, interactions with external actors are also important to increase the chances of successful commercialisation of technologies (Gerard et al. 2002). First, they are needed to attract the necessary financial funds that firms may lack in-house to pursue the development process. Hence, this may imply applying for and securing capital investment from public and/or private sources in technology ventures (Christensen 2010; Civera et al. 2017). Second, external interactions may occur through the involvement of end-users and customers in the innovation process (Lüthje 2004; Riggs and Von Hippel 1994). End-user engagement through informal advice and communities of practice is not a straightforward option and requires the support and willingness of innovation management to disclose parts of the technology being developed in-house to a wider public. In recent decades, user engagement has become increasingly popular through the emergence of open source communities that actively involve customers in the innovation process (Cuartielles et al. 2018; Lerner and Tirole 2002).

Finally, innovation management may also include the decision to create a spin-off, usually founded with the purpose of commercialising technologies developed at universities, research institutes, or governmental and private organisations (Jung and Kim 2018; Rogers et al. 2001). One of the reasons for spin-off formation is to keep tighter control over the commercialisation process. Spin-offs are often created to develop a technology for which there is no initial market yet (Hsu 2005). Hence, spin-offs typically target niche markets (Autio 1994). In order to overcome the lack of resources at early stages of development, spin-offs can rely on incubators or their parent organisations for informal support (Grimaldi and Grandi 2005; Steffensen et al. 2000). Alternatively, university spin-offs can be created when support for pursuing research activities at the host organisation is limited or scientific career opportunities are scarce (Meoli et al. 2018; Meoli and Vismara 2016).

Market potential

The market potential dimension encompasses both market- and technology-related characteristics that prior studies have pointed out as important factors for effective technology commercialisation (Balachandra and Friar 1997; Heslop et al. 2001). Both types of characteristics are aggregated into one dimension, as these aspects determine the economic benefit of a commercially viable innovation on the market. At the same time, the innovation commercialisation process involves understanding existing or potential market needs and looking for innovative ideas to satisfy them (Mitchell and Singh 1996). Thus, market potential reflects the likely economic or social value that can be generated by a new product or service (de Vries 2012).

The market potential is contingent on the prospective market conditions for a product or service, which determine the chances of successful commercialisation. Product or service development should be driven by a clear market orientation, including the identification of a potential customer base (Goldenberg et al. 2001; Rothwell 1992). Having a clear market orientation implies that businesses are not merely focused on producing and selling products, but rather on identifying and satisfying the needs of target markets (Kotler 2003). Market orientation is seen as one of the most important factors in innovation commercialisation and requires responsive and proactive involvement of firms throughout the overall product or service development and commercialisation phase. A greater degree of market orientation, and the extent to which products are perceived as satisfying customer needs, is positively associated with product innovation performance in terms of sales growth or new-product success (Slater et al. 2014; Van der Panne et al. 2003).

Successful commercialisation of new technologies also depends on other market-related characteristics (Evanschitzky et al. 2012; Meseri and Maital 2001). As the ultimate goal of technology development is to launch a product or service on the market, most technology development processes include analyses of the market structure and its attractiveness. Market dynamics and conditions alter the opportunities that companies can seize to raise their competitiveness with new technology introductions. Other market-related characteristics relevant to potential innovation commercialisation include the existence of definable markets, the absence of strong competitors, the size and growth rate of target markets, market accessibility and the absence of entry barriers (see e.g. Henard and Szymanski 2001; Heslop et al. 2001).

With respect to the last issue, the commercial exploitation and market entry of a new technology can be hampered by regulatory and trade barriers or by standardisation issues (Cooper 2007; D’Este et al. 2012; Galia and Legros 2004). Standardisation is a particularly important issue in the context of EU-funded research projects, as they often support the development of advanced and complex technologies, e.g. ICT. Hence, many research projects contribute to the development of technologies that need to be compatible with existing technologies. Although technological standards are supposed to accelerate technology development, they may also have a negative impact on the innovation engagement of firms (Blind 2016). For example, SMEs are often excluded from the standardisation process due to a lack of resources, expertise and absorptive capacity (de Vries et al. 2009). Another challenge of the standardisation process that may act as a bottleneck to innovative efforts is the lack of homogeneity in the interpretation of licensing terms. A final hurdle is that standardisation requires a long-term strategy and investment, and may require the anticipation of future regulations.

Efforts to obtain intellectual property rights through patents, trademarks and copyrights are commonly integrated into the technology development and commercialisation process in order to reap the benefits of innovative activity. Hence, many scholars highlight patentability as an important success factor of innovations (Balachandra and Friar 1997). Proprietary and position protection through IPR facilitates technology commercialisation and is perceived as an important stage in the technology development process (Cooper 2007; Galbraith et al. 2006; Kirchberger and Pohl 2016).

Innovator assessment

The innovator capacity assessment taken by the IR rests on the assumption that firms’ innovative behaviour can be captured and evaluated through their innovation capacities (Cohen and Levinthal 1990; Fernandes et al. 2013; Teece 2011).Footnote 3 Using its innovation capacities, a firm adapts itself to changing market conditions by, for example, responding to market needs (Nonaka and Takeuchi 1995; Potter 1989). These responses can take the form of the introduction of new products and processes (Calantone et al. 2002; Kasper 1987). Understanding the broader environment and interacting with other actors increases an organisation’s responsiveness to changes in the surrounding environment (Guan and Ma 2003). Consequently, innovation capacities embrace a complex set of variables, including internal resources and capacities and patterns of interaction with external conditions (Forsman 2011).

Considering the above, the innovator assessment is based on two criteria: the innovator’s ability and its environment. The first aims to capture the intrinsic capacity to innovate of an organisation participating in a collaborative research project. The second looks at the conditions in which the organisation innovates. The following sections elaborate on both criteria in detail.

Innovator’s ability

The innovator’s ability criterion relates to the intrinsic innovation capacities of an organisation. These capacities are expressed through its innovations in processes, services, products and marketing (Wonglimpiyarat 2010). Innovative capacity is also understood as the ability to generate new ideas and turn them into solutions that meet potential market needs (Assink 2006; Hult et al. 2004). The speed with which an organisation responds to changing conditions and market needs, by designing and launching new products, depends on its learning capacities (Hull and Covin 2010).

The innovative and technological capacities of an organisation translate into the ability to introduce superior products or services. Such products or services can be characterised by technological sophistication or a high degree of innovativeness (Evanschitzky et al. 2012), which has a direct positive impact on their commercial success (Mahajan and Wind 1992). The capacity to introduce highly innovative products or services, in terms of newness, originality, uniqueness and radicalness, strongly influences the perceived value of an innovation and can hence be seen as a proxy for an organisation’s innovative and technology commercialisation capacities (Henard and Szymanski 2001).

Innovative capacities have direct implications for economic performance (Klette and Griliches 2000; Klette and Kortum 2004). They are considered a key factor in firm growth (Crepon et al. 1998; Cucculelli and Ermini 2012). Empirical findings suggest that innovation capacities, measured by the number of new products or processes, have a considerably stronger impact on firm performance than R&D-related measures (Geroski 1995; Hölzl 2008). In other words, firms that are able to translate research, technology and knowledge into marketable products or services achieve better economic performance than their non-innovative counterparts.

Innovator’s environment

Most EU-funded R&I projects are of a collaborative nature. For example, around 11 organisations participated in an average FP7 project (EC 2015d). Hence, the innovator’s environment criterion incorporated in the Innovator Capacity Assessment Framework aims to capture the overall conditions that an innovator faces in a project consortium. It reflects the composition and activity of partner organisations, the performance of the project in terms of innovation and the commitment of partners to exploiting the innovative outcomes of the project. In addition, it also takes into account the presence of organisations that are directly interested in applying or exploiting the innovations, e.g. end-users.

Working together with other organisations gives research partners access to knowledge and complementary assets that they do not possess in-house (Hagedoorn 1993; Powell et al. 1996). Conducting research and development with external partners allows partners to create and mobilise more resources than would be possible through their individual efforts (Das and Teng 2000). Collaborative research also reduces the risks associated with R&D-intensive projects (Cohen and Levinthal 1990). Overall, research collaborations with different types of partners serve as conduits for information and learning and are of strategic importance in the diffusion of tacit and codified knowledge (Ahuja 2000; Doz and Hamel 1997). Hence, research collaborations are seen as effective means of getting information about new technologies and practices. They serve a radar function to screen promising new technologies (Ahuja 2000; Ahuja and Lampert 2001; Powell and Brantley 1992). In this way, participating in collaborative research projects increases the innovative capacity of the partners.

Collaborative research also has some downsides. Greater heterogeneity of organisations in research collaborations can hinder the performance of research networks. The potential benefits of organisational diversity may be offset by greater communication and information exchange problems, differences in institutional culture (Hennart and Zeng 2002; Zucker 1986), incompatible reward systems (e.g. publications versus commercial products and services), managerial issues and barriers to trust (Goerzen and Beamish 2005; Pandza et al. 2011). Reconciling different objectives is another fundamental issue that can hinder inter-organisational collaboration (Harryson et al. 2008). This is particularly visible when heterogeneous organisations participate in a joint research project. For example, SMEs have a strong strategic alignment with FP projects and explicit goals related to innovation outputs, such as developing a prototype, a patentable technology, or a complementary technology that will directly enhance their competitiveness (Polt et al. 2008). They focus on projects with an applied orientation and engage only in cooperative agreements that are likely to yield tangible benefits and guarantee their immediate survival and growth (Baum et al. 2000; Miles et al. 1999). In contrast, large firms are less willing to share their economic knowledge with smaller rivals and use collaborative projects as a technology watch platform (Röller et al. 2007). In addition, universities and research organisations have different motivations to engage in research collaborations. Their main objective is to build up new knowledge and technology and to investigate new research areas. Commercialisation is not their main objective (Carayol 2003). Hence, collaborations between different types of organisations may create beneficial complementary effects, but they can at the same time be potential sources of conflict (Pandza et al. 2011).

Considering the costs and benefits of participating in collaborative research projects, the environment in which innovators operate affects both the individual and the project capacity to generate and exploit the results of collaborative efforts. It is reasonable to assume that a positive attitude towards, and commitment to, exploiting a project's results among all participating organisations has positive spillover effects on the innovator, and vice versa. This motivates the recognition and inclusion of the innovator's environment dimension in the IR framework for innovator capacity assessment.

Innovation Radar Survey and data collection process

The core of the IR is the Innovation Radar Survey (IRS), based on a formal questionnaire (see "Appendix"). The objective of the IRS is to collect a full set of information on the innovation output of EU-funded R&I projects and on the process of innovation commercialisation. In addition, the IRS identifies the key organisations involved in delivering these innovations to the market. The main features of the IRS are summarized in Table 1. The details of the survey design and the data collection process are described below.

Table 1 Key features of the Innovation Radar Survey, data collection actors and process

Survey design

Similar to the CIS, the IRS belongs to the group of true innovation surveys. In contrast to complementary surveys, which collect data on a specific aspect of innovation alongside other information, the IRS is custom-designed. The questionnaire was developed by DG CONNECT's Policy Officers and DG JRC, the in-house science service of the EC (De Prato et al. 2015). During the design phase, external experts specialising in technology commercialisation and technological entrepreneurship were consulted (McFarthing 2015; Wilson 2015).

According to the Oslo Manual's guidelines for collecting innovation data, innovation surveys can take a subject or an object approach (OECD 2005). The subject approach looks at the organisation as an entity and at its innovative activities. The object approach treats the innovation itself as the unit of analysis. The CIS, for example, follows the subject approach: it collects data on innovation activities at the enterprise level (EC 2014a). The drawback of this approach is that it treats all of a firm's innovation projects equally (Mairesse and Mohnen 2010). In the case of large firms, this leads to averaging the answers across all innovation projects, making it difficult to identify, compare and assess individual innovations of a firm. The object approach, which allows for analyses at the level of individual innovation projects, is therefore more instructive. Identifying and probing into innovations in EU-funded projects, the IRS uses the object approach. In addition, the IRS also identifies the organisations participating in the projects that are considered key to delivering these innovations to the market, i.e. the subjects of innovation.

Considering the measures of innovation, one distinguishes between direct and indirect ones (Hong et al. 2012). Indirect measures of innovation include, for example, R&D expenditures and patent-based indicators. While R&D expenditure represents the input side of innovation activities, patent-based indicators reflect developed technologies with commercial applications. The IRS uses a direct measure of innovation and takes the Oslo Manual guidelines as a reference point. It considers, among others, the introduction of new products, processes, services, organisational changes and marketing innovations as innovation outputs. 'New' is defined as substantially improved or completely new; this is further elaborated in the question on the level of innovation.

The IRS adopts an objective measure of innovation. This approach was first used by Carter and Williams (1957), on behalf of the Science and Industry Committee (UK), in a study of the sources and characteristics of 201 innovations. It relies on information from new product/process announcements, specialized journals, databases, etc. (Hong et al. 2012). In the context of the IRS, innovation experts identify innovations produced within a project based on the information provided by project consortia during the formal project reviews. This includes not only answers to structured questions, but also a detailed description of each innovation, consisting of a title field and a short text field with a maximum of 300 characters. Such information makes it possible to analyse the content of innovations. For example, a preliminary analysis of the innovations identified by the IRS in the ICT FP projects reveals that most of them are related to data processing or software (De Prato et al. 2015); only a few are related to hardware. Recognising that various types of technology-based products and services require different business models and have different development and commercialisation trajectories (Bonaccorsi 2008; Forsman 2011), distinguishing between innovations from different technological domains allows a better understanding of the heterogeneity of innovations. This, in turn, enables analysis of the development and commercialisation paths of distinct innovations.

Because the IR was conceived as an innovation management tool for EU-funded projects, the IRS devotes considerable attention to innovation commercialisation plans. It probes into the exploitation plans, the time to market and the steps that the project consortium has taken or plans to take in order to bring innovations to the market. Such steps include, among others, technology transfer, business plan development, or securing investment from third parties. Questions about market size, maturity, dynamics and competition provide additional insights into the chances of successful commercialisation of an innovation.

Information at the innovation level is further extended by a set of general questions at the project level. The objective of these questions is to capture the dynamics and commitment at the level of the consortium. They also address obstacles to innovation exploitation and the presence of end-users in the consortium.

The IRS identifies the project partners that are considered key organisations in delivering a project's innovations to the market. For each innovation, up to three organisations can be identified. This ceiling on the number of organisations behind an innovation reflects the varying motivations of organisations to participate in a collaborative research project and to commercialise its results. This is in line with findings showing that, for example, small companies in research consortia have very explicit goals (Polt et al. 2008), while large firms adopt a strategy focused on technology watch and active acquisition of new knowledge from partners, rather than joint development and commercialisation of a novel technology (Hernan et al. 2003; Röller et al. 2007).

Although the IRS does not focus on the subjects of innovations, it benefits from the fact that it is deployed directly by the bodies of the European Commission, which hold comprehensive data on funded organisations. Information on the key organisations behind innovations in FP projects can be directly retrieved from the Community Research and Development Information Service (CORDIS) (EC 2018a). CORDIS is the EC's primary public repository and portal for disseminating information on all EU-funded research projects and their results. It includes, among others, project fact-sheets, publishable reports and deliverables. Using the name of an organisation, IR data can be matched with external data sources, e.g. patent information, company financials or publication records. This allows the creation of comprehensive datasets for richer analyses, which can be used to address questions such as the drivers and barriers of innovation in EU-funded research projects or their impacts.
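As a minimal sketch of such name-based dataset matching, the following joins IR innovation records with an external source on the organisation name. The records, field names and the normalisation rule are hypothetical illustrations, not the actual IR or CORDIS schema; real-world matching would additionally require handling legal-form variants and near-duplicate names.

```python
# Illustrative name-based matching of IR innovation records with an external
# data source (here a hypothetical patent-count lookup). Records, field names
# and the normalisation rule are assumptions for this sketch.

def normalise(name: str) -> str:
    """Lowercase and strip punctuation so that name variants compare equal."""
    return name.lower().replace(",", "").replace(".", "").strip()

ir_records = [{"org": "Acme Robotics B.V.", "innovation": "Grasping module"}]
patents = {"acme robotics bv": {"patent_count": 12}}

# Left join: keep every IR record, attach external data when the name matches.
merged = [{**rec, **patents.get(normalise(rec["org"]), {})} for rec in ir_records]
print(merged[0]["patent_count"])  # → 12
```

The same pattern extends to company financials or publication records by swapping in a different lookup table keyed on the normalised organisation name.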

Data collection actors and process

The IRS is managed by the EC's bodies responsible for funding R&I. It is deployed at the level of EU-funded collaborative R&I projects. During its life cycle, an FP project goes through a first, an interim and a final review. The reviews are performed by an external expert panel, which, since the launch of the IR initiative, must also include an Innovation Expert (IE). The IRS accompanies these reviews. At each review, based on information provided by the project consortium, innovation experts can identify up to three innovations per project and up to three key organisations behind these innovations. This creates a longitudinal perspective on projects' innovative output.

The introduction of the IRS to the formal reviews of EU-funded research projects took place in parallel with a modification of the format of the project reviews (EC 2014c). The EC's Project Officers (POs) overseeing the projects were made responsible for ensuring that the review panel has the innovation and market expertise necessary to realistically help a consortium bring its results closer to a market (EC 2014d). If such expertise is not present in the review panel, the PO needs to add an IE. Such an expert can be added to any 'pre-existing' project review panel, or may replace a member of such a panel. To facilitate this process, a list of experts with relevant innovation/market expertise was compiled, although there is no obligation to use experts from this list. The PO has discretion to decide whether the IE in a review meeting is engaged solely for the innovation aspects and the completion of the IRS, or as a full reviewer.

The Innovation Expert should have a clear affinity for identifying market opportunities and overcoming commercialisation hurdles (EC 2014b). Her/his key task is to collect relevant information on potential innovations and innovators by analysing project materials and engaging in discussions with project partners at the review meeting. By doing so, the IE assesses how ready the consortium/innovator is to enter the market and how they intend to anticipate changing market conditions. While the IE is assigned to help secure a rich, validated set of relevant and structured information on project innovations and innovators, s/he should also use this as an opportunity to stimulate project innovators to think more critically, anticipate, and make informed decisions concerning the market exploitation of project results. Overall, the interaction between the IE and the consortium is meant to raise the consortium's awareness of the issues at hand and to help it develop a more compelling approach to exploitation.

Innovation and innovator assessment frameworks

Taking stock of the literature review on innovation and innovator assessment in Sect. 3, the IR methodology includes a set of complex indicators that feed into two assessment frameworks. The first is the Innovation Potential Assessment Framework (IPAF), which aims at indicating the potential of innovations developed within EU-funded research projects. The second IR assessment framework is the Innovator Capacity Assessment Framework (ICAF). Its objective is to assess the innovative capacity of the organisations identified in the EU-funded projects as key players in delivering innovations to the market. These multi-factor scoring systems build on prior attempts in the literature to develop scorecards and ranking systems for technology development projects (Cooper 2007). This type of scoring system has been widely discussed in the strategic management literature and has mainly been applied by firms for the evaluation and selection of innovation projects at an early stage (Mitchell et al. 2014), or by external funders as a credit rating mechanism (Sohn et al. 2005).

In essence, these scoring mechanisms aim to evaluate the innovation potential of projects by assessing the presence of a range of criteria that are perceived as determinants of innovation potential. Typically, the criteria that are assessed are grounded in a theoretical framework. Scores and weights are then assigned to each of them to reflect their level of importance and are aggregated to obtain an evaluation score for each project (De Coster and Butler 2005). Using a scoring mechanism improves the reproducibility and objectivity of the evaluation exercise and allows the assessment to be calculated at different stages of the innovation process. Moreover, the method enables comparative analysis of the results.
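Such a weighted multi-criteria aggregation can be sketched as follows. The criteria names, scores and weights here are hypothetical examples for illustration only, not the IR's actual criteria or weights.

```python
# Illustrative multi-factor scoring mechanism: each criterion is scored 0 or 1
# (absent/present), weighted by its perceived importance, and the weighted
# scores are aggregated into a 0-10 evaluation score. Criteria and weights
# below are hypothetical, not the IR's own.

def evaluation_score(scores: dict, weights: dict) -> float:
    """Weighted sum of criterion scores, normalised to a 0-10 scale."""
    total_weight = sum(weights.values())
    weighted = sum(weights[c] * scores[c] for c in weights)
    return 10 * weighted / total_weight

scores = {"prototype_ready": 1, "market_study_done": 0, "ipr_clear": 1}
weights = {"prototype_ready": 3, "market_study_done": 2, "ipr_clear": 1}

print(round(evaluation_score(scores, weights), 2))  # → 6.67
```

Normalising by the total weight keeps scores comparable even if the set of assessed criteria changes between survey waves.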

Both the Innovation Potential Assessment Framework and the Innovator Capacity Assessment Framework provide the scientific underpinning for the various criteria that are important to measure the innovation potential and innovator capacities in EU-funded research projects, while the Innovation Radar Survey provides observable attributes for each of these criteria. Using a multi-factor scoring method, two composite indicators on innovation potential and innovator capacities are built based on the relevant questions from the IRS. A detailed overview of the scoring measures for each of the composite indicators is presented in “Innovator Capacity Assessment Framework” section in Appendix.

Innovation Potential Assessment Framework

The main component of the Innovation Potential Assessment Framework is the Innovation Potential Indicator (IPI). The IPI is built along the following dimensions: innovation readiness, innovation management and market potential. For each dimension, a composite indicator has been created, denoted as the Innovation Readiness Indicator (IRI), the Innovation Management Indicator (IMI) and the Market Potential Indicator (MPI). The multi-factor scoring system allows innovations to score up to 10 in each of these dimensions. The Innovation Potential Indicator (IPI) is then constructed as the arithmetic average of these indicators.

The IRI aims to measure how close innovations are to commercialisation in terms of their technology development. Hence, this indicator includes questions related to the different steps typically associated with the technology development of an innovation, such as feasibility studies, compliance with existing standards, prototyping, piloting, demonstration and testing (Mankins 2009). Innovations with more advanced technology development and a shorter time-to-market horizon yield a higher score on this dimension. Innovations that are currently being exploited are assigned a higher weight, as this indicates active use of the innovation in the firm's business model through commercialisation or internal use.

The IMI includes ten criteria from the questionnaire to measure the achievement of activities where managerial capabilities are paramount, in terms of project management, knowledge management and milestone setting. In this respect, the scoring system rewards innovations for which market studies and business plans were conducted, funding was raised from public or private sources, and the necessary steps were taken for external exploitation by means of licensing or the creation of a start-up or spin-off. Moreover, this dimension takes into account three other criteria that are particularly important in the context of FP projects, which are organised as collaborative research networks. First, the joint engagement of companies' business units with partner research teams to develop the innovation is rewarded by the scoring system. Second, collaborations reinforce the need for effective IP management and properly defined appropriation strategies to mitigate litigation risks within the research consortia and to maximise the chances of successful development and market introduction of innovations. Hence, research consortia that do not suffer from IPR issues that could compromise the ability of organisations to exploit the innovation yield a higher score. Finally, innovations with a clear owner are rewarded, as sole ownership gives the greatest amount of control and facilitates the process of bringing innovations to the market.

In a similar vein to the scoring system of Cooper (2007), the MPI aims to quantify criteria that the scientific literature has recognised as important factors for the market potential of innovations (Balachandra and Friar 1997). From a technological perspective, new product, process or service innovations are assigned a higher score than improvements to existing ones or innovations related to marketing or organisational methods. Moreover, the scoring system differentiates across levels of innovation, with the highest scores attributed to the most innovative inventions satisfying a well-known market need. Innovations intended for commercialisation are weighted more heavily than those that will be exploited internally or for which no exploitation is foreseen. With respect to market conditions, the scoring system evaluates three criteria. The first relates to market maturity and attributes the highest score to emerging markets, as this may indicate that the innovation responds to a customer need. The second explores the level of competition and assigns the highest score to markets with weak competition. Thirdly, innovations serving several markets are rewarded, as this increases their market potential. Appropriability conditions are assessed through the presence of trademarks and patents, where stronger patent protection yields a higher score. Finally, the scoring system penalises innovations from research consortia that encounter external bottlenecks, related to regulations or trade issues among others, which may compromise the ability of research partners to bring them to the market.

Although a large stream of literature has identified innovation readiness, innovation management and market potential as salient factors in the technology development process, no convergence has been found concerning their relative importance in the process (Astebro 2004; Linton et al. 2002). Due to this lack of convergence, we follow a conservative approach and opt to weight the sub-indicators of the Innovation Potential Indicator equally. With this approach, we follow the perspective of scholars claiming that successful technology development is a matter of competence in all factors, and of balance and coordination between them, rather than of doing one or two things brilliantly well (Conceição et al. 2012; Rothwell 1992).

Hence, equal weighting is applied to construct the Innovation Potential Indicator as follows:

$$IPI = \frac{1}{3}IRI + \frac{1}{3}IMI + \frac{1}{3}MPI$$
(1)

Observed values of the IPI indicator are brought to a scale between 0 and 100.
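The construction above can be sketched in a few lines. The sub-indicator values below are hypothetical, and the linear min-max rescaling is an assumption: the text only states that the final indicator lies on a 0-100 scale.

```python
# Eq. (1): equal-weight aggregation of the three sub-indicators (each scored
# 0-10) into the IPI, followed by a linear rescale to 0-100. The min-max
# rescaling is an assumption; the paper only states the final 0-100 range.

def ipi(iri: float, imi: float, mpi: float) -> float:
    """Innovation Potential Indicator: arithmetic average of IRI, IMI, MPI."""
    return (iri + imi + mpi) / 3.0

def rescale_0_100(value: float, lo: float = 0.0, hi: float = 10.0) -> float:
    """Linear min-max rescale from [lo, hi] to [0, 100]."""
    return 100.0 * (value - lo) / (hi - lo)

print(rescale_0_100(ipi(6.0, 4.0, 8.0)))  # → 60.0
```

The Innovator Capacity Indicator of Eq. (2) follows the same pattern, averaging the IAI and IEI (each on a 0-5 scale, so `hi=5.0`) before rescaling to 0-100.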

Innovator Capacity Assessment Framework

The main component of the Innovator Capacity Assessment Framework is the Innovator Capacity Indicator (ICI). The ICI aims to quantify the innovation capability of key innovators in EU-funded research projects. It is built along two dimensions: innovator's ability and innovator's environment. The Innovator's Ability Indicator (IAI) and the Innovator's Environment Indicator (IEI) can each reach a maximum of 5 points and are subsequently aggregated, by means of an arithmetic average, into the Innovator Capacity Indicator.

As discussed in Sect. 3, the IAI evaluates the intrinsic innovation capacities of an organisation. These are captured by means of the prior and current success rates of innovators' participation in EU-funded research projects screened by the IR. In particular, the scoring system rewards innovators for the number of innovations in which they have been identified as key innovators. Moreover, the maximum score of the Innovation Potential Indicator for each innovator is included in this indicator, as its value highlights, to a certain extent, the innovator's capacity to come up with a marketable product or service. Innovators are also rewarded if they have been considered the most impressive partner of the research consortium by the reviewer of the survey. In addition, the scoring system takes into account the market prospects of the innovations to which innovators contributed: innovators yield a higher score if they are associated with innovations that will mainly reach a new customer base. Finally, the scoring system penalises innovators reporting more needs when developing innovations, as higher levels of needs may point to an innovator's difficulty in successfully managing the innovation process.

The Innovator's Environment Indicator aims to capture the overall conditions that an innovator faces in a project consortium. A first set of criteria in this indicator is based on reviewers' opinions about project performance and the level of commitment of research partners. Innovators participating in a project consortium whose performance greatly exceeds expectations are rewarded in the scoring system, as this hints at a successful and effective innovator's environment. A similar argument holds for projects with a very high perceived level of commitment of the research partners to exploit the innovation. Thirdly, innovators participating in an innovation in which the reviewer would personally invest their own money yield a higher score. A second set of criteria relates to the nature of the (external) partner engagement surrounding innovators in the innovation process. The first criterion captures the presence of end-user engagement during innovation development. The involvement of end-users or customers is widely recognised as important for value co-creation (Lüthje 2004; Riggs and Von Hippel 1994). Value co-creation can enable organisations to obtain new ideas from customers and build long-term relationships, and accordingly to improve customer experience and achieve competitive advantages (Nambisan and Baron 2009). Finally, innovators are rewarded in the scoring system for the presence of a woman in a position of leadership in the research consortium. Gender mainstreaming is a key priority in establishing the European Research Area (ERA) and will be continued and strengthened in the upcoming 9th Framework Programme. In line with the recommendations of LERU (2017), the gender dimension should be systematically included in the development of work programmes, topic selections, calls and evaluation processes.

Equal weighting is applied to construct the Innovator Capacity Indicator as follows:

$$ICI = \frac{1}{2}IAI + \frac{1}{2}IEI$$
(2)

Observed values of the ICI indicator are brought to a scale between 0 and 100.

Analysis of innovations and innovators

Table 2 provides an overview of the sample of innovation projects and organisation types screened by the IR between March 2014 and January 2018. During its pilot phase, the IRS was administered to 1115 FP projects, and 2915 innovations were identified. This means that, on average, every project produced between 2 and 3 innovations. Projects are reviewed three times during their duration (see note 4). The number of unique key innovators active in these projects amounted to 2037. We distinguish six types of organisations: universities, research centres, small- and medium-sized enterprises (SMEs), large firms, governmental institutions and others. SMEs and large firms constitute the largest share of key innovators, with respective shares of 39% and 24%. Universities account for almost one fifth of the key innovators, while research centres have a share of around 13%. Governmental institutions and other types of key innovators each account for 5%.

Table 2 Overview of innovation projects and organisation types

Table 3 provides descriptive statistics for the Innovation Potential Indicator and the Innovator Capacity Indicator. The average score of the IPI is 48, varying from 12.5 up to 92. Looking at the averages of the sub-indicators, the average score for Market Potential (around 66) outperforms those for Innovation Readiness and Innovation Management (close to 40). Hence, market potential is the strongest dimension, while the most room for improvement lies in innovation management and innovation readiness. The Innovator Capacity Indicator ranges from 11 to 97, with an average of 57. With an average score of 69, the Innovator's Environment Indicator outperforms the Innovator's Ability Indicator, suggesting that the research consortium and the environment in which innovators operate are important innovation enablers.

Table 3 Descriptive statistics of the Innovation Potential Indicator and the Innovator Capacity Indicator

Turning to the nature of the innovations, Table 4 presents various descriptive statistics. Roughly 45% of the innovations concern new products, processes or services, while 47% consist of significant improvements to existing ones. Around 7% of the innovations relate to new or improved marketing and organisational methods. A large majority of innovations are destined to be exploited, either commercially on the market (63%) or internally within a partner organisation (26%). Surprisingly, around 11% of the innovations will not be exploited. Concerning the development stage, most innovations are still under development (60%). Around one quarter of the innovations are developed but not yet exploited (27%), while 12% are already being exploited. For half of the innovations, the time to commercialisation or internal exploitation lies between 1 and 3 years. More importantly, almost one fifth of the innovations are expected to be introduced to the market or deployed within a partner organisation in less than 1 year. The remaining innovations have a time to market of more than 3 years.

Table 4 Descriptive statistics of the innovations identified by the Innovation Radar survey

To better understand the needs of research consortia and to allow for hands-on policy support during ongoing projects, the IR surveys partners' needs in developing innovations. Figure 1 provides an overview of the partner needs identified by the IRS. The most commonly cited needs are partnerships with other companies, business plan development and expansion to more markets. The least frequently named needs include investor readiness training, incubation, and participation in a start-up accelerator.

Fig. 1
figure1

Partner needs identified by the Innovation Radar Survey. Note: Data source based on the Innovation Radar and CORDIS

Similarly, barriers to innovation are surveyed across the different projects (see Fig. 2). Not surprisingly, for a majority of project partners (49%), lack of financing is seen as the major external bottleneck to innovation development. Compliance with standards and regulations is also perceived as relatively important: 25% of projects highlight each of these issues as external factors that could threaten the ability of project partners to commercially develop and exploit innovations. Furthermore, one fifth of the projects report IPR issues and a lack of workforce skills. Among the least harmful bottlenecks are trade issues between Member States and the rest of the world (5%).

Fig. 2
figure2

Barriers to innovations as identified by the Innovation Radar Survey. Note: Data source based on the Innovation Radar and CORDIS

Conclusions

We describe how policy makers reacted to the need for new data and indicators necessary to design and manage large collaborative R&I projects and increase their innovative outputs. The intelligence provided by the IR is currently supporting policy makers in three ways: (1) improving innovation management processes of collaborative R&I projects, (2) providing insights concerning the design of the FP R&I ecosystem and (3) strengthening the linkages between the FP R&I ecosystem and the external innovation and entrepreneurship ecosystems.

Regarding the improvement of innovation management processes of collaborative R&I projects, the main objective of the IR is to provide actionable intelligence on the innovation activities and output of EU-funded R&I projects. To meet this challenge, the design and implementation of the IR methodology needed to have direct policy applications. The pilot exercise proved successful and the approach was adopted by the entire family of EU-funded R&I instruments (EC 2016a). Since its launch, the intelligence provided by the IR has been used extensively in a number of ways. For example, an internal intelligence tool was built to identify and monitor innovations and innovators in ongoing projects (EC 2016c). It allows Project Officers to visualise and process the collected information and to take action to support the projects they supervise. This type of information goes far beyond the feedback gathered at review meetings. The IR intelligence helps Project Officers to identify, and intermediate between, relevant support initiatives (EC 2015c). Innovators are offered tailor-made support addressing their go-to-market needs (EC 2016b, 2017).

Reinforcing the linkages between the EU-funded R&I programme and the external innovation and entrepreneurship ecosystem is done in a number of ways. First, the IR provides information on exceptional organisations participating in the projects. Since 2015, the EC has awarded the Innovation Radar Prize to give visibility and credibility to innovators, signalling their potential to external stakeholders, e.g. investors or partner organisations, who can help them get their innovations to the market (EC 2015b). The visibility of the innovative output of EU-funded R&I projects and innovators has been further increased by the launch of the public Innovation Radar data platform in 2018. It offers policy makers, participants in EU-funded R&I projects, and external stakeholders a data-driven, real-time intelligence platform. Innovators can access information on their individual performance and on the environment in which they innovate. Policy makers receive guidance on how to leverage the innovative output. Investors can identify organisations with high market and growth potential.

The impact of the IR on the linkages between the FP R&I ecosystem and the external innovation and entrepreneurship ecosystem is likely to increase following the declaration of cooperation on fostering a dynamic ecosystem around EU R&I funding, signed on Digital Day 2018. Over twenty EU Member States committed themselves to further supporting the IR initiative to leverage the outcomes of public support to R&I in Europe (EC 2018b). Each signatory designates a lead official to work with the European Commission to explore how the IR can be further developed and enriched, thus fostering greater uptake of EU-funded innovations. Hence, the IR represents a reference model for the management and commercialisation of innovations produced in the context of publicly supported R&I activities. In this way, it offers guidance on how to increase the returns on public investment in R&I activities.

Most of the breakthroughs in our understanding of innovation issues over the last two decades have emerged from investigations of new data sources (Hong et al. 2012). For example, the Community Innovation Survey contributed to unlocking the black box of the drivers of and barriers to business innovation, as well as innovation patterns and models. Today, business innovation is regarded as a multi-dimensional issue, allowing different drivers of innovation to dominate in different contexts. The data collected by the IR open up new avenues for analysing the drivers of and barriers to innovation in large collaborative research projects, which show different patterns of innovation activity than individual organizations. First analyses using this type of information cast some light, for example, on the results of collaboration between firms and universities (Pesole and Nepelski 2016). The findings suggest a clear division of roles: while universities are often a source of new innovative products, their introduction to the market takes place through private organisations. Other studies address the impact of the organisational and geographical diversity of collaborative projects on their innovative performance (Nepelski and Piroli 2018; Nepelski et al. 2018). We are convinced that the IR data will further enrich the scope of impact assessments looking at the efficiency of publicly supported R&I ecosystems. In the past, evaluation studies of EU-funded projects relied mainly on the effect of firms' participation on their economic (Barajas et al. 2012) or scientific and technological performance (EC-CONNECT 2014). Information on the output side of these activities creates potential for addressing new questions and for detailed analysis of the support to the creation and functioning of R&I ecosystems.

Notes

  1. Previous FPs were called "Framework Programme for Research and Technological Development". In parallel, in 2010, the name of the main Directorate General (DG) responsible for the planning and execution of the FPs was changed from DG Science and Research to DG Research, Science and Innovation (EC 2010a).

  2. Coordination and Support Actions (CSA) and Networks of Excellence (NoE) are not addressed by the Innovation Radar.

  3. For a comprehensive review of the literature on the role of innovative capacities in firm performance, see Fernandes et al. (2013).

  4. As the pilot edition of the Innovation Radar started at the end of FP7, many projects screened by the IR were already at the end of their life-cycle. Hence, the percentage of final reviews is slightly higher.

References

  1. Ahuja, G. (2000). The duality of collaboration: Inducements and opportunities in the formation of interfirm linkages. Strategic Management Journal,21(3), 317–343. https://doi.org/10.1002/(sici)1097-0266(200003)21:3%3c317:aid-smj90%3e3.0.co;2-b.

  2. Ahuja, G., & Lampert, M. C. (2001). Entrepreneurship in the large corporation: A longitudinal study of how established firms create breakthrough inventions. Strategic Management Journal,22(6–7), 521–543. https://doi.org/10.1002/smj.176.

  3. Arundel, A. (2007). Innovation survey indicators: What impact on innovation policy? Science, technology and innovation indicators in a changing world. Paris: OECD.

  4. Arundel, A., & Hollanders, H. (2005). EXIS: An exploratory approach to innovation scoreboards. Brussels: European Commission, DG Enterprise.

  5. Assink, M. (2006). Inhibitors of disruptive innovation capability: A conceptual model. European Journal of Innovation Management,9(2), 215–233. https://doi.org/10.1108/14601060610663587.

  6. Astebro, T. (2004). Key success factors for technological entrepreneurs R&D projects. IEEE Transactions on Engineering Management,51(3), 381–399.

  7. Autio, E. (1994). New, technology-based firms as agents of R&D and innovation: an empirical study. Technovation,14(4), 259–273.

  8. Autio, E., & Laamanen, T. (1995). Measurement and evaluation of technology transfer: Review of technology transfer mechanisms and indicators. International Journal of Technology Management,10(7–8), 643–664.

  9. Balachandra, R., & Friar, J. H. (1997). Factors for success in R&D projects and new product innovation: A contextual framework. IEEE Transactions on Engineering Management,44(3), 276–287.

  10. Barajas, A., Huergo, E., & Moreno, L. (2012). Measuring the economic impact of research joint ventures supported by the EU Framework Programme. The Journal of Technology Transfer,37(6), 917–942.

  11. Baum, J. A. C., Calabrese, T., & Silverman, B. S. (2000). Don’t go it alone: Alliance network composition and startups’ performance in Canadian biotechnology. Strategic Management Journal,21(3), 267–294. https://doi.org/10.1002/(sici)1097-0266(200003)21:3%3c267:aid-smj89%3e3.0.co;2-8.

  12. Bercovitz, J., & Feldman, M. (2006). Entpreprenerial universities and technology transfer: A conceptual framework for understanding knowledge-based economic development. The Journal of Technology Transfer,31(1), 175–188.

  13. Blind, K. (2016). The impact of standardisation and standards on innovation. In J. Edler, P. Cunningham, & A. Gök (Eds.), Handbook of innovation policy impact (p. 423). Cheltenham: Edward Elgar Publishing.

  14. Bonaccorsi, A. (2008). Search regimes and the industrial dynamics of science. Minerva,46(3), 285–315. https://doi.org/10.1007/s11024-008-9101-3.

  15. Calantone, R., Cavusgil, S., & Zhao, Y. (2002). Learning orientation, firm innovation capability, and firm performance. Industrial Marketing Management,31(6), 515–524. https://doi.org/10.1016/S0019-8501(01)00203-6.

  16. Carayol, N. (2003). Objectives, agreements and matching in science–industry collaborations: Reassembling the pieces of the puzzle. Research Policy,32(6), 887–908. https://doi.org/10.1016/S0048-7333(02)00108-7.

  17. Carter, C., & Williams, B. (1957). Industry and technical progress: Factors governing the speed of application of science. London: Oxford University Press.

  18. Christensen, J. L. (2010). The role of finance in national systems of innovation. In B. Lundvall (Ed.), National systems of innovation: Toward a theory of innovation and interactive learning (pp. 151–172). New-York: Anthem Press.

  19. Civera, A., Meoli, M., & Vismara, S. (2017). Policies for the provision of finance to science-based entrepreneurship. Annals of Science and Technology Policy,1(4), 317–469. https://doi.org/10.1561/110.00000004.

  20. Cohen, W., & Levinthal, D. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly,35(1), 128–152. https://doi.org/10.2307/2393553.

  21. Conceição, O., Fontes, M., & Calapez, T. (2012). The commercialisation decisions of research-based spin-off: Targeting the market for technologies. Technovation,32(1), 43–56.

  22. Cooper, R. G. (2007). Managing technology development projects. Research-Technology Management,49(6), 23–31.

  23. Cooper, R. G. (2017). Winning at new products: Creating value through innovations (5th ed.). Cambridge, MA: Perseus Publishing.

  24. Cooper, R. G., & Kleinschmidt, E. J. (1997). Winning businesses in product development: The critical success factors. The Journal of Product Innovation Management,14(2), 132.

  25. Crepon, B., Duguet, E., & Mairessec, J. (1998). Research, innovation and productivity: An econometric analysis at the firm level. Economics of Innovation and New Technology,7(2), 115–158.

  26. Cuartielles, D., Nepelski, D., & Van Roy, V. (2018). Arduino—A global network for digital innovation. Open Innovation 2.0 Yearbook, edition 2018.

  27. Cucculelli, M., & Ermini, B. (2012). New product introduction and product tenure: What effects on firm growth? Research Policy,41(5), 808–821. https://doi.org/10.1016/j.respol.2012.02.001.

  28. D’Este, P., Iammarino, S., Savona, M., & von Tunzelmann, N. (2012). What hampers innovation? Revealed barriers versus deterring barriers. Research Policy,41(2), 482–488.

  29. Das, T. K., & Teng, B.-S. (2000). A resource-based theory of strategic alliances. Journal of Management,26(1), 31–61.

  30. Davenport, S., Davies, J., & Grimes, C. (1998). Collaborative research programmes: Building trust from difference. Technovation,19(1), 31–40.

  31. De Coster, R., & Butler, C. (2005). Assessment of proposals for new technology ventures in the UK: Characteristics of university spin-off companies. Technovation,25(5), 535–543. https://doi.org/10.1016/j.technovation.2003.10.002.

  32. De Prato, G., Nepelski, D., & Piroli, G. (2015). Innovation radar: Identifying innovations and innovators with high potential in ICT FP7, CIP & H2020 projects. Seville: JRC.

  33. de Vries, B. (2012). Assessment of market potential for innovations with new technology in an existing market. MSc, University of Twente, Twente. Retrieved from http://essay.utwente.nl/62374/

  34. de Vries, H., Blind, K., Mangelsdorf, A., Verheul, H., & van der Zwan, J. (2009). SME access to European standardization: Enabling small and medium-sized enterprises to achieve greater benefit from standards and from involvement in standardization. Research report. Rotterdam School of Management, Erasmus University.

  35. Dorf, R. C., & Worthington, K. K. (1987). Models for commercialization of technology from universities and research laboratories. The Journal of Technology Transfer,12(1), 1–8.

  36. Doz, Y. L., & Hamel, G. (1997). The Use of Alliances in implementing technology strategies. In M. L. Tushman & P. Anderson (Eds.), Managing strategic innovation and change: A collection of readings (pp. 1–41). New York: Oxford University Press.

  37. EC-CONNECT. (2014). Analysis of publications and patents of ICT research in FP7. Brussels: European Commission DG Communications Networks, Content & Technology.

  38. EC. (1981). Scientific and technical research and the European Community: Proposals for the 1980s. Brussels.

  39. EC. (2005). Building the ERA of knowledge for growth. (COM(2005) 118). Brussels.

  40. EC. (2006). Creating an innovative Europe. Report of the Independent Expert Group on R&D and Innovation appointed following the Hampton Court Summit and chaired by Mr. Esko Aho. Brussels: European Commission.

  41. EC. (2010a). European Commission: The Commissioners (2010–2014). Retrieved March 20, 2018, from http://ec.europa.eu/research/participants/docs/h2020-funding-guide/index_en.htm.

  42. EC. (2010b). Europe 2020 A strategy for smart, sustainable and inclusive growth. (COM(2010) 2020). Brussels.

  43. EC. (2010c). Europe 2020 Flagship Initiative Innovation Union. (COM(2010) 546). Brussels.

  44. EC. (2011). Proposal for a regulation of the European Parliament and of the Council establishing Horizon 2020, the Framework Programme for Research and Innovation (2014–2020). (COM(2011) 809). Brussels.

  45. EC. (2014a). The Community Innovation Survey 2014. Luxembourg: European Commission.

  46. EC. (2014b). Guidelines for innovation experts supporting the Innovation Radar. Brussels: European Commission, DG CNECT.

  47. EC. (2014c). Operational guidelines for implementing innovation questions in FP7/CIP project reviews. Brussels: European Commission, DG CNECT.

  48. EC. (2014d). PO Guide regarding operational aspects of innovation questions in FP7/CIP project reviews. Brussels: European Commission, DG CNECT.

  49. EC. (2015a). EU Research Framework Programmes: 1984–2015. Brussels.

  50. EC. (2015b). Innovation radar prize. Retrieved March 20, 2018 from http://ec.europa.eu/growth/industry/innovation/facts-figures/scoreboards_es.

  51. EC. (2015c). Innovation radar tool project charter. Brussels: European Commission.

  52. EC. (2015d). Seventh FP7 monitoring report. Brussels: European Commission.

  53. EC. (2016a). Europe’s next leaders: The start-up and scale-up initiative. (COM(2016) 733). Retrieved from http://ec.europa.eu/information_society/digital-agenda/documents/digital-agenda-communication-en.pdf.

  54. EC. (2016b). ICT-32-2017 call “Startup Europe for Growth and Innovation Radar”. Brussels.

  55. EC. (2016c). Innovation radar—Management and Intelligence tool. Retrieved March 20, 2018 from https://ec.europa.eu/digital-single-market/en/innovation-radar.

  56. EC. (2017). ICT-33-2019 call “Startup Europe for Growth and Innovation Radar”. Brussels.

  57. EC. (2018a). Community research and development information service. Brussels: European Commission.

  58. EC. (2018b). Declaration of cooperation on fostering a dynamic ecosystem around EU R&I funding building on initiatives such as the Innovation Radar. Brussels.

  59. EC. (2018c). Innovation radar. Retrieved March 20, 2018 from https://ec.europa.eu/digital-single-market/en/innovation-radar.

  60. Evanschitzky, H., Eisend, M., Calantone, R. J., & Jiang, Y. (2012). Success factors of product innovation: An updated meta-analysis. Journal of Product Innovation Management,29(S1), 21–37.

  61. Fernandes, C., Ferreira, J., & Raposo, M. (2013). Drivers to firm innovation and their effects on performance: An international comparison. International Entrepreneurship and Management Journal,9(4), 557–580. https://doi.org/10.1007/s11365-013-0263-6.

  62. Forsman, H. (2011). Innovation capacity and innovation development in small enterprises. A comparison between the manufacturing and service sectors. Research Policy,40(5), 739–750. https://doi.org/10.1016/j.respol.2011.02.003.

  63. Galbraith, C. S., Ehrlich, S. B., & DeNoble, A. F. (2006). Predicting technology success: Identifying key predictors and assessing expert evaluation for advanced technologies. The Journal of Technology Transfer,31(6), 673–684. https://doi.org/10.1007/s10961-006-0022-8.

  64. Galia, F., & Legros, D. (2004). Complementarities between obstacles to innovation: Evidence from France. Research Policy,33(8), 1185–1199. https://doi.org/10.1016/j.respol.2004.06.004.

  65. Gault, F. (Ed.). (2013). Handbook of innovation indicators and measurement. Glos: Edward Elgar Publishing.

  66. Gerard, G., Shaker, Z., & Robley, W. (2002). The effects of business–university alliances on innovative output and financial performance: A study of publicly traded biotechnology companies. Journal of Business Venturing,17(6), 577–609. https://doi.org/10.1016/S0883-9026(01)00069-6.

  67. Geroski, P. (1995). Innovation and competitive advantage. Working Paper (Vol. 159). Paris: OECD.

  68. Godin, B. (2009). The rise of innovation surveys: Measuring a fuzzy concept. Working Paper (Vol. 16). Montreal: INRS.

  69. Goerzen, A., & Beamish, P. W. (2005). The effect of alliance network diversity on multinational enterprise performance. Strategic Management Journal,26(4), 333–354. https://doi.org/10.1002/smj.447.

  70. Goldenberg, J., Lehmann, D. R., & Mazursky, D. (2001). The idea itself and the circumstances of its emergence as predictors of new product success. Management Science,47(1), 69–84. https://doi.org/10.1287/mnsc.47.1.69.10670.

  71. Griffin, A. (1997). Modeling and measuring product development cycle time across industries. Journal of Engineering and Technology Management,14(1), 1–24.

  72. Grimaldi, R., & Grandi, A. (2005). Business incubators and new venture creation: An assessment of incubating models. Technovation,25(2), 111–121.

  73. Grimpe, C., & Fier, H. (2010). Informal university technology transfer: A comparison between the United States and Germany. The Journal of Technology Transfer,35(6), 637–650.

  74. Guan, J., & Ma, N. (2003). Innovative capability and export performance of Chinese firms. Technovation,23(9), 737–747. https://doi.org/10.1016/S0166-4972(02)00013-5.

  75. Hagedoorn, J. (1993). Understanding the rationale of strategic technology partnering: Interorganizational modes of cooperation and sectoral differences. Strategic Management Journal,14(5), 371–385.

  76. Harryson, S., Kliknaite, S., & Dudkowski, R. (2008). Flexibility in innovation through external learning: Exploring two models for enhanced industry–university collaboration. International Journal of Technology Management,41(1–2), 109–137.

  77. Henard, D. H., & Szymanski, D. M. (2001). Why some new products are more successful than others. Journal of Marketing Research,38(3), 362–375.

  78. Hennart, J.-F., & Zeng, M. (2002). Cross-cultural differences and joint venture longevity. Journal of International Business Studies,33(4), 699–716.

  79. Hernan, R., Marin, P., & Siotis, G. (2003). An empirical evaluation of the determinants of research joint venture formation. The Journal of Industrial Economics,51(1), 75–89.

  80. Heslop, L. A., McGregor, E., & Griffith, M. (2001). Development of a technology readiness assessment measure: The cloverleaf model of technology transfer. The Journal of Technology Transfer,26(4), 369–384. https://doi.org/10.1023/a:1011139021356.

  81. Hölzl, W. (2008). Is the R&D behaviour of fast growing SMEs different? Evidence from CIS III data for 16 countries. WIFO.

  82. Hong, S., Oxley, L., & McCann, P. (2012). A survey of the innovation surveys. Journal of Economic Surveys,26(3), 420–444. https://doi.org/10.1111/j.1467-6419.2012.00724.x.

  83. Hsu, C.-W. (2005). Formation of industrial innovation mechanisms through the research institute. Technovation,25(11), 1317–1329.

  84. Hull, C., & Covin, J. (2010). Learning capability, technological parity, and innovation mode use. Journal of Product Innovation Management,27(1), 97–114. https://doi.org/10.1111/j.1540-5885.2009.00702.x.

  85. Hult, G., Hurley, R., & Knight, G. (2004). Innovativeness: Its antecedents and impact on business performance. Industrial Marketing Management,33(5), 429–438. https://doi.org/10.1016/j.indmarman.2003.08.015.

  86. Jung, H., & Kim, B.-K. (2018). Determinant factors of university spin-off: The case of Korea. The Journal of Technology Transfer,43(6), 1631–1646. https://doi.org/10.1007/s10961-017-9571-2.

  87. Kasper, H. (1987). Dilemmas of innovation management. Engineering Costs and Production Economics,12(1), 307–314. https://doi.org/10.1016/0167-188X(87)90092-9.

  88. Khurana, A., & Rosenthal, S. R. (1998). Towards holistic “front ends” in new product development. The Journal of Product Innovation Management,15(1), 57–74.

  89. Kirchberger, M. A., & Pohl, L. (2016). Technology commercialization: A literature review of success factors and antecedents across different contexts. The Journal of Technology Transfer,41(5), 1077–1112. https://doi.org/10.1007/s10961-016-9486-3.

  90. Klette, T., & Griliches, Z. (2000). Empirical patterns of firm growth and R&D investment: A quality ladder model interpretation. The Economic Journal,110(463), 363–387. https://doi.org/10.1111/1468-0297.00529.

  91. Klette, T., & Kortum, S. (2004). Innovating firms and aggregate innovation. Journal of Political Economy,112(5), 986–1018. https://doi.org/10.1086/422563.

  92. Kotler, P. (2003). Marketing management (11th ed.). Englewood Cliffs, NJ: Prentice Hall.

  93. Lafuente, E., & Berbegal-Mirabent, J. (2019). Assessing the productivity of technology transfer offices: An analysis of the relevance of aspiration performance and portfolio complexity. The Journal of Technology Transfer,44(3), 778–801. https://doi.org/10.1007/s10961-017-9604-x.

  94. Lane, P. J., & Lubatkin, M. (1998). Relative absorptive capacity and interorganizational learning. Strategic Management Journal,19, 461–477.

  95. Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial Economics,50(2), 197–234.

  96. LERU. (2017). Beyond the Horizon: LERU’s view on the 9th framework programme for research and innovation. Advice paper no. 22.

  97. Liao, P., & Witsil, A. (2008). A practical guide to opportunity assessment methods. NC: RTI Press.

  98. Linton, J. D., Walsh, S. T., & Morabito, J. (2002). Analysis, ranking and selection of R&D projects in a portfolio. R&D Management,32(2), 139–148.

  99. Lüthje, C. (2004). Characteristics of innovating users in a consumer goods field: An empirical study of sport-related product consumers. Technovation,24(9), 683–695.

  100. M’Chirgui, Z., Lamine, W., Mian, S., & Fayolle, A. (2018). University technology commercialization through new venture projects: An assessment of the French regional incubator program. The Journal of Technology Transfer,43(5), 1142–1160. https://doi.org/10.1007/s10961-016-9535-y.

  101. Mahajan, V., & Wind, J. (1992). New product models: Practice, shortcomings and desired improvements. Journal of Product Innovation Management,9(2), 128–139.

  102. Mairesse, J., & Mohnen, P. (2010). Using innovations surveys for econometric analysis. National Bureau of Economic Research Working Paper Series, No. 15857. https://doi.org/10.3386/w15857.

  103. Mankins, J. (2009). Technology readiness assessments: A retrospective. Acta Astronautica,65(9–10), 1216–1223. https://doi.org/10.1016/j.actaastro.2009.03.058.

  104. McFarthing, K. (2015). Innovation radar: Review by Innovation Fixer Ltd. Oxford: Innovation Fixer Ltd.

  105. Meoli, M., Paleari, S., & Vismara, S. (2019). The governance of universities and the establishment of academic spin-offs. Small Business Economics,52(2), 485–504. https://doi.org/10.1007/s11187-017-9956-5.

  106. Meoli, M., Pierucci, E., & Vismara, S. (2018). The effects of public policies in fostering university spinoffs in Italy. Economics of Innovation and New Technology,27(5–6), 479–492. https://doi.org/10.1080/10438599.2017.1374048.

  107. Meoli, M., & Vismara, S. (2016). University support and the creation of technology and non-technology academic spin-offs. Small Business Economics,47(2), 345–362. https://doi.org/10.1007/s11187-016-9721-1.

  108. Meseri, O., & Maital, S. (2001). A survey analysis of university-technology transfer in Israel: Evaluation of projects and determinants of success. The Journal of Technology Transfer,26(1), 115–125. https://doi.org/10.1023/a:1007844530539.

  109. Miles, G., Preece, S. B., & Baetz, M. C. (1999). Dangers of dependence: The impact of strategic alliance use by small technology-based firms. Journal of Small Business Management,37(2), 20.

  110. Miranda, F.-J., Chamorro, A., & Rubio, S. (2018). Re-thinking university spin-off: A critical literature review and a research agenda. The Journal of Technology Transfer,43(4), 1007–1038. https://doi.org/10.1007/s10961-017-9647-z.

  111. Mitchell, R., Phaal, R., & Athanassopoulou, N. (2014). Scoring methods for prioritizing and selecting innovation projects. Paper presented at the 2014 Portland International Conference on Management of Engineering & Technology (PICMET).

  112. Mitchell, W., & Singh, K. (1996). Survival of businesses using collaborative relationships to commercialize complex goods. Strategic Management Journal, 17(3), 169–196.

  113. Montoya-Weiss, M. M., & Calantone, R. (1994). Determinants of new product performance: A review and meta-analysis. Journal of Product Innovation Management,11(5), 397–417.

  114. Mytelka, L., Goedhuys, M., Arundel, A., & Geoffrey, G. (2008). Designing a policy-relevant innovation survey for NEPAD. UNU-Merit.

  115. Nambisan, S., & Baron, R. A. (2009). Virtual customer environments: Testing a model of voluntary participation in value co-creation activities. Journal of Product Innovation Management,26(4), 388–406.

  116. Nepelski, D., & Piroli, G. (2018). Organizational diversity and innovation potential of EU-funded research projects. The Journal of Technology Transfer,43(3), 615–639. https://doi.org/10.1007/s10961-017-9624-6.

  117. Nepelski, D., Van Roy, V., & Pesole, A. (2018). Organisational and geographic diversity and innovation potential of EU-funded research projects. Journal of Technology Transfer,43, 615–639.

  118. Nevens, M. (1990). Commercializing technology: What the best companies do. Planning Review,18(6), 20–24.

  119. Nonaka, I., & Takeuchi, H. (1995). The knowledge creating company. How Japanese companies create the dynamics of innovation. New York: Oxford University Press.

  120. OECD. (2005). Oslo manual. Guidelines for collecting and interpreting innovation data. OECD Publishing.

  121. OECD. (2009). Innovation in firms: A microeconomic perspective. Paris: OECD Publishing.

  122. Pandza, K., Wilkins, T., & Alfoldi, E. (2011). Collaborative diversity in a nanotechnology innovation system: Evidence from the EU Framework Programme. Technovation,31(9), 476–489. https://doi.org/10.1016/j.technovation.2011.05.003.

  123. Pesole, A., & Nepelski, D. (2016). Universities and collaborative innovation in EC-funded research projects: An analysis based on Innovation Radar data. EC-JRC.

  124. Polt, W., Vonortas, N., & Fisher, R. (2008). The impact of publicly funded research on innovation: An analysis of European Framework Programmes for Research and Development. European Commission.

  125. Potter, D. (1989). From experience: The customer’s eye view of innovation. Journal of Product Innovation Management,6(1), 35–42. https://doi.org/10.1111/1540-5885.610035.

  126. Pouder, R., & John, C. H. S. (1996). Hot spots and blind spots: Geographical clusters of firms and innovation. Academy of Management Review,21(4), 1192–1225.

  127. Powell, W. W., & Brantley, P. (1992). Competitive cooperation in biotechnology: Learning through networks. Networks and Organizations,7, 366–394.

  128. Powell, W. W., Koput, K. W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly,41(1), 116–145.

  129. Reillon, V. (2017). EU framework programmes for research and innovation. Evolution and key data from FP1 to Horizon 2020 in view of FP9: European Parliament, European Parliamentary Research Service.

  130. Riggs, W., & Von Hippel, E. (1994). Incentives to innovate and the sources of innovation: The case of scientific instruments. Research Policy,23(4), 459–469.

  131. Rogers, E. M., Takegami, S., & Yin, J. (2001). Lessons learned about technology transfer. Technovation,21(4), 253–261.

  132. Röller, L.-H., Siebert, R., & Tombak, M. (2007). Why Firms Form (or do not Form) RJVs. The Economic Journal,117(522), 1122–1144. https://doi.org/10.1111/j.1468-0297.2007.02069.x.

  133. Rothwell, R. (1992). Successful industrial innovation: Critical factors for the 1990s. R&D Management,22(3), 221–240.

  134. Slater, S. F., Mohr, J. J., & Sengupta, S. (2014). Radical product innovation capability: Literature review, synthesis, and illustrative research propositions. Journal of Product Innovation Management,31(3), 552–566.

  135. Sohn, S. Y., Moon, T. H., & Kim, S. (2005). Improved technology scoring model for credit guarantee fund. Expert Systems with Applications,28(2), 327–331. https://doi.org/10.1016/j.eswa.2004.10.012.

  136. Steffensen, M., Rogers, E. M., & Speakman, K. (2000). Spin-offs from research centers at a research university. Journal of Business Venturing,15(1), 93–111.

  137. Teece, D. (2011). Dynamic capabilities and strategic management: Organizing for innovation and growth. Oxford: Oxford University Press.

  138. Van der Panne, G., Van Beers, C., & Kleinknecht, A. (2003). Success and failure of innovation: A literature review. International Journal of Innovation Management,7(03), 309–338.

  139. Wilson, M. (2015). Innovation radar review. Rochester, New York: Neworks LLC.

  140. Wonglimpiyarat, J. (2010). Innovation index and the innovative capacity of nations. Futures,42(3), 247–253. https://doi.org/10.1016/j.futures.2009.11.010.

  141. Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure, 1840–1920. Research in organizational behavior,8, 53–111.

Author information

Corresponding author

Correspondence to Daniel Nepelski.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Disclaimer: The views expressed are those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.

Appendix: Innovation Radar Survey and Assessment Frameworks

Innovation Radar Survey

Note: The first 17 questions below are to be answered for each innovation the project develops (up to a maximum of 3 innovations).

  (1) Title of the innovation

  (2) Describe the innovation (in less than 500 characters, spaces included):

  (3) Is the innovation developed within the project…:

      (a) Under development
      (b) Already developed but not yet being exploited
      (c) Being exploited

  (4) Characterise the type of innovation

      (a) Significantly improved product
      (b) Significantly improved service (except consulting services)
      (c) Significantly improved process
      (d) Significantly improved marketing method
      (e) Significantly improved organisational method
      (f) Consulting services
      (g) New product
      (h) New service (except consulting services)
      (i) New process
      (j) New marketing method
      (k) New organisational method
      (l) Other

  (5) Level of innovation: What is the level of innovation?

      (a) Some distinct, probably minor, improvements over existing products
      (b) Innovative but could be difficult to convert customers
      (c) Obviously innovative and easily appreciated advantages to customer
      (d) Very innovative

  (6) How will the innovation be exploited?

      (a) Introduced as new to the market (commercial exploitation)
      (b) Only deployed as new to the organisation/company (new internal processes implemented, etc.)
      (c) No exploitation planned

  (7) Indicate the step(s) in order to bring the innovation to (or closer to) the market. For each step, mark one of: Done or ongoing / Planned / Not planned but needed/desirable / Not planned and not needed.

      1. Technology transfer
      2. A partner’s research team and business units are both engaged in activities relating to this innovation
      3. Market study
      4. Prototyping in laboratory environment
      5. Prototyping in real world environment
      6. Pilot, demonstration or testing activities
      7. Feasibility study
      8. Launch a start-up or spin-off
      9. Licensing the innovation to a 3rd party
      10. Complying with existing standards
      11. Contribution to standards
      12. Raise capital
      13. Raise funding from public sources
      14. Business plan
      15. Other (please specify)
  (8) Is there a clear owner of the innovation in the consortium or multiple owners?

      (a) One clear owner
      (b) Multiple owners

  (9) Indicate (up to a maximum of 3) key organisation(s) delivering this innovation. For each of these, identify under the next question their needs to fulfil their market potential.

      • Organisation 1:
      • Organisation 2:
      • Organisation 3:

  (10) Indicate their needs to fulfil their market potential. For each of Organisation 1, 2 and 3, mark the needs that apply:

      1. Investor readiness training
      2. Investor introductions
      3. Business plan development
      4. Expanding to more markets
      5. Legal advice (IPR or other)
      6. Mentoring or Coaching
      7. Partnership with other SME(s)
      8. Partnership with large corporates
      9. Incubation/Startup accelerator
      10. Executive Training
      11. Other (specify)
  (11) For the private company/companies chosen as one of the 3 “key innovators”, will this innovation be used mainly by current or new customers?

      (a) Current customers
      (b) New customers

  (12) Market maturity: The market targeted by this innovation is…

      (a) Not yet existing: the market does not yet exist and it is not yet clear that the innovation has potential to create a new market
      (b) Market-creating: the market does not yet exist but the innovation has clear potential to create a new market
      (c) Emerging: there is a growing demand and few offerings are available
      (d) Mature: the market is already supplied with many products of the type proposed

Note: the next question is to be answered only if “mature” was selected in the previous question.

  (13) Market dynamics: is the market…

      (a) In decline
      (b) Holding steady
      (c) Growing

  (14) Are there other markets for this innovation that the innovators are not yet targeting?

      (a) Yes
      (b) No

  (15) Market competition: How strong is competition in the target market?

      (a) Patchy, no major players
      (b) Established competition but none with a proposition like the one under investigation
      (c) Several major players with strong competencies, infrastructure and offerings

  (16) When do you expect that such innovation could be commercialised (from today)?

      (a) Less than 1 year
      (b) Between 1 and 3 years
      (c) Between 3 and 5 years
      (d) Between 5 and 10 years
      (e) More than 10 years

  (17) Has a trademark been registered for this innovation?

      (a) Yes
      (b) No

(End of questions that are specific to each innovation)

General Questions

(General observations of the innovation expert on this project’s innovation activities)

  (1) How do you consider the project’s performance in terms of innovation?

      (a) Performing below my expectations
      (b) Meeting my expectations
      (c) Exceeding my expectations
      (d) Highly exceeding my expectations

  (2) How does the innovator engage end-users?

      (a) End-users are actively engaged in co-creating the innovation(s)
      (b) No end-users consulted or engaged in innovation(s) development
      (c) End-users are consulted (e.g. in testing activities)

  (3) Are there IPR issues within the consortium that could compromise the ability of the organisation(s) to exploit new products/solutions/services, internally or in the market place?

      (a) Yes
      (b) No

  (4) Which external bottlenecks compromise the ability of project partners to exploit new products, solutions or services, internally or in the market place?

      (a) Regulation
      (b) Skills in the wider workforce
      (c) Standards
      (d) Financing
      (e) Trade issues (between MS, globally)
      (f) IPR
      (g) Others

  (5) Indicate how many patents have been applied for by the project: _________

  (6) How would you rate the level of commitment of the relevant organisation(s) to exploit the innovation?

      (a) Very low
      (b) Low
      (c) Average
      (d) High
      (e) Very high

  (7) Please indicate the one partner (excluding large enterprises) that the panel considers to be the most impressive in terms of innovation potential within the context of the innovations identified.

  (8) Please provide concrete recommendations for the project to improve its innovations and their potential to deliver impact in, or close to, the market place.

  (9) Hypothetically but honestly, would you invest your own money in any innovation developed by this project?

      (a) Yes
      (b) No

  (10) Please indicate the participant(s) in which a woman holds a position of leadership (such as Principal Investigator or Work Package Leader) for this project:

Innovation Potential Assessment Framework

See Table 5

Table 5 Innovation Potential Assessment Framework

Innovation Management

Innovation Management criteria and questions   Question code*   Scoring** (max: 10)
There is a clear owner of the innovation Q8  
 One clear owner a 1
 Multiple owners b 0
A partner’s research team and business units are both engaged in activities relating to this innovation Q7.2  
 Done   1
 Planned   0.5
Market study Q7.3  
 Done   1
 Planned   0.5
Launch a start-up or spin-off Q7.8  
 Done   1
 Planned   0.5
Licensing the innovation to a 3rd party Q7.9  
 Done   1
 Planned   0.5
Raise capital Q7.12  
 Done   1
 Planned   0.5
Raise funding from public sources Q7.13  
 Done   1
 Planned   0.5
Business plan Q7.14  
 Done   1
 Planned   0.5
Other Q7.15  
 Done   1
 Planned   0.5
Are there IPR issues within the consortium that could compromise the ability of the organisation(s) to exploit new products, solutions, services, internally or in the market place? GQ3  
 Yes a 0
 No b 1
  1. *GQ refers to general questions in the questionnaire
  2. **Observed values of each indicator are brought to the scale between 0 and 100
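The additive logic of the rubric above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the Commission's implementation: answer codes, the `answers` dictionary and the min-max rescaling used to bring raw scores to the 0–100 scale are assumptions made for the example.

```python
def innovation_management_score(answers: dict) -> float:
    """Raw Innovation Management score (0..10) from questionnaire answers."""
    score = 0.0
    # Q8: one clear owner of the innovation earns 1 point
    if answers.get("Q8") == "a":
        score += 1.0
    # Q7 steps: "done" earns 1 point, "planned" earns 0.5
    step_points = {"done": 1.0, "planned": 0.5}
    for step in ("Q7.2", "Q7.3", "Q7.8", "Q7.9",
                 "Q7.12", "Q7.13", "Q7.14", "Q7.15"):
        score += step_points.get(answers.get(step, ""), 0.0)
    # GQ3: no IPR issues within the consortium earns 1 point
    if answers.get("GQ3") == "b":
        score += 1.0
    return score


def rescale_0_100(raw_scores: list[float]) -> list[float]:
    """Bring observed raw scores to the 0..100 scale (assumed min-max rescaling)."""
    lo, hi = min(raw_scores), max(raw_scores)
    if hi == lo:
        return [0.0 for _ in raw_scores]
    return [100.0 * (s - lo) / (hi - lo) for s in raw_scores]
```

For example, a project with one clear owner, a completed market study, a planned business plan and no IPR issues would obtain a raw score of 3.5 out of 10 before rescaling across the observed sample.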

Market potential

Market Potential criteria and questions   Question code*   Scoring** (max: 10)
Type of innovation: Q4  
 New product, process or service g OR i OR h 1
 Significantly improved product, process or service a OR c OR b 0.75
 New marketing or organisational method j OR k 0.5
 Significantly improved marketing or organisational method d OR e 0.25
 Consulting services, other f OR l 0
Level of innovation: What is the level of innovation Q5  
 Some distinct, probably minor, improvements over existing products. a 0.25
 Innovative but could be difficult to convert customers b 0.5
 Obviously innovative and easily appreciated advantages to customer c 0.75
 Very innovative d 1
Innovation exploitation: Q6  
 Commercial exploitation a 2
 Internal exploitation b 1
 No exploitation c 0
Market maturity: The market for this innovation is… Q12  
 The market is not yet existing… a 0
 Market-creating: … b 0.5
 Emerging: … c 1
 Mature: … d 0.75
Are there other markets for this innovation … Q14  
 Yes a 1
 No b 0
Market competition: How strong is competition in the target market? Q15  
 Patchy, no major players a 1
 Established competition but none with a proposition like the one under investigation b 0.5
 Several major players with strong competencies and infrastructure c 0
Has a trademark been registered for this innovation Q17  
 Yes a 1
 No b 0
Number of patents that have been applied for by the project GQ5  
 0   0
 1   0.25
 2–3   0.75
 > 3   1
Number of external bottlenecks that compromise the ability of project partners to exploit new products, … GQ4  
 0   1
 1   0.5
 2   0.25
 > 2   0
  1. *GQ refers to general questions in the questionnaire
  2. ** Observed values of each indicator are brought to the scale between 0 and 100
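As with the management dimension, the Market Potential rubric is additive, with the point values for each answer code taken from the table above; the per-answer maxima sum to 10. The sketch below is again only an illustration of the rubric, not the Commission's code; the function and variable names are hypothetical.

```python
# Points per answer code, copied from the Market Potential rubric (Table 5).
MARKET_POTENTIAL_POINTS = {
    "Q4":  {"g": 1.0, "i": 1.0, "h": 1.0,      # new product / process / service
            "a": 0.75, "c": 0.75, "b": 0.75,   # significantly improved product/process/service
            "j": 0.5, "k": 0.5,                # new marketing / organisational method
            "d": 0.25, "e": 0.25,              # significantly improved method
            "f": 0.0, "l": 0.0},               # consulting services, other
    "Q5":  {"a": 0.25, "b": 0.5, "c": 0.75, "d": 1.0},   # level of innovation
    "Q6":  {"a": 2.0, "b": 1.0, "c": 0.0},               # exploitation mode
    "Q12": {"a": 0.0, "b": 0.5, "c": 1.0, "d": 0.75},    # market maturity
    "Q14": {"a": 1.0, "b": 0.0},                         # other markets
    "Q15": {"a": 1.0, "b": 0.5, "c": 0.0},               # competition
    "Q17": {"a": 1.0, "b": 0.0},                         # trademark registered
}


def patents_points(n: int) -> float:
    """GQ5: points for the number of patent applications."""
    if n <= 0:
        return 0.0
    if n == 1:
        return 0.25
    if n <= 3:
        return 0.75
    return 1.0


def bottleneck_points(n: int) -> float:
    """GQ4: points for the number of external bottlenecks (fewer is better)."""
    if n <= 0:
        return 1.0
    if n == 1:
        return 0.5
    if n == 2:
        return 0.25
    return 0.0


def market_potential_score(answers: dict, patents: int, bottlenecks: int) -> float:
    """Raw Market Potential score (0..10) before rescaling to 0..100."""
    score = sum(MARKET_POTENTIAL_POINTS[q].get(answers.get(q, ""), 0.0)
                for q in MARKET_POTENTIAL_POINTS)
    return score + patents_points(patents) + bottleneck_points(bottlenecks)
```

A "best case" answer set (new product, very innovative, commercial exploitation, emerging market, other markets available, patchy competition, trademark registered, more than three patents, no bottlenecks) reaches the rubric's maximum of 10.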

Innovator Capacity Assessment Framework

See Table 6

Table 6 Innovator Capacity Assessment Framework

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Nepelski, D., Van Roy, V. Innovation and innovator assessment in R&I ecosystems: the case of the EU Framework Programme. J Technol Transf (2020). https://doi.org/10.1007/s10961-020-09814-5


Keywords

  • Research and innovation policy
  • Innovation management
  • Innovation ecosystem
  • Framework Programme
  • European Commission

JEL Classification

  • L52
  • L53
  • O31
  • O32
  • O25