6.1 Monitoring and Evaluation

Various conceptual frameworks are used to design and structure monitoring and evaluation (M&E) criteria. These include: (i) the logic model; (ii) the results-chain framework; and (iii) the balanced scorecard approach.

In using the logic model, the following key variables are considered: inputs, outputs and outcomes. The model also considers the logical linkages to external influences, environmental and related programmes, as well as the situational context (problem) that motivates the introduction of the intervention (inputs and outputs) to achieve a specific impact (outcome) (Millar, Simeone and Carnevale, 2001). The logic model is often critiqued for being a linear model that aims to monitor and evaluate a multi-dimensional process. When planning to build a logic model, the following questions can be posed: (i) what is the current situation that needs to be tackled? (ii) what will it look like when the desired outcome has been achieved? (iii) what behaviours need to change for that outcome to be achieved? (iv) what knowledge or skills do people need before the behaviour changes? (v) what activities need to be performed to cause the necessary change? and (vi) what resources will be required to achieve the desired outcome? (Millar et al., 2001).
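The elements of a logic model lend themselves to a simple structured representation. The following is a minimal, illustrative sketch in Python; the class name LogicModel and its fields are assumptions introduced here for illustration, not part of any published M&E tooling.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Illustrative container for the elements of a logic model."""
    situation: str                                         # problem motivating the intervention
    inputs: list[str] = field(default_factory=list)        # resources invested
    activities: list[str] = field(default_factory=list)    # actions performed
    outputs: list[str] = field(default_factory=list)       # direct products of the activities
    outcomes: list[str] = field(default_factory=list)      # the desired impact
    external_influences: list[str] = field(default_factory=list)

# A hypothetical equipment-grant intervention expressed as a logic model.
model = LogicModel(
    situation="Researchers lack access to a high-resolution microscope",
    inputs=["RI grant funds", "institutional co-funding", "trained operators"],
    activities=["procure and commission equipment", "train postgraduate students"],
    outputs=["commissioned microscope", "trained user base"],
    outcomes=["increased publications", "new collaborations"],
    external_influences=["currency fluctuations", "related programmes"],
)
print(model.outcomes)
```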

The results-chain framework, on the other hand, is an M&E tool used by the World Bank (2012) to measure effectiveness. This framework aims to establish and link strategic development objectives to interventions, intermediate outcomes and results. In developing such a framework that demonstrates effectiveness, the following guiding questions can be discussed (a structured sketch of these criteria follows the list):

  • Relevance

    • Does the programme in its current form respond to national priorities and original objectives?

  • Implementation

    • What progress has been made in implementing the contractual framework?

    • Were the programme, systems, processes and activities put into place as originally intended?

    • What factors have facilitated and/or acted as barriers to implementation?

    • How can the implementation process of the new contract be improved?

    • To what extent are the strategic objectives for the programme being met?

  • Effectiveness

    • Is the programme achieving the goals and objectives it was intended to accomplish?

    • Have the interventions and equipment used produced the expected effects?

    • Could more effects be obtained by using different equipment?

  • Efficiency

    • Are the programme’s activities being produced with appropriate use of resources such as budget and staff time?

    • Have the objectives been achieved at the lowest cost, or can better effects be obtained at the same cost?

    • To what extent has the infrastructure and workload changed?

  • Utility

    • Is the equipment producing satisfactory outcomes with regard to the initial goal from the beneficiary’s point of view?

    • Have local working relationships with and within the field system changed?

  • Attribution

    • Can progress on goals and objectives be shown to be related to the programme, as opposed to other things that are going on at the same time?

  • Sustainability

    • Is the programme sustainable? This links to: (i) financial, (ii) human resourcing, (iii) environment, and (iv) research outputs.

    • What quality assurance measures have been introduced? (World Bank, 2012)
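Because the framework reduces to a set of criteria, each with its own guiding questions, it can be held in a simple mapping and rendered as a review template. The sketch below is illustrative only: the dictionary layout and function name are assumptions, and the questions are abbreviated from the World Bank (2012) list above.

```python
# Abbreviated criteria and guiding questions from the list above.
RESULTS_CHAIN_CRITERIA = {
    "Relevance": ["Does the programme respond to national priorities and original objectives?"],
    "Implementation": [
        "What progress has been made in implementing the contractual framework?",
        "Were systems, processes and activities put in place as intended?",
    ],
    "Effectiveness": ["Is the programme achieving its intended goals and objectives?"],
    "Efficiency": ["Have the objectives been achieved at the lowest cost?"],
    "Utility": ["Is the equipment producing satisfactory outcomes for beneficiaries?"],
    "Attribution": ["Can progress be attributed to the programme rather than to other factors?"],
    "Sustainability": ["Is the programme financially and environmentally sustainable?"],
}

def build_review_template(criteria: dict[str, list[str]]) -> str:
    """Render the criteria and questions as a plain-text review template."""
    lines = []
    for criterion, questions in criteria.items():
        lines.append(criterion)
        lines.extend(f"  - {question}" for question in questions)
    return "\n".join(lines)

print(build_review_template(RESULTS_CHAIN_CRITERIA))
```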

The third approach is the balanced scorecard. In 1992, Kaplan and Norton proposed the balanced scorecard method to evaluate and measure the financial and non-financial performance of organisations in terms of four perspectives: finances, customers, internal business processes, and learning and growth. The balanced scorecard therefore claims to provide a holistic view of progress and performance towards strategic goals, allowing the organisation to function in a rapidly evolving environment. This multi-perspective method articulates the links between inputs, processes and outcomes, and focuses on the importance of managing these components in order to achieve the organisation's strategic priorities and targets (Kaplan & Norton, 1992). The balanced scorecard has been adopted in the services, manufacturing, marketing and retailing, and public sectors (Hoque, 2014).
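Since the balanced scorecard tracks targets across four named perspectives, a minimal sketch can make the structure concrete. The measures, targets and figures below are hypothetical; they are chosen so that higher values are better, which keeps the attainment ratio meaningful.

```python
from dataclasses import dataclass

@dataclass
class PerspectiveScore:
    perspective: str   # one of the four Kaplan & Norton (1992) perspectives
    measure: str       # what is tracked under this perspective (hypothetical)
    target: float
    actual: float

    @property
    def attainment(self) -> float:
        """Actual performance as a fraction of target."""
        return self.actual / self.target if self.target else 0.0

scorecard = [
    PerspectiveScore("Financial", "grant funds drawn down (%)", 100.0, 82.0),
    PerspectiveScore("Customer", "user satisfaction (out of 5)", 4.5, 4.1),
    PerspectiveScore("Internal business processes", "site visits completed on schedule (%)", 100.0, 90.0),
    PerspectiveScore("Learning and growth", "postgraduate students trained", 20.0, 23.0),
]

for score in scorecard:
    print(f"{score.perspective}: {score.attainment:.0%} of target")
```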

The choice of the most suitable M&E tool depends on its fit with the organisation’s mandate and its strategic imperatives. This means that based on the maturity of the organisation and the systems and processes that are in place, the choice of the M&E tool may differ.

6.2 Site Visits and/or Technical Audits

An integral component of the monitoring and evaluation of equipment grants is the site visit and technical audit conducted by funding agencies once the equipment is declared commissioned by the grant holder and the research institution's designated authority. This entails a visit by public agency staff to the location at which the research equipment has been installed and commissioned, with the objective of assessing the following (a minimal decision sketch follows the list):

  • Compliance with all management plan criteria and requirements.

  • The functional capability of the equipment, in terms of yielding results that meet publication or journal standards.

  • The quality and quantity of outputs linked to the usage of the equipment.

  • The usage of the equipment by (i) postgraduate students; (ii) other researchers, both national and international; and (iii) the private sector.
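As the next paragraph explains, failure on any of these criteria triggers a full technical audit. A minimal sketch of that decision rule, with hypothetical finding names paraphrasing the list above:

```python
# Hypothetical site-visit findings; keys paraphrase the assessment criteria above.
site_visit_findings = {
    "management_plan_criteria_met": True,
    "results_meet_publication_standards": True,
    "output_quality_and_quantity_adequate": False,
    "multi_user_access_demonstrated": True,
}

def full_audit_required(findings: dict[str, bool]) -> bool:
    """A full technical audit is triggered if any criterion is unmet."""
    return not all(findings.values())

if full_audit_required(site_visit_findings):
    unmet = [name for name, met in site_visit_findings.items() if not met]
    print("Full technical audit required; unmet criteria:", unmet)
```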

In cases where any of the above criteria are not met, a full technical audit must be conducted. This entails, firstly, the submission of an audit report by the supplier, highlighting the following:

  • Have the manufacturer-specified environmental conditions for housing the equipment been met? If there are gaps in meeting any or all of the specified conditions as described in Chap. 7, then these must be stated.

  • Are there any challenges that relate to either the hardware or the software? If hardware is reported as a challenge, the supplier must indicate whether the replacement components are covered by the guarantees and/or warranties of the service level agreement.

  • Are there gaps in the skill set at the institution in terms of optimally operating the equipment?

Secondly, based on the supplier's audit report, the institution must respond in writing to the report. The final report must be submitted to the funding agency and must consider the following requirements:

  • Steps that will be taken to address the gaps identified.

  • Timelines for delivery.

  • Available budget for the implementation of the above.

Thirdly, a face-to-face meeting must be convened comprising the following parties:

  • Technical audit team comprising representatives from the funding agency, including (i) staff responsible for managing the RI grants; (ii) internal auditors; (iii) financial grants management staff; and (iv) an independent research equipment expert.

  • Research equipment team comprising (i) senior management representatives at the research institution; (ii) the grant holder; and (iii) the supplier and/or manufacturer of the equipment.

The objectives of such a meeting focus on:

  • Reaching consensus and recording the agreements, committed budgets and timeframes for implementation.

  • Engaging the supplier and/or manufacturer on how best to expedite the process for addressing the gaps and/or challenges. This may include defining the role of the supplier and/or manufacturer in aiding the grant holder to resolve these challenges.

  • Engaging senior management and the grant holder of the research institution on meeting the agreed deliverables.

In the event of a lack of commitment or adherence to the timelines and/or deliverables in the management plan, the funding agency may invoke the breach clause in the Conditions of Grant and proceed to either withdraw or recall the grant awarded to the institution, as described in Chap. 4.

6.3 Risk Management

Funding agencies need to manage risks on a daily basis, especially those relating to financial controls and integrity (Bailey, 2010). These organisations need to guard against managing risks in a haphazard and unsystematic manner. In this section, the term "risk" is used to describe event(s) that have a potentially negative impact on the funding agency's assets, activities and operations (Kwak & Keleher, 2015). The management of risks and risk events refers to (i) the continuous process of assessing risks; (ii) reducing the chances of a risk event transpiring; and (iii) putting in place measures to tackle an event should it occur (Kwak & Keleher, 2015). The mapping of potential risks and the impact of risk events against the likelihood of such events transpiring forms part of a risk register and is an important risk management exercise (Bailey, 2010). Hence risk management must commence at the RI planning phase.
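A risk register of the kind described above can be sketched as follows. The 1-5 ordinal scales, field names and example entries are assumptions for illustration; agencies may use different scoring conventions.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain); assumed scale
    impact: int       # 1 (negligible) to 5 (severe); assumed scale
    mitigation: str

    @property
    def severity(self) -> int:
        """Simple likelihood-times-impact score used to rank risks."""
        return self.likelihood * self.impact

register = [
    RiskEntry("Equipment theft", likelihood=2, impact=4,
              mitigation="institutional insurance cover"),
    RiskEntry("Currency fluctuation on import", likelihood=4, impact=3,
              mitigation="forward cover at time of award"),
    RiskEntry("Loss of research data", likelihood=2, impact=5,
              mitigation="off-site backups"),
]

# Rank risks from most to least severe.
for risk in sorted(register, key=lambda entry: entry.severity, reverse=True):
    print(f"{risk.severity:>2}  {risk.description} -> {risk.mitigation}")
```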

Part of risk management relating to research equipment involves planning to minimise loss (financial and other), damages, and the impact on acquired physical assets of third-party allegations of liability. The information presented in this section draws on the work of Bailey (2010) and Kwak and Keleher (2015). Six components are identified as part of the risk management process: (i) the internal environment; (ii) objective setting; (iii) event identification; (iv) risk assessment and response; (v) control activities; and (vi) communication and monitoring (Bailey, 2010).

One of the suggestions of Kwak and Keleher (2015) is to adopt enterprise risk management (ERM) as a tool to manage risks and exploit opportunities. The rationale for using ERM is that it affords organisations, particularly funding agencies, the ability to identify and assess threats or risk events in terms of the likelihood of such an event transpiring and the magnitude of its impact should it occur. A further suggestion is that the funding agency develop new internal policies in support of the ERM, and that, for risk management processes to be effective, existing data sources be utilised while the incorporation of new ones is considered. By way of recommendations, Kwak and Keleher (2015) propose that funding agencies utilise data-driven systems to collect and manage data which in turn can be used to assess risks. Such data may include historic data on the grant holder, such as the number and value of past grants, performance, and other monitoring data. Another recommendation is that investment in new or revised risk management practices be supported by parallel investments in training and capacity development interventions. These in turn can inform tools and processes to standardise the decision-making and decision-approving process within the funding agency (Kwak & Keleher, 2015).
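A data-driven risk assessment of the kind recommended above could, in its simplest form, blend historic grant-holder data into a single score. The field names, weights and figures below are assumptions invented for this sketch, not an NRF or ERM standard.

```python
def grant_holder_risk_score(history: dict) -> float:
    """Blend historic grant data into a 0-1 risk score (higher = riskier).

    The weights (0.4, 0.3, 0.3) are arbitrary illustrative choices.
    """
    # A longer track record of completed grants lowers risk (capped at 10 grants).
    track_record = min(history["grants_completed"] / 10, 1.0)
    # Overdue reports raise risk.
    reporting = history["reports_overdue"] / max(history["reports_due"], 1)
    # Low drawdown signals uptake problems (see Sect. 6.4).
    spend_rate = history["funds_claimed"] / max(history["funds_awarded"], 1)
    return round(0.4 * (1 - track_record) + 0.3 * reporting + 0.3 * (1 - spend_rate), 2)

history = {
    "grants_completed": 4,
    "reports_due": 8,
    "reports_overdue": 1,
    "funds_awarded": 5_000_000,
    "funds_claimed": 3_750_000,
}
print(grant_holder_risk_score(history))  # 0.35 for this hypothetical record
```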

In addition, risk management must be an iterative process across the four stages of the grant lifecycle. Within each stage, risk events may materialise, and funding agencies need to be proactive in preparing for such threats. For a detailed risk implementation framework, refer to Annexure B.

Risks can usually be minimised through institutional insurance cover that extends to instances of theft or breakage of equipment and the associated loss of research data. Hence, the planning process may take the following into consideration:

  • What will be insured?

  • At whose cost?

  • What are the options for public liability cover?

  • What are the options for professional liability cover?

In safeguarding the funding or investment from risks, it is imperative for the funding agency awarding the grant to stipulate the conditions associated with that grant award. The Conditions of Grant is a legally binding document issued by the funding agency and is consented to and signed by the researcher and the research institution's designated authority.

As part of a risk management process, one of the recommendations by Kwak and Keleher (2015) is for a business unit for risk management services to be established. This unit ought to comprise (i) a policy team that drafts policy and provides technical assistance to staff at the funding agency; (ii) a management improvement team that focuses on providing assistance to grant holders on matters relating to grants; and (iii) a programme monitoring team that concentrates on monitoring and evaluation activities as well as measuring performance against KPIs. This team also focuses on standardising the collection and review of data (Kwak & Keleher, 2015).

In order to manage risks relating to large investments in RI, a requirement from the side of the funding agency would be to put in place a governance and management structure at the host research institution. Based on experience, it is imperative to have a two-layered governance structure. The first layer will primarily (i) have an advisory role; (ii) ensure good governance; (iii) commit to the provision of the necessary resources required to meet obligations and conditions relating to the equipment, including risks relating to currency fluctuations; and (iv) review performance and budgets. This first layer can be termed the advisory committee and may comprise, but is not limited to, representatives from (i) senior management at research institutions; (ii) the funding agency; (iii) the private sector or other donor parties, if they have contributed in some form to the cost of acquiring the research equipment; (iv) the public outreach sector; (v) operations management; and (vi) independent experts.

The second layer, or operations committee, may comprise, but is not limited to, representatives from (i) the user community; (ii) the researcher to whom the equipment was awarded; (iii) staff scientists, operators, technicians, engineers and data specialists; and (iv) the finance officer. The operations committee will be responsible for (i) the day-to-day management of the facility; (ii) reporting on usage of the equipment, income and expenditure, and research outputs; (iii) developing an access and research strategy for the research equipment facility; and (iv) submitting statutory reports required by the funding agency.

6.4 Reporting

Funding agencies such as the NRF tend to measure performance against two classes of indicators, viz. financial and non-financial indicators, as described by the balanced scorecard approach to M&E (National Research Foundation, 2018b). A summary is presented in Fig. 6.1.

Fig. 6.1 Return on RI investments, as measured by financial and non-financial indicators, must reflect accuracy, completeness, transparency, validity and reliability

  • Financial indicators

One of the financial indicators against which the NRF measures performance is the financial spend of grants awarded to grant holders (National Research Foundation, 2018b). That is, the NRF measures performance against grant funds being claimed or drawn down by the grant holder. Funding agencies usually face the challenge of poor uptake of grants by institutions, owing, amongst others, to challenges associated with procurement processes (refer to Chap. 7). Consequently, a large cash holding of funds committed to grants resides with public funding agencies. Hence, the facilitated movement of funds from funding agencies to grant-holder institutions is a measure of performance against the financial indicators.
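A minimal sketch of how such a drawdown indicator might be computed, using hypothetical award and claim figures:

```python
# Hypothetical figures in local currency; the drawdown rate is the indicator.
funds_awarded = 12_000_000   # total grant value committed by the agency
funds_claimed = 9_300_000    # funds actually drawn by grant holders

drawdown_rate = funds_claimed / funds_awarded
cash_holding = funds_awarded - funds_claimed   # committed funds still held

print(f"Drawdown rate: {drawdown_rate:.1%}")          # 77.5%
print(f"Cash holding with agency: {cash_holding:,}")  # 2,700,000
```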

  • Non-financial indicators

Data received by the NRF is usually sourced from annual progress reports (APRs) submitted by the grant holder each year (National Research Foundation, 2018b). Institutional management must check that the information presented to the funding agency is: (i) accurate; (ii) complete; (iii) valid; (iv) reliable; and (v) transparent, in accordance with Sect. 4.2 above. This quality assurance check ensures that collated and consolidated information is accurately reported by the funding agency against both financial and non-financial indicators. The non-financial indicators within the NRF context extend firstly to outputs linked to human capital development, which in turn counts (i) the number of users linked to the placement of equipment; and (ii) the number of postgraduate students trained on using the equipment. The second non-financial indicator links to research outputs, viz. (i) the number of publications; (ii) the number of patents; and (iii) other research outputs (National Research Foundation, 2018b). These indicators are expanded as follows, with an illustrative aggregation sketch after the list:

  • Human capital development

    • Number of postgraduate students trained: A reflection of how many Master’s and Doctoral students have obtained degrees for which they utilised the research equipment.

    • Number of users: A reflection of usage of the equipment by the wider research community.

    • Staff and researcher development: A reflection of capacity development for training instrument staff and researchers, both at the home institution and at other research institutions. This also links to the concept of succession planning.

  • Research outputs

    • Number of publications: A reflection of the productivity linked to the usage of the equipment.

    • Number of patents: A reflection of the innovative capacity linked to the usage of the equipment.

    • Other research outputs: A reflection of other novel areas of productivity linked to the usage of the equipment. These may extend to invited plenary talks at national and/or international meetings that link to the research equipment.
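The aggregation of these indicators from submitted APRs can be sketched as follows. The record fields mirror the indicator list above but are assumptions, not the NRF's actual reporting schema.

```python
# Two hypothetical annual progress report (APR) records.
annual_progress_reports = [
    {"students_trained": 3, "users": 12, "publications": 5, "patents": 0},
    {"students_trained": 1, "users": 30, "publications": 2, "patents": 1},
]

def aggregate_indicators(reports: list[dict]) -> dict:
    """Sum each non-financial indicator across all submitted APRs."""
    totals: dict[str, int] = {}
    for report in reports:
        for indicator, value in report.items():
            totals[indicator] = totals.get(indicator, 0) + value
    return totals

print(aggregate_indicators(annual_progress_reports))
# {'students_trained': 4, 'users': 42, 'publications': 7, 'patents': 1}
```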

Based on the annual reports submitted by recipients of RI grants over the period 2009–2017, the outputs are reported in Table 6.1.

Table 6.1 Outputs against research infrastructure grants awarded by the National Research Foundation from 2009 to 2017 (National Research Foundation, 2018a)

Of the 408 RI grants awarded by the NRF, 301 (approximately 74%) supported the priority investment areas in the country, spanning (i) Farmer to Pharma; (ii) Space Science; (iii) Energy Security; (iv) Global Climate Change; (v) Water Security; and (vi) Human and Social Dynamics. The remaining 26% of grants supported blue-skies research in areas such as nanotechnology and biotechnology, amongst others (Table 6.2).

Table 6.2 Priority areas that have been supported through research conducted on equipment grants awarded by the National Research Foundation (2018a)

6.5 Equipment Database

The development of a national research equipment database is a critical enabler for the effective management of research infrastructure grants by any funding agency. Such a database fulfils the role of an online repository housing relevant information on equipment, across the various RI categories, that has been procured using public funds. The database hosted by the NRF, the Research Equipment Database (RED), is a live tool that plays an important role in:

  • Informing a funding agency of continued investment(s) in research equipment and platforms.

  • Advising the researcher community of what equipment is available nationally.

  • Facilitating access by researchers and students to multi-user equipment.

  • Stimulating new applications to the funding agency for research infrastructure (National Research Foundation, 2018a).

  • Minimising the duplication of equipment within a specific institution, region or country.

The database should house information that would allow one to adequately gauge the following (a minimal schema sketch is provided after the list):

  • Type of equipment.

  • Model of the equipment.

  • Functional state of the equipment.

  • Disciplines supported by the equipment.

  • Geographical location of the equipment (name of the research institution, the department and the laboratory space/building the equipment occupies).

  • Contact details of the person in charge of the equipment who would facilitate access to various users (National Research Foundation, 2018a).
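A minimal schema sketch for such a database record, assuming field names derived from the list above (RED's actual schema is not reproduced here):

```python
from dataclasses import dataclass

@dataclass
class EquipmentRecord:
    """Illustrative record for a national research equipment database."""
    equipment_type: str     # e.g. "transmission electron microscope"
    model: str
    functional_state: str   # e.g. "operational", "under repair", "decommissioned"
    disciplines: list[str]  # research fields supported by the equipment
    institution: str
    department: str
    location: str           # building / laboratory space the equipment occupies
    contact_person: str     # facilitates access for various users
    contact_email: str

def find_duplicates_at_institution(records: list[EquipmentRecord],
                                   equipment_type: str,
                                   institution: str) -> list[EquipmentRecord]:
    """Flag same-type equipment already held at an institution.

    A real implementation would also consider geographic proximity between
    institutions; matching on institution alone is a simplification.
    """
    return [record for record in records
            if record.equipment_type == equipment_type
            and record.institution == institution]
```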

Such a database is able to map the type of research equipment available within a country and how it is distributed across the national landscape, with the secondary objective of minimising the duplication of investments at institutions in close proximity. It serves as an analytical tool that allows funding agency staff to update content and track the outputs, outcomes and impact relating to the investment in research equipment.

6.6 Summary

This chapter presents an overview of monitoring and evaluation aligned to the management of research infrastructure. Furthermore, the chapter makes reference to pertinent issues such as risk management, reporting, site visits and technical audits. This chapter also recommends the development and maintenance of a database that can serve as a central repository of RI grants within a specific country.