Annotated Bibliography on Evaluation of Research, 1985–1990

  • H. Averch

Abstract

This annotated bibliography lists relevant literature on research evaluation appearing after 1985. It covers work published in journals and government documents and contains 147 listings.



References

  1. T. Ahn et al., "Some Statistical and DEA Evidence of Relative Efficiencies of Public and Private Institutions of Higher Learning," Socio-Economic Planning Sciences, vol. 22, 1988, pp. 259–269.
  2. J. Anderson et al., "On-line Approaches to Measuring National Scientific Output: A Cautionary Tale," Science and Public Policy, vol. 15, 1988, pp. 153–161.
  3. W. B. Ashton and R. K. Sen, "Understanding Technology Change Using Patent Information," 1986, unpublished paper.
  4. W. B. Ashton and R. K. Sen, "Using Patent Information in Business Planning-I," Research Technology Management, vol. 31, 1988, pp. 42–46.
  5. W. B. Ashton and R. K. Sen, "Using Patent Information in Business Planning-II," Research Technology Management, vol. 32, 1989, pp. 36–42.
  6. A. A. Araji, "Returns to Public Research Investment in the United States," Canadian Journal of Agricultural Economics, vol. 37, 1989, pp. 467–479.
  7. R. W. Ashford et al., "The Capital-Investment Appraisal of New Technology: Problems, Misconceptions and Research Directions," Journal of the Operational Research Society, vol. 39, 1988, pp. 637–642.
  8. W. Ashton et al., Patent Trend Analysis: Tracking Technology Change for Business Planning (Columbus, OH: Battelle Memorial Institute, 1985).
  9. H. A. Averch, "Measuring the Cost-Efficiency of Basic Research: Input-Output Approaches," Journal of Policy Analysis and Management, vol. 6, 1987, pp. 342–362.
  10. H. A. Averch, "Exploring the Cost-Efficiency of Basic Research Funding in Chemistry," Research Policy, vol. 18, 1989, pp. 165–172.
  11. H. A. Averch, "The Practice of Research Evaluation in the United States," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  12. B. B. Bare and R. Loveless, A Case History of the Regional Forest Nutrition Research Project: Investments, Results and Applications (Seattle: College of Forest Resources, University of Washington, 1985).
  13. R. Barré, "A Strategic Assessment of the Scientific Performance of Five Countries," Science and Technology Studies, vol. 5, 1987, pp. 32–38.
  14. B. L. Basberg, "Patents and the Measurement of Technological Change: A Survey of the Literature," Research Policy, vol. 16, 1987, pp. 131–141.
  15. B. L. Basberg, "Patents and the Measurement of Technological Change," in K. Gronhaug and G. Kaufmann (eds.), Innovation: A Cross-Disciplinary Perspective (New York: Norwegian University Press, 1988).
  16. J. E. Beasley, "Comparing University Departments," Omega, vol. 18, 1990, pp. 171–183.
  17. D. N. Bengston, "Economic Evaluation of Agricultural Research," Evaluation Review, vol. 9, 1985, pp. 242–262.
  18. D. N. Bengston and H. F. Kaiser, "Research Planning and Evaluation in the U.S. Forest Service," Evaluation Review, vol. 12, 1988, pp. 276–290.
  19. F. Bilich, Science and Technology Planning and Policy (New York: Elsevier Science Publishers, 1989).
  20. P. Bisogno and G. Sirilli, "The Use of R and D Evaluation in Policy-Making in Italy," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  21. J. Blanco et al., "Proposal of an Alternative Evaluation Program," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  22. J. T. Bonnen, "Historical Sources of U.S. Agricultural Productivity: Implications for R&D Policy and Social Science Research," American Journal of Agricultural Economics, vol. 65, 1983, pp. 958–966.
  23. T. Braun et al., Scientometric Indicators: A 32-Country Comparative Evaluation of Publishing Performance and Citation Impact (Singapore: World Scientific Publishing Co., Ltd., 1985).
  24. R. Bud, "The Case of the Disappearing Caveat: A Critique of Irvine and Martin's Methodology," Social Studies of Science, vol. 15, 1985, pp. 548–553.
  25. M. P. Carpenter et al., "Bibliometric Profile for British Academic Institutions: An Experiment to Develop Research Output Indicators," Scientometrics, vol. 14, nos. 3–4, 1988, pp. 213–234.
  26. L. Christansen and J. K. Christansen, "An Analysis of Evaluations in the Nordic Countries," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  27. D. E. Chubin, "Research Evaluation and the Generation of Big Science Policy," Knowledge, vol. 9, 1987, pp. 254–277.
  28. D. E. Chubin, "Designing Research Program Evaluations: A Science Studies Approach," Science and Public Policy, vol. 14, 1987, pp. 82–90.
  29. Ciba Foundation Conference, The Evaluation of Scientific Research (New York: John Wiley & Sons, 1989).
  30. G. A. Cole, The Evaluation of Basic Research in Industrial Laboratories (Cambridge, MA: Abt Associates, Inc.).
  31. H. M. Collins, "The Possibilities of Science Policy," Social Studies of Science, vol. 15, 1985, pp. 554–558.
  32. H. R. Coward and J. J. Franklin, "Identifying the Science-Technology Interface," Science, Technology, and Human Values, vol. 14, 1989, pp. 50–77.
  33. R. Cordero, "The Measurement of Innovation Performance in the Firm: An Overview," Research Policy, vol. 19, 1990, pp. 185–192.
  34. S. E. Cozzens, "Expert Review in Evaluating Programs," Science and Public Policy, vol. 14, 1987, pp. 71–81.
  35. J. W. Creswell, Faculty Research Performance: Lessons from the Sciences and Social Sciences (Washington, DC: Association for the Study of Higher Education, 1985).
  36. D. Crouch et al., "Bibliometric Analysis for Science Policy: An Evaluation of the United Kingdom's Research Performance in Ocean Currents and Protein Crystallography," Scientometrics, vol. 9, 1986, pp. 239–267.
  37. A. J. Czajowski and S. Jones, "Selecting Interrelated R&D Projects in Space Technology," IEEE Transactions on Engineering Management, vol. 33, 1986, pp. 17–24.
  38. W. L. Currie, "The Art of Justifying New Technology to Top Management," Omega, vol. 17, 1989, pp. 409–418.
  39. N. Danila, "Strategic Evaluation and Selection of R&D Projects," R&D Management, vol. 19, 1989, pp. 47–62.
  40. Department of Trade and Industry, Evaluation of R&D: A Policymaker's Perspective (London: Her Majesty's Stationery Office, 1988).
  41. C. J. Doyle and M. S. Ridout, "The Impact of Scientific Research on UK Agricultural Productivity," Research Policy, vol. 14, 1985, pp. 109–116.
  42. W. A. Dejong, "Assessment and Evaluation of Output Quality at TNO," paper presented at the International Workshop on Assessment and Evaluation, Queen Elizabeth II Conference Centre, 17/18 November 1988, pp. 142–145.
  43. I. Dror, "Technology Innovation Indicators," R&D Management, vol. 19, 1989, pp. 243–249.
  44. L. Dwyer, "R&D Project Assessment as an Information and Communication Process," Prometheus, vol. 5, 1987, pp. 419–426.
  45. I. Feller, "Evaluating State Advanced Technology Programs," Evaluation Review, vol. 12, 1988, pp. 232–252.
  46. P. J. Finn, "Evaluation of the Crop Production Development Research Program," Canadian Farm Economics, vol. 21, 1987, pp. 19–27.
  47. S. J. Fitzsimmons, Strategic Evaluation of the Research Programs in the European Economic Community (Cambridge, MA: Abt Associates, Inc., 1985).
  48. G. Fox, "Is the United States Really Underinvesting in Agricultural Research?" American Journal of Agricultural Economics, vol. 67, 1985, pp. 806–812.
  49. J. J. Franklin, "Selectivity in Funding: Evaluation of Research in Australia," Prometheus, vol. 6, no. 1, June 1988, pp. 34–60.
  50. G. Friborg, "The Evaluation of National Schemes-with Single Organisations and within Collaborative Groups," paper presented at the International Workshop on Assessment and Evaluation, Queen Elizabeth II Conference Centre, 17/18 November 1988, in Department of Trade and Industry, pp. 78–85.
  51. M. Gibbons, "Methods for Evaluation of Research," International Journal of Institutional Management in Higher Education, vol. 9, 1985, pp. 79–85.
  52. C. S. Gilmor, "Comments on the Paper, 'A Reevaluation of the Contributions to Radio-Astronomy of the Nancay Observatory,'" 4S Review, vol. 3, 1985, pp. 19–21.
  53. W. L. Giusti and L. Georghiou, "The Use of Co-Nomination Analysis in Real Time Evaluation of an R&D Programme," Scientometrics, vol. 14, 1988.
  54. B. Gold, "Charting a Course to Superior Technology Evaluation," Sloan Management Review, vol. 30, 1988, pp. 19–27.
  55. L. Gouguenheim, "Comments on the Paper, 'A Reevaluation of the Contributions to Radio-Astronomy of the Nancay Observatory,'" 4S Review, vol. 3, 1985, pp. 21–23.
  56. V. Grandis and G. Lewison, "Evaluation of European Community Programs in Information Technology and Biotechnology," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  57. P. E. Graves et al., "Economics Departmental Rankings: Administrators, Research Incentives, Constraints, and Efficiency," American Economic Review, vol. 72, 1982, pp. 1131–1141.
  58. R. Gualtieri, "The Canadian Experience in Evaluating Regional Science and Technology Support Programmes," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  59. R. Gualtieri, "Evaluation of R&D in Canada," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  60. K. Guy and L. Georghiou, "Real-Time Evaluation and the Management of Mission-Oriented Research: The Evaluation of the Alvey Program: Aims, Achievements and Lessons," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  61. J. Haygreen et al., "The Economic Impact of Timber Utilization Research," Forest Products Journal, vol. 36, 1986, pp. 12–20.
  62. P. Hare et al., "Evaluation of the Involvement of the United Kingdom in ESPRIT," paper presented at the International Workshop on Assessment and Evaluation, Queen Elizabeth II Conference Centre, 17/18 November 1988, in Department of Trade and Industry, pp. 100–106.
  63. P. Hare and G. Wyatt, "Modelling the Determination of Research Output in British Universities," Research Policy, vol. 17, 1988, pp. 315–328.
  64. P. Healy, H. Rothman, and P. K. Hoch, "An Experiment in Science Mapping for Research Planning," Research Policy, vol. 15, 1986, pp. 233–251.
  65. E. K. Hicks and W. Callebaut, Evaluative Proceedings: 4S/EASST (Amsterdam: SISWO Publikatie, 1989).
  66. R. E. Hiebert and C. M. Devine, "Government's Research and Evaluation Gap," Public Relations Review, vol. 11, 1985, pp. 47–56.
  67. J. D. Hodgdon, Methods for the Strategic Evaluation of Research Programs: The State of the Art (Cambridge, MA: Abt Associates, Inc., 1985).
  68. T. D. Hogan, "The Publishing Performance of U.S. Ph.D. Programs in Economics During the 1970s," Journal of Human Resources, vol. 21, 1986, pp. 216–229.
  69. J. Irvine, Evaluating Applied Research: Lessons from Japan (London: Pinter, 1988).
  70. J. Irvine, "Evaluation of Scientific Institutions: Lessons from a Bibliometric Study of UK Technical Universities," in Ciba Foundation Conference, The Evaluation of Scientific Research (New York: John Wiley & Sons, 1989).
  71. J. Irvine et al., "Assessing Basic Research: Reappraisal and Update of an Evaluation of Four Radio Astronomy Observatories," Research Policy, vol. 16, 1987, pp. 213–227.
  72. J. Irvine and B. R. Martin, "Assessing Basic Research: The Case of the Isaac Newton Telescope," Social Studies of Science, vol. 13, 1983, pp. 49–86.
  73. J. Irvine and B. R. Martin, Research Foresight: Creating the Future (The Hague: Netherlands Ministry of Education and Science, 1989).
  74. J. Irvine et al., "Assessing Basic Research: Reappraisal and Update of an Evaluation of Four Radio Astronomy Observatories," Research Policy, vol. 16, 1987, pp. 213–227.
  75. P. M. Jakes, "Research Evaluation in the U.S. Forest Service: Opinions of Research Managers," Research Policy, vol. 17, 1988, pp. 283–292.
  76. P. M. Jakes and E. C. Leatherberry, Alternative Approaches to Forestry Research Evaluation: An Assessment (St. Paul: U.S. Department of Agriculture Forest Service, 1986).
  77. G. Lockett and M. Stratford, "Ranking of Research Projects: Experiments with Two Methods," Omega, vol. 15, 1987, pp. 395–400.
  78. H. F. Moed, W. J. M. Burger, J. G. Frankfort, and A. F. J. van Raan, "The Use of Bibliometric Data for the Measurement of University Research Performance," Research Policy, vol. 14, 1985, pp. 131–149.
  79. M. R. Jalongo, "Faculty Productivity in Higher Education," The Educational Forum, vol. 49, 1985, pp. 171–182.
  80. J. King, "A Review of Bibliometric and Other Science Indicators and Their Role in Research Evaluation," Journal of Information Science, vol. 13, 1987, pp. 261–276.
  81. R. N. Kostoff, "Evaluation of Proposed and Existing Accelerated Research Programs of the Office of Naval Research," IEEE Transactions on Engineering Management, vol. 35, 1988, pp. 271–279.
  82. C. E. Kruytbosch, "Some Social and Organizational Characteristics of Breakthrough Science: An Analysis of Major Innovations in Four Fields of Science, 1950–1976," 1978, paper presented at the IX World Congress of Sociology, Uppsala, Sweden.
  83. I. Karatzas and G. Lewison, "Evaluation of Scientifically-led Programmes," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  84. J. King, "The Use of Bibliometric Techniques for Institutional Research Evaluation: A Study of Avian Virology," Scientometrics, vol. 14, 1988, pp. 295–314.
  85. J. Krige and D. Pestre, "A Critique of Irvine and Martin's Methodology for Evaluating Big Science," Social Studies of Science, vol. 15, 1985, pp. 525–539.
  86. P. Laredo, "The Assessment of National Schemes; Problems Associated with Implementation: Discussion of the French Experience," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  87. T. Lazlo, "Management and Evaluation of Central R and D Programmes in Hungary," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  88. L. Leydesdorff and P. van der Schaar, "The Use of Scientometric Models for Evaluating National Research Programs," Science and Technology Studies, vol. 5, 1987, pp. 22–31.
  89. J. M. Logsdon and C. Rubin, An Overview of Federal Research Evaluation Activities (Washington, DC: George Washington University, 1985).
  90. H.-P. Lorenzen, "Formulation of Aims and Evaluation Using the Example of the Pilot Scheme 'Support for New Technology-based Firms' of the BMFT in Germany," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  91. T. Luukkonen-Gronow, "Scientific Research Evaluation: A Review of Methods and Various Contexts of Their Application," R&D Management, vol. 17, 1987, pp. 207–221.
  92. T. Luukkonen-Gronow and B. Staehle, "Quality Evaluations in the Management of Basic and Applied Research," unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  93. E. Mansfield, "The Social Rate of Return from Academic Research," 1988, unpublished manuscript.
  94. B. R. Martin and J. Irvine, "Evaluating the Evaluators: A Reply to Our Critics," Social Studies of Science, vol. 15, 1985, pp. 558–575.
  95. B. R. Martin and J. Irvine, An International Comparison of Government Funding of Academic and Academically Related Research (Brighton, UK: Science Policy and Research Evaluation Group, 1986).
  96. B. R. Martin et al., "A Re-Evaluation of the Contributions to Radio Astronomy of the Nancay Observatory," 4S Review, vol. 3, 1985, pp. 14–18.
  97. J.-F. Miguel, "Indicators to Measure Internationalization of Science," 1989, unpublished paper.
  98. L. Massimo and P. Kerr, "The Evaluation of R and D Programmes of the Commission of the European Communities," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  99. G. F. Mechlin and D. Berg, "Evaluating Research-ROI Is Not Enough," in K. Gronhaug and G. Kaufmann (eds.), Innovation: A Cross-Disciplinary Perspective (New York: Norwegian University Press, 1988).
  100. J. Metters, "Assessment in the UK Department of Health and Social Security Research," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  101. F. Moisan, "Appraisal of a Research Programme and its Consequences: The AFME Experience," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  102. H. F. Moed and A. F. J. van Raan, "Critical Remarks on Irvine and Martin's Methodology for Evaluating Scientific Performance," Social Studies of Science, vol. 15, 1985, pp. 539–547.
  103. T. K. Moran, "Research and Management Strategies for Integrating Evaluation Research in Agency Decisionmaking," Evaluation Review, vol. 11, 1987, pp. 612–630.
  104. F. Narin, "Bibliometric Techniques in the Evaluation of Research Programs," Science and Public Policy, vol. 14, 1987, pp. 99–106.
  105. F. Narin et al., "Patents as Indicators of Corporate Technological Strength," Research Policy, 1986.
  106. F. Narin and D. Olivastro, Identifying Areas of Leading Edge Japanese Science and Technology: First Interim Report, "Activity Analysis Using SIC Categories and Scientific Subfields" (Cherry Hill, NJ: Computer Horizons, Inc., 1986).
  107. A. J. Nederhof and A. F. J. van Raan, An International Interview Round on the Use and Development of Science and Technology Indicators (Leiden: University of Leiden, 1988).
  108. R. Neimeyer and W. R. Shadish, Jr., "Optimizing Scientific Validity: Toward an Interdisciplinary Science Studies," Knowledge: Creation, Diffusion, Utilization, vol. 8, no. 3, March 1987, pp. 463–485.
  109. Nordic Science Policy Council, Evaluation of Research: Nordic Experiences (Copenhagen: Nordic Science Policy Council, 1986).
  110. NSF Evaluation Staff, Office of Audit and Oversight, The NSF Post-Performance Evaluation Study, 84-2 (Washington, DC: NSF, 1984).
  111. NSF Evaluation Staff, "Post-Performance Evaluation of Behavioral and Neural Sciences," 1985, unpublished manuscript.
  112. OECD, Evaluation of Research: A Selection of Current Priorities (Paris: OECD, 1987).
  113. E. Ormala, "Evaluation for Selection in Technical R and D," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  114. OTA, Research Funding as an Investment: Can We Measure the Returns? (Washington, DC: OTA, 1986).
  115. P. G. Pardey, "The Agricultural Knowledge Production Function: An Empirical Look," Review of Economics and Statistics, 1989, pp. 453–461.
  116. P. G. Pardey and B. Craig, "Causal Relations between Public Sector Agricultural Research Expenditures and Output," American Journal of Agricultural Economics, vol. 71, 1989, pp. 9–19.
  117. K. Pavitt, "Patent Statistics as Indicators of Innovative Activities," Scientometrics, vol. 7, 1985, pp. 77–99.
  118. D. C. Phillips and J. Turney, "Bibliometrics and UK Science Policy," Scientometrics, vol. 14, 1988, pp. 185–200.
  119. A. P. Power, "A Strategy for Developing the System of Assessment in the Ministry of Agriculture, Fisheries, and Food," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  120. M. Quatre, "Evaluation: The French Experience," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  121. A. L. C. Quigley, "Evaluation of Government Funded R and D in the United Kingdom," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  122. J. Rae, "R&D Assessment in the UK Department of Energy," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  123. H. Rigter, "Evaluation of Performance of Health Research in the Netherlands," Research Policy, vol. 15, 1986, pp. 33–48.
  124. J. D. Roessner, "The Multiple Functions of Formal Aids to Decisionmaking in Public Agencies," IEEE Transactions on Engineering Management, vol. 32, 1985, pp. 124–128.
  125. J. D. Roessner, "Evaluating Government Innovation Programs: Lessons from the U.S. Experience," Research Policy, vol. 18, 1989, pp. 343–359.
  126. H. Rothman, ABRC Policy Study: Further Studies on the Evaluation and Measurement of Scientific Research (London: ABRC, 1985).
  127. J. P. Rushton et al., "Personality Characteristics Associated with High Research Productivity," in D. N. Jackson and J. P. Rushton (eds.), Scientific Excellence: Origins and Assessment (Beverly Hills: SAGE Publications, 1987).
  128. F. Schlie-Rosen, "Evaluation in Germany: Philosophy, Approaches, and Examples," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  129. B. J. Seldon, "A Nonresidual Approach to the Measurement of Social Returns to Research with Application to the Softwood Plywood Industry," Ph.D. thesis (Durham, NC: Duke University, 1985).
  130. W. R. Shadish, Jr., "The Perception and Evaluation of Quality in Science," undated, unpublished paper.
  131. L. Simon et al., "A Bibliometric Evaluation of the U.S.-Italy Cooperative Scientific Research Program," 1985, unpublished paper.
  132. W. Smith, "The Evaluation and Management of Mission-oriented Programmes: National Research Council of Canada: Strategies and Experiences," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  133. F. A. Spangenberg et al., "Some Incentives and Constraints of Scientific Performance in Departments of Economics," Parts 1 and 2, Scientometrics, vol. 18, 1990, pp. 241–268.
  134. S. Sperlagh, "Evaluation of Inter-Academy (USA-Hungary) Research and Exchange Programmes," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  135. L. W. Steele, "Evaluating the Technical Operation," Research-Technology Management, vol. 31, 1988, pp. 11–18.
  136. P. Strangert, "The Framework of Evaluation: Experiences in Sweden," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  137. M. Tanaka, "Japanese-Style Evaluation Systems for R&D Projects: The MITI Experience," Research Policy, vol. 18, 1989, pp. 361–378.
  138. A. G. Thomas, "The Use of Output Measures in the Review of Science," 1989, unpublished paper, presented to the ECE Seminar on Evaluation in the Management of R and D, April 3–7, 1989.
  139. P. Tindemans, "Some Experiences and Observations from Evaluation Mechanisms Applied to Dutch Science and Technology Policy," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  140. Y. Uchinaka, "The Structure of Assessment and Evaluation in Japan," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).
  141. U.K. Department of Trade and Industry, "Assessment of Science and Technology Support Programmes," in Evaluation of R&D: A Policymaker's Perspective (London: Her Majesty's Stationery Office, 1988).
  142. A. F. J. van Raan (ed.), Handbook of the Quantitative Study of Science and Technology (Amsterdam: Elsevier, 1987–1988).
  143. P. Vinkler, "Management System for a Scientific Research Institute Based on the Assessment of Scientific Publications," Research Policy, vol. 15, 1986, pp. 77–87.
  144. J. T. Wallmark and K. J. Sedig, "Quality of Research Measured by Citation Method and Peer Review," IEEE Transactions on Engineering Management, vol. 33, 1986, pp. 218–222.
  145. K. M. Watts and J. C. Higgins, "The Use of Advanced Management Techniques in R&D," Omega, vol. 15, 1987, pp. 21–29.
  146. T. Whiston, "Restructuring and Selectivity in Academic Science," SPRU, University of Sussex, 1988, unpublished paper.
  147. W. Zegveld, "Evaluation of the Netherlands Government Programme Aimed at Stimulating Information Technology," in Evaluation of R&D: A Policymaker's Perspective, Department of Trade and Industry (London: Her Majesty's Stationery Office, 1988).

Copyright information

© Springer Science+Business Media New York 1993

Authors and Affiliations

  • H. Averch
  1. Florida International University, USA
