Use of Quantitative Methods to Support Research Decisions in Business and Government

  • David Roessner
Chapter

Abstract

Modern industrial nations spend about 2.3% of their gross national products on research and development. In the U.S., concern over the nation’s economic performance has been linked to the size and composition of expenditures for research. At issue is not whether the U.S. should make such expenditures, but how large they should be; how much should be devoted to basic research, applied research, development, and other phases of knowledge generation and application; how research resources such as money and manpower should be allocated among scientific fields, industries, and areas of national concern (defense, energy, health); and, more broadly, what the relative roles of government and the private sector should be in supporting the national research effort.

Keywords

Quantitative Method · Quantitative Model · Consumer Surplus · Support Research · Quantitative Technique



Copyright information

© Springer Science+Business Media New York 1993

Authors and Affiliations

  • David Roessner
  1. Georgia Institute of Technology, USA
