A Hardware Instrumentation Approach for Performance Measurement of a Shared-Memory Multiprocessor

  • George Nacht
  • Alan Mink

Abstract

Performance measurement requires a mechanism (tool) to obtain performance information (raw samples). The performance information can be placed into two orthogonal categories: (1) trace measurement, and (2) resource utilization. Trace measurement is concerned with the activities of the application or system processes, and provides information such as program execution time, execution path, response time, etc. Resource utilization is concerned with the detailed operation of the hardware, and provides information such as cache hit ratios, access delays, duty cycles, etc. Roberts [ROB86] provides a more complete discussion of the performance information to be measured.



References

  1. [ABL85]
    Ableidinger, B., Agarwal, N. and Nobles, C., Real-time analyzer furnishes high-level look at software operation, Electronic Design, Sept. 19, 1985, pp 117–131.
  2. [AGA75]
    Agajanian, A., A Bibliography on System Performance Evaluation, IEEE Computer, Vol. 8, No. 11, Nov. 1975, pp 63–74.
  3. [AND82]
    Anderson, T., The Design of a Multiprocessor Development System, Master's Thesis, MIT, Dept. of Electrical Engineering and Computer Science, Sept. 1982.
  4. [BUR84]
    Burkhart, H. and Milien, R., High-Level Language Monitoring: Design Concepts and Case Study, in Advances in Microprocessing and Microprogramming, Myhrhaug, B. and Wilson, D. (eds.), Elsevier Science Publishers B.V. (North Holland), 1984, pp 177–185.
  5. [CAR87]
    Carpenter, R., Performance Measurement Instrumentation for Multiprocessor Computers, NBSIR 87-3627, National Bureau of Standards, Gaithersburg, MD, Aug. 1987.
  6. [FRA82]
    Franta, W., Berg, H. and Wood, W., Issues and Approaches to Distributed Testbed Instrumentation, IEEE Computer, Vol. 15, No. 10, Oct. 1982, pp 71–81.
  7. [FED75]
    FEDSIM, Hardware Monitor Specifications Comparison: Tesdata Systems Corp. and Comress, Div. of Comten, Inc., Federal Computer Performance Evaluation and Simulation Center, 6118 Franconia Rd., Alexandria, Va. 22310, USA, Report No. R-75-1, Jan. 1975.
  8. [FER78]
    Ferrari, D., Computer System Performance Evaluation, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1978.
  9. [FRO83]
    Fromm, H., et al., Experiences with Performance Measurement and Modeling of a Processor Array, IEEE Trans. on Computers, Vol. C-32, No. 1, Jan. 1983, pp 15–31.
  10. [GRE86]
    Gregoretti, F., Maddaleno, F. and Zamboni, M., Monitoring Tools For Multiprocessors, Euromicro Journal: Microprocessing and Microprogramming, Vol. 18, No. 1–5, Dec. 1986, pp 409–416.
  11. [HER82]
    Hercksen, U., Klar, R., Kleinoder, W., and Kneibl, K., Measuring Simultaneous Events in a Multiprocessor System, Proc. of 1982 ACM SIGMETRICS Conf. on Measurement and Modelling of Computer Systems, Aug. 1982, Seattle, Wash., pp 77–87.
  12. [HUG80]
    Hughes, J., Diamond: A Digital Analyzer And Monitoring Device, Perf. Evaluation Review, Vol. 9, No. 2, 7th Intern'l Symp. on Computer Performance Modelling, Measurement, and Evaluation, Toronto, Ontario, Canada, May 1980, pp 27–34.
  13. [KLA86]
    Klar, R. and Luttenberger, N., VLSI-based Monitoring of the Inter-process-Communication in Multi-Microcomputer Systems with Shared Memory, EUROMICRO Journal: Microprocessing and Microprogramming, Vol. 18, No. 1–5, 1986, pp 195–204.
  14. [KRU87]
    Kruskal, C. and Smith, C., On the Notion of Granularity, NBSIR 87-3605, National Bureau of Standards, Gaithersburg, MD, July 1987.
  15. [MIN86]
    Mink, A., Roberts, J., Draper, J. and Carpenter, R., Simple Multi-Processor Performance Measurement Techniques and Examples of Their Use, NBSIR 86-3416, National Bureau of Standards, Gaithersburg, MD, July 1986.
  16. [MIN87]
    Mink, A., Draper, J., Roberts, J. and Carpenter, R., Hardware Assisted Multiprocessor Performance Measurements, to appear in Proc. of Performance 87, Brussels, Belgium, Dec. 1987.
  17. [MIT86]
    Mitchell, S., SySM Functional Requirements Description, Harris Corp., P.O. Box 98000, Melbourne, Fl. 32902, Feb. 1986.
  18. [NOE74]
    Noe, J., Acquiring And Using A Hardware Monitor, Datamation, Vol. 20, No. 4, Apr. 1974, pp 89–95.
  19. [NOE86]
    Noelcke, G., Debug System Targets Multiprocessor Design, Computer Design, Nov. 1, 1986, pp 105–114.
  20. [NUT75]
    Nutt, G., Tutorial: Computer System Monitors, IEEE Computer, Vol. 8, No. 11, Nov. 1975, pp 51–61.
  21. [ROB86]
    Roberts, J., Performance Measurement Techniques for Multi-Processor Computers, NBSIR 85-3296, National Bureau of Standards, Gaithersburg, MD, Feb. 1986.
  22. [SCH83]
    Schrott, G. and Tempelmeier, T., Monitoring of Real Time Systems By a Separate Processor, Proc. of the 12th Intern'l Federation of Automatic Controls/IFIP Workshop: Real Time Programming 1983, Hatfield, UK, Mar. 1983, pp 69–79.
  23. [SEG83]
    Segall, Z., Singh, A., Snodgrass, R., Jones, A., and Siewiorek, D., An Integrated Instrumentation Environment for Multiprocessors, IEEE Trans. on Computers, Vol. C-32, No. 1, Jan. 1983, pp 4–14.

Copyright information

© Plenum Press, New York 1989

Authors and Affiliations

  • George Nacht
  • Alan Mink

  1. National Computer and Telecommunications Laboratory, National Institute of Standards and Technology (formerly National Bureau of Standards), Gaithersburg, USA
