
Journal of Computer Science and Technology, Volume 15, Issue 1, pp. 73–83

Using confidence interval to summarize the evaluating results of DSM systems

  • Shi Weisong 
  • Tang Zhimin 
  • Shi Jinsong 

Abstract

Distributed Shared Memory (DSM) systems have gained wide acceptance by combining the scalability and low cost of distributed systems with the ease of use of a single address space. Many new hardware and software DSM systems have been proposed in recent years, and benchmarking is widely used to demonstrate the performance advantages of new systems. However, the method commonly used to summarize the measured results is the arithmetic mean of ratios, which is incorrect in some cases. Furthermore, many published papers merely list large amounts of data without summarizing them effectively, which greatly confuses users. In fact, many users want a single number as the conclusion, which older summarizing techniques do not provide. Therefore, a new data-summarizing technique based on the confidence interval is proposed in this paper. The new technique includes two data-summarizing methods: (1) the paired confidence interval method; (2) the unpaired confidence interval method. With this new technique, it can be concluded at some confidence level that one system is better than the others. Four examples are given to demonstrate the advantages of the technique. Furthermore, with the help of the confidence level, it is proposed that the benchmarks used for evaluating DSM systems be standardized so that convincing results can be obtained. The new summarizing technique is suitable not only for evaluating DSM systems, but also for evaluating other systems, such as memory systems and communication systems.
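The paired confidence interval method mentioned in the abstract can be sketched as follows. The idea is to run the same set of benchmarks on two systems, form the per-benchmark differences, and build a Student's-t confidence interval around their mean; if the interval excludes zero, one system is better than the other at that confidence level. This is a minimal illustration, not the paper's implementation: the benchmark timings, the `paired_ci` helper, and the hardcoded 95% t-table are all assumed for the example.

```python
import math
import statistics

# Two-sided 95% critical values of Student's t for small df (df = n - 1).
# Hardcoded here to keep the sketch dependency-free; a stats library
# would normally supply these.
T_95 = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 6: 2.447, 7: 2.365}

def paired_ci(a, b, t_table=T_95):
    """Paired confidence interval method: compare two systems measured
    on the same benchmarks by building a 95% CI for the mean of the
    paired differences a[i] - b[i]."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)              # sample standard deviation
    half = t_table[n - 1] * sd / math.sqrt(n) # half-width of the interval
    return mean - half, mean + half

# Hypothetical execution times (seconds) of two DSM systems
# on the same five benchmarks.
sys_a = [10.2, 8.7, 12.1, 9.5, 11.0]
sys_b = [11.5, 9.9, 12.8, 10.4, 12.3]

lo, hi = paired_ci(sys_a, sys_b)
# The interval lies entirely below zero, so at 95% confidence
# system A is faster than system B on this workload.
```

The unpaired variant applies when the two systems are measured on different (independent) sets of runs; it pools the two sample variances instead of differencing paired observations, but the decision rule is the same: check whether the resulting interval contains zero.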

Keywords

data-summarizing technique; performance evaluation; DSM system; confidence interval; benchmarking



Copyright information

© Science Press, Beijing China and Allerton Press Inc. 2000

Authors and Affiliations

  1. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, P.R. China
  2. Department of Applied Mathematics, East China University of Science and Technology, Shanghai, P.R. China
