Rapid Development of Application-Specific Network Performance Tests

  • Scott Pakin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3515)

Abstract

Analyzing the performance of networks and messaging layers is important for diagnosing anomalous performance in parallel applications. However, general-purpose benchmarks rarely provide sufficient insight into any particular application’s behavior. What is needed is a facility for rapidly developing customized network performance tests that mimic an application’s use of the network but allow for easier experimentation to help determine performance bottlenecks.

In this paper, we contrast four approaches to developing customized network performance tests: straight C, C with a helper library, Python with a helper library, and a domain-specific language. We show that while a special-purpose library can result in significant improvements in functionality without sacrificing language familiarity, the key to facilitating rapid development of network performance tests is to use a domain-specific language designed expressly for that purpose.
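To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of what a hand-rolled "straight language" test looks like: a ping-pong latency microbenchmark written in plain Python over a local socket pair. Real tests of this kind would run over MPI or a comparable messaging layer across nodes; the point is only to illustrate the boilerplate (timing loop, message exchange, statistics) that a helper library or domain-specific language such as coNCePTuaL would otherwise handle.

```python
# Hypothetical illustration: a hand-coded ping-pong latency test in
# plain Python. A production test would exchange messages between
# nodes via a messaging layer (e.g. MPI); a local socket pair is used
# here only to keep the sketch self-contained and runnable.
import socket
import time

def pingpong_latency(iterations=1000, msg=b"x"):
    """Return the mean round-trip time (seconds) over a socket pair."""
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(iterations):
        a.sendall(msg)       # "ping" from one endpoint...
        b.recv(len(msg))
        b.sendall(msg)       # ...and "pong" back
        a.recv(len(msg))
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / iterations

if __name__ == "__main__":
    print(f"mean round-trip time: {pingpong_latency() * 1e6:.1f} us")
```

Even this toy version must hand-manage endpoints, the timing loop, and the derived statistic; varying message sizes, communication patterns, or reporting formats multiplies that boilerplate, which is the experimentation cost the paper's comparison addresses.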

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Scott Pakin
  1. Los Alamos National Laboratory, Los Alamos, USA