Isoefficiency and the Parallel Descartes Method

  • Thomas Decker
  • Werner Krandick
Conference paper

Abstract

The efficiency of a parallel algorithm with input x on P ≥ 1 processors is defined as \(E(x,P) = \frac{{T(x,1)}}{{PT(x,P)}}\) where T(x, P) denotes the time it takes to perform the computation using P processors and T(x, 1) is the sequential execution time. The efficiency of many parallel algorithms decreases when the number of processors increases and the sequential execution time is fixed; likewise, the efficiency increases when the sequential computing time increases and the number of processors is fixed. The term scalability refers to this change of efficiency (Sahni & Thanvantri, 1996). Intuitively, a parallel algorithm is scalable if it stays efficient when the number of processors and the sequential execution time are both increased.
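The definition above can be computed directly. The sketch below uses a hypothetical timing model (perfect work division plus a made-up per-processor overhead; all numbers are illustrative, not from the paper) to show the behavior the abstract describes: for a fixed sequential time, efficiency falls as the processor count grows.

```python
def efficiency(t_seq, p, t_par):
    """Parallel efficiency E(x, P) = T(x, 1) / (P * T(x, P))."""
    return t_seq / (p * t_par)

def t_par_model(t_seq, p, overhead=0.1):
    """Hypothetical parallel time: ideal split plus a linear
    per-processor communication overhead (illustrative only)."""
    return t_seq / p + overhead * p

t_seq = 100.0
effs = [efficiency(t_seq, p, t_par_model(t_seq, p)) for p in (2, 8, 32)]
# With T(x, 1) fixed, the computed efficiencies decrease as P grows.
```

To keep the algorithm scalable in the sense above, one would instead grow the problem (and hence the sequential time) along with the processor count; the isoefficiency function makes that growth rate precise.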

Keywords

Computing time, parallel algorithm, search tree, efficiency function, node computation


References

  1. Collins, G.E. (1974). The computing time of the Euclidean algorithm. SIAM Journal on Computing, 3(1), 1–10.
  2. Collins, G.E., & Akritas, A.G. (1976). Polynomial real root isolation using Descartes’ rule of signs. In R.D. Jenks (Ed.), Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation (pp. 272–275). ACM.
  3. Collins, G.E., Johnson, J.R., & Küchlin, W. (1992). Parallel real root isolation using the coefficient sign variation method. In R.E. Zippel (Ed.), Computer Algebra and Parallelism, LNCS 584, pp. 71–87. Springer-Verlag.
  4. Culler, D.E., Karp, R.M., Patterson, D., Sahay, A., Santos, E.E., Schauser, K.E., Subramonian, R., & von Eicken, T. (1996). LogP: A practical model of parallel computation. Communications of the ACM, 39(11), 78–85.
  5. Decker, T., & Krandick, W. (1999). Parallel real root isolation using the Descartes method. In P. Banerjee, V.K. Prasanna, & B.P. Sinha (Eds.), High Performance Computing-HiPC’99, LNCS 1745, pp. 261–268. Springer-Verlag.
  6. Grama, A.Y., Gupta, A., & Kumar, V. (1993). Isoefficiency: Measuring the scalability of parallel algorithms and architectures. IEEE Parallel and Distributed Technology, 1(3), 12–21.
  7. Gupta, A., Karypis, G., & Kumar, V. (1997). Highly scalable parallel algorithms for sparse matrix factorization. IEEE Transactions on Parallel and Distributed Systems, 8(5), 502–520.
  8. Gupta, A., & Kumar, V. (1993). The scalability of FFT on parallel computers. IEEE Transactions on Parallel and Distributed Systems, 4(8), 922–932.
  9. Johnson, J.R., & Krandick, W. (1997). Polynomial real root isolation using approximate arithmetic. In W. Küchlin (Ed.), International Symposium on Symbolic and Algebraic Computation (pp. 225–232). ACM.
  10. Krandick, W. (1995). Isolierung reeller Nullstellen von Polynomen. In J. Herzberger (Ed.), Wissenschaftliches Rechnen (pp. 105–154). Akademie Verlag, Berlin.
  11. Kruskal, C.P., Rudolph, L., & Snir, M. (1990). A complexity theory of efficient parallel algorithms. Theoretical Computer Science, 71(1), 95–132.
  12. Kumar, V., Grama, A., Gupta, A., & Karypis, G. (1994). Introduction to Parallel Computing: Design and Analysis of Algorithms. Redwood City, CA, USA: Benjamin/Cummings.
  13. Kumar, V., Nageshwara Rao, V., & Ramesh, K. (1988). Parallel depth first search on the ring architecture. In D.H. Bailey (Ed.), Proceedings of the 1988 International Conference on Parallel Processing (Vol. III, pp. 128–132). The Pennsylvania State University Press.
  14. Kumar, V., & Singh, V. (1991). Scalability of parallel algorithms for the all-pairs shortest-path problem. Journal of Parallel and Distributed Computing, 13, 124–138.
  15. Mahapatra, N.R., & Dutt, S. (1997). Scalable global and local hashing strategies for duplicate pruning in parallel A* graph search. IEEE Transactions on Parallel and Distributed Systems, 8(7), 738–756.
  16. Sahni, S., & Thanvantri, V. (1996). Performance metrics: Keeping the focus on runtime. IEEE Parallel and Distributed Technology, 4(1), 43–56.
  17. Schreiner, W., Mittermaier, C., & Winkler, F. (2000). On solving a problem in algebraic geometry by cluster computing. In A. Bode, T. Ludwig, W. Karl, & R. Wismüller (Eds.), Euro-Par 2000 Parallel Processing, LNCS 1900, pp. 1196–1200. Springer-Verlag.
  18. Yang, T.-R., & Lin, H.-X. (1997). Isoefficiency analysis of CGLS algorithm for parallel least squares problems. In B. Hertzberger & P. Sloot (Eds.), High-Performance Computing and Networking, LNCS 1225, pp. 452–461. Springer-Verlag.

Copyright information

© Springer-Verlag Wien 2001
