International Journal of Parallel Programming, Volume 39, Issue 2, pp 232–269

Parallel Iterator for Parallelizing Object-Oriented Applications

  • Nasser Giacaman
  • Oliver Sinnen


With the advent of multi-core processors, desktop application developers must finally face parallel computing and its challenges. A large portion of a program's computational load rests within iterative computations. In object-oriented languages these are commonly handled using iterators, which are inadequate for parallel programming. This paper presents a powerful Parallel Iterator concept for the parallel traversal of a collection of elements in object-oriented programs. The Parallel Iterator may be used with any collection type (even those that are inherently sequential), and it supports several scheduling schemes, which may even be selected dynamically at run-time. Additional features allow early termination of parallel loops, exception handling and reductions. With a slight contract modification, the Parallel Iterator interface imitates that of the Java-style sequential iterator. Together, these features require minimal, if any, code restructuring. Along with this ease of use, the results reveal negligible overhead and the expected inherent speedup.
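To illustrate the idea, the following is a minimal, hypothetical sketch (not the authors' implementation) of a thread-safe iterator in Java. The slight contract modification mentioned in the abstract is assumed to mean that `hasNext()` atomically reserves the next element for the calling thread, so each thread must follow every successful `hasNext()` with exactly one `next()`. The class and method names here are illustrative only.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class ParallelIteratorSketch {

    // Hypothetical Parallel Iterator: hasNext() atomically reserves an index,
    // making the usual hasNext()/next() pattern safe across threads.
    static class ParallelIterator<E> {
        private final List<E> elements;
        private final AtomicInteger cursor = new AtomicInteger(0);
        // Index reserved by the calling thread's last successful hasNext().
        private final ThreadLocal<Integer> reserved = new ThreadLocal<>();

        ParallelIterator(List<E> elements) {
            this.elements = elements;
        }

        public boolean hasNext() {
            int i = cursor.getAndIncrement(); // atomic reservation
            if (i < elements.size()) {
                reserved.set(i);
                return true;
            }
            return false;
        }

        public E next() {
            return elements.get(reserved.get());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> data = new java.util.ArrayList<>();
        for (int i = 1; i <= 100; i++) data.add(i);

        ParallelIterator<Integer> it = new ParallelIterator<>(data);
        AtomicLong sum = new AtomicLong(0);

        // Each worker runs the ordinary sequential-iterator idiom unchanged.
        Runnable worker = () -> {
            while (it.hasNext()) {
                sum.addAndGet(it.next());
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println(sum.get()); // 5050 = 1 + 2 + ... + 100
    }
}
```

Because each index is handed out exactly once by `getAndIncrement()`, every element is processed by exactly one thread, and the loop body looks identical to a sequential Java iterator loop. A dynamic (self-scheduling) policy falls out naturally; static or chunked scheduling would reserve ranges of indices instead of single ones.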


Keywords: Object-oriented · Desktop applications · Parallel Iterator · Loop scheduling





Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand
