Journal of Grid Computing, Volume 10, Issue 4, pp 647–664

Distributed Application Runtime Environment (DARE): A Standards-based Middleware Framework for Science-Gateways



Gateways have been able to provide efficient and simplified access to distributed and high-performance computing resources. They have been shown to support many common and advanced requirements, and have proven successful as a shared access mode to production cyberinfrastructure such as TG/XSEDE. There are two primary challenges in the design of effective and broadly usable gateways: the first revolves around the creation of interfaces that capture existing and future usage modes so as to support the desired scientific investigation. The second challenge, and the focus of this paper, is the requirement to integrate the user interfaces with computational resources and specialized cyberinfrastructure in an interoperable, extensible and scalable fashion. Currently, there is no commonly usable middleware that enables seamless integration of different gateways with a range of distributed and high-performance infrastructures. The development of multiple similar gateways that can work over a range of production cyberinfrastructures, usage modes and application requirements is not scalable without an effective and extensible middleware. Some of the challenges that make using production cyberinfrastructure as a collective resource difficult are also responsible for the absence of middleware that enables multiple gateways to utilize these collective capabilities. We introduce the SAGA-based Distributed Application Runtime Environment (DARE) framework, with which gateways that seamlessly and effectively utilize scalable distributed infrastructure can be built. We discuss the architecture of DARE-based gateways and show, using several different prototypes (DARE-HTHP and DARE-NGS), how gateways can be constructed by utilizing the DARE middleware framework.


Keywords: DARE · Grids · Clouds · Science gateways · SAGA · Pilot Jobs · XSEDE · EGI · Standards · Interoperability · Middleware
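As a rough illustration of the standards-based job-submission pattern that SAGA provides, and on which middleware such as DARE builds, the sketch below submits a single job through the SAGA Python bindings. It is a minimal, generic example assuming the saga-python package and its local "fork" adaptor; it is not DARE's own gateway code, which would typically target remote adaptors and pilot-job abstractions instead.

    # Minimal, illustrative SAGA job submission (saga-python bindings).
    # Assumes the local "fork" adaptor; gateway middleware would address
    # remote resources (e.g. ssh, pbs) through the same interface.
    import saga

    def run_hello():
        # A job service represents a resource-manager endpoint.
        js = saga.job.Service("fork://localhost")

        # The job description declares what to run, in a backend-neutral way.
        jd = saga.job.Description()
        jd.executable = "/bin/echo"
        jd.arguments = ["hello from a SAGA-managed job"]
        jd.output = "saga_job.out"

        # Create, submit and wait for the job.
        job = js.create_job(jd)
        job.run()
        job.wait()
        print("job finished with state: %s" % job.state)

    if __name__ == "__main__":
        run_hello()

The same description can be handed to different adaptors without change, which is the interoperability property the abstract emphasizes.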





Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. Center for Computation and Technology, Louisiana State University, Baton Rouge, USA
  2. Texas Advanced Computing Center (TACC), The University of Texas at Austin, Austin, USA
  3. ECE, Rutgers University, New Brunswick, USA
