Optimal Worksharing of DNA Sequence Analysis on Accelerated Platforms

  • Suejb Memeti
  • Sabri Pllana
  • Joanna Kołodziej
Chapter
Part of the Computer Communications and Networks book series (CCN)

Abstract

In this chapter, we describe an optimized approach for DNA sequence analysis on a heterogeneous platform accelerated with the Intel Xeon Phi. Such platforms commonly comprise one or two general-purpose CPUs and one or more Xeon Phi coprocessors. Our parallel DNA sequence analysis algorithm is based on finite automata and finds patterns in large-scale DNA sequences. To determine the optimal worksharing (that is, the DNA sequence fractions assigned to the host and the accelerating device), we propose a solution that combines combinatorial optimization and machine learning. The objective function that we aim to minimize is the execution time of the DNA sequence analysis. We use combinatorial optimization to efficiently explore the space of system configurations and machine learning to determine a near-optimal configuration for executing the DNA sequence analysis. We evaluate our approach empirically using real-world DNA segments of various organisms. For experimentation, we use an accelerated platform that comprises two 12-core Intel Xeon E5 CPUs and an Intel Xeon Phi 7120P coprocessor with 61 cores.
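To make the worksharing idea concrete, the sketch below shows how a simulated-annealing search (one of the chapter keywords) could pick the fraction of the DNA sequence to keep on the host, minimizing a predicted execution time. It is a minimal illustration, not the authors' implementation: the `predict_time` surrogate, its throughput and offload-overhead constants, and the function names are hypothetical stand-ins for the chapter's machine-learning-based execution-time predictor.

```python
import math
import random


def predict_time(fraction_host, seq_len):
    """Hypothetical surrogate for predicted execution time (seconds).

    In the chapter, a machine-learning model trained on measured runs
    predicts the execution time of a system configuration; here an
    assumed analytic stand-in keeps the sketch self-contained.
    """
    host_rate = 2.0e9      # assumed host throughput, characters/s
    phi_rate = 3.5e9       # assumed Xeon Phi throughput, characters/s
    offload_cost = 0.8     # assumed fixed data-transfer/offload overhead, s
    t_host = fraction_host * seq_len / host_rate
    t_phi = offload_cost + (1.0 - fraction_host) * seq_len / phi_rate
    return max(t_host, t_phi)  # host and coprocessor work concurrently


def anneal_worksharing(seq_len, steps=2000, t_start=1.0, t_end=1e-3):
    """Simulated-annealing search for the host fraction with the
    smallest predicted execution time."""
    frac = 0.5
    cur_time = predict_time(frac, seq_len)
    best_frac, best_time = frac, cur_time
    for i in range(steps):
        temp = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        cand = min(1.0, max(0.0, frac + random.uniform(-0.05, 0.05)))
        cand_time = predict_time(cand, seq_len)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_time < cur_time or random.random() < math.exp((cur_time - cand_time) / temp):
            frac, cur_time = cand, cand_time
            if cur_time < best_time:
                best_frac, best_time = frac, cur_time
    return best_frac, best_time


if __name__ == "__main__":
    fraction, seconds = anneal_worksharing(seq_len=3_000_000_000)  # roughly human-genome scale
    print(f"host fraction ≈ {fraction:.2f}, predicted time ≈ {seconds:.2f} s")
```

In this sketch the objective is the maximum of the host and coprocessor times, reflecting that both process their sequence fractions concurrently; the quality of the result therefore depends entirely on how well the learned time predictor models the real platform.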

Keywords

Execution Time · Simulated Annealing · System Configuration · Deterministic Finite Automaton · Heterogeneous Platform

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Department of Computer Science, Linnaeus University, Växjö, Sweden
  2. Cracow University of Technology, Cracow, Poland
