Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing

  • Andrea Arcuri
  • Muhammad Zohaib Iqbal
  • Lionel Briand
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6435)


Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can potentially be executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (the baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show that, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
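Of the three strategies the abstract names, Adaptive Random Testing is the least widely known: instead of sampling test cases uniformly, it keeps the executed tests and, at each step, picks the candidate that is farthest from all of them, so the test suite stays diverse. The sketch below is a generic illustration of that idea, not the paper's implementation; the function names, the candidate-pool size, and the use of Euclidean distance over numeric inputs are assumptions.

```python
import random

def euclidean(a, b):
    """Distance between two numeric test inputs of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def adaptive_random_select(pool_size, executed, gen_input):
    """Generate a pool of random candidates and return the one whose
    minimum distance to any already-executed test is largest."""
    candidates = [gen_input() for _ in range(pool_size)]
    if not executed:
        return candidates[0]  # no history yet: any candidate will do
    return max(candidates,
               key=lambda c: min(euclidean(c, e) for e in executed))

# Usage: build a small, diverse suite of 2-D test inputs.
random.seed(0)
executed = []
for _ in range(5):
    t = adaptive_random_select(10, executed,
                               lambda: (random.random(), random.random()))
    executed.append(t)
```

The trade-off the paper's results hinge on is visible here: each selection costs `pool_size × len(executed)` distance computations, so the diversity gain must be weighed against test-case execution time.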


Keywords: Search-based software engineering · Branch distance · Model-based testing · Environment · Context · UML · MARTE · OCL



Copyright information

© IFIP International Federation for Information Processing 2010

Authors and Affiliations

  • Andrea Arcuri (1)
  • Muhammad Zohaib Iqbal (1, 2)
  • Lionel Briand (1, 2)
  1. Simula Research Laboratory, University of Oslo, Lysaker, Norway
  2. Department of Informatics, University of Oslo, Norway
