
Empirical Analysis of Defects in Handheld Device Applications

  • Mamta Pandey
  • Ratnesh Litoriya
  • Prateek Pandey
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1046)

Abstract

Defect prediction for conventional software has attracted considerable effort and a substantial body of literature, since defect prediction models can help project managers improve software quality. The literature on the defect proneness of handheld device (mobile) applications (henceforth HHDA), as opposed to conventional applications, is however insufficient, and no prior effort has characterized the distinct nature of handheld device app bugs and their distribution across the layered architecture of applications. This paper investigates the bug proneness of handheld device applications in contrast with conventional applications. The authors analyze the bug distribution of HHDAs and conventional applications across the different layers of the architecture, considering 15,591 bugs from 28 distinct applications and applying two-way ANOVA and a bootstrapping approach. The empirical analysis firmly establishes that mobile applications are more defect prone than conventional applications in the presentation layer.
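The abstract states the method only at a high level. As a concrete illustration, the following is a minimal Python sketch, under stated assumptions, of how a two-way ANOVA over app type and architectural layer, followed by a bootstrap confidence interval for the presentation layer, could be run on defect-count data of this shape. The layer names, the even split of the 28 applications, and the Poisson-generated counts are illustrative assumptions, not the paper's actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Build a hypothetical defect-count table: one row per (application,
# layer) pair. App types, layer names, and counts are illustrative
# placeholders, not the paper's data.
rng = np.random.default_rng(42)
rows = []
for app_type in ("mobile", "conventional"):
    for layer in ("presentation", "logic", "data"):
        for _ in range(14):  # assumed even split of the 28 apps
            rows.append({"app_type": app_type, "layer": layer,
                         "defects": rng.poisson(30)})
df = pd.DataFrame(rows)

# Two-way ANOVA: defect counts modeled by app type, layer, and
# their interaction.
model = ols("defects ~ C(app_type) * C(layer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bootstrap a 95% CI for the mobile-minus-conventional difference in
# mean presentation-layer defects (resampling with replacement).
pres = df[df["layer"] == "presentation"]
mobile = pres[pres["app_type"] == "mobile"]["defects"].to_numpy()
conv = pres[pres["app_type"] == "conventional"]["defects"].to_numpy()
diffs = [rng.choice(mobile, mobile.size).mean()
         - rng.choice(conv, conv.size).mean()
         for _ in range(10_000)]
print("95% bootstrap CI:", np.percentile(diffs, [2.5, 97.5]))
```

In a setup of this kind, a significant app_type × layer interaction in the ANOVA table, together with a bootstrap interval for the presentation-layer mean difference that excludes zero, would be the sort of evidence behind the paper's stated conclusion.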

Keywords

HHDA · Conventional applications · Defect prone · Bug distribution · Two-way ANOVA


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Mamta Pandey¹
  • Ratnesh Litoriya¹
  • Prateek Pandey¹

  1. Jaypee University of Engineering and Technology, Guna, India
