
A Proposed Context-Awareness Taxonomy for Multi-data Fusion in Smart Environments: Types, Properties, and Challenges

  • Doaa Mohey El-Din
  • Aboul Ella Hassanein
  • Ehab E. Hassanien
Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 295)

Abstract

This paper presents a new taxonomy for the context-awareness problem in data fusion. It covers the fusion of data extracted from multiple sensory data types, such as images, video, and text. Any smart environment generates big data of various types collected from multiple sensors, and fusing this data normally requires domain experts because of the context-awareness problem: each smart environment has specific characteristics, conditions, and roles that demand expert human knowledge of its context. The proposed taxonomy tackles this problem through three dimension classes for data fusion: the types of generated data, data properties such as redundancy (calling for reduction) or noise, and the challenges involved. It abstracts away from the specific context domain and offers solutions for fusing big data through the classes of the proposed taxonomy. The taxonomy was derived from a study of sixty-six research papers covering various fusion types and data-fusion properties. The paper also presents new research challenges of multi-data fusion.
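
The taxonomy is described only in prose here, but its three dimension classes lend themselves naturally to a small data model. The sketch below is a minimal, illustrative Python rendering under our own assumptions: every class name, enumeration member, and field is hypothetical and chosen for illustration, not taken from the chapter.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    # Dimension 1: types of generated data (hypothetical members).
    class DataType(Enum):
        IMAGE = auto()
        VIDEO = auto()
        TEXT = auto()
        AUDIO = auto()

    # Dimension 2: data properties, e.g. noise or redundancy that
    # calls for reduction (hypothetical members).
    class DataProperty(Enum):
        NOISY = auto()
        REDUNDANT = auto()
        INCOMPLETE = auto()

    # Dimension 3: fusion challenges (hypothetical members).
    class Challenge(Enum):
        CONFLICT_RESOLUTION = auto()
        REAL_TIME_PROCESSING = auto()
        CONTEXT_AMBIGUITY = auto()

    @dataclass
    class FusionTask:
        """A fusion task classified along the three taxonomy
        dimensions, independent of any specific context domain."""
        sources: list[DataType]
        properties: set[DataProperty] = field(default_factory=set)
        challenges: set[Challenge] = field(default_factory=set)

        def is_multimodal(self) -> bool:
            # Multimodal fusion combines more than one data type.
            return len(set(self.sources)) > 1

    # Example: fusing video and text streams from a smart environment.
    task = FusionTask(
        sources=[DataType.VIDEO, DataType.TEXT],
        properties={DataProperty.NOISY, DataProperty.INCOMPLETE},
        challenges={Challenge.CONTEXT_AMBIGUITY},
    )
    print(task.is_multimodal())  # True

Classifying a task along these dimensions rather than by its application domain reflects the taxonomy's central move: the same description applies whether the data comes from a telemedicine system or a smart city.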

Keywords

Data fusion · Big data · Telemedicine · Internet of Things · Smart environment · Data visualization


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  • Doaa Mohey El-Din (1, corresponding author)
  • Aboul Ella Hassanein (1)
  • Ehab E. Hassanien (1)

  1. Information Systems Department, Faculty of Computers and Artificial Intelligence, Cairo University, Cairo, Egypt
