Dynamic Data Driven Application Systems (DDDAS) for Multimedia Content Analysis

  • Erik Blasch
  • Alex Aved
  • Shuvra S. Bhattacharyya
Chapter

Abstract

With ubiquitous data acquired from sensors, there is an ever-increasing ability to abstract content from the environment. Multimedia content exists in many data forms, such as surveillance data from video, reports from documents and Twitter, and signals from systems. Current discussions revolve around dynamic data-driven applications systems (DDDAS), big data, cyber-physical systems, and the Internet of Things (IoT), each of which requires data modeling. Key elements include a computing environment matched to the application, the time horizon, and the queries for which the data is needed. In this chapter, we discuss the DDDAS paradigm of sensor measurements, statistical processing, environmental modeling, and software implementation to deliver content on demand, given the context of the environment. DDDAS provides a framework to control the information flow for rapid decision making, model updating, and preparation for unexpected queries. Experimental results demonstrate that the DDDAS-based Live Video Computing Database Modeling approach enables data discovery, model updates, and query-based flexibility for awareness of unknown situations.
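To make the feedback loop described in the abstract concrete, the following is a minimal sketch of a DDDAS-style measurement-to-model-to-tasking cycle. It is an illustrative assumption, not the chapter's Live Video Computing implementation: the scalar sensor, the running-mean model, and the variance-based tasking rule (names such as RunningModel, read_sensor, dddas_loop, and var_threshold are hypothetical) stand in for the video sensing, statistical processing, and query-driven steering the chapter discusses.

```python
# Illustrative DDDAS-style feedback loop: a model is updated from streaming
# sensor data, and the model state in turn steers how the sensor is tasked.
# All names and the toy model are illustrative assumptions, not the chapter's
# Live Video Computing Database Modeling system.
import random


class RunningModel:
    """Toy environmental model: tracks a running mean and variance."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's method)

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else float("inf")


def read_sensor() -> float:
    """Stand-in for a real measurement source."""
    return random.gauss(5.0, 1.0)


def dddas_loop(steps: int = 50, var_threshold: float = 0.5) -> RunningModel:
    model = RunningModel()
    samples_per_step = 1
    for _ in range(steps):
        # Measurement: acquire data at the current tasking rate.
        for _ in range(samples_per_step):
            model.update(read_sensor())
        # Steering: high model uncertainty requests more data next step,
        # low uncertainty relaxes sensor tasking (the DDDAS feedback).
        samples_per_step = 4 if model.variance > var_threshold else 1
    return model


if __name__ == "__main__":
    m = dddas_loop()
    print(f"estimated mean={m.mean:.2f}, variance={m.variance:.2f}")
```

The element being illustrated is the closed loop: the model state computed from the data determines how subsequent data are acquired, which is the core of the DDDAS paradigm of combined measurement, modeling, and steering.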

Notes

Acknowledgements

This work is partly supported by the Air Force Office of Scientific Research (AFOSR) under the Dynamic Data Driven Application Systems program and the Air Force Research Lab. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the United States Air Force.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Erik Blasch (1)
  • Alex Aved (2)
  • Shuvra S. Bhattacharyya (3, 4)
  1. Air Force Office of Scientific Research, Air Force Research Laboratory, Arlington, USA
  2. Information Directorate, Air Force Research Laboratory, Rome, USA
  3. Department of Electrical and Computer Engineering, University of Maryland, College Park, USA
  4. Department of Pervasive Computing, Tampere University of Technology, Tampere, Finland
