
Reexamining Computational Support for Intelligence Analysis: A Functional Design for a Future Capability

  • James Llinas
  • Galina Rogova
  • Kevin Barry
  • Rachel Hingst
  • Peter Gerken
  • Alicia Ruvinsky

Abstract

We explore the technological bases for combining argumentation with information fusion techniques to improve intelligence analysis. We review a range of existing tools, framed by several examples of modern intelligence analyses drawn from different environments. Current tools fail to support the computational association of relations among entities that is needed to assemble an integrated situational picture. Most operate on single-source entity streams, automatically linking bounded entity pairs and thereby enabling a limited degree of “data fusion”, but with little rigor. They also tend to accept pre-processed entity extractions as correct and to identify intuitive associations among entities as if uncertainty did not exist. Because such tools discover only a small number of entity associations and treat them as nearly certain, the remaining complexity is left to the human analysts, who become cognitively overloaded as they manually assemble the selected situational interpretations into a comprehensive narrative. Our goal is to automate the integration of such complex hypotheses. We review the literature on computational support for argumentation and, as part of a combined approach within an integrated functional design, nominate a belief- and story-based subsystem designed to support hybrid argumentation. To deal with the largely textual data foundation of intelligence analysis, we describe how a ‘hard plus soft’ information fusion system previously developed by the authors (combining sensor-derived hard information with textual soft information) could be incorporated into the same functional design. Together, these two capabilities form a scheme that arguably overcomes many of the deficiencies we cite and offers considerable improvement in the efficiency and effectiveness of intelligence analysis.
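
To make the belief-based aspect of the proposed design concrete, the following is a minimal illustrative sketch, not taken from the chapter itself, of one way evidence from a hard (sensor-derived) source and a soft (text-derived) source about a single entity-association hypothesis could be combined using an unnormalized, transferable-belief-model-style conjunctive rule. The source roles, mass values, and function names are hypothetical.

```python
"""Illustrative sketch: combining 'hard' (sensor) and 'soft' (text) evidence
about whether two entity reports refer to the same real-world entity, using
an unnormalized (TBM-style) conjunctive combination of belief masses.
All numbers and source roles are hypothetical."""

from itertools import product

# Frame of discernment: the two reports refer to the SAME entity or to DIFFerent ones.
# Focal sets are frozensets over {"SAME", "DIFF"}; the full frame expresses ignorance.
FRAME = frozenset({"SAME", "DIFF"})

def combine_tbm(m1: dict, m2: dict) -> dict:
    """Unnormalized conjunctive combination: mass flows to set intersections;
    mass assigned to the empty set is retained as a measure of conflict."""
    combined: dict = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    return combined

# Hypothetical mass from a hard (sensor/track) associator: fairly confident the
# reports match, with some mass reserved for ignorance (the whole frame).
m_hard = {frozenset({"SAME"}): 0.6, FRAME: 0.4}

# Hypothetical mass from a soft (text) extractor: weak evidence the entities differ.
m_soft = {frozenset({"DIFF"}): 0.3, FRAME: 0.7}

if __name__ == "__main__":
    fused = combine_tbm(m_hard, m_soft)
    for focal, mass in fused.items():
        label = "CONFLICT (empty set)" if not focal else "/".join(sorted(focal))
        print(f"{label:>22}: {mass:.2f}")
```

In this toy run, the mass assigned to the empty set quantifies the disagreement between the hard and soft sources; a hybrid argumentation layer could surface such conflict to the analyst rather than normalizing it away.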

Notes

Acknowledgement

This publication results from research supported by the Naval Postgraduate School Assistance Grant No. N00244-15-1-0051 awarded by the NAVSUP Fleet Logistics Center San Diego (NAVSUP FLC San Diego). The views expressed in written materials or publications, and/or made by speakers, moderators, and presenters, do not necessarily reflect the official policies of the Naval Postgraduate School nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • James Llinas (1)
  • Galina Rogova (1)
  • Kevin Barry (2, email author)
  • Rachel Hingst (2)
  • Peter Gerken (2)
  • Alicia Ruvinsky (2)
  1. Center for Multisource Information Fusion (CMIF), State University of New York at Buffalo, Buffalo, USA
  2. Lockheed Advanced Technology Laboratories (ATL), Cherry Hill, USA
