
Abstract

A concept for characterizing, predicting and recognizing threat situations is developed. The goal is to establish a systematic approach to automation of some of these functions. Approaches are discussed to address the fundamental problems of (a) sparse and ambiguous indicators of potential or actualized threat activity buried in massive background data; and (b) uncertainty in threat capabilities, intent and opportunities.
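The abstract frames the second problem in terms of uncertain threat capabilities, intent, and opportunities. As a purely illustrative sketch (not the chapter's model; the attribute names and the naive independence assumption are introduced here only for illustration), uncertainty in these three factors might be represented and combined as follows:

```python
# Minimal sketch (illustrative only, not the chapter's model): a hypothesized
# (actor, target) threat scored from uncertain capability, intent, and
# opportunity estimates. Names and the independence assumption are assumptions.
from dataclasses import dataclass


@dataclass
class ThreatHypothesis:
    actor: str
    target: str
    p_capability: float   # estimated probability the actor can carry out the act
    p_intent: float       # estimated probability the actor intends to carry it out
    p_opportunity: float  # estimated probability conditions permit the act against this target

    def likelihood(self) -> float:
        # Naive combination assuming the three factors are independent (for illustration).
        return self.p_capability * self.p_intent * self.p_opportunity


h = ThreatHypothesis("actor-A", "facility-B",
                     p_capability=0.7, p_intent=0.2, p_opportunity=0.9)
print(f"indicative threat likelihood: {h.likelihood():.3f}")  # prints 0.126
```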


Notes

1.

    We argue in Steinberg (2015a) and Steinberg (2015b) that it makes more sense to distinguish inference problems on the basis of type of entity state variables rather than by type of entity: a given entity can be addressed at more than one level. For example, a vehicle can be the “target” of a level 1 fusion process if level 1 states (e.g., its location, velocity, size, weight, type, identity, or activity) are being estimated. The same vehicle can be the “target” of a level 2 process if it is considered as a complex or a structure, such that level 2 states (e.g., the relationships among its components or its subassemblies) are being estimated. It could also be the subject of a level 3 process if it is considered as a dynamic process, such that level 3 states (e.g., its course of action and outcome states) are being estimated. It could even be the subject of a level 4 process if it is considered as the system performing the estimation and level 4 states (e.g., the operating conditions and performance relative to users’ objectives) are being estimated. (A brief illustrative sketch of this distinction follows these notes.)

2.

    McMichael, Jarrad, and colleagues at CSIRO (Commonwealth Scientific and Industrial Research Organisation) developed methods for efficient storage, search, and manipulation of such dynamic belief networks, employing what they call grammatical methods for representing and reasoning about situations and scenarios (McMichael and Jarrad 2005).

3.

    Means of threat assessment in the absence of complete, high-confidence models of such factors are addressed in Sect. 15.6.

4.

    Section 15.6 discusses categories of inference problems in which predictive statistical models of entities and activities of concern are not available.

5.

    Technology Readiness Levels as defined by the US Department of Defense are (TRL1) basic principles observed and reported; (TRL2) technology concept and/or application formulated; (TRL3) analytical and experimental critical function and/or characteristic proof-of-concept; (TRL4) component and/or breadboard validation in laboratory environment; (TRL5) component and/or breadboard validation in relevant environment; (TRL6) system/subsystem model or prototype demonstration in relevant environment; (TRL7) system prototype demonstration in operational environment; (TRL8) actual system completed and qualified; and (TRL9) actual system proven in successful mission operations (paraphrased from Nolte et al. 2003).

6.

    Such inferences are examples of Category 3 reasoning as discussed in Sect. 15.7.

7.

    JPL defined Concept Maturity Levels (CMLs) as (CML1) Cocktail Napkin (i.e., rudimentary understanding of the concept); (CML2) Initial Feasibility; (CML3) Trade Space; (CML4) Point Design; (CML5) Concept Baseline; (CML6) Initial Design; and (CML7) Integrated Baseline (Wessen et al. 2010).

8.

    Interpersonal interactions and influences add further complexity and modeling issues. There are well-known issues of emergent group behavior that are arguably not predictable by constructive modeling.

9.

    Methods for evaluating and compensating for source data quality are discussed in Snidaro et al. (2015) and Steinberg et al. (2014).
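To make the distinction drawn in note 1 concrete, the following minimal sketch (illustrative only; the attribute names and structure are assumptions, not the chapter's notation) shows how a single entity can carry state variables at each of the four fusion levels, so that the "level" of an inference problem is determined by which of those variables is being estimated rather than by the entity itself:

```python
# Illustrative sketch for note 1 (names and structure are assumptions): the same
# entity can be the subject of a level 1, 2, 3, or 4 fusion process depending on
# which of its state variables are being estimated.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Entity:
    name: str
    # Level 1: attributes of the individual entity (location, velocity, identity, activity, ...)
    level1_states: Dict[str, object] = field(default_factory=dict)
    # Level 2: relationships among the entity's components or with other entities
    level2_relations: List[Tuple[str, str, str]] = field(default_factory=list)
    # Level 3: course-of-action / outcome states of the entity viewed as a dynamic process
    level3_outcomes: Dict[str, float] = field(default_factory=dict)
    # Level 4: states of the entity viewed as the estimating system itself
    # (operating conditions, performance relative to users' objectives)
    level4_performance: Dict[str, float] = field(default_factory=dict)


vehicle = Entity(
    name="vehicle-42",
    level1_states={"location": (34.05, -118.25), "speed_mps": 12.0, "type": "truck"},
    level2_relations=[("engine-7", "part_of", "vehicle-42"),
                      ("vehicle-42", "escorted_by", "vehicle-17")],
    level3_outcomes={"reaches_checkpoint_within_1h": 0.6},
    level4_performance={"track_position_error_m": 25.0},
)

# A fusion process estimating level1_states treats this vehicle as a level 1 "target";
# one estimating level3_outcomes treats the same vehicle as a level 3 subject, and so on.
print(vehicle.name, "level-1 states:", vehicle.level1_states)
```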

References

  • Bennett M, Waltz E (2007) Counterdeception principles and applications for national security. Artech House, Boston, MA

  • Boyd JR (1976) Destruction and creation. U.S. Army Command and General Staff College, 3 Sept 1976

  • de Becker G (1997) The gift of fear (and other survival signals that protect us from violence). Dell Publishing, New York

  • Endsley MR (2000) Theoretical underpinnings of situation awareness: a critical review. In: Situation awareness analysis and measurement. Lawrence Erlbaum Associates Inc., Mahwah, NJ

  • Garcia J, Snidaro L, Visentini I (2012) Exploiting context as binding element for multi-level data fusion. Proc. Fifteenth international conference on information fusion

  • Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York

  • Kschischang FR, Frey BJ, Loeliger HA (2001) Factor graphs and the sum-product algorithm. IEEE Trans Inf Theory 47:498–519

  • Lambert DA (2006) A unification of sensor and higher-level fusion. Proc. Ninth international conference on information fusion, Florence

  • Little EG, Rogova GL (2006) An ontological analysis of threat and vulnerability. Proc. Ninth international conference on information fusion, Florence

  • McMichael D, Jarrad G (2005) Grammatical methods for situation and threat analysis. Proc. Eighth international conference on information fusion, Philadelphia, June 2005

  • Moore KE (2005) Predictive Analysis for Naval Deployment Activities (PANDA), briefing to industry: PANDA overview, 16 Sept 2005

  • Nolte WL, Kennedy BC, Dziegiel RJ Jr (2003) Technology readiness level calculator. NDIA Systems Engineering Conference, 20 Oct 2003

  • Rogova G, Nimier V (2004) Reliability in information fusion: literature survey. Proc. Seventh international conference on information fusion, Stockholm

  • Rogova GL, Steinberg AN (2015) Formalization of ‘Context’ for information fusion. In: Snidaro L, Garcia J, Llinas J, Blasch E (eds) Context-enhanced information fusion. Springer, Berlin

  • Salerno JJ (2007) Where’s level 2/3 fusion – a look back over the past 10 years. Proc. Tenth international conference on information fusion, Quebec

  • Snidaro L, Garcia J, Llinas J (2015) Context-based information fusion: a survey and discussion. Inf Fusion 25:16–31

  • Steinberg AN (2005) An approach to threat assessment. Proc. Eighth international conference on information fusion, Philadelphia

  • Steinberg AN (2009) Foundations of situation and threat assessment. In: Liggins ME, Hall DL, Llinas J (eds) Handbook of multisensor data fusion, chapter 18. CRC Press, London

  • Steinberg AN (2013) Situation management for counter-piracy. In: Prediction and recognition of piracy efforts using collaborative human-centric information systems. IOS Press, Amsterdam

  • Steinberg AN (2014) Threat assessment with technical intelligence applications. Proc. CogSIMA, San Antonio, TX, Mar 2014

  • Steinberg AN (2015a) Levels? Proc. Eighteenth international conference on information fusion, Washington, DC

  • Steinberg AN (2015b) Situations and contexts. ISIF Perspectives on Information Fusion 1(1)

  • Steinberg AN, Bowman CL (2009) Revisions to the JDL data fusion model. In: Liggins ME, Hall DL, Llinas J (eds) Handbook of multisensor data fusion, chapter 3. CRC Press, London

  • Steinberg AN, Bowman CL, Blasch E, Morefield C, Morefield M, Haith G (2014) Adaptive context assessment and context management. Proc. Seventeenth international conference on information fusion, Salamanca, Spain

  • Steinberg AN, Rogova GL (2015) System-level use of contextual information. In: Snidaro L, Garcia J, Llinas J, Blasch E (eds) Context-enhanced information fusion. Springer, Berlin

  • Waltz E (2003) Knowledge management in the intelligence enterprise. Artech House, Boston, MA

  • Wessen RR, Adler M, Leising CJ, Sherwood B (2010) Measuring the maturity of robotic planetary mission concepts II. SpaceOps 2010 conference, hosted by NASA Marshall Space Flight Center, Huntsville, Alabama, 25–30 Apr 2010

  • Whaley B (1969) Stratagem: deception and surprise in war. Center for International Studies, MIT, Cambridge, MA

  • White FE (1988) A model for data fusion. Proc. First national symposium on sensor fusion, GACIAC, IIT Research Institute, Chicago


Author information

Correspondence to Alan N. Steinberg.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Steinberg, A.N. (2016). A Model for Threat Assessment. In: Rogova, G., Scott, P. (eds) Fusion Methodologies in Crisis Management. Springer, Cham. https://doi.org/10.1007/978-3-319-22527-2_15


  • DOI: https://doi.org/10.1007/978-3-319-22527-2_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-22526-5

  • Online ISBN: 978-3-319-22527-2

  • eBook Packages: Engineering (R0)
